About the performance in x2 model #20

Open
NJU-Jet opened this issue Oct 4, 2020 · 1 comment
Comments

NJU-Jet commented Oct 4, 2020

Thanks for your excellent work.
But I have a doubt about the results of the x2 model.
I can't reproduce the results you report in the article. Should I use L2 loss after 1000 epochs?
Thanks!

@Zheng222 (Owner) commented

@NJU-Jet
Hello, there is no need to use L2 loss. I recommend training for more iterations at the larger learning rates, e.g. 2e-4 and 1e-4; the performance improvement comes mainly from these two learning-rate stages.
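For reference, a minimal sketch of what "more iterations at 2e-4 and 1e-4" could look like in PyTorch, assuming an Adam optimizer with a MultiStepLR schedule. The model, milestone epochs, and decay factor below are placeholders, not the repository's actual training configuration.

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

# Stand-in for the SR network (placeholder, not the repo's model).
model = torch.nn.Conv2d(3, 3, 3, padding=1)

# Start at the larger learning rate mentioned in the reply.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

# Halve the LR at each milestone: 2e-4 -> 1e-4 -> 5e-5 -> ...
# Pushing the first two milestones later keeps training longer
# at 2e-4 and 1e-4 (milestone values are assumptions).
scheduler = MultiStepLR(optimizer, milestones=[400, 800, 1000, 1200], gamma=0.5)

for epoch in range(1400):
    # ... run one training epoch with L1 loss here (no L2 loss needed, per the reply) ...
    optimizer.step()   # placeholder for the real per-batch updates
    scheduler.step()
```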
