
[bug report (serious)] All types of autoencoder and VAE have linear activation. #53

Open
ncble opened this issue Jun 30, 2019 · 1 comment


@ncble
Collaborator

ncble commented Jun 30, 2019

Describe the bug
A normal autoencoder or VAE for images should have either a sigmoid or tanh output activation, but in the current master version we have a linear activation (i.e. no activation). This bug seriously affects all previous work (both results and conclusions), especially the paper https://arxiv.org/abs/1901.08651 and possibly other works built on the toolbox. After fixing this issue, I have observed substantial improvements and more coherent results. Other open issues may be related to this bug, e.g. #24.

Code example
For both autoencoder and VAE (CNN version):
https://github.com/araffin/srl-zoo/blob/438a05ab625a2c5ada573b47f73469d92de82132/models/models.py#L65-L83

For MLP version:
https://github.com/araffin/srl-zoo/blob/438a05ab625a2c5ada573b47f73469d92de82132/models/vae.py#L22-L28

The above examples are not exhaustive.
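To illustrate the fix being proposed (this is a minimal hypothetical sketch, not the actual srl-zoo code; layer sizes are made up), a CNN decoder's final layer should be followed by a bounded activation rather than left linear:

```python
import torch
import torch.nn as nn


class DecoderSketch(nn.Module):
    """Toy decoder head illustrating the reported bug and its fix."""

    def __init__(self):
        super().__init__()
        # Arbitrary illustrative layer: 8 latent channels -> 3 image channels
        self.deconv = nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1)

    def forward(self, z):
        # Returning self.deconv(z) directly would be the "linear activation"
        # described in this issue: the output is unbounded. Wrapping it in
        # sigmoid bounds the reconstruction to [0, 1], matching image targets.
        return torch.sigmoid(self.deconv(z))


z = torch.randn(1, 8, 16, 16)
out = DecoderSketch()(z)
assert out.min() >= 0.0 and out.max() <= 1.0
```

If the reconstruction targets are normalised to [-1, 1] instead, `torch.tanh` would be the matching choice.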

P.S. I have confirmed this issue with @kalifou. My fix has already been committed and pushed (cf. my pull request).

@araffin
Owner

araffin commented Jul 1, 2019

As discussed, that's OK when using "image net" normalisation; however, you should get better results with an activation when using "tf" normalisation.
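The distinction above can be made concrete. This is a hedged illustration using the common conventions for the two preprocessing schemes (the exact constants here are assumptions, not taken from srl-zoo): "tf"-style normalisation maps pixels into [-1, 1], which a tanh output can match exactly, while ImageNet-style normalisation produces values outside any fixed activation range, so a linear output is the only one that can reach the targets.

```python
def tf_normalize(pixel):
    """Common "tf" convention: map [0, 255] -> [-1, 1]."""
    return pixel / 127.5 - 1.0


def imagenet_normalize(pixel, mean=0.485, std=0.229):
    """Common ImageNet convention for one channel: scale to [0, 1],
    then subtract the channel mean and divide by the channel std."""
    return (pixel / 255.0 - mean) / std


# "tf" targets lie exactly in [-1, 1], so tanh can reach them.
assert tf_normalize(0) == -1.0 and tf_normalize(255) == 1.0

# ImageNet-normalised targets exceed [-1, 1], so a bounded activation
# could never reconstruct them; a linear output is required there.
assert imagenet_normalize(255) > 1.0
```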
