
log_sample after the batch training #82

Open
amasawa opened this issue Jun 19, 2024 · 1 comment
amasawa commented Jun 19, 2024

Thanks for sharing the code for this project!

I am trying to understand what log_sample() does after batch training.

Is it used to reconstruct samples at the end of each training epoch, and then save the reconstructed samples to the checkpoint/name/sample folder?


Saranga7 commented Jul 1, 2024

I am also exploring the log_sample() method. Yes, it is used to reconstruct samples at the end of each training epoch, and the reconstructed samples are saved to the checkpoint/name/sample folder. These samples are also logged to TensorBoard alongside the corresponding batch of real images.

I understand that autoencoding is performed on a sampled batch of images. However, I am unsure where those original images are used to condition the generation. In other words, I was expecting cond = encoder(x_start) to be passed as a condition to the DDIM sampler, but I don't see that happening explicitly, and yet the generated images are reconstructions of the sampled real image batch.

I am not sure if I have explained the problem clearly, but can anyone please help clarify this for me?
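One common reason the explicit `cond = encoder(x_start)` call is hard to spot is that the condition is often computed *inside* the sampling/rendering helper rather than at the call site of `log_sample()`. Below is a minimal toy sketch of that pattern (this is not the repo's actual code; `encoder`, `denoiser`, and `render` are hypothetical stand-ins, and the update rule is a deliberately simplistic deterministic DDIM-like step):

```python
import numpy as np

def encoder(x_start):
    # Hypothetical semantic encoder: maps an image batch to a latent code.
    return x_start.mean(axis=1, keepdims=True)

def denoiser(x_t, t, cond):
    # Hypothetical conditional noise predictor: a stand-in for the
    # conditional UNet eps(x_t, t, cond); here it simply pulls x_t
    # toward the condition.
    return x_t - cond

def render(x_start, num_steps=10):
    # The condition is derived from the real batch HERE, inside the
    # sampler -- so no `cond = encoder(x_start)` appears at the call site.
    cond = encoder(x_start)
    # Start from pure noise, x_T ~ N(0, I).
    x_t = np.random.default_rng(0).normal(size=x_start.shape)
    for t in reversed(range(num_steps)):
        eps = denoiser(x_t, t, cond)
        # Simplistic deterministic update; each step moves x_t toward cond.
        x_t = x_t - eps / num_steps
    return x_t

batch = np.ones((4, 8))   # toy "image" batch of four 8-pixel samples
recon = render(batch)
print(recon.shape)        # same shape as the input batch
```

If the actual code follows this pattern, calling something like `render(x_start)` from the logging hook would be enough to produce reconstructions conditioned on the real batch, which would explain the behavior you observe.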
