
Question about the inference time. #65

Open
Ma-Weijian opened this issue Jul 4, 2023 · 3 comments

Comments

@Ma-Weijian

Hello.

Fantastic work on pointcloud self-supervised learning.

However, I'm quite confused about the inference time during dVAE pretraining. Inference takes far longer than training: on 2x NVIDIA A100 40GB, one training epoch takes ~3 minutes while inference takes ~10 minutes. During inference, nvidia-smi shows GPU utilization around 1%, while the CPUs run at full utilization.

When I dug into the code, I discovered that the inference batch size is 1. Even after setting the inference batch size to match the training batch size, inference is still far slower than training, and CPU/GPU utilization barely changes.
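For context, this is roughly the kind of validation-loader setting I changed (a sketch with a stand-in random dataset, not the repo's actual dataset or config wiring; `batch_size` is what I modified, and `num_workers` / `pin_memory` are just the usual extra knobs one might also check):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the real point-cloud validation set (hypothetical shapes/sizes).
val_dataset = TensorDataset(torch.randn(2468, 8192, 3))

val_loader = DataLoader(
    val_dataset,
    batch_size=64,     # the default validation loader effectively used batch_size=1
    shuffle=False,
    num_workers=8,     # more CPU workers for loading/preprocessing
    pin_memory=True,   # faster host-to-GPU copies
)
```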

I just wonder whether changing the inference batch size would affect performance. Also, why does inference take so much longer than training?

@Ma-Weijian
Author

Any help would be appreciated. Thanks a lot!

@yuxumin
Collaborator

yuxumin commented Jul 22, 2023

Hi, see https://github.com/lulutang0608/Point-BERT/issues/51. Sorry for the late reply.

@Ma-Weijian
Author

I see. Thanks for your reply.

I still have some questions about choosing DataParallel for training the dVAE; I commented in #51.
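To make the DataParallel question concrete, here is a minimal sketch of the two multi-GPU wrapping options I'm asking about (toy model only, not the repo's actual dVAE or launch code):

```python
import torch
import torch.nn as nn

# Toy stand-in for the dVAE; the real model comes from the repo's builder/config.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

if torch.cuda.is_available():
    model = model.cuda()

    # Option A: nn.DataParallel -- single process splits each batch across GPUs;
    # simple to use, but scatter/gather overhead often limits throughput.
    dp_model = nn.DataParallel(model)

    # Option B: DistributedDataParallel -- one process per GPU; needs a distributed
    # launch (e.g. torchrun) and init_process_group, so it is only shown as a comment:
    # ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```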
