
Slow Training Speed Issue #92

Open
langy888 opened this issue Jun 18, 2024 · 1 comment

Comments

@langy888

Hi,

I have followed the steps provided in the repository to run the GlueStick training, but I have noticed that the training speed is extremely slow. According to the paper, training took ten days on two 2080 GPUs. I'm currently using two 3090 GPUs, and a single epoch takes more than 22 hours. Is this normal? If not, are there any suggestions or improvements that could help speed up training?

Thank you!

@rpautrat
Member

Hi, the extraction of the point and line features is usually what takes most of the time. So I would suggest either caching the features to avoid re-computing them at every epoch, or parallelizing this step further. GlueStick is unfortunately quite slow to train, but we are working on a faster version now.
