
How much VRAM is required for fast_captioner inference? #26

Open
WuTao-CS opened this issue Jul 15, 2024 · 0 comments

Comments

@WuTao-CS

Thank you for your excellent work!
When I use the fast_captioner mode of ShareGPT4VideoCaptioner to run inference on a 3090 (24 GB), I hit a CUDA out-of-memory error. Is there a way to reduce the memory overhead, for example with DeepSpeed inference or by reducing the input size? How can I achieve this?
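
For reference, this is roughly the kind of quantized loading I had in mind. It is only a sketch with a placeholder checkpoint path and a generic Hugging Face-style loader, not the repo's actual loading code:

```python
# Rough sketch: load a captioner checkpoint with 4-bit quantization so the
# weights fit in 24 GB of VRAM. Model path and loader are placeholders,
# not the actual ShareGPT4VideoCaptioner API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "path/to/sharegpt4video-captioner"  # placeholder checkpoint

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize linear-layer weights to 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=quant_config,
    device_map="auto",  # let accelerate place layers, spilling to CPU if needed
)
```

Would something like this (or fewer/smaller input frames) be the recommended way to bring the memory down, or is DeepSpeed inference the better option here?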
