[Bug]: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method #8893
Labels: usage (How to use vllm)

Comments:
My code:
You can try commenting out or deleting: 'device = "cuda" if torch.cuda.is_available() else "cpu"'
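For context on why that line causes the error: calling `torch.cuda.is_available()` at module import time can initialize CUDA in the parent process, and vLLM's forked workers then fail with "Cannot re-initialize CUDA in forked subprocess". A minimal sketch of the workaround, deferring device selection into a function so nothing touches the GPU runtime at import time (the function name `pick_device` is illustrative, not part of vLLM, and the sketch assumes torch may not even be installed):

```python
# Sketch: select the device lazily instead of at module import time, so a
# forked worker never inherits an already-initialized CUDA context.
import importlib.util


def pick_device() -> str:
    # torch is imported locally: the CUDA runtime is only touched if and
    # when this function is actually called, not when the module loads.
    if importlib.util.find_spec("torch") is not None:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    # Fall back to CPU when torch is not installed (assumption for the sketch).
    return "cpu"
```

Calling `pick_device()` only after vLLM has spawned its workers (or only inside the main process) avoids poisoning the fork.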
Thanks, it works.
Can you show the full stack trace?
Here:
Can you run …
Hothan01 changed the title from "[Usage]: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method" to "[Bug]: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method" on Sep 29, 2024.
I have updated to the latest version and used the "spawn" method:
export VLLM_WORKER_MULTIPROC_METHOD=spawn
but the error still persists. Could you please help me?