
The number of required GPUs exceeds the total number of available GPUs in the placement group. #42

Open
JosenJin opened this issue Sep 21, 2024 · 3 comments

Comments

@JosenJin

```
2024-09-21 18:15:28,304 INFO worker.py:1786 -- Started a local Ray instance.
2024-09-21 18:15:28,399 INFO worker.py:1786 -- Started a local Ray instance.
Process Process-2:
Traceback (most recent call last):
  File "D:\ai\anaconda3\envs\vita\Lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\ai\anaconda3\envs\vita\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\ai\VITA\web_demo\web_interactive_demo.py", line 127, in load_model
    llm = AsyncLLMEngine.from_engine_args(engine_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ai\anaconda3\envs\vita\Lib\site-packages\vllm\engine\async_llm_engine.py", line 570, in from_engine_args
    executor_class = cls._get_executor_cls(engine_config)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ai\anaconda3\envs\vita\Lib\site-packages\vllm\engine\async_llm_engine.py", line 546, in _get_executor_cls
    initialize_ray_cluster(engine_config.parallel_config)
  File "D:\ai\anaconda3\envs\vita\Lib\site-packages\vllm\executor\ray_utils.py", line 265, in initialize_ray_cluster
    raise ValueError(
ValueError: The number of required GPUs exceeds the total number of available GPUs in the placement group.
Process Process-3:
Traceback (most recent call last):
  File "D:\ai\anaconda3\envs\vita\Lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\ai\anaconda3\envs\vita\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\ai\VITA\web_demo\web_interactive_demo.py", line 127, in load_model
    llm = AsyncLLMEngine.from_engine_args(engine_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ai\anaconda3\envs\vita\Lib\site-packages\vllm\engine\async_llm_engine.py", line 570, in from_engine_args
    executor_class = cls._get_executor_cls(engine_config)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ai\anaconda3\envs\vita\Lib\site-packages\vllm\engine\async_llm_engine.py", line 546, in _get_executor_cls
    initialize_ray_cluster(engine_config.parallel_config)
  File "D:\ai\anaconda3\envs\vita\Lib\site-packages\vllm\executor\ray_utils.py", line 265, in initialize_ray_cluster
    raise ValueError(
ValueError: The number of required GPUs exceeds the total number of available GPUs in the placement group.
```
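For context, this ValueError is raised by a sanity check in `vllm/executor/ray_utils.py`: the number of GPUs the engine needs (driven by `tensor_parallel_size` in the parallel config) is compared against the total GPUs available in the Ray placement group. A minimal sketch of that kind of check, assuming Ray-style bundle dicts (this is an illustration, not vLLM's actual code):

```python
def check_placement_group(bundles, required_gpus):
    """Raise if a placement group's bundles cannot supply the GPUs
    the engine requires. `bundles` are Ray-style resource dicts,
    e.g. [{"GPU": 1}, {"GPU": 1}]."""
    available = sum(bundle.get("GPU", 0) for bundle in bundles)
    if required_gpus > available:
        raise ValueError(
            "The number of required GPUs exceeds the total number of "
            f"available GPUs in the placement group "
            f"({available} available, {required_gpus} required)."
        )
    return available
```

On a machine with a single GPU, this implies the engine arguments must request `tensor_parallel_size=1`; any larger value fails the check above.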

@longzw1997 (Collaborator)

May I ask what your specific configuration is?

@JosenJin (Author)

2080ti 22G

@longzw1997 (Collaborator)

We haven't tried it on the 2080 Ti. You could try 8 × 2080 Ti 22G cards, or use GPUs with more memory.
