```
Traceback (most recent call last):
  File "/test/vllm_0.7.2_colocate.py", line 112, in <module>
    runner.run()
  File "/test/vllm_0.7.2_colocate.py", line 32, in run
    num_prompts_per_worker = len(batch) // len(self.workers)
  File "/data/miniconda3/envs/vllm7/lib/python3.10/functools.py", line 981, in __get__
    val = self.func(instance)
  File "/test/vllm_0.7.2_colocate.py", line 57, in workers
    ray.remote(MyLLM).options(
  File "/data/miniconda3/envs/vllm7/lib/python3.10/site-packages/ray/actor.py", line 869, in remote
    return actor_cls._remote(args=args, kwargs=kwargs, **updated_options)
  File "/data/miniconda3/envs/vllm7/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "/data/miniconda3/envs/vllm7/lib/python3.10/site-packages/ray/util/tracing/tracing_helper.py", line 384, in _invocation_actor_class_remote_span
    return method(self, args, kwargs, *_args, **_kwargs)
  File "/data/miniconda3/envs/vllm7/lib/python3.10/site-packages/ray/actor.py", line 1142, in _remote
    placement_group = _configure_placement_group_based_on_context(
  File "/data/miniconda3/envs/vllm7/lib/python3.10/site-packages/ray/util/placement_group.py", line 547, in _configure_placement_group_based_on_context
    check_placement_group_index(placement_group, bundle_index)
  File "/data/miniconda3/envs/vllm7/lib/python3.10/site-packages/ray/util/placement_group.py", line 335, in check_placement_group_index
    elif bundle_index >= placement_group.bundle_count or bundle_index < -1:
TypeError: '>=' not supported between instances of 'list' and 'int'
```
Here `bundle_index` is `[0, 1, 2, 3]` (a list), while Ray's validation expects a single integer.
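The failure can be reproduced without Ray at all: `check_placement_group_index` compares `bundle_index >= placement_group.bundle_count`, and a `list >= int` comparison is simply undefined in Python. A minimal sketch of that comparison (the `bundle_count` value of 4 is assumed from the 4-bundle setup):

```python
# Reproduce the TypeError from check_placement_group_index without Ray.
# bundle_index should be an int; the example passed the whole list instead.
bundle_index = [0, 1, 2, 3]  # what was passed via placement_group_bundle_index
bundle_count = 4             # assumed: one bundle per worker

try:
    # This is the comparison Ray performs internally.
    invalid = bundle_index >= bundle_count
except TypeError as e:
    print(e)  # → '>=' not supported between instances of 'list' and 'int'
```

So the traceback is not a Ray scheduling problem per se; it is a type error in the arguments handed to `.options()`.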
Deleting `placement_group_bundle_index=bundle_ids` from my example makes the error go away. However, the program then triggers an ImportError about an undefined symbol in triton. I am testing the example with the latest Docker image.
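Rather than deleting the argument, another option is to pass a scalar index per actor: create one actor per bundle and give each its own `placement_group_bundle_index`. The sketch below only builds the per-worker option values; the Ray calls it would feed into are shown in comments, and the variable names (`bundle_ids`, `MyLLM`, `pg`) are taken from or assumed to match the original script:

```python
# Hypothetical sketch of the corrected pattern: one actor per bundle,
# each receiving a scalar placement_group_bundle_index instead of the list.
bundle_ids = [0, 1, 2, 3]

workers = []
for idx in bundle_ids:
    # In the real script this would be something like:
    #   ray.remote(MyLLM).options(
    #       scheduling_strategy=PlacementGroupSchedulingStrategy(
    #           placement_group=pg,
    #           placement_group_bundle_index=idx,  # int, not list
    #       ),
    #   ).remote(...)
    workers.append({"placement_group_bundle_index": idx})

# Every worker now carries an int index, which passes Ray's validation.
assert all(isinstance(w["placement_group_bundle_index"], int) for w in workers)
```

This keeps the colocation intent of the original example (each worker pinned to its own bundle) while satisfying the `int` type Ray's `check_placement_group_index` expects.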
Note that vLLM v0.7.0-v0.7.2 has a bug that prevents you from using Ray as the backend if you set the tensor parallel size to 1; this is fixed by #12934.
Your current environment
The output of `python collect_env.py`
Driver Version : 525.125.06
CUDA Version : 12.0
Attached GPUs : 8
GPU 00000000:26:00.0
Product Name : NVIDIA A800-SXM4-80GB
🐛 Describe the bug
I tried to use a similar method to run the LLM, and it fails with the error message shown above.