[New Model]: Qwen2.5-VL #12486
Comments
@fyabc is your team planning to open a PR for this?
modular_transformers makes it a bit difficult to see the difference between Qwen2-VL and Qwen2.5-VL. Here is a diff of the transformers implementation with some reshuffling, modeling part only. The differences in processing, image_processing, and config are minimal (see the sketch below).
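To see how small the config delta is, here is a minimal sketch that diffs the vision-tower settings of the two families. It assumes a transformers version that already registers Qwen2.5-VL (4.49+) and the public Hugging Face repos named below; the repo ids are illustrative choices, not taken from the thread.

```python
# Minimal sketch: compare the vision configs of Qwen2-VL and Qwen2.5-VL.
# Assumes transformers >= 4.49 (Qwen2.5-VL support) and network access to the HF Hub.
from transformers import AutoConfig

cfg_qwen2 = AutoConfig.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
cfg_qwen25 = AutoConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# Print only the vision-config fields that actually differ.
old_vision = cfg_qwen2.vision_config.to_dict()
new_vision = cfg_qwen25.vision_config.to_dict()
for key in sorted(set(old_vision) | set(new_vision)):
    if old_vision.get(key) != new_vision.get(key):
        print(f"{key}: {old_vision.get(key)!r} -> {new_vision.get(key)!r}")
```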
So, will it be supported in the future?
It is already supported in vLLM; please update your vLLM version.
I've updated to the latest vLLM (0.7.3), transformers, and so on, but I still get an error when I use the OpenAI API format; the details are here:
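For anyone debugging errors on the chat completions route, here is a minimal sketch of a known-good request shape against vLLM's OpenAI-compatible server. The model id, port, and image URL are placeholders rather than values from the thread, and it assumes the server was started with a command like the one in the comment.

```python
# Minimal sketch: query a local vLLM OpenAI-compatible server, assumed to be
# started with something like:
#   vllm serve Qwen/Qwen2.5-VL-7B-Instruct
# The model id, port, and image URL below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not require a real API key by default
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```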
The model to consider.
new model family dropped
https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5
The closest model vllm already supports.
https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d
What's your difficulty of supporting the model you want?
No response
Before submitting a new issue...