Support Pixtral-Large HF by using llava multimodal_projector_bias config #12710

Merged

Conversation

@mgoin mgoin (Member) commented Feb 3, 2025

Thanks to @shubhra for finding this issue. We need to port the addition of `multimodal_projector_bias` to most llava-style configs from huggingface/transformers#34801 in order to support loading the HF version of Pixtral-Large.

github-actions bot commented Feb 3, 2025

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, a small and essential subset of tests meant to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can do one of the following:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@ywang96 ywang96 (Member) left a comment

LGTM! Thanks for the fix!

@kylesayrs kylesayrs (Contributor) left a comment

Nice, good catch

@mgoin mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) Feb 4, 2025
@mgoin mgoin mentioned this pull request Feb 4, 2025
@DarkLight1337 DarkLight1337 merged commit 5d98d56 into vllm-project:main Feb 4, 2025
60 checks passed
fxmarty-amd pushed a commit to fxmarty-amd/vllm that referenced this pull request Feb 7, 2025
Support Pixtral-Large HF by using llava multimodal_projector_bias config (vllm-project#12710)

Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Felix Marty <felmarty@amd.com>
ShangmingCai pushed a commit to ShangmingCai/vllm that referenced this pull request Feb 10, 2025
panf2333 pushed a commit to yottalabsai/vllm that referenced this pull request Feb 18, 2025
kerthcet pushed a commit to kerthcet/vllm that referenced this pull request Feb 21, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Mar 5, 2025
Support Pixtral-Large HF by using llava multimodal_projector_bias config (vllm-project#12710)

Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Linkun Chen <github@lkchen.net>
Said-Akbar pushed a commit to Said-Akbar/vllm-rocm that referenced this pull request Mar 7, 2025
Support Pixtral-Large HF by using llava multimodal_projector_bias config (vllm-project#12710)

Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: saeediy <saidakbarp@gmail.com>
Labels
ready (ONLY add when PR is ready to merge/full CI is needed)
4 participants