[V1] Update doc and examples for H2O-VL #13349

Merged · 3 commits · Feb 16, 2025
4 changes: 2 additions & 2 deletions docs/source/models/supported_models.md
```diff
@@ -726,7 +726,7 @@ See [this page](#generative-models) for more information on how to use generative models.
   * `h2oai/h2ovl-mississippi-800m`, `h2oai/h2ovl-mississippi-2b`, etc.
   *
   * ✅︎
-  * \*
+  * ✅︎\*
 - * `Idefics3ForConditionalGeneration`
   * Idefics3
   * T + I
@@ -869,7 +869,7 @@ See [this page](#generative-models) for more information on how to use generative models.
 <sup>+</sup> Multiple items can be inputted per text prompt for this modality.
 
 :::{note}
-H2O-VL series models will be available in V1 once we support backends other than FlashAttention.
+`h2oai/h2ovl-mississippi-2b` will be available in V1 once we support backends other than FlashAttention.
 :::
 
 :::{note}
```
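The updated note narrows the V1 limitation from the whole H2O-VL series to just the 2B checkpoint, so the 800M checkpoint runs on the V1 engine today. A hypothetical helper (not part of vLLM) summarizing that support matrix:

```python
# Hypothetical helper, not vLLM code: encodes the doc note above.
# The 2B checkpoint waits on non-FlashAttention V1 backends.
H2OVL_V1_SUPPORT = {
    "h2oai/h2ovl-mississippi-800m": True,
    "h2oai/h2ovl-mississippi-2b": False,
}

def supported_on_v1(model_name: str) -> bool:
    """Return True if the checkpoint is usable on the V1 engine."""
    return H2OVL_V1_SUPPORT.get(model_name, False)
```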
4 changes: 2 additions & 2 deletions examples/offline_inference/vision_language.py
```diff
@@ -119,7 +119,7 @@ def run_glm4v(question: str, modality: str):
 def run_h2ovl(question: str, modality: str):
     assert modality == "image"
 
-    model_name = "h2oai/h2ovl-mississippi-2b"
+    model_name = "h2oai/h2ovl-mississippi-800m"
 
     llm = LLM(
         model=model_name,
@@ -136,7 +136,7 @@ def run_h2ovl(question: str, modality: str):
         add_generation_prompt=True)
 
     # Stop tokens for H2OVL-Mississippi
-    # https://huggingface.co/h2oai/h2ovl-mississippi-2b
+    # https://huggingface.co/h2oai/h2ovl-mississippi-800m
     stop_token_ids = [tokenizer.eos_token_id]
     return llm, prompt, stop_token_ids
```
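The example file selects a `run_*` helper per model through a registry before building the engine. A minimal sketch of that dispatch pattern; the registry key and the stubbed body are assumptions (the real `run_h2ovl` constructs a vLLM `LLM`, as in the diff above):

```python
# Sketch of the dispatch pattern in vision_language.py. The stub only
# mirrors the signature; the registry key "h2ovl_chat" is an assumption.
def run_h2ovl(question: str, modality: str):
    assert modality == "image"
    model_name = "h2oai/h2ovl-mississippi-800m"  # checkpoint from the diff
    return model_name, question

model_example_map = {"h2ovl_chat": run_h2ovl}

model_name, prompt = model_example_map["h2ovl_chat"]("What is shown?", "image")
```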
4 changes: 2 additions & 2 deletions examples/offline_inference/vision_language_multi_image.py
```diff
@@ -78,7 +78,7 @@ def load_deepseek_vl2(question: str, image_urls: List[str]):
 
 
 def load_h2ovl(question: str, image_urls: List[str]) -> ModelRequestData:
-    model_name = "h2oai/h2ovl-mississippi-2b"
+    model_name = "h2oai/h2ovl-mississippi-800m"
 
     llm = LLM(
         model=model_name,
@@ -99,7 +99,7 @@ def load_h2ovl(question: str, image_urls: List[str]) -> ModelRequestData:
         add_generation_prompt=True)
 
     # Stop tokens for H2OVL-Mississippi
-    # https://huggingface.co/h2oai/h2ovl-mississippi-2b
+    # https://huggingface.co/h2oai/h2ovl-mississippi-800m
     stop_token_ids = [tokenizer.eos_token_id]
 
     return ModelRequestData(
```
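For the multi-image path, the example prepends one image placeholder per URL to the question before applying the chat template. A self-contained sketch of that pattern; the `Image-{i}: <image>` placeholder format is an assumption based on the InternVL-style multi-image examples:

```python
from typing import List

def build_multi_image_question(question: str, image_urls: List[str]) -> str:
    # One numbered placeholder per image, then the question text.
    placeholders = "".join(
        f"Image-{i}: <image>\n" for i in range(1, len(image_urls) + 1)
    )
    return f"{placeholders}{question}"

urls = ["https://example.com/a.jpg", "https://example.com/b.jpg"]
print(build_multi_image_question("What's the difference?", urls))
# Prints two placeholder lines followed by the question.
```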