
Parse string values for add_special_tokens in vLLM #598

Merged

Conversation

eldarkurtic
Contributor

When evaluating reasoning models on tasks such as AIME, we run evals like this:

MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL

# AIME 2024
TASK=aime24
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir $OUTPUT_DIR

As part of MODEL_ARGS we would like to be able to specify a value for add_special_tokens to control whether the tokenizer adds the BOS token. At the moment this is not possible because the given value is interpreted as a string, whereas the codebase expects a boolean. This PR enables it by parsing add_special_tokens the same way other vLLM params are parsed in the VLLMModel class.
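
For reference, the change boils down to coercing a string flag into a boolean, roughly along these lines (a minimal sketch, not the exact lighteval code; the function name and usage are illustrative):

def parse_bool_arg(value) -> bool:
    """Coerce a model-arg value to a boolean.

    Values passed via MODEL_ARGS on the command line arrive as strings
    (e.g. "True" / "False"), while defaults set in code may already be booleans.
    """
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.lower() in ("true", "1", "yes")
    return bool(value)

# Hypothetical use inside VLLMModel:
# self.add_special_tokens = parse_bool_arg(config.add_special_tokens)

With that in place, add_special_tokens=False can be appended to MODEL_ARGS above and is interpreted as a boolean instead of a (truthy) non-empty string.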

@lewtun lewtun requested a review from NathanHB March 27, 2025 10:29
Member

@lewtun lewtun left a comment


Thanks for the PR @eldarkurtic! This LGTM, but I'll let @NathanHB comment on whether to merge or not.

@lewtun
Member

lewtun commented Mar 27, 2025

For visibility @NathanHB: this is related to an issue in vLLM where double BOS tokens are added if one uses the generate() method with strings instead of token IDs.
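
To make the failure mode concrete, here is roughly how the duplication shows up at the tokenizer level (an illustration only, reusing the model from the example above; whether a BOS is already baked into the rendered prompt depends on the tokenizer's chat template):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")

# apply_chat_template renders the conversation as a string; for this model the
# chat template already prepends the BOS token.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Solve x^2 = 4."}],
    tokenize=False,
    add_generation_prompt=True,
)

# Tokenizing that string again with add_special_tokens=True adds a second BOS,
# which is what happens when the rendered prompt is handed to generate() as text.
with_specials = tok(prompt, add_special_tokens=True).input_ids
without_specials = tok(prompt, add_special_tokens=False).input_ids
print(with_specials[:3], without_specials[:3])  # the first list starts with a duplicated BOS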

@HuggingFaceDocBuilderDev
Collaborator

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@clefourrier
Member

clefourrier commented Mar 27, 2025

LGTM, but I'm not fond of the nested logic there. I would actually expand it into:

var = False
if config_var is not None:
    if isinstance(config_var, bool):
        var = config_var
    elif isinstance(config_var, str):
        var = config_var.lower() == "true"

for legibility.

Or even add a utils function to parse bool strings to bool (which I thought we had?)

@NathanHB
Member

NathanHB commented Apr 2, 2025

Hey! Thanks for the PR. There is a PR open to use Pydantic for model configs, which will remove the need for converting the strings to bool; good to merge though!

@NathanHB NathanHB merged commit cc95ff2 into huggingface:main Apr 8, 2025
3 checks passed