[vulnerability-fix] Huggingface - Server side template injection #2949
Labels: bug (Something isn't working)
Comments
I believe we can do this similarly to how transformers handles it: run jinja inside an `ImmutableSandboxedEnvironment`.
PR - #2941
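The sandboxing approach mentioned above can be sketched as follows. This is a minimal illustration of jinja2's `ImmutableSandboxedEnvironment`, not litellm's actual fix; the template strings and the `render_chat_template` helper are illustrative assumptions.

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment


def render_chat_template(template_str, messages):
    # The sandbox blocks access to unsafe attributes (e.g. __class__,
    # __subclasses__) that typical SSTI payloads rely on.
    env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)
    template = env.from_string(template_str)
    return template.render(messages=messages)


# A benign prompt template renders normally:
benign = "{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}"
print(render_chat_template(benign, [{"role": "user", "content": "hi"}]))

# A malicious SSTI payload raises SecurityError inside the sandbox:
malicious = "{{ ''.__class__.__mro__ }}"
try:
    render_chat_template(malicious, [])
except SecurityError as e:
    print("blocked:", e)
```

Unlike a plain `jinja2.Environment`, the sandboxed variant fails closed at render time when a template touches dunder attributes, which is what makes it suitable for templates pulled from an untrusted source.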
krrishdholakia changed the title from "[Bug]: Huggingface - Server side template injection" to "Huggingface - Server side template injection" on Apr 11, 2024.
ishaan-jaff changed the title from "Huggingface - Server side template injection" to "[vulnerability-fix] Huggingface - Server side template injection" on Apr 11, 2024.
PR here - confirmed it works for me: #2941
Merged into main and queued a new release.
Fix should be live soon in v
Closing, as the fix is now live on PyPI: https://pypi.org/project/litellm/1.34.42/
Links:
GHSA-46cm-pfwv-cgf8
PR to fix the issue: #2941
Status
✅ Resolved
What happened?
The Huggingface chat template is stored on the Huggingface Hub in tokenizer_config.json,
e.g. https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/blob/41b61a33a2483885c981aa79e0df6b32407ed873/tokenizer_config.json#L42
This file is fetched whenever a hf model is called, to get its chat template.
We need to add a check to ensure the chat template is sanitized correctly (i.e. that it is a prompt template, not malicious code).
Update: PR is now live - #2941
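The check described above could look something like the following sketch: extract `chat_template` from a tokenizer_config.json payload and test-render it in a jinja2 sandbox before use, rejecting anything that errors or attempts unsafe attribute access. The `safe_chat_template` helper and the dummy-message probe are assumptions for illustration, not litellm's actual implementation.

```python
import json
from typing import Optional

from jinja2.exceptions import TemplateError
from jinja2.sandbox import ImmutableSandboxedEnvironment


def safe_chat_template(tokenizer_config: str) -> Optional[str]:
    """Return the chat_template from a tokenizer_config.json payload,
    or None if it is missing or fails to render in the sandbox."""
    config = json.loads(tokenizer_config)
    template_str = config.get("chat_template")
    if not isinstance(template_str, str):
        return None
    env = ImmutableSandboxedEnvironment()
    dummy = [{"role": "user", "content": "test"}]
    try:
        # SecurityError (raised on unsafe attribute access) is a
        # subclass of TemplateError, so this rejects SSTI payloads
        # as well as plain syntax errors.
        env.from_string(template_str).render(messages=dummy)
    except TemplateError:
        return None
    return template_str


good = json.dumps({"chat_template": "{% for m in messages %}{{ m['content'] }}{% endfor %}"})
bad = json.dumps({"chat_template": "{{ ''.__class__.__mro__ }}"})
print(safe_chat_template(good))  # the template string is accepted
print(safe_chat_template(bad))   # rejected, returns None
```

Probing with dummy messages keeps the check cheap and means the untrusted template never runs outside the sandbox.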