Add support for Grammar/Tools + TGI-based specs in InferenceClient #2237
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Given the quantity of auto-generated code I couldn't do a perfect review, but I haven't seen anything shocking in the PR. I'm of the opinion of "ship it and eventually fix if issues arise", the code looks clean enough :)
Thanks for the review @LysandreJik! Agree with you about "I'm of the opinion of ship it and eventually fix if issues arise" 😄
From an idea mentioned by @OlivierDehaene and @drbh in #579 and [slack thread](https://huggingface.slack.com/archives/C05CFK1HM0T/p1711360022125399) (internal). This PR adds a script `inference-tgi-import.ts` to generate the `text-generation` and `chat_completion` specs from the auto-generated TGI [specifications](https://huggingface.github.io/text-generation-inference/). The goal is to keep TGI improvements in sync with @huggingface/tasks and therefore have a consistent "single source of truth". The converted specs are then compatible with our tooling to generate JS/Python code.

This PR changes quite a lot of naming in the generated JS/Python types. Luckily, I don't think they were used in the JS ecosystem yet, so better to do this now than later. I also opened huggingface/huggingface_hub#2237 to include these changes in `huggingface_hub`.

**TODO:**
- [ ] fix lint errors. How to deal with a parsed JSON to avoid using `any`?
- [ ] CI workflow to open a PR each time TGI is updated? => can be done in a future PR

Co-authored-by: SBrandeis <simon@huggingface.co>
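To illustrate what such a conversion involves, here is a hypothetical Python sketch (not the actual `inference-tgi-import.ts` logic, and `resolve_ref` / `extract_parameters` are made-up helper names): a spec importer has to walk the OpenAPI document and resolve local `$ref` pointers before it can emit flat task parameters.

```python
# Hypothetical sketch of flattening an openapi.json schema into task
# parameters; this is NOT the actual import script.

def resolve_ref(ref: str, doc: dict) -> dict:
    """Follow a local '#/components/schemas/...' JSON reference."""
    node = doc
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

def extract_parameters(schema_name: str, doc: dict) -> dict:
    """Return {param_name: type} for a named schema, following $ref."""
    schema = doc["components"]["schemas"][schema_name]
    params = {}
    for name, prop in schema.get("properties", {}).items():
        if "$ref" in prop:
            prop = resolve_ref(prop["$ref"], doc)
        params[name] = prop.get("type", "object")
    return params

# Toy fragment mimicking the shape of TGI's GenerateParameters schema.
openapi_doc = {
    "components": {
        "schemas": {
            "GenerateParameters": {
                "properties": {
                    "temperature": {"type": "number"},
                    "grammar": {"$ref": "#/components/schemas/GrammarType"},
                },
            },
            "GrammarType": {"type": "object"},
        }
    }
}

print(extract_parameters("GenerateParameters", openapi_doc))
# {'temperature': 'number', 'grammar': 'object'}
```

Resolving references up front is what lets the downstream tooling generate self-contained JS/Python types from the converted specs.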
Sorry for the huge PR 😬 It goes hand-in-hand with huggingface/huggingface.js#629 (and to a lesser extent huggingface/text-generation-inference#1798). Most of the changes to review are hopefully documentation / auto-generated stuff.
What's in this PR?
- Types for `text_generation` and `chat_completion` have been updated based on TGI specs (see associated PR "Generate specs from TGI openapi.json" huggingface.js#629).
- Updates to the `text_generation` task.
- Updates to the `chat_completion` task.
- Updates to `MODEL_KWARGS_NOT_USED_REGEX`.
- In `text_generation`, only send non-None parameters in the payload.
- New script `check_inference_input_params.py` that checks task parameters are correctly used and documented, to be consistent with the generated types. For now, it raises an error when not consistent but doesn't provide an auto-fix (see TODOs in the script).
- Updated `generate_async_inference_client.py` to generate the KO package reference correctly.

What to review
Mostly:
- `src/huggingface_hub/inference/_client.py` => lots of docs updates (not interesting) + a few tweaks in code (to review)
- `src/huggingface_hub/inference/_common.py` => only small tweaks
- `tests/test_inference_client.py` => to check "it works"
- `tests/test_inference_text_generation.py` => to check "it works"
- `utils/check_inference_input_params.py` => new script to check input parameters

The rest is either auto-generated stuff or low-level scripts.
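One of the behavior changes listed above is that `text_generation` now only sends non-None parameters in the payload. A minimal sketch of that filtering (`build_payload` is a hypothetical helper, not the actual `_client.py` code):

```python
# Hypothetical sketch of "only send non-None parameters": unset keyword
# arguments are dropped so the server applies its own defaults.

def build_payload(inputs, **parameters):
    """Build a text-generation request body, keeping only set parameters."""
    return {
        "inputs": inputs,
        "parameters": {k: v for k, v in parameters.items() if v is not None},
    }

payload = build_payload("Hello", temperature=0.7, top_k=None, seed=None)
print(payload)  # {'inputs': 'Hello', 'parameters': {'temperature': 0.7}}
```

Dropping unset parameters keeps the request payload minimal and avoids accidentally overriding server-side defaults with explicit nulls.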
Generated docs:
What's not in this PR?
=> will be handled in a follow-up PR
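The new `utils/check_inference_input_params.py` mentioned above enforces consistency between client method signatures and the generated parameter types. A rough sketch of the idea (all names here are hypothetical stand-ins, not the actual script):

```python
# Hypothetical sketch of a parameter-consistency check: verify that every
# field of a generated parameters type appears in the client method signature.
import inspect
from dataclasses import dataclass, fields

@dataclass
class TextGenerationParameters:  # stand-in for a generated type
    temperature: float = 1.0
    top_k: int = 50

def text_generation(prompt, *, temperature=None, top_k=None):  # stand-in for a client method
    ...

def check_signature(func, params_cls):
    """Return the generated fields missing from the function signature."""
    sig_params = set(inspect.signature(func).parameters)
    return [f.name for f in fields(params_cls) if f.name not in sig_params]

print(check_signature(text_generation, TextGenerationParameters))  # []
```

A check like this can raise an error listing the missing parameters, which matches the "raises an error when not consistent but doesn't provide an auto-fix" behavior described above.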