[Bot] Update inference types #2688

Merged: 21 commits, Dec 3, 2024

Commits
15d8612: Update inference types (automated commit) (Wauplin, Nov 19, 2024)
bad1919: Merge branch 'main' into update-inference-types-automated-pr (hanouticelina, Nov 20, 2024)
b9b3421: fix quality after merging main (hanouticelina, Nov 20, 2024)
b66ba40: another fix (hanouticelina, Nov 20, 2024)
690decf: fix tests (hanouticelina, Nov 20, 2024)
05df70f: Update inference types (automated commit) (Wauplin, Nov 21, 2024)
d274efb: Merge branch 'update-inference-types-automated-pr' of github.com:hugg… (hanouticelina, Nov 21, 2024)
a6e1cd2: Update inference types (automated commit) (Wauplin, Nov 22, 2024)
33840be: Merge branch 'main' into update-inference-types-automated-pr (hanouticelina, Nov 22, 2024)
817dafb: Merge branch 'update-inference-types-automated-pr' of github.com:hugg… (hanouticelina, Nov 22, 2024)
c63d566: fix quality (hanouticelina, Nov 22, 2024)
911c175: Update inference types (automated commit) (Wauplin, Nov 24, 2024)
69f20bc: Merge branch 'update-inference-types-automated-pr' of github.com:hugg… (hanouticelina, Nov 24, 2024)
08d3f65: Update inference types (automated commit) (Wauplin, Nov 28, 2024)
4dc3c30: Merge branch 'main' into update-inference-types-automated-pr (hanouticelina, Nov 28, 2024)
6aa17d6: Merge branch 'update-inference-types-automated-pr' of github.com:hugg… (hanouticelina, Nov 28, 2024)
5a0772e: Update inference types (automated commit) (Wauplin, Dec 3, 2024)
295f1f5: Merge branch 'update-inference-types-automated-pr' of github.com:hugg… (hanouticelina, Dec 3, 2024)
675a6ca: fix client (hanouticelina, Dec 3, 2024)
8834ad4: activate automatic update for table-question-answering (hanouticelina, Dec 3, 2024)
0d8b5f9: fix (hanouticelina, Dec 3, 2024)
2 changes: 2 additions & 0 deletions docs/source/en/package_reference/inference_types.md
@@ -239,6 +239,8 @@ This part of the lib is still under development and will be improved in future releases.

[[autodoc]] huggingface_hub.TableQuestionAnsweringOutputElement

[[autodoc]] huggingface_hub.TableQuestionAnsweringParameters



## text2text_generation
2 changes: 2 additions & 0 deletions docs/source/ko/package_reference/inference_types.md
@@ -238,6 +238,8 @@ rendered properly in your Markdown viewer.

[[autodoc]] huggingface_hub.TableQuestionAnsweringOutputElement

[[autodoc]] huggingface_hub.TableQuestionAnsweringParameters



## text2text_generation[[huggingface_hub.Text2TextGenerationInput]]
4 changes: 4 additions & 0 deletions src/huggingface_hub/__init__.py
@@ -351,6 +351,7 @@
"ObjectDetectionInput",
"ObjectDetectionOutputElement",
"ObjectDetectionParameters",
"Padding",
"QuestionAnsweringInput",
"QuestionAnsweringInputData",
"QuestionAnsweringOutputElement",
@@ -364,6 +365,7 @@
"TableQuestionAnsweringInput",
"TableQuestionAnsweringInputData",
"TableQuestionAnsweringOutputElement",
"TableQuestionAnsweringParameters",
"Text2TextGenerationInput",
"Text2TextGenerationOutput",
"Text2TextGenerationParameters",
@@ -880,6 +882,7 @@ def __dir__():
ObjectDetectionInput, # noqa: F401
ObjectDetectionOutputElement, # noqa: F401
ObjectDetectionParameters, # noqa: F401
Padding, # noqa: F401
QuestionAnsweringInput, # noqa: F401
QuestionAnsweringInputData, # noqa: F401
QuestionAnsweringOutputElement, # noqa: F401
@@ -893,6 +896,7 @@ def __dir__():
TableQuestionAnsweringInput, # noqa: F401
TableQuestionAnsweringInputData, # noqa: F401
TableQuestionAnsweringOutputElement, # noqa: F401
TableQuestionAnsweringParameters, # noqa: F401
Text2TextGenerationInput, # noqa: F401
Text2TextGenerationOutput, # noqa: F401
Text2TextGenerationParameters, # noqa: F401
20 changes: 17 additions & 3 deletions src/huggingface_hub/inference/_client.py
@@ -84,6 +84,7 @@
ImageToImageTargetSize,
ImageToTextOutput,
ObjectDetectionOutputElement,
Padding,
QuestionAnsweringOutputElement,
SummarizationOutput,
SummarizationTruncationStrategy,
@@ -1654,7 +1655,9 @@ def table_question_answering(
query: str,
*,
model: Optional[str] = None,
parameters: Optional[Dict[str, Any]] = None,
padding: Optional["Padding"] = None,
sequential: Optional[bool] = None,
truncation: Optional[bool] = None,
) -> TableQuestionAnsweringOutputElement:
"""
Retrieve the answer to a question from information given in a table.
@@ -1668,8 +1671,14 @@
model (`str`):
The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face
Hub or a URL to a deployed Inference Endpoint.
parameters (`Dict[str, Any]`, *optional*):
Additional inference parameters. Defaults to None.
Review comment on lines -1671 to -1672 (Contributor): Note: we will have to mention this breaking change in the release notes.
padding (`"Padding"`, *optional*):
Activates and controls padding.
sequential (`bool`, *optional*):
Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
inference to be done sequentially to extract relations within sequences, given their conversational
nature.
truncation (`bool`, *optional*):
Activates and controls truncation.

Returns:
[`TableQuestionAnsweringOutputElement`]: a table question answering output containing the answer, coordinates, cells and the aggregator used.
@@ -1690,6 +1699,11 @@
TableQuestionAnsweringOutputElement(answer='36542', coordinates=[[0, 1]], cells=['36542'], aggregator='AVERAGE')
```
"""
parameters = {
"padding": padding,
"sequential": sequential,
"truncation": truncation,
}
inputs = {
"query": query,
"table": table,
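The diff above replaces the old catch-all `parameters: Optional[Dict[str, Any]]` argument with explicit `padding`, `sequential`, and `truncation` keywords that the client collects back into a `parameters` dict. A minimal sketch of the resulting request shape, using a hypothetical `build_tqa_payload` helper (not the library's actual internals; the real client also handles serialization and transport):

```python
from typing import Any, Dict, Optional


def build_tqa_payload(
    table: Dict[str, Any],
    query: str,
    padding: Optional[str] = None,
    sequential: Optional[bool] = None,
    truncation: Optional[bool] = None,
) -> Dict[str, Any]:
    """Assemble a table-question-answering request body the way the new
    signature implies: explicit keyword arguments are gathered into a
    `parameters` dict, and unset (None) entries are dropped."""
    parameters = {
        "padding": padding,
        "sequential": sequential,
        "truncation": truncation,
    }
    payload: Dict[str, Any] = {"inputs": {"query": query, "table": table}}
    # Only forward parameters the caller actually set.
    filtered = {k: v for k, v in parameters.items() if v is not None}
    if filtered:
        payload["parameters"] = filtered
    return payload


payload = build_tqa_payload(
    table={"Repository": ["Transformers", "Datasets"], "Stars": ["36542", "4512"]},
    query="How many stars does the transformers repository have?",
    sequential=True,
)
print(payload["parameters"])  # {'sequential': True}
```

The upside of this style is discoverability: the three task parameters now show up in the signature and docstring instead of hiding behind an untyped dict, which is exactly the breaking change the review comment flags for the release notes.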
20 changes: 17 additions & 3 deletions src/huggingface_hub/inference/_generated/_async_client.py
@@ -70,6 +70,7 @@
ImageToImageTargetSize,
ImageToTextOutput,
ObjectDetectionOutputElement,
Padding,
QuestionAnsweringOutputElement,
SummarizationOutput,
SummarizationTruncationStrategy,
@@ -1713,7 +1714,9 @@ async def table_question_answering(
query: str,
*,
model: Optional[str] = None,
parameters: Optional[Dict[str, Any]] = None,
padding: Optional["Padding"] = None,
sequential: Optional[bool] = None,
truncation: Optional[bool] = None,
) -> TableQuestionAnsweringOutputElement:
"""
Retrieve the answer to a question from information given in a table.
@@ -1727,8 +1730,14 @@
model (`str`):
The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face
Hub or a URL to a deployed Inference Endpoint.
parameters (`Dict[str, Any]`, *optional*):
Additional inference parameters. Defaults to None.
padding (`"Padding"`, *optional*):
Activates and controls padding.
sequential (`bool`, *optional*):
Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
inference to be done sequentially to extract relations within sequences, given their conversational
nature.
truncation (`bool`, *optional*):
Activates and controls truncation.

Returns:
[`TableQuestionAnsweringOutputElement`]: a table question answering output containing the answer, coordinates, cells and the aggregator used.
@@ -1750,6 +1759,11 @@
TableQuestionAnsweringOutputElement(answer='36542', coordinates=[[0, 1]], cells=['36542'], aggregator='AVERAGE')
```
"""
parameters = {
"padding": padding,
"sequential": sequential,
"truncation": truncation,
}
inputs = {
"query": query,
"table": table,
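The async client receives the same signature change. A sketch of the new call shape using a hypothetical `StubAsyncClient` stand-in, since exercising the real `AsyncInferenceClient` would require a live Inference Endpoint:

```python
import asyncio
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class TQAOutput:
    """Simplified stand-in for TableQuestionAnsweringOutputElement."""
    answer: str
    coordinates: List[List[int]]
    cells: List[str]
    aggregator: str


class StubAsyncClient:
    """Hypothetical stand-in for AsyncInferenceClient, used only to show
    the keyword-argument call shape introduced by this PR."""

    async def table_question_answering(
        self,
        table: Dict[str, List[str]],
        query: str,
        *,
        model: Optional[str] = None,
        padding: Optional[str] = None,
        sequential: Optional[bool] = None,
        truncation: Optional[bool] = None,
    ) -> TQAOutput:
        # A real client would POST to the endpoint; here we echo a fixed
        # answer mirroring the docstring example.
        return TQAOutput("36542", [[0, 1]], ["36542"], "AVERAGE")


async def main() -> str:
    client = StubAsyncClient()
    out = await client.table_question_answering(
        table={"Repository": ["Transformers"], "Stars": ["36542"]},
        query="How many stars does the transformers repository have?",
        truncation=True,
    )
    return out.answer


print(asyncio.run(main()))  # 36542
```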
2 changes: 2 additions & 0 deletions src/huggingface_hub/inference/_generated/types/__init__.py
@@ -101,9 +101,11 @@
SummarizationTruncationStrategy,
)
from .table_question_answering import (
Padding,
TableQuestionAnsweringInput,
TableQuestionAnsweringInputData,
TableQuestionAnsweringOutputElement,
TableQuestionAnsweringParameters,
)
from .text2text_generation import (
Text2TextGenerationInput,
@@ -4,7 +4,7 @@
# - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
# - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from typing import Dict, List, Literal, Optional

from .base import BaseInferenceType

@@ -19,13 +19,31 @@ class TableQuestionAnsweringInputData(BaseInferenceType):
"""The table to serve as context for the questions"""


Padding = Literal["do_not_pad", "longest", "max_length"]


@dataclass
class TableQuestionAnsweringParameters(BaseInferenceType):
"""Additional inference parameters for Table Question Answering"""

padding: Optional["Padding"] = None
"""Activates and controls padding."""
sequential: Optional[bool] = None
"""Whether to do inference sequentially or as a batch. Batching is faster, but models like
SQA require the inference to be done sequentially to extract relations within sequences,
given their conversational nature.
"""
truncation: Optional[bool] = None
"""Activates and controls truncation."""


@dataclass
class TableQuestionAnsweringInput(BaseInferenceType):
"""Inputs for Table Question Answering inference"""

inputs: TableQuestionAnsweringInputData
"""One (table, question) pair to answer"""
parameters: Optional[Dict[str, Any]] = None
parameters: Optional[TableQuestionAnsweringParameters] = None
"""Additional inference parameters for Table Question Answering"""


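The generated types above can be sketched standalone. This mirrors the `Padding` literal and `TableQuestionAnsweringParameters` dataclass from the diff, minus the `BaseInferenceType` base class (an assumption made here for self-containment; the real base class provides parsing helpers):

```python
from dataclasses import asdict, dataclass
from typing import Literal, Optional

# Allowed padding strategies, as a typing.Literal rather than an enum.
Padding = Literal["do_not_pad", "longest", "max_length"]


@dataclass
class TableQuestionAnsweringParameters:
    """Additional inference parameters for Table Question Answering."""

    padding: Optional[Padding] = None
    sequential: Optional[bool] = None
    truncation: Optional[bool] = None


params = TableQuestionAnsweringParameters(padding="max_length", sequential=True)
# Serialize for a request body, dropping unset fields.
body = {k: v for k, v in asdict(params).items() if v is not None}
print(body)  # {'padding': 'max_length', 'sequential': True}
```

Typing the `parameters` field as `TableQuestionAnsweringParameters` instead of `Dict[str, Any]` is what lets static checkers catch an invalid padding value such as `"pad_to_max"` at analysis time.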
1 change: 0 additions & 1 deletion utils/check_task_parameters.py
@@ -68,7 +68,6 @@
"audio_to_audio",
"feature_extraction",
"sentence_similarity",
"table_question_answering",
"automatic_speech_recognition",
"image_to_text",
]