diff --git a/README.md b/README.md
index 67388fb7a..8618e496e 100644
--- a/README.md
+++ b/README.md
@@ -150,8 +150,8 @@ To stay up-to-date on our [public roadmap](https://github.com/orgs/i-am-bee/proj
BeeAI framework is open-source and we ❤️ contributions.
To help build BeeAI, take a look at our:
-- [Python contribution guidelines](/python/docs/CONTRIBUTING.md)
-- [TypeScript contribution guidelines](/typescript/docs/CONTRIBUTING.md)
+- [Python contribution guidelines](/python/CONTRIBUTING.md)
+- [TypeScript contribution guidelines](/typescript/CONTRIBUTING.md)
## Bugs
diff --git a/python/.env.example b/python/.env.example
index dd242c28d..870bebb08 100644
--- a/python/.env.example
+++ b/python/.env.example
@@ -38,3 +38,20 @@ BEEAI_LOG_LEVEL=INFO
# XAI_API_KEY=your-xai-api-key
# XAI_CHAT_MODEL=grok-2
+
+########################
+### Vertex AI specific configuration
+########################
+
+# VERTEXAI_CHAT_MODEL=gemini-2.0-flash-lite-001
+# VERTEXAI_PROJECT=""
+# VERTEXAI_ENDPOINT=""
+
+########################
+### Amazon Bedrock specific configuration
+########################
+
+# AWS_ACCESS_KEY_ID=
+# AWS_SECRET_ACCESS_KEY=
+# AWS_REGION_NAME=
+# AWS_CHAT_MODEL=
diff --git a/python/CHANGELOG.md b/python/CHANGELOG.md
index b430b3b1d..9e1fc32a7 100644
--- a/python/CHANGELOG.md
+++ b/python/CHANGELOG.md
@@ -1,3 +1,37 @@
+## python_v0.1.4 (2025-03-06)
+
+### Refactor
+
+- rename Bee agent to ReAct agent (#505)
+- move logger to the root (#504)
+- update user-facing event data to all be dict and add docs (#431)
+- **agents**: remove Bee branding from BaseAgent (#440)
+
+### Bug Fixes
+
+- improve decorated tool output (#499)
+- **backend**: correctly merge inference parameters (#496)
+- **backend**: tool calling, unify message content (#475)
+- **backend**: correctly merge inference parameters (#486)
+- **tools**: make emitter required (#461)
+- **workflows**: handle relative steps (#463)
+
+### Features
+
+- **adapters**: add Amazon Bedrock support (#466)
+- **examples**: adds logger examples and updates docs (#494)
+- **internals**: construct Pydantic model from JSON Schema (#502)
+- **adapters**: Add Google VertexAI support (#469)
+- **tools**: add MCP tool (#481)
+- langchain tool (#474)
+- **examples**: templates examples ts parity (#480)
+- **examples**: adds error examples and updates error docs (#490)
+- **agents**: simplify variable usage in prompt templates (#484)
+- improve PromptTemplate.render API (#476)
+- **examples**: Add custom_agent and bee_advanced examples (#462)
+- **agents**: handle message formatting (#470)
+- **adapters**: Add xAI backend (#445) (#446)
+
## python_v0.1.3 (2025-03-03)
### Features
diff --git a/python/README.md b/python/README.md
index bd5c89c97..98c1b13fb 100644
--- a/python/README.md
+++ b/python/README.md
@@ -73,7 +73,7 @@ import traceback
from pydantic import ValidationError
-from beeai_framework.agents.bee.agent import AgentExecutionConfig
+from beeai_framework.agents.react.agent import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import UserMessage
from beeai_framework.memory import UnconstrainedMemory
diff --git a/python/beeai_framework/__init__.py b/python/beeai_framework/__init__.py
index 061477e9a..dbb8bd4c2 100644
--- a/python/beeai_framework/__init__.py
+++ b/python/beeai_framework/__init__.py
@@ -14,7 +14,7 @@
from beeai_framework.agents import BaseAgent
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.backend import (
AssistantMessage,
CustomMessage,
@@ -40,13 +40,13 @@
"AssistantMessage",
"BaseAgent",
"BaseMemory",
- "BeeAgent",
"CustomMessage",
"LoggerError",
"Message",
"OpenMeteoTool",
"Prompt",
"PromptTemplateError",
+ "ReActAgent",
"ReadOnlyMemory",
"Role",
"Serializable",
diff --git a/python/beeai_framework/agents/bee/__init__.py b/python/beeai_framework/adapters/amazon_bedrock/__init__.py
similarity index 88%
rename from python/beeai_framework/agents/bee/__init__.py
rename to python/beeai_framework/adapters/amazon_bedrock/__init__.py
index 7d9c878e1..84fdda152 100644
--- a/python/beeai_framework/agents/bee/__init__.py
+++ b/python/beeai_framework/adapters/amazon_bedrock/__init__.py
@@ -13,6 +13,3 @@
# limitations under the License.
-from beeai_framework.agents.bee.agent import BeeAgent
-
-__all__ = ["BeeAgent"]
diff --git a/python/beeai_framework/adapters/amazon_bedrock/backend/README.md b/python/beeai_framework/adapters/amazon_bedrock/backend/README.md
new file mode 100644
index 000000000..185baf145
--- /dev/null
+++ b/python/beeai_framework/adapters/amazon_bedrock/backend/README.md
@@ -0,0 +1,54 @@
+# Amazon Bedrock
+
+## Configuration
+
+Set the following environment variables:
+
+* AWS_ACCESS_KEY_ID
+* AWS_SECRET_ACCESS_KEY
+* AWS_REGION_NAME
+
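+With these set, the chat model can be constructed directly. A minimal sketch, assuming the framework's `await llm.create(messages=...)` call shape and one of the tested model ids listed below:
+
+```python
+import asyncio
+
+from beeai_framework.adapters.amazon_bedrock.backend.chat import AmazonBedrockChatModel
+from beeai_framework.backend.message import UserMessage
+
+
+async def main() -> None:
+    # Credentials and region are read from the environment variables above.
+    llm = AmazonBedrockChatModel("meta.llama3-8b-instruct-v1:0")
+    output = await llm.create(messages=[UserMessage("Hello!")])
+    print(output.get_text_content())
+
+
+asyncio.run(main())
+```
+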
+## Tested models
+
+Only Meta, Mistral, and Amazon Titan serverless models have been tested.
+
+Other models should work, as beeai_framework uses LiteLLM. See the [LiteLLM docs](https://docs.litellm.ai/docs/providers/bedrock) for more information.
+
+## Known issues with tool use and structured output
+
+The following models report that tool use is not supported:
+
+```text
+litellm.llms.bedrock.common_utils.BedrockError: {"message":"This model doesn't support tool use."}
+```
+
+* `meta.llama3-70b-instruct-v1:0`
+* `meta.llama3-8b-instruct-v1:0`
+
+The following models fail to return structured output with beeai_framework. Initial investigation indicates that these models do not respond with structured JSON output when requested:
+
+* `amazon.titan-text-express-v1`
+* `amazon.titan-text-lite-v1`
+* `mistral.mistral-7b-instruct-v0:2`
+* `mistral.mixtral-8x7b-instruct-v0:1`
+* `mistral.mistral-large-2402-v1:0`
+
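+For reference, structured output is requested through `ChatModel.create_structure`, which backs the `_create_structure` hook touched in this changeset. A hedged sketch (the exact keyword names are assumptions):
+
+```python
+from pydantic import BaseModel
+
+from beeai_framework.backend.message import UserMessage
+
+
+class City(BaseModel):
+    name: str
+    country: str
+
+
+async def demo(llm) -> None:
+    # The models listed above do not honor this schema-constrained request.
+    result = await llm.create_structure(schema=City, messages=[UserMessage("Name a city.")])
+    print(result.object)
+```
+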
+The following models fail with an exception:
+
+```text
+litellm.exceptions.BadRequestError: litellm.BadRequestError: BedrockException - {"message":"This model doesn't support the toolConfig.toolChoice.tool field. Remove toolConfig.toolChoice.tool and try again."}
+```
+
+* `mistral.mistral-large-2402-v1:0`
+
+## Quota limits
+
+Default quota limits on Amazon Bedrock are low and can cause even simple examples to fail with:
+
+```text
+litellm.exceptions.RateLimitError: litellm.RateLimitError: BedrockException - {"message":"Too many requests, please wait before trying again."}
+```
+
+To increase quota limits, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/) and
+[Amazon Bedrock quotas](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html).
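+
+Until a quota increase is granted, a simple mitigation is to back off and retry. A hedged sketch (the helper below is hypothetical, not part of beeai_framework):
+
+```python
+import asyncio
+from collections.abc import Awaitable, Callable
+from typing import TypeVar
+
+from litellm.exceptions import RateLimitError
+
+T = TypeVar("T")
+
+
+async def call_with_backoff(fn: Callable[[], Awaitable[T]], retries: int = 5) -> T:
+    """Retry `fn` with exponential backoff when Bedrock rate-limits the request."""
+    for attempt in range(retries):
+        try:
+            return await fn()
+        except RateLimitError:
+            await asyncio.sleep(2**attempt)
+    raise RuntimeError("still rate-limited after all retries")
+```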
diff --git a/python/beeai_framework/adapters/amazon_bedrock/backend/__init__.py b/python/beeai_framework/adapters/amazon_bedrock/backend/__init__.py
new file mode 100644
index 000000000..84fdda152
--- /dev/null
+++ b/python/beeai_framework/adapters/amazon_bedrock/backend/__init__.py
@@ -0,0 +1,15 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
diff --git a/python/beeai_framework/adapters/amazon_bedrock/backend/chat.py b/python/beeai_framework/adapters/amazon_bedrock/backend/chat.py
new file mode 100644
index 000000000..e08501b72
--- /dev/null
+++ b/python/beeai_framework/adapters/amazon_bedrock/backend/chat.py
@@ -0,0 +1,63 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+
+from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
+from beeai_framework.backend.constants import ProviderName
+from beeai_framework.logger import Logger
+
+logger = Logger(__name__)
+
+
+class AmazonBedrockChatModel(LiteLLMChatModel):
+ @property
+ def provider_id(self) -> ProviderName:
+ return "amazon_bedrock"
+
+ def __init__(self, model_id: str | None = None, settings: dict | None = None) -> None:
+ _settings = settings.copy() if settings is not None else {}
+
+ aws_access_key_id = _settings.get("aws_access_key_id", os.getenv("AWS_ACCESS_KEY_ID"))
+ if not aws_access_key_id:
+ raise ValueError(
+ "Access key is required for Amazon Bedrock model. Specify *aws_access_key_id* "
+ + "or set AWS_ACCESS_KEY_ID environment variable"
+ )
+
+ aws_secret_access_key = _settings.get("aws_secret_access_key", os.getenv("AWS_SECRET_ACCESS_KEY"))
+ if not aws_secret_access_key:
+ raise ValueError(
+ "Secret key is required for Amazon Bedrock model. Specify *aws_secret_access_key* "
+ + "or set AWS_SECRET_ACCESS_KEY environment variable"
+ )
+
+ aws_region_name = _settings.get("aws_region_name", os.getenv("AWS_REGION_NAME"))
+ if not aws_region_name:
+ raise ValueError(
+ "Region is required for Amazon Bedrock model. Specify *aws_region_name* "
+ + "or set AWS_REGION_NAME environment variable"
+ )
+
+ super().__init__(
+ (model_id if model_id else os.getenv("AWS_CHAT_MODEL", "meta.llama3-8b-instruct-v1:0")),
+ provider_id="bedrock",
+ settings=_settings
+ | {
+ "aws_access_key_id": aws_access_key_id,
+ "aws_secret_access_key": aws_secret_access_key,
+ "aws_region_name": aws_region_name,
+ },
+ )
diff --git a/python/beeai_framework/adapters/groq/backend/chat.py b/python/beeai_framework/adapters/groq/backend/chat.py
index 9f93b3010..9fd621257 100644
--- a/python/beeai_framework/adapters/groq/backend/chat.py
+++ b/python/beeai_framework/adapters/groq/backend/chat.py
@@ -17,9 +17,9 @@
from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class GroqChatModel(LiteLLMChatModel):
diff --git a/python/beeai_framework/adapters/langchain/__init__.py b/python/beeai_framework/adapters/langchain/__init__.py
new file mode 100644
index 000000000..84fdda152
--- /dev/null
+++ b/python/beeai_framework/adapters/langchain/__init__.py
@@ -0,0 +1,15 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
diff --git a/python/beeai_framework/adapters/langchain/tools.py b/python/beeai_framework/adapters/langchain/tools.py
new file mode 100644
index 000000000..fcada6fe7
--- /dev/null
+++ b/python/beeai_framework/adapters/langchain/tools.py
@@ -0,0 +1,78 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from typing import Any, TypeVar
+
+from langchain_core.callbacks import AsyncCallbackManagerForToolRun
+from langchain_core.runnables import RunnableConfig
+from langchain_core.tools import StructuredTool
+from langchain_core.tools import Tool as LangChainSimpleTool
+from pydantic import BaseModel, ConfigDict
+
+from beeai_framework.context import RunContext
+from beeai_framework.emitter.emitter import Emitter
+from beeai_framework.tools.tool import StringToolOutput, Tool, ToolRunOptions
+from beeai_framework.utils.strings import to_safe_word
+
+
+class LangChainToolRunOptions(ToolRunOptions):
+ langchain_runnable_config: RunnableConfig | None = None
+ model_config = ConfigDict(extra="allow", arbitrary_types_allowed=True)
+
+
+T = TypeVar("T", bound=BaseModel)
+
+
+class LangChainTool(Tool[T, LangChainToolRunOptions, StringToolOutput]):
+ @property
+ def name(self) -> str:
+ return self._tool.name
+
+ @property
+ def description(self) -> str:
+ return self._tool.description
+
+ @property
+ def input_schema(self) -> type[T]:
+ return self._tool.input_schema
+
+ def _create_emitter(self) -> Emitter:
+ return Emitter.root().child(
+ namespace=["tool", "langchain", to_safe_word(self._tool.name)],
+ creator=self,
+ )
+
+ def __init__(self, tool: StructuredTool | LangChainSimpleTool, options: dict[str, Any] | None = None) -> None:
+ super().__init__(options)
+ self._tool = tool
+
+ async def _run(self, input: T, options: LangChainToolRunOptions | None, context: RunContext) -> StringToolOutput:
+ langchain_runnable_config = options.langchain_runnable_config or {} if options else {}
+ args = (
+ input if isinstance(input, dict) else input.model_dump(),
+ {
+ **langchain_runnable_config,
+ "signal": context.signal or None if context else None,
+ },
+ )
+ is_async = (isinstance(self._tool, StructuredTool) and self._tool.coroutine) or (
+ isinstance(args[0].get("run_manager"), AsyncCallbackManagerForToolRun)
+ )
+ if is_async:
+ response = await self._tool.ainvoke(*args)
+ else:
+ response = self._tool.invoke(*args)
+
+ return StringToolOutput(result=str(response))
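+
+
+# Usage sketch (illustrative only): wrap a LangChain tool so a beeai agent can
+# call it. Assumes a @tool-decorated function from langchain_core.tools; the
+# run() call mirrors how DefaultRunner invokes tools.
+#
+#   from langchain_core.tools import tool
+#
+#   @tool
+#   def multiply(a: int, b: int) -> int:
+#       """Multiply two integers."""
+#       return a * b
+#
+#   wrapped = LangChainTool(multiply)
+#   result = await wrapped.run({"a": 6, "b": 7}, options=None)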
diff --git a/python/beeai_framework/adapters/litellm/chat.py b/python/beeai_framework/adapters/litellm/chat.py
index 62c6fa070..410b40406 100644
--- a/python/beeai_framework/adapters/litellm/chat.py
+++ b/python/beeai_framework/adapters/litellm/chat.py
@@ -16,7 +16,6 @@
import logging
from abc import ABC
from collections.abc import AsyncGenerator
-from typing import Any
import litellm
from litellm import (
@@ -26,7 +25,6 @@
get_supported_openai_params,
)
from litellm.types.utils import StreamingChoices
-from pydantic import BaseModel, ConfigDict
from beeai_framework.backend.chat import (
ChatModel,
@@ -43,18 +41,10 @@
)
from beeai_framework.backend.utils import parse_broken_json
from beeai_framework.context import RunContext
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
+from beeai_framework.utils.dicts import exclude_keys, exclude_none, include_keys
-logger = BeeLogger(__name__)
-
-
-class LiteLLMParameters(BaseModel):
- model: str
- messages: list[dict[str, Any]]
- tools: list[dict[str, Any]] | None = None
- response_format: dict[str, Any] | type[BaseModel] | None = None
-
- model_config = ConfigDict(extra="allow", arbitrary_types_allowed=True)
+logger = Logger(__name__)
class LiteLLMChatModel(ChatModel, ABC):
@@ -85,18 +75,15 @@ async def _create(
input: ChatModelInput,
run: RunContext,
) -> ChatModelOutput:
- litellm_input = self._transform_input(input)
- response = await acompletion(**litellm_input.model_dump())
+ litellm_input = self._transform_input(input) | {"stream": False}
+ response = await acompletion(**litellm_input)
response_output = self._transform_output(response)
logger.debug(f"Inference response output:\n{response_output}")
return response_output
async def _create_stream(self, input: ChatModelInput, _: RunContext) -> AsyncGenerator[ChatModelOutput]:
- # TODO: handle tool calling for streaming
- litellm_input = self._transform_input(input)
- parameters = litellm_input.model_dump()
- parameters["stream"] = True
- response = await acompletion(**parameters)
+ litellm_input = self._transform_input(input) | {"stream": True}
+ response = await acompletion(**litellm_input)
is_empty = True
async for chunk in response:
@@ -128,7 +115,7 @@ async def _create_structure(self, input: ChatModelStructureInput, run: RunContex
# TODO: validate result matches expected schema
return ChatModelStructureOutput(object=result)
- def _transform_input(self, input: ChatModelInput) -> LiteLLMParameters:
+ def _transform_input(self, input: ChatModelInput) -> dict:
messages: list[dict] = []
for message in input.messages:
if isinstance(message, ToolMessage):
@@ -160,16 +147,22 @@ def _transform_input(self, input: ChatModelInput) -> LiteLLMParameters:
else None
)
- params = (
- self._settings
- | self.parameters.model_dump(exclude_unset=True)
- | input.model_dump(exclude={"model", "messages", "tools"})
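+ # exclude_keys/include_keys (from beeai_framework.utils.dicts) drop or keep
+ # entries by key set; settings gathers provider kwargs that are not inference parameters.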
+ settings = exclude_keys(
+ self._settings | input.model_dump(exclude_unset=True),
+ {*self.supported_params, "abort_signal", "model", "messages", "tools"},
)
- return LiteLLMParameters(
- model=f"{self._litellm_provider_id}/{self.model_id}",
- messages=messages,
- tools=tools,
- **params,
+ params = include_keys(
+ input.model_dump(exclude_none=True) # get all parameters with default values
+ | self._settings # get constructor overrides
+ | self.parameters.model_dump(exclude_unset=True) # get default parameters
+ | input.model_dump(exclude_none=True, exclude_unset=True), # get custom manually set parameters
+ set(self.supported_params),
+ )
+
+ return (
+ exclude_none(settings)
+ | exclude_none(params)
+ | {"model": f"{self._litellm_provider_id}/{self.model_id}", "messages": messages, "tools": tools}
)
def _transform_output(self, chunk: ModelResponse | ModelResponseStream) -> ChatModelOutput:
@@ -179,20 +172,24 @@ def _transform_output(self, chunk: ModelResponse | ModelResponseStream) -> ChatM
update = choice.delta if isinstance(choice, StreamingChoices) else choice.message
return ChatModelOutput(
- messages=[
- AssistantMessage(
- [
- MessageToolCallContent(
- id=call.id or "dummy_id", tool_name=call.function.name, args=call.function.arguments
+ messages=(
+ [
+ (
+ AssistantMessage(
+ [
+ MessageToolCallContent(
+ id=call.id or "dummy_id", tool_name=call.function.name, args=call.function.arguments
+ )
+ for call in update.tool_calls
+ ]
)
- for call in update.tool_calls
- ]
- )
- if update.tool_calls
- else AssistantMessage(update.content)
- ]
- if update.model_dump(exclude_none=True)
- else [],
+ if update.tool_calls
+ else AssistantMessage(update.content)
+ )
+ ]
+ if update.model_dump(exclude_none=True)
+ else []
+ ),
finish_reason=finish_reason,
usage=usage,
)
diff --git a/python/beeai_framework/adapters/ollama/backend/chat.py b/python/beeai_framework/adapters/ollama/backend/chat.py
index b5985132a..3ec3eb329 100644
--- a/python/beeai_framework/adapters/ollama/backend/chat.py
+++ b/python/beeai_framework/adapters/ollama/backend/chat.py
@@ -17,9 +17,9 @@
from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class OllamaChatModel(LiteLLMChatModel):
diff --git a/python/beeai_framework/adapters/openai/backend/chat.py b/python/beeai_framework/adapters/openai/backend/chat.py
index e2be97677..f15c3f73b 100644
--- a/python/beeai_framework/adapters/openai/backend/chat.py
+++ b/python/beeai_framework/adapters/openai/backend/chat.py
@@ -17,9 +17,9 @@
from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class OpenAIChatModel(LiteLLMChatModel):
diff --git a/python/beeai_framework/adapters/vertexai/__init__.py b/python/beeai_framework/adapters/vertexai/__init__.py
new file mode 100644
index 000000000..84fdda152
--- /dev/null
+++ b/python/beeai_framework/adapters/vertexai/__init__.py
@@ -0,0 +1,15 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
diff --git a/python/beeai_framework/adapters/vertexai/backend/__init__.py b/python/beeai_framework/adapters/vertexai/backend/__init__.py
new file mode 100644
index 000000000..84fdda152
--- /dev/null
+++ b/python/beeai_framework/adapters/vertexai/backend/__init__.py
@@ -0,0 +1,15 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
diff --git a/python/beeai_framework/adapters/vertexai/backend/chat.py b/python/beeai_framework/adapters/vertexai/backend/chat.py
new file mode 100644
index 000000000..278636bbe
--- /dev/null
+++ b/python/beeai_framework/adapters/vertexai/backend/chat.py
@@ -0,0 +1,50 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+
+from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
+from beeai_framework.backend.constants import ProviderName
+from beeai_framework.logger import Logger
+
+logger = Logger(__name__)
+
+
+class VertexAIChatModel(LiteLLMChatModel):
+ @property
+ def provider_id(self) -> ProviderName:
+ return "vertexai"
+
+ def __init__(self, model_id: str | None = None, settings: dict | None = None) -> None:
+ _settings = settings.copy() if settings is not None else {}
+
+ vertexai_project = _settings.get("vertexai_project", os.getenv("VERTEXAI_PROJECT"))
+ if not vertexai_project:
+ raise ValueError(
+ "Project ID is required for Vertex AI model. Specify *vertexai_project* "
+ + "or set VERTEXAI_PROJECT environment variable"
+ )
+
+ # Ensure standard google auth credentials are available
+ # Set GOOGLE_APPLICATION_CREDENTIALS / GOOGLE_CREDENTIALS / GOOGLE_APPLICATION_CREDENTIALS_JSON
+
+ super().__init__(
+ model_id if model_id else os.getenv("VERTEXAI_CHAT_MODEL", "gemini-2.0-flash-lite-001"),
+ provider_id="vertex_ai",
+ settings=_settings
+ | {
+ "vertex_project": vertexai_project,
+ },
+ )
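+
+
+# Usage sketch (illustrative only): assumes VERTEXAI_PROJECT is set and standard
+# Google application credentials are available.
+#
+#   llm = VertexAIChatModel("gemini-2.0-flash-lite-001")
+#   output = await llm.create(messages=[UserMessage("Hello!")])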
diff --git a/python/beeai_framework/adapters/watsonx/backend/chat.py b/python/beeai_framework/adapters/watsonx/backend/chat.py
index 27d724d04..6812f04b5 100644
--- a/python/beeai_framework/adapters/watsonx/backend/chat.py
+++ b/python/beeai_framework/adapters/watsonx/backend/chat.py
@@ -15,13 +15,11 @@
import os
-from beeai_framework.adapters.litellm.chat import LiteLLMChatModel, LiteLLMParameters
-from beeai_framework.backend.chat import ChatModelInput
+from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
-from beeai_framework.backend.message import ToolMessage
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class WatsonxChatModel(LiteLLMChatModel):
@@ -37,26 +35,3 @@ def __init__(self, model_id: str | None = None, settings: dict | None = None) ->
provider_id="watsonx",
settings=_settings,
)
-
- def _transform_input(self, input: ChatModelInput) -> LiteLLMParameters:
- params = super()._transform_input(input)
-
- messages_list = []
- for message in input.messages:
- if isinstance(message, ToolMessage):
- messages_list.extend(
- [
- {
- "role": "tool",
- "name": content.tool_name,
- "content": content.result,
- "tool_call_id": content.tool_call_id,
- }
- for content in message.content
- ]
- )
- else:
- messages_list.append(message.to_plain())
-
- params.messages = messages_list
- return params
diff --git a/python/beeai_framework/adapters/xai/backend/chat.py b/python/beeai_framework/adapters/xai/backend/chat.py
index 824ef95cf..cf372e8b1 100644
--- a/python/beeai_framework/adapters/xai/backend/chat.py
+++ b/python/beeai_framework/adapters/xai/backend/chat.py
@@ -17,9 +17,9 @@
from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class XAIChatModel(LiteLLMChatModel):
diff --git a/python/beeai_framework/agents/__init__.py b/python/beeai_framework/agents/__init__.py
index 55b780337..02c50d53f 100644
--- a/python/beeai_framework/agents/__init__.py
+++ b/python/beeai_framework/agents/__init__.py
@@ -15,5 +15,6 @@
from beeai_framework.agents.base import BaseAgent
from beeai_framework.agents.errors import AgentError
+from beeai_framework.agents.types import AgentExecutionConfig, AgentMeta
-__all__ = ["AgentError", "BaseAgent"]
+__all__ = ["AgentError", "AgentExecutionConfig", "AgentMeta", "BaseAgent"]
diff --git a/python/beeai_framework/agents/react/__init__.py b/python/beeai_framework/agents/react/__init__.py
new file mode 100644
index 000000000..5e68b5ed0
--- /dev/null
+++ b/python/beeai_framework/agents/react/__init__.py
@@ -0,0 +1,18 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from beeai_framework.agents.react.agent import ReActAgent, ReActAgentInput, ReActAgentRunInput
+
+__all__ = ["ReActAgent", "ReActAgentInput", "ReActAgentRunInput"]
diff --git a/python/beeai_framework/agents/bee/agent.py b/python/beeai_framework/agents/react/agent.py
similarity index 80%
rename from python/beeai_framework/agents/bee/agent.py
rename to python/beeai_framework/agents/react/agent.py
index 804cb8e06..3d4fea442 100644
--- a/python/beeai_framework/agents/bee/agent.py
+++ b/python/beeai_framework/agents/react/agent.py
@@ -17,24 +17,26 @@
from datetime import UTC, datetime
from beeai_framework.agents.base import BaseAgent
-from beeai_framework.agents.runners.base import (
+from beeai_framework.agents.react.runners.base import (
BaseRunner,
- BeeRunnerToolInput,
- BeeRunnerToolResult,
- RunnerIteration,
+ ReActAgentRunnerIteration,
+ ReActAgentRunnerToolInput,
+ ReActAgentRunnerToolResult,
+)
+from beeai_framework.agents.react.runners.default.runner import DefaultRunner
+from beeai_framework.agents.react.runners.granite.runner import GraniteRunner
+from beeai_framework.agents.react.types import (
+ ModelKeysType,
+ ReActAgentInput,
+ ReActAgentRunInput,
+ ReActAgentRunOptions,
+ ReActAgentRunOutput,
+ ReActAgentTemplateFactory,
+ ReActAgentTemplates,
)
-from beeai_framework.agents.runners.default.runner import DefaultRunner
-from beeai_framework.agents.runners.granite.runner import GraniteRunner
from beeai_framework.agents.types import (
AgentExecutionConfig,
AgentMeta,
- BeeAgentTemplates,
- BeeInput,
- BeeRunInput,
- BeeRunOptions,
- BeeRunOutput,
- BeeTemplateFactory,
- ModelKeysType,
)
from beeai_framework.backend import Message
from beeai_framework.backend.chat import ChatModel
@@ -46,7 +48,7 @@
from beeai_framework.utils.models import ModelLike, to_model, to_model_optional
-class BeeAgent(BaseAgent[BeeRunOutput]):
+class ReActAgent(BaseAgent[ReActAgentRunOutput]):
runner: Callable[..., BaseRunner]
def __init__(
@@ -55,11 +57,11 @@ def __init__(
tools: list[Tool],
memory: BaseMemory,
meta: AgentMeta | None = None,
- templates: dict[ModelKeysType, BeeAgentTemplates | BeeTemplateFactory] | None = None,
+ templates: dict[ModelKeysType, ReActAgentTemplates | ReActAgentTemplateFactory] | None = None,
execution: AgentExecutionConfig | None = None,
stream: bool | None = None,
) -> None:
- self.input = BeeInput(
+ self.input = ReActAgentInput(
llm=llm, tools=tools, memory=memory, meta=meta, templates=templates, execution=execution, stream=stream
)
if "granite" in self.input.llm.model_id:
@@ -67,7 +69,7 @@ def __init__(
else:
self.runner = DefaultRunner
self.emitter = Emitter.root().child(
- namespace=["agent", "bee"],
+ namespace=["agent", "react"],
creator=self,
)
@@ -96,7 +98,7 @@ def meta(self) -> AgentMeta:
extra_description.append(f"Tool ${tool.name}': ${tool.description}.")
return AgentMeta(
- name="Bee",
+ name="ReAct",
tools=tools,
description="The BeeAI framework demonstrates its ability to auto-correct and adapt in real-time, improving"
" the overall reliability and resilience of the system.",
@@ -104,17 +106,20 @@ def meta(self) -> AgentMeta:
)
async def _run(
- self, run_input: ModelLike[BeeRunInput], options: ModelLike[BeeRunOptions] | None, context: RunContext
- ) -> BeeRunOutput:
- run_input = to_model(BeeRunInput, run_input)
- options = to_model_optional(BeeRunOptions, options)
+ self,
+ run_input: ModelLike[ReActAgentRunInput],
+ options: ModelLike[ReActAgentRunOptions] | None,
+ context: RunContext,
+ ) -> ReActAgentRunOutput:
+ run_input = to_model(ReActAgentRunInput, run_input)
+ options = to_model_optional(ReActAgentRunOptions, options)
runner = self.runner(
self.input,
(
options
if options
- else BeeRunOptions(
+ else ReActAgentRunOptions(
execution=self.input.execution
or (options.execution if options is not None else None)
or AgentExecutionConfig(
@@ -131,13 +136,13 @@ async def _run(
final_message: Message | None = None
while not final_message:
- iteration: RunnerIteration = await runner.create_iteration()
+ iteration: ReActAgentRunnerIteration = await runner.create_iteration()
if iteration.state.tool_name and iteration.state.tool_input is not None:
iteration.state.final_answer = None
- tool_result: BeeRunnerToolResult = await runner.tool(
- input=BeeRunnerToolInput(
+ tool_result: ReActAgentRunnerToolResult = await runner.tool(
+ input=ReActAgentRunnerToolInput(
state=iteration.state,
emitter=iteration.emitter,
meta=iteration.meta,
@@ -193,4 +198,4 @@ async def _run(
await self.input.memory.add(final_message)
- return BeeRunOutput(result=final_message, iterations=runner.iterations, memory=runner.memory)
+ return ReActAgentRunOutput(result=final_message, iterations=runner.iterations, memory=runner.memory)
diff --git a/python/beeai_framework/agents/react/runners/__init__.py b/python/beeai_framework/agents/react/runners/__init__.py
new file mode 100644
index 000000000..1857180f7
--- /dev/null
+++ b/python/beeai_framework/agents/react/runners/__init__.py
@@ -0,0 +1,14 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/python/beeai_framework/agents/runners/base.py b/python/beeai_framework/agents/react/runners/base.py
similarity index 64%
rename from python/beeai_framework/agents/runners/base.py
rename to python/beeai_framework/agents/react/runners/base.py
index 45610ffd3..59c968a38 100644
--- a/python/beeai_framework/agents/runners/base.py
+++ b/python/beeai_framework/agents/react/runners/base.py
@@ -17,15 +17,15 @@
from dataclasses import dataclass
from beeai_framework.agents import AgentError
-from beeai_framework.agents.types import (
- BeeAgentRunIteration,
- BeeAgentTemplates,
- BeeInput,
- BeeIterationResult,
- BeeMeta,
- BeeRunInput,
- BeeRunOptions,
- BeeTemplateFactory,
+from beeai_framework.agents.react.types import (
+ ReActAgentInput,
+ ReActAgentIterationMeta,
+ ReActAgentIterationResult,
+ ReActAgentRunInput,
+ ReActAgentRunIteration,
+ ReActAgentRunOptions,
+ ReActAgentTemplateFactory,
+ ReActAgentTemplates,
)
from beeai_framework.cancellation import AbortSignal
from beeai_framework.context import RunContext
@@ -37,40 +37,40 @@
@dataclass
-class BeeRunnerLLMInput:
- meta: BeeMeta
+class ReActAgentRunnerLLMInput:
+ meta: ReActAgentIterationMeta
signal: AbortSignal
emitter: Emitter
@dataclass
-class RunnerIteration:
+class ReActAgentRunnerIteration:
emitter: Emitter
- state: BeeIterationResult
- meta: BeeMeta
+ state: ReActAgentIterationResult
+ meta: ReActAgentIterationMeta
signal: AbortSignal
@dataclass
-class BeeRunnerToolResult:
+class ReActAgentRunnerToolResult:
output: ToolOutput
success: bool
@dataclass
-class BeeRunnerToolInput:
- state: BeeIterationResult # TODO BeeIterationToolResult
- meta: BeeMeta
+class ReActAgentRunnerToolInput:
+ state: ReActAgentIterationResult
+ meta: ReActAgentIterationMeta
signal: AbortSignal
emitter: Emitter
class BaseRunner(ABC):
- def __init__(self, input: BeeInput, options: BeeRunOptions, run: RunContext) -> None:
+ def __init__(self, input: ReActAgentInput, options: ReActAgentRunOptions, run: RunContext) -> None:
self._input = input
self._options = options
self._memory: BaseMemory | None = None
- self._iterations: list[BeeAgentRunIteration] = []
+ self._iterations: list[ReActAgentRunIteration] = []
self._failed_attempts_counter: RetryCounter = RetryCounter(
error_type=AgentError,
max_retries=(
@@ -85,7 +85,7 @@ def __init__(self, input: BeeInput, options: BeeRunOptions, run: RunContext) ->
self._run = run
@property
- def iterations(self) -> list[BeeAgentRunIteration]:
+ def iterations(self) -> list[ReActAgentRunIteration]:
return self._iterations
@property
@@ -94,8 +94,8 @@ def memory(self) -> BaseMemory:
return self._memory
raise Exception("Memory has not been initialized.")
- async def create_iteration(self) -> RunnerIteration:
- meta: BeeMeta = BeeMeta(iteration=len(self._iterations) + 1)
+ async def create_iteration(self) -> ReActAgentRunnerIteration:
+ meta: ReActAgentIterationMeta = ReActAgentIterationMeta(iteration=len(self._iterations) + 1)
max_iterations = (
self._options.execution.max_iterations
if self._options.execution and self._options.execution.max_iterations
@@ -106,40 +106,40 @@ async def create_iteration(self) -> RunnerIteration:
raise AgentError(f"Agent was not able to resolve the task in {max_iterations} iterations.")
emitter = self._run.emitter.child(group_id=f"iteration-{meta.iteration}")
- iteration = await self.llm(BeeRunnerLLMInput(emitter=emitter, signal=self._run.signal, meta=meta))
+ iteration = await self.llm(ReActAgentRunnerLLMInput(emitter=emitter, signal=self._run.signal, meta=meta))
self._iterations.append(iteration)
- return RunnerIteration(emitter=emitter, state=iteration.state, meta=meta, signal=self._run.signal)
+ return ReActAgentRunnerIteration(emitter=emitter, state=iteration.state, meta=meta, signal=self._run.signal)
- async def init(self, input: BeeRunInput) -> None:
+ async def init(self, input: ReActAgentRunInput) -> None:
self._memory = await self.init_memory(input)
@abstractmethod
- async def llm(self, input: BeeRunnerLLMInput) -> BeeAgentRunIteration:
+ async def llm(self, input: ReActAgentRunnerLLMInput) -> ReActAgentRunIteration:
pass
@abstractmethod
- async def tool(self, input: BeeRunnerToolInput) -> BeeRunnerToolResult:
+ async def tool(self, input: ReActAgentRunnerToolInput) -> ReActAgentRunnerToolResult:
pass
@abstractmethod
- def default_templates(self) -> BeeAgentTemplates:
+ def default_templates(self) -> ReActAgentTemplates:
pass
@abstractmethod
- async def init_memory(self, input: BeeRunInput) -> BaseMemory:
+ async def init_memory(self, input: ReActAgentRunInput) -> BaseMemory:
pass
@property
- def templates(self) -> BeeAgentTemplates:
+ def templates(self) -> ReActAgentTemplates:
overrides = self._input.templates or {}
templates = {}
for key, default_template in self.default_templates().model_dump().items():
- override: PromptTemplate | BeeTemplateFactory = overrides.get(key) or default_template
+ override: PromptTemplate | ReActAgentTemplateFactory = overrides.get(key) or default_template
if isinstance(override, PromptTemplate):
templates[key] = override
continue
templates[key] = override(default_template) or default_template
- return BeeAgentTemplates(**templates)
+ return ReActAgentTemplates(**templates)
# TODO: Serialization
diff --git a/python/beeai_framework/agents/react/runners/default/__init__.py b/python/beeai_framework/agents/react/runners/default/__init__.py
new file mode 100644
index 000000000..1857180f7
--- /dev/null
+++ b/python/beeai_framework/agents/react/runners/default/__init__.py
@@ -0,0 +1,14 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/python/beeai_framework/agents/runners/default/prompts.py b/python/beeai_framework/agents/react/runners/default/prompts.py
similarity index 100%
rename from python/beeai_framework/agents/runners/default/prompts.py
rename to python/beeai_framework/agents/react/runners/default/prompts.py
diff --git a/python/beeai_framework/agents/runners/default/runner.py b/python/beeai_framework/agents/react/runners/default/runner.py
similarity index 91%
rename from python/beeai_framework/agents/runners/default/runner.py
rename to python/beeai_framework/agents/react/runners/default/runner.py
index 0f933a3b9..b7ace2f47 100644
--- a/python/beeai_framework/agents/runners/default/runner.py
+++ b/python/beeai_framework/agents/react/runners/default/runner.py
@@ -17,13 +17,13 @@
from pydantic import BaseModel
-from beeai_framework.agents.runners.base import (
+from beeai_framework.agents.react.runners.base import (
BaseRunner,
- BeeRunnerLLMInput,
- BeeRunnerToolInput,
- BeeRunnerToolResult,
+ ReActAgentRunnerLLMInput,
+ ReActAgentRunnerToolInput,
+ ReActAgentRunnerToolResult,
)
-from beeai_framework.agents.runners.default.prompts import (
+from beeai_framework.agents.react.runners.default.prompts import (
AssistantPromptTemplate,
SchemaErrorTemplate,
SchemaErrorTemplateInput,
@@ -39,11 +39,11 @@
UserPromptTemplate,
UserPromptTemplateInput,
)
-from beeai_framework.agents.types import (
- BeeAgentRunIteration,
- BeeAgentTemplates,
- BeeIterationResult,
- BeeRunInput,
+from beeai_framework.agents.react.types import (
+ ReActAgentIterationResult,
+ ReActAgentRunInput,
+ ReActAgentRunIteration,
+ ReActAgentTemplates,
)
from beeai_framework.backend.chat import ChatModelOutput
from beeai_framework.backend.message import AssistantMessage, SystemMessage, UserMessage
@@ -68,8 +68,8 @@
class DefaultRunner(BaseRunner):
use_native_tool_calling: bool = False
- def default_templates(self) -> BeeAgentTemplates:
- return BeeAgentTemplates(
+ def default_templates(self) -> ReActAgentTemplates:
+ return ReActAgentTemplates(
system=SystemPromptTemplate,
assistant=AssistantPromptTemplate,
user=UserPromptTemplate,
@@ -120,7 +120,7 @@ def create_parser(self) -> LinePrefixParser:
),
)
- async def llm(self, input: BeeRunnerLLMInput) -> BeeAgentRunIteration:
+ async def llm(self, input: ReActAgentRunnerLLMInput) -> ReActAgentRunIteration:
async def on_retry(ctx: RetryableContext, last_error: Exception) -> None:
await input.emitter.emit("retry", {"meta": input.meta})
@@ -135,7 +135,7 @@ async def on_error(error: Exception, _: RetryableContext) -> None:
schema_error_prompt: str = self.templates.schema_error.render(SchemaErrorTemplateInput())
await self.memory.add(UserMessage(schema_error_prompt, {"tempMessage": True}))
- async def executor(_: RetryableContext) -> BeeAgentRunIteration:
+ async def executor(_: RetryableContext) -> ReActAgentRunIteration:
await input.emitter.emit("start", {"meta": input.meta, "tools": self._input.tools, "memory": self.memory})
parser = self.create_parser()
@@ -204,8 +204,8 @@ async def on_new_token(data: dict[str, Any], event: EventMeta) -> None:
]
)
- return BeeAgentRunIteration(
- raw=output, state=BeeIterationResult.model_validate(parser.final_state, strict=False)
+ return ReActAgentRunIteration(
+ raw=output, state=ReActAgentIterationResult.model_validate(parser.final_state, strict=False)
)
if self._options and self._options.execution and self._options.execution.max_retries_per_step:
@@ -223,7 +223,7 @@ async def on_new_token(data: dict[str, Any], event: EventMeta) -> None:
)
).get()
- async def tool(self, input: BeeRunnerToolInput) -> BeeRunnerToolResult:
+ async def tool(self, input: ReActAgentRunnerToolInput) -> ReActAgentRunnerToolResult:
tool: Tool | None = next(
(
tool
@@ -238,7 +238,7 @@ async def tool(self, input: BeeRunnerToolInput) -> BeeRunnerToolResult:
Exception(f"Agent was trying to use non-existing tool '${input.state.tool_name}'")
)
- return BeeRunnerToolResult(
+ return ReActAgentRunnerToolResult(
success=False,
output=StringToolOutput(
self.templates.tool_not_found_error.render(
@@ -265,7 +265,7 @@ async def on_error(error: Exception, _: RetryableContext) -> None:
)
self._failed_attempts_counter.use(error)
- async def executor(_: RetryableContext) -> BeeRunnerToolResult:
+ async def executor(_: RetryableContext) -> ReActAgentRunnerToolResult:
try:
tool_output: ToolOutput = await tool.run(input.state.tool_input, options={}) # TODO: pass tool options
output = (
@@ -273,20 +273,20 @@ async def executor(_: RetryableContext) -> BeeRunnerToolResult:
if not tool_output.is_empty()
else StringToolOutput(self.templates.tool_no_result_error.render({}))
)
- return BeeRunnerToolResult(
+ return ReActAgentRunnerToolResult(
output=output,
success=True,
)
except ToolInputValidationError as e:
self._failed_attempts_counter.use(e)
- return BeeRunnerToolResult(
+ return ReActAgentRunnerToolResult(
success=False,
output=StringToolOutput(self.templates.tool_input_error.render({"reason": e.explain()})),
)
except Exception as e:
err = ToolError.ensure(e)
self._failed_attempts_counter.use(err)
- return BeeRunnerToolResult(
+ return ReActAgentRunnerToolResult(
success=False,
output=StringToolOutput(self.templates.tool_error.render({"reason": err.explain()})),
)
@@ -304,7 +304,7 @@ async def executor(_: RetryableContext) -> BeeRunnerToolResult:
)
).get()
- async def init_memory(self, input: BeeRunInput) -> BaseMemory:
+ async def init_memory(self, input: ReActAgentRunInput) -> BaseMemory:
memory = TokenMemory(
capacity_threshold=0.85, sync_threshold=0.5, llm=self._input.llm
) # TODO: handlers need to be fixed
diff --git a/python/beeai_framework/agents/react/runners/granite/__init__.py b/python/beeai_framework/agents/react/runners/granite/__init__.py
new file mode 100644
index 000000000..1857180f7
--- /dev/null
+++ b/python/beeai_framework/agents/react/runners/granite/__init__.py
@@ -0,0 +1,14 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/python/beeai_framework/agents/runners/granite/prompts.py b/python/beeai_framework/agents/react/runners/granite/prompts.py
similarity index 98%
rename from python/beeai_framework/agents/runners/granite/prompts.py
rename to python/beeai_framework/agents/react/runners/granite/prompts.py
index 4ddf34fe5..8e26557cf 100644
--- a/python/beeai_framework/agents/runners/granite/prompts.py
+++ b/python/beeai_framework/agents/react/runners/granite/prompts.py
@@ -14,7 +14,7 @@
from datetime import UTC, datetime
-from beeai_framework.agents.runners.default.prompts import (
+from beeai_framework.agents.react.runners.default.prompts import (
AssistantPromptTemplateInput,
SchemaErrorTemplateInput,
SystemPromptTemplateInput,
diff --git a/python/beeai_framework/agents/runners/granite/runner.py b/python/beeai_framework/agents/react/runners/granite/runner.py
similarity index 88%
rename from python/beeai_framework/agents/runners/granite/runner.py
rename to python/beeai_framework/agents/react/runners/granite/runner.py
index 747c84abd..3773530bc 100644
--- a/python/beeai_framework/agents/runners/granite/runner.py
+++ b/python/beeai_framework/agents/react/runners/granite/runner.py
@@ -12,9 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from beeai_framework.agents.runners.default.prompts import ToolNoResultsTemplate, UserEmptyPromptTemplate
-from beeai_framework.agents.runners.default.runner import DefaultRunner
-from beeai_framework.agents.runners.granite.prompts import (
+from beeai_framework.agents.react.runners.default.prompts import ToolNoResultsTemplate, UserEmptyPromptTemplate
+from beeai_framework.agents.react.runners.default.runner import DefaultRunner
+from beeai_framework.agents.react.runners.granite.prompts import (
GraniteAssistantPromptTemplate,
GraniteSchemaErrorTemplate,
GraniteSystemPromptTemplate,
@@ -23,7 +23,7 @@
GraniteToolNotFoundErrorTemplate,
GraniteUserPromptTemplate,
)
-from beeai_framework.agents.types import BeeAgentTemplates, BeeInput, BeeRunOptions
+from beeai_framework.agents.react.types import ReActAgentInput, ReActAgentRunOptions, ReActAgentTemplates
from beeai_framework.backend.message import MessageToolResultContent, ToolMessage
from beeai_framework.context import RunContext
from beeai_framework.emitter import EmitterOptions, EventMeta
@@ -37,7 +37,7 @@
class GraniteRunner(DefaultRunner):
use_native_tool_calling: bool = True
- def __init__(self, input: BeeInput, options: BeeRunOptions, run: RunContext) -> None:
+ def __init__(self, input: ReActAgentInput, options: ReActAgentRunOptions, run: RunContext) -> None:
super().__init__(input, options, run)
async def on_update(data: dict, event: EventMeta) -> None:
@@ -98,8 +98,8 @@ def create_parser(self) -> LinePrefixParser:
),
)
- def default_templates(self) -> BeeAgentTemplates:
- return BeeAgentTemplates(
+ def default_templates(self) -> ReActAgentTemplates:
+ return ReActAgentTemplates(
system=GraniteSystemPromptTemplate,
assistant=GraniteAssistantPromptTemplate,
user=GraniteUserPromptTemplate,
diff --git a/python/beeai_framework/agents/react/types.py b/python/beeai_framework/agents/react/types.py
new file mode 100644
index 000000000..4de3908b0
--- /dev/null
+++ b/python/beeai_framework/agents/react/types.py
@@ -0,0 +1,94 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from collections.abc import Callable
+from typing import Annotated
+
+from pydantic import BaseModel, InstanceOf
+
+from beeai_framework.agents.types import AgentExecutionConfig, AgentMeta
+from beeai_framework.backend.chat import ChatModel, ChatModelOutput
+from beeai_framework.backend.message import Message
+from beeai_framework.cancellation import AbortSignal
+from beeai_framework.memory.base_memory import BaseMemory
+from beeai_framework.template import PromptTemplate
+from beeai_framework.tools.tool import AnyTool
+from beeai_framework.utils.strings import to_json
+
+
+class ReActAgentRunInput(BaseModel):
+ prompt: str | None = None
+
+
+class ReActAgentIterationMeta(BaseModel):
+ iteration: int
+
+
+class ReActAgentRunOptions(BaseModel):
+ signal: AbortSignal | None = None
+ execution: AgentExecutionConfig | None = None
+
+
+class ReActAgentIterationResult(BaseModel):
+ thought: str | None = None
+ tool_name: str | None = None
+ tool_input: dict | None = None
+ tool_output: str | None = None
+ final_answer: str | None = None
+
+ def to_template(self) -> dict:
+ return {
+ "thought": self.thought or "",
+ "tool_name": self.tool_name or "",
+ "tool_input": to_json(self.tool_input) if self.tool_input else "",
+ "tool_output": self.tool_output or "",
+ "final_answer": self.final_answer or "",
+ }
+
+
+class ReActAgentRunIteration(BaseModel):
+ raw: InstanceOf[ChatModelOutput]
+ state: ReActAgentIterationResult
+
+
+class ReActAgentRunOutput(BaseModel):
+ result: InstanceOf[Message]
+ iterations: list[ReActAgentRunIteration]
+ memory: InstanceOf[BaseMemory]
+
+
+class ReActAgentTemplates(BaseModel):
+ system: InstanceOf[PromptTemplate] # TODO proper template subtypes
+ assistant: InstanceOf[PromptTemplate]
+ user: InstanceOf[PromptTemplate]
+ user_empty: InstanceOf[PromptTemplate]
+ tool_error: InstanceOf[PromptTemplate]
+ tool_input_error: InstanceOf[PromptTemplate]
+ tool_no_result_error: InstanceOf[PromptTemplate]
+ tool_not_found_error: InstanceOf[PromptTemplate]
+ schema_error: InstanceOf[PromptTemplate]
+
+
+ReActAgentTemplateFactory = Callable[[InstanceOf[PromptTemplate]], InstanceOf[PromptTemplate]]
+ModelKeysType = Annotated[str, lambda v: v in ReActAgentTemplates.model_fields]
+
+
+class ReActAgentInput(BaseModel):
+ llm: InstanceOf[ChatModel]
+ tools: list[InstanceOf[AnyTool]]
+ memory: InstanceOf[BaseMemory]
+ meta: InstanceOf[AgentMeta] | None = None
+ templates: dict[ModelKeysType, InstanceOf[PromptTemplate] | ReActAgentTemplateFactory] | None = None
+ execution: AgentExecutionConfig | None = None
+ stream: bool | None = None
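+
+
+# Overrides for ReActAgentInput.templates (illustrative only): each key of
+# ReActAgentTemplates accepts either a ready PromptTemplate or a factory that
+# receives the default template and returns a replacement:
+#
+#   templates={"user": my_user_template}            # hypothetical instance
+#   templates={"system": lambda default: default}   # factory form (identity here)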
diff --git a/python/beeai_framework/agents/types.py b/python/beeai_framework/agents/types.py
index 7a748511f..334c29304 100644
--- a/python/beeai_framework/agents/types.py
+++ b/python/beeai_framework/agents/types.py
@@ -12,26 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from collections.abc import Callable
-from typing import Annotated
from pydantic import BaseModel, InstanceOf
-from beeai_framework.backend.chat import ChatModel, ChatModelOutput
-from beeai_framework.backend.message import Message
-from beeai_framework.cancellation import AbortSignal
-from beeai_framework.memory.base_memory import BaseMemory
-from beeai_framework.template import PromptTemplate
from beeai_framework.tools.tool import Tool
-from beeai_framework.utils.strings import to_json
-
-
-class BeeRunInput(BaseModel):
- prompt: str | None = None
-
-
-class BeeMeta(BaseModel):
- iteration: int
class AgentExecutionConfig(BaseModel):
@@ -40,67 +24,8 @@ class AgentExecutionConfig(BaseModel):
max_iterations: int | None = None
-class BeeRunOptions(BaseModel):
- signal: AbortSignal | None = None
- execution: AgentExecutionConfig | None = None
-
-
-class BeeIterationResult(BaseModel):
- thought: str | None = None
- tool_name: str | None = None
- tool_input: dict | None = None
- tool_output: str | None = None
- final_answer: str | None = None
-
- def to_template(self) -> dict:
- return {
- "thought": self.thought or "",
- "tool_name": self.tool_name or "",
- "tool_input": to_json(self.tool_input) if self.tool_input else "",
- "tool_output": self.tool_output or "",
- "final_answer": self.final_answer or "",
- }
-
-
-class BeeAgentRunIteration(BaseModel):
- raw: InstanceOf[ChatModelOutput]
- state: BeeIterationResult
-
-
-class BeeRunOutput(BaseModel):
- result: InstanceOf[Message]
- iterations: list[BeeAgentRunIteration]
- memory: InstanceOf[BaseMemory]
-
-
-class BeeAgentTemplates(BaseModel):
- system: InstanceOf[PromptTemplate] # TODO proper template subtypes
- assistant: InstanceOf[PromptTemplate]
- user: InstanceOf[PromptTemplate]
- user_empty: InstanceOf[PromptTemplate]
- tool_error: InstanceOf[PromptTemplate]
- tool_input_error: InstanceOf[PromptTemplate]
- tool_no_result_error: InstanceOf[PromptTemplate]
- tool_not_found_error: InstanceOf[PromptTemplate]
- schema_error: InstanceOf[PromptTemplate]
-
-
class AgentMeta(BaseModel):
name: str
description: str
tools: list[InstanceOf[Tool]]
extra_description: str | None = None
-
-
-BeeTemplateFactory = Callable[[InstanceOf[PromptTemplate]], InstanceOf[PromptTemplate]]
-ModelKeysType = Annotated[str, lambda v: v in BeeAgentTemplates.model_fields]
-
-
-class BeeInput(BaseModel):
- llm: InstanceOf[ChatModel]
- tools: list[InstanceOf[Tool]]
- memory: InstanceOf[BaseMemory]
- meta: InstanceOf[AgentMeta] | None = None
- templates: dict[ModelKeysType, InstanceOf[PromptTemplate] | BeeTemplateFactory] | None = None
- execution: AgentExecutionConfig | None = None
- stream: bool | None = None
diff --git a/python/beeai_framework/backend/chat.py b/python/beeai_framework/backend/chat.py
index 8a1fb8092..59833e525 100644
--- a/python/beeai_framework/backend/chat.py
+++ b/python/beeai_framework/backend/chat.py
@@ -21,21 +21,22 @@
from beeai_framework.backend.constants import ProviderName
from beeai_framework.backend.errors import ChatModelError
-from beeai_framework.backend.message import AssistantMessage, Message, SystemMessage
+from beeai_framework.backend.message import AssistantMessage, Message, MessageToolCallContent, SystemMessage
from beeai_framework.backend.utils import load_model, parse_broken_json, parse_model
from beeai_framework.cancellation import AbortController, AbortSignal
from beeai_framework.context import Run, RunContext, RunContextInput, RunInstance
from beeai_framework.emitter import Emitter
+from beeai_framework.logger import Logger
from beeai_framework.retryable import Retryable, RetryableConfig, RetryableContext, RetryableInput
from beeai_framework.template import PromptTemplate, PromptTemplateInput
from beeai_framework.tools.tool import Tool
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.utils.lists import flatten
from beeai_framework.utils.models import ModelLike
from beeai_framework.utils.strings import to_json
T = TypeVar("T", bound=BaseModel)
ChatModelFinishReason = Literal["stop", "length", "function_call", "content_filter", "null"]
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class ChatModelParameters(BaseModel):
@@ -105,7 +106,7 @@ def merge(self, other: Self) -> None:
self.finish_reason = other.finish_reason
if self.usage and other.usage:
merged_usage = self.usage.model_copy()
- if other.usage.get("total_tokens"):
+ if other.usage.total_tokens:
merged_usage.total_tokens = max(self.usage.total_tokens, other.usage.total_tokens)
merged_usage.prompt_tokens = max(self.usage.prompt_tokens, other.usage.prompt_tokens)
merged_usage.completion_tokens = max(self.usage.completion_tokens, other.usage.completion_tokens)
@@ -113,6 +114,10 @@ def merge(self, other: Self) -> None:
elif other.usage:
self.usage = other.usage.model_copy()
+ def get_tool_calls(self) -> list[MessageToolCallContent]:
+ assistant_messages = [msg for msg in self.messages if isinstance(msg, AssistantMessage)]
+ return flatten([msg.get_tool_calls() for msg in assistant_messages])
+
def get_text_content(self) -> str:
return "".join([x.text for x in list(filter(lambda x: isinstance(x, AssistantMessage), self.messages))])
diff --git a/python/beeai_framework/backend/constants.py b/python/beeai_framework/backend/constants.py
index a4e35fdbf..434740d4e 100644
--- a/python/beeai_framework/backend/constants.py
+++ b/python/beeai_framework/backend/constants.py
@@ -17,8 +17,8 @@
from pydantic import BaseModel
-ProviderName = Literal["ollama", "openai", "watsonx", "groq", "xai"]
-ProviderHumanName = Literal["Ollama", "OpenAI", "Watsonx", "Groq", "XAI"]
+ProviderName = Literal["ollama", "openai", "watsonx", "groq", "xai", "vertexai", "amazon_bedrock"]
+ProviderHumanName = Literal["Ollama", "OpenAI", "Watsonx", "Groq", "XAI", "VertexAI", "AmazonBedrock"]
class ProviderDef(BaseModel):
@@ -39,4 +39,10 @@ class ProviderModelDef(BaseModel):
"watsonx": ProviderDef(name="Watsonx", module="watsonx", aliases=["watsonx", "ibm"]),
"Groq": ProviderDef(name="Groq", module="groq", aliases=["groq"]),
"xAI": ProviderDef(name="XAI", module="xai", aliases=["xai", "grok"]),
+ "vertexAI": ProviderDef(name="VertexAI", module="vertexai", aliases=["vertexai", "google"]),
+ "AmazonBedrock": ProviderDef(
+ name="AmazonBedrock",
+ module="amazon_bedrock",
+ aliases=["amazon_bedrock", "amazon", "bedrock"],
+ ),
}
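
With the two new provider entries, any of the registered aliases resolves the backend, assuming credentials are configured via the documented environment variables. A small sketch; both model ids below are hypothetical placeholders:

```py
from beeai_framework.backend.chat import ChatModel

# "amazon_bedrock", "amazon" and "bedrock" all map to the AmazonBedrock module;
# "vertexai" and "google" map to VertexAI. The model ids are made-up placeholders.
bedrock_llm = ChatModel.from_name("bedrock:your-bedrock-model-id")
vertex_llm = ChatModel.from_name("google:your-vertexai-model-id")
```
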
diff --git a/python/beeai_framework/backend/message.py b/python/beeai_framework/backend/message.py
index 973c371fd..be2f86a69 100644
--- a/python/beeai_framework/backend/message.py
+++ b/python/beeai_framework/backend/message.py
@@ -96,7 +96,7 @@ def from_string(self, text: str) -> T:
pass
def get_texts(self) -> list[MessageTextContent]:
- return list(filter(lambda x: isinstance(x, MessageTextContent), self.content))
+ return [cont for cont in self.content if isinstance(cont, MessageTextContent)]
def to_plain(self) -> dict[str, Any]:
return {
@@ -120,7 +120,7 @@ def from_string(self, text: str) -> MessageTextContent:
return MessageTextContent(text=text)
def get_tool_calls(self) -> list[MessageToolCallContent]:
- return list(filter(lambda x: isinstance(x, MessageToolCallContent), self.content))
+ return [cont for cont in self.content if isinstance(cont, MessageToolCallContent)]
def _models(self) -> Sequence[type[MessageToolCallContent] | type[MessageTextContent]]:
return [MessageToolCallContent, MessageTextContent]
diff --git a/python/beeai_framework/cancellation.py b/python/beeai_framework/cancellation.py
index 6eb44d3ab..375694c27 100644
--- a/python/beeai_framework/cancellation.py
+++ b/python/beeai_framework/cancellation.py
@@ -20,9 +20,9 @@
from pydantic import BaseModel
from beeai_framework.errors import AbortError
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
T = TypeVar("T")
diff --git a/python/beeai_framework/context.py b/python/beeai_framework/context.py
index 98af83069..9bb98e371 100644
--- a/python/beeai_framework/context.py
+++ b/python/beeai_framework/context.py
@@ -27,12 +27,12 @@
from beeai_framework.cancellation import AbortController, AbortSignal, register_signals
from beeai_framework.emitter import Emitter, EventTrace
from beeai_framework.errors import AbortError, FrameworkError
+from beeai_framework.logger import Logger
from beeai_framework.utils.asynchronous import ensure_async
-from beeai_framework.utils.custom_logger import BeeLogger
-R = TypeVar("R", bound=BaseModel)
+R = TypeVar("R")
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
@dataclass
diff --git a/python/beeai_framework/utils/custom_logger.py b/python/beeai_framework/logger.py
similarity index 95%
rename from python/beeai_framework/utils/custom_logger.py
rename to python/beeai_framework/logger.py
index 0ceb0eaf2..e4721a18e 100644
--- a/python/beeai_framework/utils/custom_logger.py
+++ b/python/beeai_framework/logger.py
@@ -16,6 +16,7 @@
import logging
import sys
from logging import Formatter
+from typing import TYPE_CHECKING
from beeai_framework.errors import FrameworkError
from beeai_framework.utils.config import CONFIG
@@ -29,7 +30,7 @@ def __init__(self, message: str = "Logger error", *, cause: Exception | None = N
super().__init__(message, is_fatal=True, is_retryable=False, cause=cause)
-class BeeLoggerFormatter(Formatter):
+class LoggerFormatter(Formatter):
def format(self, record: logging.LogRecord) -> str:
if hasattr(record, "is_event_message") and record.is_event_message:
return logging.Formatter(
@@ -45,14 +46,17 @@ def format(self, record: logging.LogRecord) -> str:
).format(record)
-class BeeLogger(logging.Logger):
+class Logger(logging.Logger):
+ if TYPE_CHECKING:
+ trace = logging.Logger.debug
+
def __init__(self, name: str, level: int | str = CONFIG.log_level) -> None:
self.add_logging_level("TRACE", logging.DEBUG - 5)
super().__init__(name, level)
console_handler = logging.StreamHandler(stream=sys.stdout)
- console_handler.setFormatter(BeeLoggerFormatter())
+ console_handler.setFormatter(LoggerFormatter())
self.addHandler(console_handler)
diff --git a/python/beeai_framework/memory/file_cache.py b/python/beeai_framework/memory/file_cache.py
index 782984f3e..7088d5437 100644
--- a/python/beeai_framework/memory/file_cache.py
+++ b/python/beeai_framework/memory/file_cache.py
@@ -21,12 +21,12 @@
import aiofiles
+from beeai_framework.logger import Logger
from beeai_framework.memory.base_cache import BaseCache
from beeai_framework.memory.serializer import Serializer
from beeai_framework.memory.sliding_cache import SlidingCache
-from beeai_framework.utils import BeeLogger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
T = TypeVar("T")
diff --git a/python/beeai_framework/retryable.py b/python/beeai_framework/retryable.py
index cbbd10d84..4f8155ca0 100644
--- a/python/beeai_framework/retryable.py
+++ b/python/beeai_framework/retryable.py
@@ -22,11 +22,11 @@
from beeai_framework.cancellation import AbortSignal, abort_signal_handler
from beeai_framework.errors import FrameworkError
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger
from beeai_framework.utils.models import ModelLike, to_model
T = TypeVar("T", bound=BaseModel)
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class Meta(BaseModel):
diff --git a/python/beeai_framework/tools/__init__.py b/python/beeai_framework/tools/__init__.py
index 8101e4aa7..5e968a123 100644
--- a/python/beeai_framework/tools/__init__.py
+++ b/python/beeai_framework/tools/__init__.py
@@ -22,6 +22,7 @@
)
__all__ = [
+ "JSONToolOutput",
"StringToolOutput",
"Tool",
"ToolError",
diff --git a/python/beeai_framework/tools/mcp_tools.py b/python/beeai_framework/tools/mcp_tools.py
index 76776d321..8231bd89b 100644
--- a/python/beeai_framework/tools/mcp_tools.py
+++ b/python/beeai_framework/tools/mcp_tools.py
@@ -13,81 +13,61 @@
# limitations under the License.
-import json
-from dataclasses import dataclass
-from typing import Any, TypeVar
+from typing import Any
-from mcp.client.session import ClientSession
-from mcp.types import CallToolResult
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.stdio import stdio_client
from mcp.types import Tool as MCPToolInfo
+from pydantic import BaseModel
+from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
+from beeai_framework.logger import Logger
from beeai_framework.tools import Tool
-from beeai_framework.tools.tool import ToolOutput
-from beeai_framework.utils import BeeLogger
+from beeai_framework.tools.tool import JSONToolOutput, ToolOutput, ToolRunOptions
+from beeai_framework.utils.models import JSONSchemaModel
+from beeai_framework.utils.strings import to_safe_word
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
-T = TypeVar("T")
-
-@dataclass
-class MCPToolInput:
- """Input configuration for MCP Tool initialization."""
-
- client: ClientSession
- tool: MCPToolInfo
-
-
-class MCPToolOutput(ToolOutput):
- """Output class for MCP Tool results."""
-
- def __init__(self, result: CallToolResult) -> None:
- self.result = result
-
- def get_text_content(self) -> str:
- return json.dumps(self.result, default=lambda o: o.__dict__, sort_keys=True, indent=4)
-
- def is_empty(self) -> bool:
- return not self.result
-
-
-class MCPTool(Tool[MCPToolOutput]):
+class MCPTool(Tool[BaseModel, ToolRunOptions, ToolOutput]):
"""Tool implementation for Model Context Protocol."""
- def __init__(self, client: ClientSession, tool: MCPToolInfo, **options: int) -> None:
+ def __init__(self, server_params: StdioServerParameters, tool: MCPToolInfo, **options: int) -> None:
"""Initialize MCPTool with client and tool configuration."""
super().__init__(options)
- self.client = client
+ self._server_params = server_params
self._tool = tool
- self._name = tool.name
- self._description = tool.description or "No available description, use the tool based on its name and schema."
@property
def name(self) -> str:
- return self._name
+ return self._tool.name
@property
def description(self) -> str:
- return self._description
+ return self._tool.description or "No available description, use the tool based on its name and schema."
- def input_schema(self) -> str:
- return self._tool.inputSchema
+ @property
+ def input_schema(self) -> type[BaseModel]:
+ return JSONSchemaModel.create(self.name, self._tool.inputSchema)
def _create_emitter(self) -> Emitter:
return Emitter.root().child(
- namespace=["tool", "mcp", self.name],
+ namespace=["tool", "mcp", to_safe_word(self._tool.name)],
creator=self,
)
- async def _run(self, input_data: Any, options: dict | None = None) -> MCPToolOutput:
+ async def _run(self, input_data: Any, options: ToolRunOptions | None, context: RunContext) -> JSONToolOutput:
"""Execute the tool with given input."""
- logger.debug(f"Executing tool {self.name} with input: {input_data}")
- result = await self.client.call_tool(name=self.name, arguments=input_data)
- logger.debug(f"Tool result: {result}")
- return MCPToolOutput(result)
+ logger.debug(f"Executing tool {self._tool.name} with input: {input_data}")
+ async with stdio_client(self._server_params) as (read, write), ClientSession(read, write) as session:
+ await session.initialize()
+ result = await session.call_tool(name=self._tool.name, arguments=input_data.model_dump())
+ logger.debug(f"Tool result: {result}")
+ return JSONToolOutput(result.content)
@classmethod
- async def from_client(cls, client: ClientSession) -> list["MCPTool"]:
+ async def from_client(cls, client: ClientSession, server_params: StdioServerParameters) -> list["MCPTool"]:
tools_result = await client.list_tools()
- return [cls(client=client, tool=tool) for tool in tools_result.tools]
+ return [cls(server_params, tool) for tool in tools_result.tools]
diff --git a/python/beeai_framework/tools/search/duckduckgo.py b/python/beeai_framework/tools/search/duckduckgo.py
index 6072ba6fd..b9df510b1 100644
--- a/python/beeai_framework/tools/search/duckduckgo.py
+++ b/python/beeai_framework/tools/search/duckduckgo.py
@@ -13,18 +13,17 @@
# limitations under the License.
-from typing import Any
-
from duckduckgo_search import DDGS
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
+from beeai_framework.logger import Logger
from beeai_framework.tools import ToolError
from beeai_framework.tools.search import SearchToolOutput, SearchToolResult
-from beeai_framework.tools.tool import Tool
-from beeai_framework.utils import BeeLogger
+from beeai_framework.tools.tool import Tool, ToolRunOptions
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class DuckDuckGoSearchType:
@@ -45,7 +44,7 @@ class DuckDuckGoSearchToolOutput(SearchToolOutput):
pass
-class DuckDuckGoSearchTool(Tool[DuckDuckGoSearchToolInput]):
+class DuckDuckGoSearchTool(Tool[DuckDuckGoSearchToolInput, ToolRunOptions, DuckDuckGoSearchToolOutput]):
name = "DuckDuckGo"
description = "Search for online trends, news, current events, real-time information, or research topics."
input_schema = DuckDuckGoSearchToolInput
@@ -61,7 +60,9 @@ def _create_emitter(self) -> Emitter:
creator=self,
)
- async def _run(self, input: DuckDuckGoSearchToolInput, _: Any | None = None) -> DuckDuckGoSearchToolOutput:
+ async def _run(
+ self, input: DuckDuckGoSearchToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> DuckDuckGoSearchToolOutput:
try:
results = DDGS().text(input.query, max_results=self.max_results, safesearch=self.safe_search)
search_results: list[SearchToolResult] = [
diff --git a/python/beeai_framework/tools/search/wikipedia.py b/python/beeai_framework/tools/search/wikipedia.py
index cc0b25b06..88037437c 100644
--- a/python/beeai_framework/tools/search/wikipedia.py
+++ b/python/beeai_framework/tools/search/wikipedia.py
@@ -13,14 +13,13 @@
# limitations under the License.
-from typing import Any
-
import wikipediaapi
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.tools.search import SearchToolOutput, SearchToolResult
-from beeai_framework.tools.tool import Tool
+from beeai_framework.tools.tool import Tool, ToolRunOptions
class WikipediaToolInput(BaseModel):
@@ -38,7 +37,7 @@ class WikipediaToolOutput(SearchToolOutput):
pass
-class WikipediaTool(Tool[WikipediaToolInput]):
+class WikipediaTool(Tool[WikipediaToolInput, ToolRunOptions, WikipediaToolOutput]):
name = "Wikipedia"
description = "Search factual and historical information, including biography, \
history, politics, geography, society, culture, science, technology, people, \
@@ -60,7 +59,9 @@ def get_section_titles(self, sections: wikipediaapi.WikipediaPage.sections) -> s
titles.append(section.title)
return ",".join(str(title) for title in titles)
- async def _run(self, input: WikipediaToolInput, _: Any | None = None) -> WikipediaToolOutput:
+ async def _run(
+ self, input: WikipediaToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> WikipediaToolOutput:
page_py = self.client.page(input.query)
if not page_py.exists():
diff --git a/python/beeai_framework/tools/tool.py b/python/beeai_framework/tools/tool.py
index fba372afc..66b52a142 100644
--- a/python/beeai_framework/tools/tool.py
+++ b/python/beeai_framework/tools/tool.py
@@ -17,21 +17,22 @@
from abc import ABC, abstractmethod
from collections.abc import Callable
from functools import cached_property
-from typing import Any, Generic, TypeVar
+from typing import Any, Generic, TypeAlias
from pydantic import BaseModel, ConfigDict, ValidationError, create_model
+from typing_extensions import TypeVar
from beeai_framework.cancellation import AbortSignal
from beeai_framework.context import Run, RunContext, RunContextInput, RunInstance
from beeai_framework.emitter.emitter import Emitter
+from beeai_framework.logger import Logger
from beeai_framework.retryable import Retryable, RetryableConfig, RetryableContext, RetryableInput
from beeai_framework.tools.errors import ToolError, ToolInputValidationError
-from beeai_framework.utils import BeeLogger
-from beeai_framework.utils.strings import to_safe_word
+from beeai_framework.utils.strings import to_json, to_safe_word
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
-T = TypeVar("T", bound=BaseModel)
+IN = TypeVar("IN", bound=BaseModel)
class RetryOptions(BaseModel):
@@ -44,6 +45,9 @@ class ToolRunOptions(BaseModel):
signal: AbortSignal | None = None
+OPT = TypeVar("OPT", bound=ToolRunOptions, default=ToolRunOptions)
+
+
class ToolOutput(ABC):
@abstractmethod
def get_text_content(self) -> str:
@@ -53,10 +57,13 @@ def get_text_content(self) -> str:
def is_empty(self) -> bool:
pass
- def to_string(self) -> str:
+ def __str__(self) -> str:
return self.get_text_content()
+OUT = TypeVar("OUT", bound=ToolOutput, default=ToolOutput)
+
+
class StringToolOutput(ToolOutput):
def __init__(self, result: str = "") -> None:
super().__init__()
@@ -69,7 +76,18 @@ def get_text_content(self) -> str:
return self.result
-class Tool(Generic[T], ABC):
+class JSONToolOutput(ToolOutput):
+ def __init__(self, result: Any) -> None:
+ self.result = result
+
+ def get_text_content(self) -> str:
+ return to_json(self.result)
+
+ def is_empty(self) -> bool:
+ return not self.result
+
+
+class Tool(Generic[IN, OPT, OUT], ABC):
def __init__(self, options: dict[str, Any] | None = None) -> None:
self.options: dict[str, Any] | None = options or None
@@ -85,7 +103,7 @@ def description(self) -> str:
@property
@abstractmethod
- def input_schema(self) -> type[T]:
+ def input_schema(self) -> type[IN]:
pass
@cached_property
@@ -97,17 +115,17 @@ def _create_emitter(self) -> Emitter:
pass
@abstractmethod
- async def _run(self, input: T, options: ToolRunOptions | None = None) -> Any:
+ async def _run(self, input: IN, options: OPT | None, context: RunContext) -> OUT:
pass
- def validate_input(self, input: T | dict[str, Any]) -> T:
+ def validate_input(self, input: IN | dict[str, Any]) -> IN:
try:
return self.input_schema.model_validate(input)
except ValidationError as e:
raise ToolInputValidationError("Tool input validation error", cause=e)
- def run(self, input: T | dict[str, Any], options: ToolRunOptions | None = None) -> Run[T]:
- async def run_tool(context: RunContext) -> T:
+ def run(self, input: IN | dict[str, Any], options: OPT | None = None) -> Run[OUT]:
+ async def run_tool(context: RunContext) -> OUT:
error_propagated = False
try:
@@ -119,7 +137,7 @@ async def executor(_: RetryableContext) -> Any:
nonlocal error_propagated
error_propagated = False
await context.emitter.emit("start", meta)
- return await self._run(validated_input, options)
+ return await self._run(validated_input, options, context)
async def on_error(error: Exception, _: RetryableContext) -> None:
nonlocal error_propagated
@@ -203,7 +221,7 @@ def tool(tool_function: Callable) -> Tool:
if tool_description is None:
raise ValueError("No tool description provided.")
- class FunctionTool(Tool):
+ class FunctionTool(Tool[Any, ToolRunOptions, ToolOutput]):
name = tool_name
description = tool_description or ""
input_schema = tool_input
@@ -217,12 +235,20 @@ def _create_emitter(self) -> Emitter:
creator=self,
)
- async def _run(self, input: T, _: ToolRunOptions | None = None) -> None:
+ async def _run(self, input: Any, options: ToolRunOptions | None, context: RunContext) -> ToolOutput:
tool_input_dict = input.model_dump()
if inspect.iscoroutinefunction(tool_function):
- return await tool_function(**tool_input_dict)
+ result = await tool_function(**tool_input_dict)
else:
- return tool_function(**tool_input_dict)
+ result = tool_function(**tool_input_dict)
+
+ if isinstance(result, ToolOutput):
+ return result
+ else:
+ return StringToolOutput(result=str(result))
f_tool = FunctionTool()
return f_tool
+
+
+AnyTool: TypeAlias = Tool[BaseModel, ToolRunOptions, ToolOutput]
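
With the updated decorator, a plain return value is now coerced into a `StringToolOutput` instead of being returned raw. A minimal sketch under that reading; the `shout` tool is invented for illustration:

```py
import asyncio

from beeai_framework import tool
from beeai_framework.tools.tool import StringToolOutput


@tool
def shout(text: str) -> str:
    """Convert the given text to upper case."""
    return text.upper()


async def main() -> None:
    output = await shout.run({"text": "hello"})
    # The plain `str` result was wrapped by FunctionTool._run.
    assert isinstance(output, StringToolOutput)
    print(output.get_text_content())  # HELLO


asyncio.run(main())
```
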
diff --git a/python/beeai_framework/tools/weather/openmeteo.py b/python/beeai_framework/tools/weather/openmeteo.py
index c7937b285..11091b798 100644
--- a/python/beeai_framework/tools/weather/openmeteo.py
+++ b/python/beeai_framework/tools/weather/openmeteo.py
@@ -23,12 +23,13 @@
import requests
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
+from beeai_framework.logger import Logger
from beeai_framework.tools import ToolInputValidationError
-from beeai_framework.tools.tool import StringToolOutput, Tool
-from beeai_framework.utils import BeeLogger
+from beeai_framework.tools.tool import StringToolOutput, Tool, ToolRunOptions
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
class OpenMeteoToolInput(BaseModel):
@@ -45,7 +46,7 @@ class OpenMeteoToolInput(BaseModel):
)
-class OpenMeteoTool(Tool[OpenMeteoToolInput]):
+class OpenMeteoTool(Tool[OpenMeteoToolInput, ToolRunOptions, StringToolOutput]):
name = "OpenMeteoTool"
description = "Retrieve current, past, or future weather forecasts for a location."
input_schema = OpenMeteoToolInput
@@ -137,7 +138,9 @@ def _trim_date(date_str: str) -> str:
params["temperature_unit"] = input.temperature_unit
return params
- async def _run(self, input: OpenMeteoToolInput, options: Any = None) -> StringToolOutput:
+ async def _run(
+ self, input: OpenMeteoToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> StringToolOutput:
params = urlencode(self.get_params(input), doseq=True)
logger.debug(f"Using OpenMeteo URL: https://api.open-meteo.com/v1/forecast?{params}")
diff --git a/python/beeai_framework/utils/__init__.py b/python/beeai_framework/utils/__init__.py
index 1a272219e..6b0927f94 100644
--- a/python/beeai_framework/utils/__init__.py
+++ b/python/beeai_framework/utils/__init__.py
@@ -14,7 +14,6 @@
from beeai_framework.utils.config import CONFIG
-from beeai_framework.utils.custom_logger import BeeLogger
from beeai_framework.utils.events import MessageEvent
-__all__ = ["CONFIG", "BeeLogger", "MessageEvent"]
+__all__ = ["CONFIG", "MessageEvent"]
diff --git a/python/beeai_framework/utils/dicts.py b/python/beeai_framework/utils/dicts.py
new file mode 100644
index 000000000..0cebcd3b1
--- /dev/null
+++ b/python/beeai_framework/utils/dicts.py
@@ -0,0 +1,26 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+def exclude_keys(input: dict, keys: set[str]) -> dict:
+ return {k: input[k] for k in input.keys() - keys}
+
+
+def include_keys(input: dict, keys: set[str]) -> dict:
+ valid_keys = [k for k in input if k in keys]
+ return {k: input[k] for k in valid_keys}
+
+
+def exclude_none(input: dict) -> dict:
+ return {k: v for k, v in input.items() if v is not None}
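
The new `dicts.py` helpers are small enough that a few assertions (illustrative, not from the diff) document their behavior fully:

```py
from beeai_framework.utils.dicts import exclude_keys, exclude_none, include_keys

assert exclude_keys({"a": 1, "b": 2}, {"b"}) == {"a": 1}
assert include_keys({"a": 1, "b": 2}, {"b", "c"}) == {"b": 2}
assert exclude_none({"a": 1, "b": None}) == {"a": 1}
```
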
diff --git a/python/beeai_framework/utils/lists.py b/python/beeai_framework/utils/lists.py
new file mode 100644
index 000000000..357d0675d
--- /dev/null
+++ b/python/beeai_framework/utils/lists.py
@@ -0,0 +1,21 @@
+# Copyright 2025 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import TypeVar
+
+T = TypeVar("T")
+
+
+def flatten(xss: list[list[T]]) -> list[T]:
+ return [x for xs in xss for x in xs]
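
And an equally small illustration for the new `flatten` helper:

```py
from beeai_framework.utils.lists import flatten

assert flatten([[1, 2], [3], []]) == [1, 2, 3]
```
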
diff --git a/python/beeai_framework/utils/models.py b/python/beeai_framework/utils/models.py
index 00bc91285..58a705b4b 100644
--- a/python/beeai_framework/utils/models.py
+++ b/python/beeai_framework/utils/models.py
@@ -12,12 +12,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from abc import ABC
from collections.abc import Sequence
from contextlib import suppress
from typing import Any, TypeVar, Union
-from pydantic import BaseModel
-from pydantic_core import SchemaValidator
+from pydantic import BaseModel, ConfigDict, Field, GetJsonSchemaHandler, create_model
+from pydantic.json_schema import JsonSchemaValue
+from pydantic_core import CoreSchema, SchemaValidator
T = TypeVar("T", bound=BaseModel)
ModelLike = Union[T, dict] # noqa: UP007
@@ -47,3 +49,61 @@ def to_model_optional(cls: type[T], obj: ModelLike[T] | None) -> T | None:
def check_model(model: T) -> None:
schema_validator = SchemaValidator(schema=model.__pydantic_core_schema__)
schema_validator.validate_python(model.__dict__)
+
+
+class JSONSchemaModel(ABC, BaseModel):
+ _custom_json_schema: JsonSchemaValue
+
+ model_config = ConfigDict(
+ arbitrary_types_allowed=False, validate_default=True, json_schema_mode_override="validation"
+ )
+
+ @classmethod
+ def __get_pydantic_json_schema__(
+ cls,
+ core_schema: CoreSchema,
+ handler: GetJsonSchemaHandler,
+ /,
+ ) -> JsonSchemaValue:
+ return cls._custom_json_schema.copy()
+
+ @classmethod
+ def create(cls, schema_name: str, schema: dict) -> type["JSONSchemaModel"]:
+ type_mapping = {
+ "string": str,
+ "integer": int,
+ "number": float,
+ "boolean": bool,
+ "object": dict,
+ "array": list,
+ "null": None,
+ }
+
+ fields: dict[str, tuple[type, Any]] = {}
+ required = set(schema.get("required", []))
+ properties = schema.get("properties", {})
+
+ for param_name, param in properties.items():
+ target_type = type_mapping.get(param.get("type"))
+ if not target_type:
+ raise ValueError(f"Unsupported type '{param.get('type')}' found in the schema.")
+
+ # Resolve nested objects before widening the type for optional parameters;
+ # otherwise an optional object never matches `dict`, and an unsupported type
+ # would be masked by the union with None.
+ if target_type is dict:
+ target_type = cls.create(param_name, param)
+
+ is_optional = param_name not in required
+ if is_optional:
+ target_type = target_type | type(None)
+
+ fields[param_name] = (
+ target_type,
+ Field(description=param.get("description"), default=None if is_optional else ...),
+ )
+
+ model: type[JSONSchemaModel] = create_model(
+ schema_name,
+ **fields,
+ __base__=cls,
+ )
+ model._custom_json_schema = schema
+ return model
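
A minimal sketch of the new `JSONSchemaModel.create` factory; the schema below is a made-up example, not one shipped with the framework:

```py
from beeai_framework.utils.models import JSONSchemaModel

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Search query"},
        "limit": {"type": "integer"},
    },
    "required": ["query"],
}

# Build a Pydantic model dynamically; `limit` is optional since it is not in `required`.
SearchInput = JSONSchemaModel.create("SearchInput", schema)
instance = SearchInput.model_validate({"query": "weather in Boston"})
print(instance.model_dump())  # {'query': 'weather in Boston', 'limit': None}
```
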
diff --git a/python/beeai_framework/workflows/agent.py b/python/beeai_framework/workflows/agent.py
index 3148fab33..27314d1d5 100644
--- a/python/beeai_framework/workflows/agent.py
+++ b/python/beeai_framework/workflows/agent.py
@@ -22,11 +22,11 @@
from pydantic import BaseModel, ConfigDict, Field, InstanceOf
from beeai_framework.agents.base import BaseAgent, BaseMemory
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
+from beeai_framework.agents.react.types import ReActAgentRunOutput
from beeai_framework.agents.types import (
AgentExecutionConfig,
AgentMeta,
- BeeRunOutput,
)
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import AssistantMessage, Message
@@ -85,13 +85,13 @@ async def factory(memory: ReadOnlyMemory) -> BaseAgent:
return self._add(name, agent if callable(agent) else self._create_factory(agent))
def _create_factory(self, agent_input: AgentFactoryInput) -> AgentFactory:
- def factory(memory: BaseMemory) -> BeeAgent:
+ def factory(memory: BaseMemory) -> ReActAgent:
def customizer(config: PromptTemplateInput) -> PromptTemplateInput:
new_config = config.model_copy()
new_config.defaults["instructions"] = agent_input.instructions or config.defaults.get("instructions")
return new_config
- return BeeAgent(
+ return ReActAgent(
llm=agent_input.llm,
tools=agent_input.tools or [],
memory=memory,
@@ -109,7 +109,7 @@ async def step(state: Schema) -> None:
await memory.add(message)
agent = await ensure_async(factory)(memory.as_read_only())
- run_output: BeeRunOutput = await agent.run()
+ run_output: ReActAgentRunOutput = await agent.run()
state.final_answer = run_output.result.text
state.new_messages.append(
AssistantMessage(f"Assistant Name: {name}\nAssistant Response: {run_output.result.text}")
diff --git a/python/docs/agents.md b/python/docs/agents.md
index 988093c46..640b623c4 100644
--- a/python/docs/agents.md
+++ b/python/docs/agents.md
@@ -76,8 +76,8 @@ final_answer: The current weather in Las Vegas is 20.5°C with an apparent tempe
> During execution, the agent emits partial updates as it generates each line, followed by complete updates. Updates follow a strict order: first all partial updates for "thought," then a complete "thought" update, then moving to the next component.
For practical examples, see:
-- [simple.py](/python/examples/agents/simple.py) - Basic example of a Bee Agent using OpenMeteo and DuckDuckGo tools
-- [bee.py](/python/examples/agents/bee.py) - More complete example using Wikipedia integration
+- [simple.py](/python/examples/agents/simple.py) - Basic example of a ReAct Agent using OpenMeteo and DuckDuckGo tools
+- [react.py](/python/examples/agents/react.py) - More complete example using Wikipedia integration
- [granite.py](/python/examples/agents/granite.py) - Example using the Granite model
---
@@ -97,7 +97,7 @@ response = await agent.run(
).observe(observer)
```
-_Source: [examples/agents/bee.py](/python/examples/agents/bee.py)_
+_Source: [examples/agents/react.py](/python/examples/agents/react.py)_
> [!TIP]
> The default is zero retries and no timeout. For complex tasks, increasing the max_iterations is recommended.
@@ -112,7 +112,7 @@ Customize how the agent formats prompts, including the system prompt that define
import sys
import traceback
-from beeai_framework.agents.runners.default.prompts import (
+from beeai_framework.agents.react.runners.default.prompts import (
SystemPromptTemplate,
ToolDefinition,
)
@@ -163,7 +163,7 @@ The agent uses several templates that you can override:
Enhance your agent's capabilities by providing it with tools to interact with external systems.
```py
-agent = BeeAgent(
+agent = ReActAgent(
llm=llm,
tools=[DuckDuckGoSearchTool(), OpenMeteoTool()],
memory=UnconstrainedMemory()
@@ -186,7 +186,7 @@ _Source: [examples/agents/simple.py](/python/examples/agents/simple.py)_
Memory allows your agent to maintain context across multiple interactions.
```python
-agent = BeeAgent(
+agent = ReActAgent(
llm=llm,
tools=[DuckDuckGoSearchTool(), OpenMeteoTool()],
memory=UnconstrainedMemory()
@@ -248,7 +248,8 @@ from beeai_framework import (
UserMessage,
)
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
-from beeai_framework.agents.types import AgentMeta, BeeRunInput, BeeRunOptions
+from beeai_framework.agents.react.types import ReActAgentRunInput, ReActAgentRunOptions
+from beeai_framework.agents.types import AgentMeta
from beeai_framework.backend.chat import ChatModel
from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
@@ -266,7 +267,7 @@ class RunOutput(BaseModel):
state: State
-class RunOptions(BeeRunOptions):
+class RunOptions(ReActAgentRunOptions):
max_retries: int | None = None
@@ -283,10 +284,13 @@ class CustomAgent(BaseAgent[RunOutput]):
)
async def _run(
- self, run_input: ModelLike[BeeRunInput], options: ModelLike[BeeRunOptions] | None, context: RunContext
+ self,
+ run_input: ModelLike[ReActAgentRunInput],
+ options: ModelLike[ReActAgentRunOptions] | None,
+ context: RunContext,
) -> RunOutput:
- run_input = to_model(BeeRunInput, run_input)
- options = to_model_optional(BeeRunOptions, options)
+ run_input = to_model(ReActAgentRunInput, run_input)
+ options = to_model_optional(ReActAgentRunOptions, options)
class CustomSchema(BaseModel):
thought: str = Field(description="Describe your thought process before coming with a final answer")
@@ -345,14 +349,14 @@ if __name__ == "__main__":
Agents can be configured to use memory to maintain conversation context and state.
-
+
```py
import asyncio
import sys
import traceback
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import AssistantMessage, UserMessage
@@ -363,11 +367,11 @@ from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
memory = UnconstrainedMemory()
-def create_agent() -> BeeAgent:
+def create_agent() -> ReActAgent:
llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
# Initialize the agent
- agent = BeeAgent(llm=llm, memory=memory, tools=[])
+ agent = ReActAgent(llm=llm, memory=memory, tools=[])
return agent
@@ -420,13 +424,13 @@ if __name__ == "__main__":
```
-_Source: [examples/memory/agentMemory.py](/python/examples/memory/agentMemory.py)_
+_Source: [examples/memory/agent_memory.py](/python/examples/memory/agent_memory.py)_
**Memory types for different use cases:**
-- [UnconstrainedMemory](/python/examples/memory/unconstrainedMemory.py) - For unlimited storage
-- [SlidingMemory](/python/examples/memory/slidingMemory.py) - For keeping only the most recent messages
-- [TokenMemory](/python/examples/memory/tokenMemory.py) - For managing token limits
-- [SummarizeMemory](/python/examples/memory/summarizeMemory.py) - For summarizing previous conversations
+- [UnconstrainedMemory](/python/examples/memory/unconstrained_memory.py) - For unlimited storage
+- [SlidingMemory](/python/examples/memory/sliding_memory.py) - For keeping only the most recent messages
+- [TokenMemory](/python/examples/memory/token_memory.py) - For managing token limits
+- [SummarizeMemory](/python/examples/memory/summarize_memory.py) - For summarizing previous conversations
---
@@ -511,6 +515,6 @@ _Source: [examples/workflows/multi_agents.py](/python/examples/workflows/multi_a
## Examples
- [simple.py](/python/examples/agents/simple.py) - Basic agent implementation
-- [bee.py](/python/examples/agents/bee.py) - More complete implementation
+- [react.py](/python/examples/agents/react.py) - More complete implementation
- [granite.py](/python/examples/agents/granite.py) - Using Granite model
-- [agents.ipynb](/python/examples/notebooks/agents.ipynb) - Interactive notebook examples
\ No newline at end of file
+- [agents.ipynb](/python/examples/notebooks/agents.ipynb) - Interactive notebook examples
diff --git a/python/docs/backend.md b/python/docs/backend.md
index a92c191b8..9d83a83c1 100644
--- a/python/docs/backend.md
+++ b/python/docs/backend.md
@@ -47,8 +47,8 @@ The following table depicts supported providers. Each provider requires specific
| `OpenAI` | ✅ | | `openai` | OPENAI_CHAT_MODEL&#xA;OPENAI_API_BASE&#xA;OPENAI_API_KEY&#xA;OPENAI_ORGANIZATION |
| `Watsonx` | ✅ | | `@ibm-cloud/watsonx-ai` | WATSONX_CHAT_MODEL&#xA;WATSONX_EMBEDDING_MODEL&#xA;WATSONX_API_KEY&#xA;WATSONX_PROJECT_ID&#xA;WATSONX_SPACE_ID&#xA;WATSONX_VERSION&#xA;WATSONX_REGION |
| `Groq` | ✅ | | | GROQ_CHAT_MODEL&#xA;GROQ_API_KEY |
-| `Amazon Bedrock` | | | Coming soon! | AWS_CHAT_MODEL&#xA;AWS_EMBEDDING_MODEL&#xA;AWS_ACCESS_KEY_ID&#xA;AWS_SECRET_ACCESS_KEY&#xA;AWS_REGION&#xA;AWS_SESSION_TOKEN |
-| `Google Vertex` | | | Coming soon! | GOOGLE_VERTEX_CHAT_MODEL&#xA;GOOGLE_VERTEX_EMBEDDING_MODEL&#xA;GOOGLE_VERTEX_PROJECT&#xA;GOOGLE_VERTEX_ENDPOINT&#xA;GOOGLE_VERTEX_LOCATION |
+| `Amazon Bedrock` | ✅ | | `boto3` | AWS_CHAT_MODEL&#xA;AWS_ACCESS_KEY_ID&#xA;AWS_SECRET_ACCESS_KEY&#xA;AWS_REGION_NAME |
+| `Google Vertex` | ✅ | | | VERTEXAI_CHAT_MODEL&#xA;VERTEXAI_PROJECT&#xA;GOOGLE_APPLICATION_CREDENTIALS&#xA;GOOGLE_APPLICATION_CREDENTIALS_JSON&#xA;GOOGLE_CREDENTIALS |
| `Azure OpenAI` | | | Coming soon! | AZURE_OPENAI_CHAT_MODEL&#xA;AZURE_OPENAI_EMBEDDING_MODEL&#xA;AZURE_OPENAI_API_KEY&#xA;AZURE_OPENAI_API_ENDPOINT&#xA;AZURE_OPENAI_API_RESOURCE&#xA;AZURE_OPENAI_API_VERSION |
| `Anthropic` | | | Coming soon! | ANTHROPIC_CHAT_MODEL&#xA;ANTHROPIC_EMBEDDING_MODEL&#xA;ANTHROPIC_API_KEY&#xA;ANTHROPIC_API_BASE_URL&#xA;ANTHROPIC_API_HEADERS |
| `xAI` | ✅ | | | XAI_CHAT_MODEL&#xA;XAI_API_KEY |
@@ -68,16 +68,19 @@ The `Backend` class serves as a central entry point to access models from your c
```py
import asyncio
+import json
import sys
import traceback
from pydantic import BaseModel, Field
+from beeai_framework import ToolMessage
from beeai_framework.adapters.watsonx.backend.chat import WatsonxChatModel
from beeai_framework.backend.chat import ChatModel
-from beeai_framework.backend.message import UserMessage
+from beeai_framework.backend.message import MessageToolResultContent, UserMessage
from beeai_framework.cancellation import AbortSignal
from beeai_framework.errors import AbortError, FrameworkError
+from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
# Setting can be passed here during initiation or pre-configured via environment variables
llm = WatsonxChatModel(
@@ -139,6 +142,26 @@ async def watson_structure() -> None:
print(response.object)
+async def watson_tool_calling() -> None:
+ watsonx_llm = ChatModel.from_name(
+ "watsonx:ibm/granite-3-8b-instruct",
+ )
+ user_message = UserMessage("What is the current weather in Boston?")
+ weather_tool = OpenMeteoTool()
+ response = await watsonx_llm.create(messages=[user_message], tools=[weather_tool])
+ tool_call_msg = response.get_tool_calls()[0]
+ print(tool_call_msg.model_dump())
+ tool_response = await weather_tool.run(json.loads(tool_call_msg.args))
+ tool_response_msg = ToolMessage(
+ MessageToolResultContent(
+ result=tool_response.get_text_content(), tool_name=tool_call_msg.tool_name, tool_call_id=tool_call_msg.id
+ )
+ )
+ print(tool_response_msg.to_plain())
+ final_response = await watsonx_llm.create(messages=[user_message, tool_response_msg], tools=[])
+ print(final_response.get_text_content())
+
+
async def main() -> None:
print("*" * 10, "watsonx_from_name")
await watsonx_from_name()
@@ -150,6 +173,8 @@ async def main() -> None:
await watsonx_stream_abort()
print("*" * 10, "watson_structure")
await watson_structure()
+ print("*" * 10, "watson_tool_calling")
+ await watson_tool_calling()
if __name__ == "__main__":
@@ -232,8 +257,56 @@ response = await llm.create(messages=[user_message], stream=True)
Generate structured data according to a schema:
-```txt
-Coming soon
+
+
+```py
+import asyncio
+import json
+import sys
+import traceback
+
+from pydantic import BaseModel, Field
+
+from beeai_framework import UserMessage
+from beeai_framework.backend.chat import ChatModel
+from beeai_framework.errors import FrameworkError
+
+
+async def main() -> None:
+ model = ChatModel.from_name("ollama:llama3.1")
+
+ class ProfileSchema(BaseModel):
+ first_name: str = Field(..., min_length=1)
+ last_name: str = Field(..., min_length=1)
+ address: str
+ age: int = Field(..., ge=1)
+ hobby: str
+
+ class ErrorSchema(BaseModel):
+ error: str
+
+ class SchemUnion(ProfileSchema, ErrorSchema):
+ pass
+
+ response = await model.create_structure(
+ schema=SchemUnion,
+ messages=[UserMessage("Generate a profile of a citizen of Europe.")],
+ )
+
+ print(
+ json.dumps(
+ response.object.model_dump() if isinstance(response.object, BaseModel) else response.object, indent=4
+ )
+ )
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
+
```
_Source: /examples/backend/structured.py_
@@ -242,11 +315,79 @@ _Source: /examples/backend/structured.py_
Integrate external tools with your AI model:
-```txt
-Coming soon
+
+
+
+```py
+import asyncio
+import json
+import re
+import sys
+import traceback
+
+from beeai_framework import Message, SystemMessage, Tool, ToolMessage, UserMessage
+from beeai_framework.backend.chat import ChatModel, ChatModelParameters
+from beeai_framework.backend.message import MessageToolResultContent
+from beeai_framework.errors import FrameworkError
+from beeai_framework.tools import ToolOutput
+from beeai_framework.tools.search import DuckDuckGoSearchTool
+from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
+
+
+async def main() -> None:
+ model = ChatModel.from_name("ollama:llama3.1", ChatModelParameters(temperature=0))
+ tools: list[Tool] = [DuckDuckGoSearchTool(), OpenMeteoTool()]
+ messages: list[Message] = [
+ SystemMessage("You are a helpful assistant. Use tools to provide a correct answer."),
+ UserMessage("What's the fastest marathon time?"),
+ ]
+
+ while True:
+ response = await model.create(
+ messages=messages,
+ tools=tools,
+ )
+
+ tool_calls = response.get_tool_calls()
+
+ tool_results: list[ToolMessage] = []
+
+ for tool_call in tool_calls:
+ print(f"-> running '{tool_call.tool_name}' tool with {tool_call.args}")
+ tool: Tool | None = next((tool for tool in tools if tool.name == tool_call.tool_name), None)
+ assert tool is not None
+ res: ToolOutput = await tool.run(json.loads(tool_call.args))
+ result = res.get_text_content()
+ print(f"<- got response from '{tool_call.tool_name}'", re.sub(r"\s+", " ", result)[:90] + " (truncated)")
+ tool_results.append(
+ ToolMessage(
+ MessageToolResultContent(
+ result=result,
+ tool_name=tool_call.tool_name,
+ tool_call_id=tool_call.id,
+ )
+ )
+ )
+
+ messages.extend(tool_results)
+
+ answer = response.get_text_content()
+
+ if answer:
+ print(f"Agent: {answer}")
+ break
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
+
```
-_Source: /examples/backend/toolCalling.py_
+_Source: /examples/backend/tool_calling.py_
---
diff --git a/python/docs/emitter.md b/python/docs/emitter.md
index 43aad4e4b..dfb7359cf 100644
--- a/python/docs/emitter.md
+++ b/python/docs/emitter.md
@@ -206,13 +206,13 @@ import asyncio
import sys
import traceback
-from beeai_framework import BeeAgent, UnconstrainedMemory
+from beeai_framework import ReActAgent, UnconstrainedMemory
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
from beeai_framework.errors import FrameworkError
async def main() -> None:
- agent = BeeAgent(
+ agent = ReActAgent(
llm=OllamaChatModel("llama3.1"),
memory=UnconstrainedMemory(),
tools=[],
diff --git a/python/docs/events.md b/python/docs/events.md
index 27956c47c..8930d9cb9 100644
--- a/python/docs/events.md
+++ b/python/docs/events.md
@@ -33,15 +33,15 @@ These events can be observed calling `agent.run`
```python
{
"data": Message,
- "iterations": list[BeeAgentRunIteration],
+ "iterations": list[ReActAgentRunIteration],
"memory": BaseMemory,
- "meta": BeeMeta,
+ "meta": ReActAgentIterationMeta,
}
- "update" and "partialUpdate":
```python
{
- "data": BeeIterationResult | dict[str, Any],
+ "data": ReActAgentIterationResult | dict[str, Any],
"update": {
"key": str,
"value": Any,
@@ -58,8 +58,8 @@ These events can be observed calling `agent.run`
"data": {
"tool": Tool,
"input": Any,
- "options": BeeRunOptions,
- "iteration": BeeIterationResult,
+ "options": ReActAgentRunOptions,
+ "iteration": ReActAgentIterationResult,
},
"meta": BeeMeta,
}
@@ -70,8 +70,8 @@ These events can be observed calling `agent.run`
"data": {
"tool": Tool,
"input": Any,
- "options": BeeRunOptions,
- "iteration": BeeIterationResult,
+ "options": ReActAgentRunOptions,
+ "iteration": ReActAgentIterationResult,
"result": ToolOutput,
},
"meta": BeeMeta,
@@ -83,8 +83,8 @@ These events can be observed calling `agent.run`
"data": {
"tool": Tool,
"input": Any,
- "options": BeeRunOptions,
- "iteration": BeeIterationResult,
+ "options": ReActAgentRunOptions,
+ "iteration": ReActAgentIterationResult,
"error": FrameworkError,
},
"meta": BeeMeta,
diff --git a/python/docs/logger.md b/python/docs/logger.md
index 4e11d4a00..456b5dde9 100644
--- a/python/docs/logger.md
+++ b/python/docs/logger.md
@@ -26,7 +26,7 @@ In the BeeAI framework, the `Logger` class is an abstraction built on top of Pyt
> [!NOTE]
>
-> Location within the framework: [beeai_framework/utils](/python/beeai_framework/utils).
+> Location within the framework: [beeai_framework/logger](/python/beeai_framework/logger).
---
@@ -45,8 +45,24 @@ In the BeeAI framework, the `Logger` class is an abstraction built on top of Pyt
To use the logger in your application:
+
+
```py
-# Coming soon
+import logging
+
+from beeai_framework.logger import Logger
+
+# Configure a logger with an explicit log level
+logger = Logger("app", level=logging.TRACE)
+
+# Log at different levels
+logger.trace("Trace!")
+logger.debug("Debug!")
+logger.info("Info!")
+logger.warning("Warning!")
+logger.error("Error!")
+logger.fatal("Fatal!")
+
```
_Source: examples/logger/base.py_
@@ -65,7 +81,7 @@ The logger adds a TRACE level below DEBUG for extremely detailed logging:
```py
# Configure a logger with a specific level
-logger = BeeLogger("app", level="TRACE") # Or use logging constants like logging.DEBUG
+logger = Logger("app", level="TRACE") # Or use logging constants like logging.DEBUG
# Log with the custom TRACE level
logger.trace("This is a very low-level trace message")
@@ -101,8 +117,43 @@ The logger integrates with BeeAI framework's error handling system through the `
The Logger seamlessly integrates with agents in the framework. Below is an example that demonstrates how logging can be used in conjunction with agents and event emitters.
+
+
```py
-# Coming soon
+import asyncio
+import logging
+import sys
+import traceback
+
+from beeai_framework.agents.react.agent import ReActAgent
+from beeai_framework.agents.react.types import ReActAgentRunOutput
+from beeai_framework.backend.chat import ChatModel
+from beeai_framework.errors import FrameworkError
+from beeai_framework.logger import Logger
+from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
+
+
+async def main() -> None:
+ logger = Logger("app", level=logging.TRACE)
+
+ agent = ReActAgent(llm=ChatModel.from_name("ollama:granite3.1-dense:8b"), tools=[], memory=UnconstrainedMemory())
+
+ output: ReActAgentRunOutput = await agent.run("Hello!").observe(
+ lambda emitter: emitter.on(
+ "update", lambda data, event: logger.info(f"Event {event.path} triggered by {type(event.creator).__name__}")
+ )
+ )
+
+ logger.info(f"Agent 🤖 : {output.result.text}")
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
+
```
_Source: /examples/logger/agent.py_
@@ -111,4 +162,5 @@ _Source: /examples/logger/agent.py_
## Examples
-* Coming soon! 🎉
\ No newline at end of file
+- [base.py](/python/examples/logger/base.py) - Simple example showing log levels
+- [agent.py](/python/examples/logger/agent.py) - Simple example showing agent integration
diff --git a/python/docs/memory.md b/python/docs/memory.md
index 95dc0687e..92ea6ef8c 100644
--- a/python/docs/memory.md
+++ b/python/docs/memory.md
@@ -112,7 +112,7 @@ _Source: [/python/examples/memory/base.py](/python/examples/memory/base.py)_
### Usage with LLMs
-
+
```py
import asyncio
@@ -153,7 +153,7 @@ if __name__ == "__main__":
```
-_Source: [/python/examples/memory/llmMemory.py](/python/examples/memory/llmMemory.py)_
+_Source: [/python/examples/memory/llm_memory.py](/python/examples/memory/llm_memory.py)_
> [!TIP]
>
@@ -161,14 +161,14 @@ _Source: [/python/examples/memory/llmMemory.py](/python/examples/memory/llmMemor
### Usage with agents
-
+
```py
import asyncio
import sys
import traceback
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import AssistantMessage, UserMessage
@@ -179,11 +179,11 @@ from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
memory = UnconstrainedMemory()
-def create_agent() -> BeeAgent:
+def create_agent() -> ReActAgent:
llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
# Initialize the agent
- agent = BeeAgent(llm=llm, memory=memory, tools=[])
+ agent = ReActAgent(llm=llm, memory=memory, tools=[])
return agent
@@ -236,7 +236,7 @@ if __name__ == "__main__":
```
-_Source: [/python/examples/memory/agentMemory.py](/python/examples/memory/agentMemory.py)_
+_Source: [/python/examples/memory/agent_memory.py](/python/examples/memory/agent_memory.py)_
> [!TIP]
>
@@ -244,7 +244,7 @@ _Source: [/python/examples/memory/agentMemory.py](/python/examples/memory/agentM
> [!NOTE]
>
-> Bee Agent internally uses `TokenMemory` to store intermediate steps for a given run.
+> ReAct Agent internally uses `TokenMemory` to store intermediate steps for a given run.
---
@@ -256,7 +256,7 @@ The framework provides multiple out-of-the-box memory implementations for differ
Unlimited in size, stores all messages without constraints.
-
+
```py
import asyncio
@@ -293,14 +293,14 @@ if __name__ == "__main__":
```
-_Source: [/python/examples/memory/unconstrainedMemory.py](/python/examples/memory/unconstrainedMemory.py)_
+_Source: [/python/examples/memory/unconstrained_memory.py](/python/examples/memory/unconstrained_memory.py)_
### SlidingMemory
Keeps last `k` entries in the memory. The oldest ones are deleted (unless specified otherwise).
-
+
```py
import asyncio
@@ -346,7 +346,7 @@ if __name__ == "__main__":
```
-_Source: [/python/examples/memory/slidingMemory.py](/python/examples/memory/slidingMemory.py)_
+_Source: [/python/examples/memory/sliding_memory.py](/python/examples/memory/sliding_memory.py)_
### TokenMemory
@@ -354,7 +354,7 @@ _Source: [/python/examples/memory/slidingMemory.py](/python/examples/memory/slid
Ensures that the token sum of all messages is below the given threshold.
If overflow occurs, the oldest message will be removed.
-
+
```py
import asyncio
@@ -421,13 +421,13 @@ if __name__ == "__main__":
```
-_Source: [/python/examples/memory/tokenMemory.py](/python/examples/memory/tokenMemory.py)_
+_Source: [/python/examples/memory/token_memory.py](/python/examples/memory/token_memory.py)_
### SummarizeMemory
Only a single summarization of the conversation is preserved. Summarization is updated with every new message.
-
+
```py
import asyncio
@@ -477,7 +477,7 @@ if __name__ == "__main__":
```
-_Source: [python/examples/memory/summarizeMemory.py](/python/examples/memory/summarizeMemory.py)_
+_Source: [/python/examples/memory/summarize_memory.py](/python/examples/memory/summarize_memory.py)_
---
@@ -526,8 +526,8 @@ _Source: [/python/examples/memory/custom.py](/python/examples/memory/custom.py)_
## Examples
-- [unconstrainedMemory.py](/examples/memory/unconstrainedMemory.py) - Basic memory usage
-- [slidingMemory.py](/examples/memory/slidingMemory.py) - Sliding window memory example
-- [tokenMemory.py](/examples/memory/tokenMemory.py) - Token-based memory management
-- [summarizeMemory.py](/examples/memory/summarizeMemory.py) - Summarization memory example
-- [agentMemory.py](/examples/memory/agentMemory.py) - Using memory with agents
\ No newline at end of file
+- [unconstrained_memory.py](/python/examples/memory/unconstrained_memory.py) - Basic memory usage
+- [sliding_memory.py](/python/examples/memory/sliding_memory.py) - Sliding window memory example
+- [token_memory.py](/python/examples/memory/token_memory.py) - Token-based memory management
+- [summarize_memory.py](/python/examples/memory/summarize_memory.py) - Summarization memory example
+- [agent_memory.py](/python/examples/memory/agent_memory.py) - Using memory with agents
diff --git a/python/docs/templates.md b/python/docs/templates.md
index 037093826..1bf936847 100644
--- a/python/docs/templates.md
+++ b/python/docs/templates.md
@@ -326,7 +326,7 @@ The framework's agents use specialized templates to structure their behavior. Yo
import sys
import traceback
-from beeai_framework.agents.runners.default.prompts import (
+from beeai_framework.agents.react.runners.default.prompts import (
SystemPromptTemplate,
ToolDefinition,
)
diff --git a/python/docs/tools.md b/python/docs/tools.md
index f27978586..6159791e7 100644
--- a/python/docs/tools.md
+++ b/python/docs/tools.md
@@ -13,6 +13,7 @@
- [DuckDuckGo Search Tool](#duckduckgo-search-tool)
- [OpenMeteo Weather Tool](#openmeteo-weather-tool)
- [Wikipedia Tool](#wikipedia-tool)
+ - [MCP Tool](#mcp-tool)
- [Creating Custom Tools](#creating-custom-tools)
- [Basic Custom Tool](#basic-custom-tool)
- [Advanced Custom Tool](#advanced-custom-tool)
@@ -46,6 +47,10 @@ Ready-to-use tools that provide immediate functionality for common agent tasks:
For detailed usage examples of each built-in tool with complete implementation code, see the [tools examples directory](/python/examples/tools).
+> [!TIP]
+>
+> Would you like to use a tool from LangChain? See the [LangChain tool example](/python/examples/tools/langchain.py).
+
## Usage
### Basic usage
@@ -123,13 +128,14 @@ _Source: [/python/examples/tools/advanced.py](/python/examples/tools/advanced.py
The true power of tools emerges when integrating them with agents. Tools extend the agent's capabilities, allowing it to perform actions beyond text generation:
+
```py
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
-agent = BeeAgent(llm=OllamaChatModel("llama3.1"), tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
+agent = ReActAgent(llm=OllamaChatModel("llama3.1"), tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
```
@@ -140,6 +146,7 @@ _Source: [/python/examples/tools/agent.py](/python/examples/tools/agent.py)_
For simpler tools, you can use the `tool` decorator to quickly create a tool from a function:
+
```py
import asyncio
import json
@@ -149,15 +156,15 @@ from urllib.parse import quote
import requests
-from beeai_framework import BeeAgent, tool
+from beeai_framework import ReActAgent, tool
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
+from beeai_framework.logger import Logger
from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
from beeai_framework.tools.tool import StringToolOutput
-from beeai_framework.utils import BeeLogger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
# defining a tool using the `tool` decorator
@@ -192,7 +199,7 @@ async def main() -> None:
chat_model = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=chat_model, tools=[basic_calculator], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=chat_model, tools=[basic_calculator], memory=UnconstrainedMemory())
result = await agent.run("What is the square root of 36?", execution=AgentExecutionConfig(total_max_retries=10))
@@ -217,12 +224,13 @@ _Source: [/python/examples/tools/decorator.py](/python/examples/tools/decorator.
Use the DuckDuckGo tool to search the web and retrieve current information:
+
```py
import asyncio
import sys
import traceback
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
@@ -231,7 +239,7 @@ from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
async def main() -> None:
chat_model = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=chat_model, tools=[DuckDuckGoSearchTool()], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=chat_model, tools=[DuckDuckGoSearchTool()], memory=UnconstrainedMemory())
result = await agent.run("How tall is the mount Everest?")
@@ -260,7 +268,7 @@ import asyncio
import sys
import traceback
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
@@ -269,7 +277,7 @@ from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
async def main() -> None:
llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=llm, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=llm, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
result = await agent.run("What's the current weather in London?")
@@ -324,6 +332,54 @@ if __name__ == "__main__":
_Source: [/python/examples/tools/wikipedia.py](/python/examples/tools/wikipedia.py)_
+### MCP Tool
+
+The MCPTool allows you to instantiate tools given a connection to an MCP server with the tools capability.
+
+
+
+```py
+import asyncio
+import os
+
+from dotenv import load_dotenv
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.stdio import stdio_client
+
+from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
+from beeai_framework.agents.react import ReActAgent
+from beeai_framework.memory import UnconstrainedMemory
+from beeai_framework.tools.mcp_tools import MCPTool
+
+load_dotenv()
+
+# Create server parameters for stdio connection
+server_params = StdioServerParameters(
+ command="npx",
+ args=["-y", "@modelcontextprotocol/server-slack"],
+ env={
+ "SLACK_BOT_TOKEN": os.environ["SLACK_BOT_TOKEN"],
+ "SLACK_TEAM_ID": os.environ["SLACK_TEAM_ID"],
+ "PATH": os.getenv("PATH", default=""),
+ },
+)
+
+
+async def slack_tool() -> MCPTool:
+ async with stdio_client(server_params) as (read, write), ClientSession(read, write) as session:
+ await session.initialize()
+ # Discover Slack tools via MCP client
+ slacktools = await MCPTool.from_client(session, server_params)
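+        # Keep only the 'slack_post_message' tool from the discovered set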
+ filter_tool = filter(lambda tool: tool.name == "slack_post_message", slacktools)
+ slack = list(filter_tool)
+ return slack[0]
+
+
+agent = ReActAgent(llm=OllamaChatModel("llama3.1"), tools=[asyncio.run(slack_tool())], memory=UnconstrainedMemory())
+```
+
+_Source: [/python/examples/tools/mcp_tool_creation.py](/python/examples/tools/mcp_tool_creation.py)_
+
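+A hypothetical way to exercise the agent constructed above (the prompt text is illustrative, and the MCP session handling follows the example as written):
+
+```py
+import asyncio
+
+
+async def run_slack_agent() -> None:
+    # 'agent' is the ReActAgent constructed in the example above
+    result = await agent.run("Post a short status update to the team channel.")
+    print("Agent 🤖 : ", result.result.text)
+
+
+asyncio.run(run_slack_agent())
+```
+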
## Creating custom tools
Custom tools allow you to build your own specialized tools to extend agent capabilities.
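
For orientation, here is a minimal sketch of the updated custom tool shape: `Tool` is now generic over the input schema, the run options, and the output type, and `_run` receives both `options` and a `RunContext`. The `EchoTool` below is hypothetical; its imports and signatures mirror the updated examples that follow, and the emitter namespace is illustrative.

```py
from pydantic import BaseModel, Field

from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.tools.tool import StringToolOutput, Tool, ToolRunOptions


class EchoToolInput(BaseModel):
    text: str = Field(description="Text to echo back.")


class EchoTool(Tool[EchoToolInput, ToolRunOptions, StringToolOutput]):
    name = "Echo"
    description = "Echoes the provided text back to the caller."
    input_schema = EchoToolInput

    def _create_emitter(self) -> Emitter:
        # The namespace is illustrative; each tool picks its own path
        return Emitter.root().child(
            namespace=["tool", "example", "echo"],
            creator=self,
        )

    async def _run(
        self, input: EchoToolInput, options: ToolRunOptions | None, context: RunContext
    ) -> StringToolOutput:
        # Echo the input text back as a string output
        return StringToolOutput(result=input.text)
```
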
@@ -347,16 +403,17 @@ from typing import Any
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.errors import FrameworkError
-from beeai_framework.tools.tool import StringToolOutput, Tool
+from beeai_framework.tools.tool import StringToolOutput, Tool, ToolRunOptions
class RiddleToolInput(BaseModel):
riddle_number: int = Field(description="Index of riddle to retrieve.")
-class RiddleTool(Tool[RiddleToolInput]):
+class RiddleTool(Tool[RiddleToolInput, ToolRunOptions, StringToolOutput]):
name = "Riddle"
description = "It selects a riddle to test your knowledge."
input_schema = RiddleToolInput
@@ -380,7 +437,9 @@ class RiddleTool(Tool[RiddleToolInput]):
creator=self,
)
- async def _run(self, input: RiddleToolInput, _: Any | None = None) -> StringToolOutput:
+ async def _run(
+ self, input: RiddleToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> StringToolOutput:
index = input.riddle_number % (len(self.data))
riddle = self.data[index]
return StringToolOutput(result=riddle)
@@ -424,10 +483,11 @@ from typing import Any
import httpx
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.errors import FrameworkError
from beeai_framework.tools import ToolInputValidationError
-from beeai_framework.tools.tool import Tool
+from beeai_framework.tools.tool import JSONToolOutput, Tool, ToolRunOptions
class OpenLibraryToolInput(BaseModel):
@@ -442,7 +502,7 @@ class OpenLibraryToolResult(BaseModel):
bib_key: str
-class OpenLibraryTool(Tool[OpenLibraryToolInput]):
+class OpenLibraryTool(Tool[OpenLibraryToolInput, ToolRunOptions, JSONToolOutput]):
name = "OpenLibrary"
description = """Provides access to a library of books with information about book titles,
authors, contributors, publication dates, publisher and isbn."""
@@ -457,7 +517,9 @@ class OpenLibraryTool(Tool[OpenLibraryToolInput]):
creator=self,
)
- async def _run(self, tool_input: OpenLibraryToolInput, _: Any | None = None) -> OpenLibraryToolResult:
+ async def _run(
+ self, tool_input: OpenLibraryToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> JSONToolOutput:
key = ""
value = ""
input_vars = vars(tool_input)
@@ -479,10 +541,12 @@ class OpenLibraryTool(Tool[OpenLibraryToolInput]):
json_output = response.json()[f"{key}:{value}"]
- return OpenLibraryToolResult(
- preview_url=json_output.get("preview_url", ""),
- info_url=json_output.get("info_url", ""),
- bib_key=json_output.get("bib_key", ""),
+ return JSONToolOutput(
+ result={
+ "preview_url": json_output.get("preview_url", ""),
+ "info_url": json_output.get("info_url", ""),
+ "bib_key": json_output.get("bib_key", ""),
+ }
)
diff --git a/python/examples/README.md b/python/examples/README.md
index a000f7dbe..2a73c3530 100644
--- a/python/examples/README.md
+++ b/python/examples/README.md
@@ -15,13 +15,13 @@ This repository contains examples demonstrating the usage of the BeeAI Framework
## Agents
-- [`bee.py`](/python/examples/agents/bee.py): Basic Bee Agent implementation
+- [`react.py`](/python/examples/agents/react.py): Basic ReAct Agent implementation
- [`simple.py`](/python/examples/agents/simple.py): Simple agent implementation
## Workflows
- [`simple.py`](/python/examples/workflows/simple.py): Introduction to workflows
-- [`multiAgents.py`](/python/examples/workflows/multi_agents.py): Multi-step sequential agentic workflow.
+- [`multi_agents.py`](/python/examples/workflows/multi_agents.py): Multi-step sequential agentic workflow.
- [`web_agent.py`](/python/examples/workflows/web_agent.py): Web Agent
## Helpers
@@ -40,11 +40,11 @@ This repository contains examples demonstrating the usage of the BeeAI Framework
## Memory
-- [`agentMemory.py`](/python/examples/memory/agentMemory.py): Memory management for agents
-- [`slidingMemory.py`](/python/examples/memory/slidingMemory.py): Sliding window memory
-- [`summarizeMemory.py`](/python/examples/memory/summarizeMemory.py): Memory with summarization
-- [`tokenMemory.py`](/python/examples/memory/tokenMemory.py): Token-based memory
-- [`unconstrainedMemory.py`](/python/examples/memory/unconstrainedMemory.py): Unconstrained memory example
+- [`unconstrained_memory.py`](/python/examples/memory/unconstrained_memory.py): Basic memory usage
+- [`sliding_memory.py`](/python/examples/memory/sliding_memory.py): Sliding window memory example
+- [`token_memory.py`](/python/examples/memory/token_memory.py): Token-based memory management
+- [`summarize_memory.py`](/python/examples/memory/summarize_memory.py): Summarization memory example
+- [`agent_memory.py`](/python/examples/memory/agent_memory.py): Using memory with agents
## Templates
@@ -54,6 +54,7 @@ This repository contains examples demonstrating the usage of the BeeAI Framework
- [`arrays.py`](/python/examples/templates/arrays.py): Working with arrays
- [`forking.py`](/python/examples/templates/forking.py): Template forking
- [`system_prompt.py`](/python/examples/templates/system_prompt.py): Using templates with agents
+
## Tools
- [`decorator.py`](/python/examples/tools/decorator.py): Tool creation using decorator
diff --git a/python/examples/agents/custom_agent.py b/python/examples/agents/custom_agent.py
index 9c69c4a9d..9f84552c7 100644
--- a/python/examples/agents/custom_agent.py
+++ b/python/examples/agents/custom_agent.py
@@ -14,7 +14,8 @@
UserMessage,
)
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
-from beeai_framework.agents.types import AgentMeta, BeeRunInput, BeeRunOptions
+from beeai_framework.agents.react.types import ReActAgentRunInput, ReActAgentRunOptions
+from beeai_framework.agents.types import AgentMeta
from beeai_framework.backend.chat import ChatModel
from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
@@ -32,7 +33,7 @@ class RunOutput(BaseModel):
state: State
-class RunOptions(BeeRunOptions):
+class RunOptions(ReActAgentRunOptions):
max_retries: int | None = None
@@ -49,10 +50,13 @@ def __init__(self, llm: ChatModel, memory: BaseMemory) -> None:
)
async def _run(
- self, run_input: ModelLike[BeeRunInput], options: ModelLike[BeeRunOptions] | None, context: RunContext
+ self,
+ run_input: ModelLike[ReActAgentRunInput],
+ options: ModelLike[ReActAgentRunOptions] | None,
+ context: RunContext,
) -> RunOutput:
- run_input = to_model(BeeRunInput, run_input)
- options = to_model_optional(BeeRunOptions, options)
+ run_input = to_model(ReActAgentRunInput, run_input)
+ options = to_model_optional(ReActAgentRunOptions, options)
class CustomSchema(BaseModel):
thought: str = Field(description="Describe your thought process before coming with a final answer")
diff --git a/python/examples/agents/granite.py b/python/examples/agents/granite.py
index c5c83c36e..1b2fecb27 100644
--- a/python/examples/agents/granite.py
+++ b/python/examples/agents/granite.py
@@ -2,8 +2,9 @@
import sys
import traceback
-from beeai_framework.agents.bee.agent import BeeAgent
-from beeai_framework.agents.types import AgentExecutionConfig, BeeRunOutput
+from beeai_framework.agents.react.agent import ReActAgent
+from beeai_framework.agents.react.types import ReActAgentRunOutput
+from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.emitter import Emitter, EventMeta
from beeai_framework.errors import FrameworkError
@@ -16,7 +17,7 @@
async def main() -> None:
chat_model: ChatModel = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(
+ agent = ReActAgent(
llm=chat_model, tools=[OpenMeteoTool(), DuckDuckGoSearchTool(max_results=3)], memory=UnconstrainedMemory()
)
@@ -30,7 +31,7 @@ def update_callback(data: dict, event: EventMeta) -> None:
def on_update(emitter: Emitter) -> None:
emitter.on("update", update_callback)
- output: BeeRunOutput = await agent.run(
+ output: ReActAgentRunOutput = await agent.run(
prompt=prompt, execution=AgentExecutionConfig(total_max_retries=2, max_retries_per_step=3, max_iterations=8)
).observe(on_update)
diff --git a/python/examples/agents/bee.py b/python/examples/agents/react.py
similarity index 90%
rename from python/examples/agents/bee.py
rename to python/examples/agents/react.py
index a3a154b1b..e53ddafae 100644
--- a/python/examples/agents/bee.py
+++ b/python/examples/agents/react.py
@@ -8,28 +8,28 @@
from dotenv import load_dotenv
from beeai_framework import Tool
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel, ChatModelParameters
from beeai_framework.emitter.emitter import Emitter, EventMeta
from beeai_framework.emitter.types import EmitterOptions
from beeai_framework.errors import FrameworkError
+from beeai_framework.logger import Logger
from beeai_framework.memory.token_memory import TokenMemory
from beeai_framework.tools.search import DuckDuckGoSearchTool, WikipediaTool
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
-from beeai_framework.utils.custom_logger import BeeLogger
from examples.helpers.io import ConsoleReader
# Load environment variables
load_dotenv()
# Configure logging - using DEBUG instead of trace
-logger = BeeLogger("app", level=logging.DEBUG)
+logger = Logger("app", level=logging.DEBUG)
reader = ConsoleReader()
-def create_agent() -> BeeAgent:
+def create_agent() -> ReActAgent:
"""Create and configure the agent with tools and LLM"""
# Other models to try:
@@ -56,7 +56,7 @@ def create_agent() -> BeeAgent:
pass
# Create agent with memory and tools
- agent = BeeAgent(llm=llm, tools=tools, memory=TokenMemory(llm))
+ agent = ReActAgent(llm=llm, tools=tools, memory=TokenMemory(llm))
return agent
@@ -94,7 +94,7 @@ async def main() -> None:
f"The code interpreter tool is enabled. Please ensure that it is running on {code_interpreter_url}",
)
- reader.write("🛠️ System: ", "Agent initialized with LangChain Wikipedia tool.")
+ reader.write("🛠️ System: ", "Agent initialized with Wikipedia tool.")
# Main interaction loop with user input
for prompt in reader:
diff --git a/python/examples/agents/bee_advanced.py b/python/examples/agents/react_advanced.py
similarity index 90%
rename from python/examples/agents/bee_advanced.py
rename to python/examples/agents/react_advanced.py
index 589509143..a733976ba 100644
--- a/python/examples/agents/bee_advanced.py
+++ b/python/examples/agents/react_advanced.py
@@ -8,8 +8,9 @@
from beeai_framework import Tool, UnconstrainedMemory
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
-from beeai_framework.agents.bee.agent import BeeAgent
-from beeai_framework.agents.types import AgentExecutionConfig, BeeAgentTemplates, BeeTemplateFactory
+from beeai_framework.agents.react.agent import ReActAgent
+from beeai_framework.agents.react.types import ReActAgentTemplateFactory, ReActAgentTemplates
+from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.cancellation import AbortSignal
from beeai_framework.emitter.emitter import Emitter, EventMeta
from beeai_framework.emitter.types import EmitterOptions
@@ -64,12 +65,12 @@ class NotFoundSchema(BaseModel):
return new_config
-def create_agent() -> BeeAgent:
+def create_agent() -> ReActAgent:
"""Create and configure the agent with tools and LLM"""
llm = OllamaChatModel("llama3.1")
- templates: dict[str, BeeAgentTemplates | BeeTemplateFactory] = {
+ templates: dict[str, ReActAgentTemplates | ReActAgentTemplateFactory] = {
"user": lambda template: template.fork(customizer=user_customizer),
"system": lambda template: template.fork(customizer=system_customizer),
"tool_no_result_error": lambda template: template.fork(customizer=no_result_customizer),
@@ -82,7 +83,7 @@ def create_agent() -> BeeAgent:
DuckDuckGoSearchTool(),
]
- agent = BeeAgent(llm=llm, templates=templates, tools=tools, memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=llm, templates=templates, tools=tools, memory=UnconstrainedMemory())
return agent
diff --git a/python/examples/agents/simple.py b/python/examples/agents/simple.py
index 8529c0d78..be70a80eb 100644
--- a/python/examples/agents/simple.py
+++ b/python/examples/agents/simple.py
@@ -2,8 +2,8 @@
import sys
import traceback
-from beeai_framework.agents.bee.agent import BeeAgent
-from beeai_framework.agents.types import BeeRunOutput
+from beeai_framework.agents.react.agent import ReActAgent
+from beeai_framework.agents.react.types import ReActAgentRunOutput
from beeai_framework.backend.chat import ChatModel
from beeai_framework.emitter.emitter import Emitter, EventMeta
from beeai_framework.errors import FrameworkError
@@ -14,7 +14,7 @@
async def main() -> None:
llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=llm, tools=[DuckDuckGoSearchTool(), OpenMeteoTool()], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=llm, tools=[DuckDuckGoSearchTool(), OpenMeteoTool()], memory=UnconstrainedMemory())
def update_callback(data: dict, event: EventMeta) -> None:
print(f"Agent({data['update']['key']}) 🤖 : ", data["update"]["parsedValue"])
@@ -22,7 +22,7 @@ def update_callback(data: dict, event: EventMeta) -> None:
def on_update(emitter: Emitter) -> None:
emitter.on("update", update_callback)
- output: BeeRunOutput = await agent.run("What's the current weather in Las Vegas?").observe(on_update)
+ output: ReActAgentRunOutput = await agent.run("What's the current weather in Las Vegas?").observe(on_update)
print("Agent 🤖 : ", output.result.text)
diff --git a/python/examples/backend/chat.py b/python/examples/backend/chat.py
new file mode 100644
index 000000000..1caa77ff0
--- /dev/null
+++ b/python/examples/backend/chat.py
@@ -0,0 +1,27 @@
+import asyncio
+import sys
+import traceback
+
+from beeai_framework import UserMessage
+from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
+from beeai_framework.errors import FrameworkError
+from examples.helpers.io import ConsoleReader
+
+
+async def main() -> None:
+ llm = OllamaChatModel("llama3.1")
+
+ reader = ConsoleReader()
+
+ for prompt in reader:
+ response = await llm.create(messages=[UserMessage(prompt)])
+ reader.write("LLM 🤖 (txt) : ", response.get_text_content())
+ reader.write("LLM 🤖 (raw) : ", "\n".join([str(msg.to_plain()) for msg in response.messages]))
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/backend/chat_stream.py b/python/examples/backend/chat_stream.py
new file mode 100644
index 000000000..9874e1c85
--- /dev/null
+++ b/python/examples/backend/chat_stream.py
@@ -0,0 +1,32 @@
+import asyncio
+import sys
+import traceback
+
+from beeai_framework import UserMessage
+from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
+from beeai_framework.errors import FrameworkError
+from examples.helpers.io import ConsoleReader
+
+
+async def main() -> None:
+ llm = OllamaChatModel("llama3.1")
+
+ reader = ConsoleReader()
+
+ for prompt in reader:
+ response = await llm.create(messages=[UserMessage(prompt)]).observe(
+ lambda emitter: emitter.match(
+ "*", lambda data, event: reader.write(f"LLM 🤖 (event: {event.name})", str(data))
+ )
+ )
+
+ reader.write("LLM 🤖 (txt) : ", response.get_text_content())
+ reader.write("LLM 🤖 (raw) : ", "\n".join([str(msg.to_plain()) for msg in response.messages]))
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/backend/providers/amazon_bedrock.py b/python/examples/backend/providers/amazon_bedrock.py
new file mode 100644
index 000000000..eb4445f21
--- /dev/null
+++ b/python/examples/backend/providers/amazon_bedrock.py
@@ -0,0 +1,105 @@
+import asyncio
+from typing import Any, Final
+
+from pydantic import BaseModel, Field
+
+from beeai_framework.adapters.amazon_bedrock.backend.chat import AmazonBedrockChatModel
+from beeai_framework.backend.chat import ChatModel
+from beeai_framework.backend.message import UserMessage
+from beeai_framework.cancellation import AbortSignal
+from beeai_framework.emitter import EventMeta
+from beeai_framework.errors import AbortError
+from beeai_framework.parsers.field import ParserField
+from beeai_framework.parsers.line_prefix import LinePrefixParser, LinePrefixParserNode
+
+# NOTE: See README.md for additional usage notes
+MODEL_NAME: Final[str] = "meta.llama3-8b-instruct-v1:0"
+
+
+async def amazon_bedrock_from_name() -> None:
+ llm = ChatModel.from_name(f"amazon_bedrock:{MODEL_NAME}")
+ user_message = UserMessage("what states are part of New England?")
+ response = await llm.create(messages=[user_message])
+ print(response.get_text_content())
+
+
+async def amazon_bedrock_sync() -> None:
+ llm = AmazonBedrockChatModel(MODEL_NAME)
+ user_message = UserMessage("what is the capital of Massachusetts?")
+ response = await llm.create(messages=[user_message])
+ print(response.get_text_content())
+
+
+async def amazon_bedrock_stream() -> None:
+ llm = AmazonBedrockChatModel(MODEL_NAME)
+ user_message = UserMessage("How many islands make up the country of Cape Verde?")
+ response = await llm.create(messages=[user_message], stream=True)
+ print(response.get_text_content())
+
+
+async def amazon_bedrock_stream_abort() -> None:
+ llm = AmazonBedrockChatModel(MODEL_NAME)
+ user_message = UserMessage("What is the smallest of the Cape Verde islands?")
+
+ try:
+ response = await llm.create(messages=[user_message], stream=True, abort_signal=AbortSignal.timeout(0.5))
+
+ if response is not None:
+ print(response.get_text_content())
+ else:
+ print("No response returned.")
+ except AbortError as err:
+ print(f"Aborted: {err}")
+
+
+async def amazon_bedrock_structure() -> None:
+ class TestSchema(BaseModel):
+ answer: str = Field(description="your final answer")
+
+ llm = AmazonBedrockChatModel(MODEL_NAME)
+ user_message = UserMessage("How many islands make up the country of Cape Verde?")
+ response = await llm.create_structure(schema=TestSchema, messages=[user_message])
+ print(response.object)
+
+
+async def amazon_bedrock_stream_parser() -> None:
+ llm = AmazonBedrockChatModel(MODEL_NAME)
+
+ parser = LinePrefixParser(
+ nodes={
+ "test": LinePrefixParserNode(
+ prefix="Prefix: ",
+ field=ParserField.from_type(str),
+ is_start=True,
+ is_end=True,
+ )
+ }
+ )
+
+ async def on_new_token(data: dict[str, Any], event: EventMeta) -> None:
+ await parser.add(chunk=data["value"].get_text_content())
+
+ user_message = UserMessage("Produce 3 lines each starting with 'Prefix: ' followed by a sentence and a new line.")
+ await llm.create(messages=[user_message], stream=True).observe(lambda emitter: emitter.on("newToken", on_new_token))
+ result = await parser.end()
+ print(result)
+
+
+async def main() -> None:
+ print("*" * 10, "amazon_bedrock_from_name")
+ await amazon_bedrock_from_name()
+ print("*" * 10, "amazon_bedrock_sync")
+ await amazon_bedrock_sync()
+ print("*" * 10, "amazon_bedrock_stream")
+ await amazon_bedrock_stream()
+ print("*" * 10, "amazon_bedrock_stream_abort")
+ await amazon_bedrock_stream_abort()
+ # NOTE: Disabled by default -- see README for more information
+ # print("*" * 10, "amazon_bedrock_structure")
+ # await amazon_bedrock_structure()
+ print("*" * 10, "amazon_bedrock_stream_parser")
+ await amazon_bedrock_stream_parser()
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/python/examples/backend/providers/vertexai.py b/python/examples/backend/providers/vertexai.py
new file mode 100644
index 000000000..dd2f2f089
--- /dev/null
+++ b/python/examples/backend/providers/vertexai.py
@@ -0,0 +1,98 @@
+import asyncio
+from typing import Any
+
+from pydantic import BaseModel, Field
+
+from beeai_framework.adapters.vertexai.backend.chat import VertexAIChatModel
+from beeai_framework.backend.chat import ChatModel
+from beeai_framework.backend.message import UserMessage
+from beeai_framework.cancellation import AbortSignal
+from beeai_framework.emitter import EventMeta
+from beeai_framework.errors import AbortError
+from beeai_framework.parsers.field import ParserField
+from beeai_framework.parsers.line_prefix import LinePrefixParser, LinePrefixParserNode
+
+
+async def vertexai_from_name() -> None:
+ llm = ChatModel.from_name("vertexai:gemini-2.0-flash-lite-001")
+ user_message = UserMessage("what states are part of New England?")
+ response = await llm.create(messages=[user_message])
+ print(response.get_text_content())
+
+
+async def vertexai_sync() -> None:
+ llm = VertexAIChatModel("gemini-2.0-flash-lite-001")
+ user_message = UserMessage("what is the capital of Massachusetts?")
+ response = await llm.create(messages=[user_message])
+ print(response.get_text_content())
+
+
+async def vertexai_stream() -> None:
+ llm = VertexAIChatModel("gemini-2.0-flash-lite-001")
+ user_message = UserMessage("How many islands make up the country of Cape Verde?")
+ response = await llm.create(messages=[user_message], stream=True)
+ print(response.get_text_content())
+
+
+async def vertexai_stream_abort() -> None:
+ llm = VertexAIChatModel("gemini-2.0-flash-lite-001")
+ user_message = UserMessage("What is the smallest of the Cape Verde islands?")
+
+ try:
+ response = await llm.create(messages=[user_message], stream=True, abort_signal=AbortSignal.timeout(0.5))
+
+ if response is not None:
+ print(response.get_text_content())
+ else:
+ print("No response returned.")
+ except AbortError as err:
+ print(f"Aborted: {err}")
+
+
+async def vertexai_structure() -> None:
+ class TestSchema(BaseModel):
+ answer: str = Field(description="your final answer")
+
+ llm = VertexAIChatModel("gemini-2.0-flash-lite-001")
+ user_message = UserMessage("How many islands make up the country of Cape Verde?")
+ response = await llm.create_structure(schema=TestSchema, messages=[user_message])
+ print(response.object)
+
+
+async def vertexai_stream_parser() -> None:
+ llm = VertexAIChatModel("gemini-2.0-flash-lite-001")
+
+ parser = LinePrefixParser(
+ nodes={
+ "test": LinePrefixParserNode(
+ prefix="Prefix: ", field=ParserField.from_type(str), is_start=True, is_end=True
+ )
+ }
+ )
+
+ async def on_new_token(data: dict[str, Any], event: EventMeta) -> None:
+ await parser.add(data["value"].get_text_content())
+
+ user_message = UserMessage("Produce 3 lines each starting with 'Prefix: ' followed by a sentence and a new line.")
+ await llm.create(messages=[user_message], stream=True).observe(lambda emitter: emitter.on("newToken", on_new_token))
+ result = await parser.end()
+ print(result)
+
+
+async def main() -> None:
+ print("*" * 10, "vertexai_from_name")
+ await vertexai_from_name()
+ print("*" * 10, "vertexai_sync")
+ await vertexai_sync()
+ print("*" * 10, "vertexai_stream")
+ await vertexai_stream()
+ print("*" * 10, "vertexai_stream_abort")
+ await vertexai_stream_abort()
+ print("*" * 10, "vertexai_structure")
+ await vertexai_structure()
+ print("*" * 10, "vertexai_stream_parser")
+ await vertexai_stream_parser()
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/python/examples/backend/providers/watsonx.py b/python/examples/backend/providers/watsonx.py
index 538ed0469..2e79ef0c5 100644
--- a/python/examples/backend/providers/watsonx.py
+++ b/python/examples/backend/providers/watsonx.py
@@ -1,14 +1,17 @@
import asyncio
+import json
import sys
import traceback
from pydantic import BaseModel, Field
+from beeai_framework import ToolMessage
from beeai_framework.adapters.watsonx.backend.chat import WatsonxChatModel
from beeai_framework.backend.chat import ChatModel
-from beeai_framework.backend.message import UserMessage
+from beeai_framework.backend.message import MessageToolResultContent, UserMessage
from beeai_framework.cancellation import AbortSignal
from beeai_framework.errors import AbortError, FrameworkError
+from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
# Settings can be passed here at initialization or pre-configured via environment variables
llm = WatsonxChatModel(
@@ -70,6 +73,26 @@ class TestSchema(BaseModel):
print(response.object)
+async def watson_tool_calling() -> None:
+ watsonx_llm = ChatModel.from_name(
+ "watsonx:ibm/granite-3-8b-instruct",
+ )
+ user_message = UserMessage("What is the current weather in Boston?")
+ weather_tool = OpenMeteoTool()
+ response = await watsonx_llm.create(messages=[user_message], tools=[weather_tool])
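+    # The first response carries a tool call rather than a final answer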
+ tool_call_msg = response.get_tool_calls()[0]
+ print(tool_call_msg.model_dump())
+ tool_response = await weather_tool.run(json.loads(tool_call_msg.args))
+ tool_response_msg = ToolMessage(
+ MessageToolResultContent(
+ result=tool_response.get_text_content(), tool_name=tool_call_msg.tool_name, tool_call_id=tool_call_msg.id
+ )
+ )
+ print(tool_response_msg.to_plain())
+ final_response = await watsonx_llm.create(messages=[user_message, tool_response_msg], tools=[])
+ print(final_response.get_text_content())
+
+
async def main() -> None:
print("*" * 10, "watsonx_from_name")
await watsonx_from_name()
@@ -81,6 +104,8 @@ async def main() -> None:
await watsonx_stream_abort()
print("*" * 10, "watson_structure")
await watson_structure()
+ print("*" * 10, "watson_tool_calling")
+ await watson_tool_calling()
if __name__ == "__main__":
diff --git a/python/examples/backend/structured.py b/python/examples/backend/structured.py
new file mode 100644
index 000000000..9addc89b2
--- /dev/null
+++ b/python/examples/backend/structured.py
@@ -0,0 +1,46 @@
+import asyncio
+import json
+import sys
+import traceback
+
+from pydantic import BaseModel, Field
+
+from beeai_framework import UserMessage
+from beeai_framework.backend.chat import ChatModel
+from beeai_framework.errors import FrameworkError
+
+
+async def main() -> None:
+ model = ChatModel.from_name("ollama:llama3.1")
+
+ class ProfileSchema(BaseModel):
+ first_name: str = Field(..., min_length=1)
+ last_name: str = Field(..., min_length=1)
+ address: str
+        age: int = Field(..., ge=1)  # 'min_length' applies to strings; integers use 'ge'
+ hobby: str
+
+ class ErrorSchema(BaseModel):
+ error: str
+
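+    # Multiple inheritance merges the fields of both schemas into a single model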
+    class SchemaUnion(ProfileSchema, ErrorSchema):
+        pass
+
+    response = await model.create_structure(
+        schema=SchemaUnion,
+ messages=[UserMessage("Generate a profile of a citizen of Europe.")],
+ )
+
+ print(
+ json.dumps(
+ response.object.model_dump() if isinstance(response.object, BaseModel) else response.object, indent=4
+ )
+ )
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/backend/tool_calling.py b/python/examples/backend/tool_calling.py
new file mode 100644
index 000000000..e01174477
--- /dev/null
+++ b/python/examples/backend/tool_calling.py
@@ -0,0 +1,65 @@
+import asyncio
+import json
+import re
+import sys
+import traceback
+
+from beeai_framework import Message, SystemMessage, Tool, ToolMessage, UserMessage
+from beeai_framework.backend.chat import ChatModel, ChatModelParameters
+from beeai_framework.backend.message import MessageToolResultContent
+from beeai_framework.errors import FrameworkError
+from beeai_framework.tools import ToolOutput
+from beeai_framework.tools.search import DuckDuckGoSearchTool
+from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
+
+
+async def main() -> None:
+ model = ChatModel.from_name("ollama:llama3.1", ChatModelParameters(temperature=0))
+ tools: list[Tool] = [DuckDuckGoSearchTool(), OpenMeteoTool()]
+ messages: list[Message] = [
+ SystemMessage("You are a helpful assistant. Use tools to provide a correct answer."),
+ UserMessage("What's the fastest marathon time?"),
+ ]
+
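+    # Keep prompting the model until it returns a final text answer instead of tool calls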
+ while True:
+ response = await model.create(
+ messages=messages,
+ tools=tools,
+ )
+
+ tool_calls = response.get_tool_calls()
+
+ tool_results: list[ToolMessage] = []
+
+ for tool_call in tool_calls:
+ print(f"-> running '{tool_call.tool_name}' tool with {tool_call.args}")
+            tool: Tool | None = next((tool for tool in tools if tool.name == tool_call.tool_name), None)
+            assert tool is not None
+ res: ToolOutput = await tool.run(json.loads(tool_call.args))
+ result = res.get_text_content()
+ print(f"<- got response from '{tool_call.tool_name}'", re.sub(r"\s+", " ", result)[:90] + " (truncated)")
+ tool_results.append(
+ ToolMessage(
+ MessageToolResultContent(
+ result=result,
+ tool_name=tool_call.tool_name,
+ tool_call_id=tool_call.id,
+ )
+ )
+ )
+
+ messages.extend(tool_results)
+
+ answer = response.get_text_content()
+
+ if answer:
+ print(f"Agent: {answer}")
+ break
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/basic.py b/python/examples/basic.py
index 42c3ecafd..390699ff9 100644
--- a/python/examples/basic.py
+++ b/python/examples/basic.py
@@ -2,7 +2,7 @@
import sys
import traceback
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
@@ -11,7 +11,7 @@
async def main() -> None:
chat_model = ChatModel.from_name("ollama:llama3.1")
- agent = BeeAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory())
result = await agent.run("What is the capital of Massachusetts")
diff --git a/python/examples/emitter/agent_matchers.py b/python/examples/emitter/agent_matchers.py
index 8cde645ac..466dfa851 100644
--- a/python/examples/emitter/agent_matchers.py
+++ b/python/examples/emitter/agent_matchers.py
@@ -2,13 +2,13 @@
import sys
import traceback
-from beeai_framework import BeeAgent, UnconstrainedMemory
+from beeai_framework import ReActAgent, UnconstrainedMemory
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
from beeai_framework.errors import FrameworkError
async def main() -> None:
- agent = BeeAgent(
+ agent = ReActAgent(
llm=OllamaChatModel("llama3.1"),
memory=UnconstrainedMemory(),
tools=[],
diff --git a/python/examples/llms.py b/python/examples/llms.py
index 5728e59dd..1dcb48bf2 100644
--- a/python/examples/llms.py
+++ b/python/examples/llms.py
@@ -3,7 +3,7 @@
from dotenv import load_dotenv
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
@@ -27,7 +27,7 @@
async def main(name: str) -> None:
chat_model = ChatModel.from_name(name)
- agent = BeeAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory())
result = await agent.run("What is the smallest of the Cabo Verde islands?")
diff --git a/python/examples/logger/agent.py b/python/examples/logger/agent.py
new file mode 100644
index 000000000..2db0198ed
--- /dev/null
+++ b/python/examples/logger/agent.py
@@ -0,0 +1,33 @@
+import asyncio
+import logging
+import sys
+import traceback
+
+from beeai_framework.agents.react.agent import ReActAgent
+from beeai_framework.agents.react.types import ReActAgentRunOutput
+from beeai_framework.backend.chat import ChatModel
+from beeai_framework.errors import FrameworkError
+from beeai_framework.logger import Logger
+from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
+
+
+async def main() -> None:
+ logger = Logger("app", level=logging.TRACE)
+
+ agent = ReActAgent(llm=ChatModel.from_name("ollama:granite3.1-dense:8b"), tools=[], memory=UnconstrainedMemory())
+
+ output: ReActAgentRunOutput = await agent.run("Hello!").observe(
+ lambda emitter: emitter.on(
+ "update", lambda data, event: logger.info(f"Event {event.path} triggered by {type(event.creator).__name__}")
+ )
+ )
+
+ logger.info(f"Agent 🤖 : {output.result.text}")
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/logger/base.py b/python/examples/logger/base.py
new file mode 100644
index 000000000..503226eac
--- /dev/null
+++ b/python/examples/logger/base.py
@@ -0,0 +1,14 @@
+import logging
+
+from beeai_framework.logger import Logger
+
+# Configure logger with default log level
+logger = Logger("app", level=logging.TRACE)
+
+# Log at different levels
+logger.trace("Trace!")
+logger.debug("Debug!")
+logger.info("Info!")
+logger.warning("Warning!")
+logger.error("Error!")
+logger.fatal("Fatal!")
diff --git a/python/examples/memory/agentMemory.py b/python/examples/memory/agent_memory.py
similarity index 92%
rename from python/examples/memory/agentMemory.py
rename to python/examples/memory/agent_memory.py
index 3884a0546..a05b5e6bf 100644
--- a/python/examples/memory/agentMemory.py
+++ b/python/examples/memory/agent_memory.py
@@ -2,7 +2,7 @@
import sys
import traceback
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import AssistantMessage, UserMessage
@@ -13,11 +13,11 @@
memory = UnconstrainedMemory()
-def create_agent() -> BeeAgent:
+def create_agent() -> ReActAgent:
llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
# Initialize the agent
- agent = BeeAgent(llm=llm, memory=memory, tools=[])
+ agent = ReActAgent(llm=llm, memory=memory, tools=[])
return agent
diff --git a/python/examples/memory/llmMemory.py b/python/examples/memory/llm_memory.py
similarity index 100%
rename from python/examples/memory/llmMemory.py
rename to python/examples/memory/llm_memory.py
diff --git a/python/examples/memory/slidingMemory.py b/python/examples/memory/sliding_memory.py
similarity index 100%
rename from python/examples/memory/slidingMemory.py
rename to python/examples/memory/sliding_memory.py
diff --git a/python/examples/memory/summarizeMemory.py b/python/examples/memory/summarize_memory.py
similarity index 100%
rename from python/examples/memory/summarizeMemory.py
rename to python/examples/memory/summarize_memory.py
diff --git a/python/examples/memory/tokenMemory.py b/python/examples/memory/token_memory.py
similarity index 100%
rename from python/examples/memory/tokenMemory.py
rename to python/examples/memory/token_memory.py
diff --git a/python/examples/memory/unconstrainedMemory.py b/python/examples/memory/unconstrained_memory.py
similarity index 100%
rename from python/examples/memory/unconstrainedMemory.py
rename to python/examples/memory/unconstrained_memory.py
diff --git a/python/examples/notebooks/agents.ipynb b/python/examples/notebooks/agents.ipynb
index 9236cb616..f892e2a11 100644
--- a/python/examples/notebooks/agents.ipynb
+++ b/python/examples/notebooks/agents.ipynb
@@ -18,12 +18,12 @@
]
},
{
- "cell_type": "markdown",
"metadata": {},
+ "cell_type": "markdown",
"source": [
"## Basic ReAct Agent\n",
"\n",
- "To configure a ReAct agent, you need to define a ChatModel and construct a BeeAgent.\n",
+ "To configure a ReAct agent, you need to define a ChatModel and construct a Agent.\n",
"\n",
"In this example, we won't provide any external tools to the agent. It will rely solely on its own memory to provide answers. This is a basic setup where the agent tries to reason and act based on the context it has built internally.\n",
"\n",
@@ -31,25 +31,15 @@
]
},
{
- "cell_type": "code",
- "execution_count": 1,
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Agent(thought) 🤖 : The user is asking for the chemical elements that compose a water molecule. A water molecule (H2O) is composed of two hydrogen atoms and one oxygen atom.\n",
- "\n",
- "Agent(final_answer) 🤖 : A water molecule consists of two hydrogen atoms and one oxygen atom.\n"
- ]
- }
- ],
+ "cell_type": "code",
+ "outputs": [],
+ "execution_count": null,
"source": [
"from typing import Any\n",
"\n",
- "from beeai_framework.agents.bee.agent import BeeAgent\n",
- "from beeai_framework.agents.types import BeeRunOutput\n",
+ "from beeai_framework.agents.react.agent import ReActAgent\n",
+ "from beeai_framework.agents.react.types import ReActAgentRunOutput\n",
"from beeai_framework.backend.chat import ChatModel\n",
"from beeai_framework.emitter.emitter import Emitter, EventMeta\n",
"from beeai_framework.emitter.types import EmitterOptions\n",
@@ -59,7 +49,7 @@
"chat_model: ChatModel = ChatModel.from_name(\"ollama:granite3.1-dense:8b\")\n",
"\n",
"# Construct Agent instance with the chat model\n",
- "agent = BeeAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory())\n",
+ "agent = ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory())\n",
"\n",
"\n",
"async def process_agent_events(event_data: dict[str, Any], event_meta: EventMeta) -> None:\n",
@@ -79,7 +69,7 @@
"\n",
"\n",
"# Run the agent\n",
- "result: BeeRunOutput = await agent.run(\"What chemical elements make up a water molecule?\").observe(observer)"
+ "result: ReActAgentRunOutput = await agent.run(\"What chemical elements make up a water molecule?\").observe(observer)"
]
},
{
@@ -118,10 +108,10 @@
"\n",
"Agent(tool_name) 🤖 : OpenMeteoTool\n",
"Agent(tool_input) 🤖 : {'location_name': 'London', 'temperature_unit': 'celsius'}\n",
- "Agent(tool_output) 🤖 : {\"latitude\": 51.5, \"longitude\": -0.120000124, \"generationtime_ms\": 0.07617473602294922, \"utc_offset_seconds\": 0, \"timezone\": \"GMT\", \"timezone_abbreviation\": \"GMT\", \"elevation\": 23.0, \"current_units\": {\"time\": \"iso8601\", \"interval\": \"seconds\", \"temperature_2m\": \"\\u00b0C\", \"rain\": \"mm\", \"relative_humidity_2m\": \"%\", \"wind_speed_10m\": \"km/h\"}, \"current\": {\"time\": \"2025-03-04T22:15\", \"interval\": 900, \"temperature_2m\": 6.8, \"rain\": 0.0, \"relative_humidity_2m\": 66, \"wind_speed_10m\": 5.0}, \"daily_units\": {\"time\": \"iso8601\", \"temperature_2m_max\": \"\\u00b0C\", \"temperature_2m_min\": \"\\u00b0C\", \"rain_sum\": \"mm\"}, \"daily\": {\"time\": [\"2025-03-04\"], \"temperature_2m_max\": [12.6], \"temperature_2m_min\": [-0.4], \"rain_sum\": [0.0]}}\n",
- "Agent(thought) 🤖 : I have obtained the current weather information for London using the OpenMeteoTool. The current temperature in London is 6.8°C, with a relative humidity of 66%, wind speeds at 5 km/h, and no rainfall recorded over the past hour.\n",
+ "Agent(tool_output) 🤖 : {\"latitude\": 51.5, \"longitude\": -0.120000124, \"generationtime_ms\": 0.04935264587402344, \"utc_offset_seconds\": 0, \"timezone\": \"GMT\", \"timezone_abbreviation\": \"GMT\", \"elevation\": 23.0, \"current_units\": {\"time\": \"iso8601\", \"interval\": \"seconds\", \"temperature_2m\": \"\\u00b0C\", \"rain\": \"mm\", \"relative_humidity_2m\": \"%\", \"wind_speed_10m\": \"km/h\"}, \"current\": {\"time\": \"2025-03-06T14:00\", \"interval\": 900, \"temperature_2m\": 15.8, \"rain\": 0.0, \"relative_humidity_2m\": 46, \"wind_speed_10m\": 13.7}, \"daily_units\": {\"time\": \"iso8601\", \"temperature_2m_max\": \"\\u00b0C\", \"temperature_2m_min\": \"\\u00b0C\", \"rain_sum\": \"mm\"}, \"daily\": {\"time\": [\"2025-03-06\"], \"temperature_2m_max\": [15.7], \"temperature_2m_min\": [3.7], \"rain_sum\": [0.0]}}\n",
+ "Agent(thought) 🤖 : The OpenMeteoTool returned the current weather information for London. The temperature is 15.8°C, with no rain and a wind speed of 13.7 km/h.\n",
"\n",
- "Agent(final_answer) 🤖 : The current weather in London is 6.8°C with a relative humidity of 66%, light winds at 5 km/h, and no rainfall.\n"
+ "Agent(final_answer) 🤖 : The current weather in London is 15.8°C with clear skies and light winds.\n"
]
}
],
@@ -132,11 +122,11 @@
"chat_model: ChatModel = ChatModel.from_name(\"ollama:granite3.1-dense:8b\")\n",
"\n",
"# create an agent using the default LLM and add the OpenMeteoTool that is capable of fetching weather-based information\n",
- "agent = BeeAgent(llm=chat_model, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())\n",
+ "agent = ReActAgent(llm=chat_model, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())\n",
"\n",
"\n",
"# Run the agent\n",
- "result: BeeRunOutput = await agent.run(\"What's the current weather in London?\").observe(observer)"
+ "result: ReActAgentRunOutput = await agent.run(\"What's the current weather in London?\").observe(observer)"
]
},
{
@@ -159,9 +149,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Agent(thought) 🤖 : The user is asking about the current president of the European Commission. I should use the Wikipedia function to find this information.\n",
+ "Agent(thought) 🤖 : I need to find out who the current president of the European Commission is. I will use the Wikipedia tool to search for this information.\n",
"Agent(tool_name) 🤖 : Wikipedia\n",
- "Agent(tool_input) 🤖 : {'query': 'current president of the European Commission'}\n",
+ "Agent(tool_input) 🤖 : {'query': 'Current President of the European Commission'}\n",
"Agent(tool_output) 🤖 : Page: Vice-President of the European Commission\n",
"Summary: A Vice-President of the European Commission is a member of the European Commission who leads the commission's work in particular focus areas in which multiple European Commissioners participate.\n",
"Currently, the European Commission has a total of six Vice-Presidents: five Executive-Vice Presidents, and the High Representative who is ex officio one of the Vice-Presidents as well.\n",
@@ -172,10 +162,8 @@
"\n",
"Page: List of presidents of the institutions of the European Union\n",
"Summary: This is a list of presidents of the institutions of the European Union (EU). Each of the institutions is headed by a president or a presidency, with some being more prominent than others. Both the President of the European Council and the President of the European Commission are sometimes wrongly termed the President of the European Union. Most go back to 1957 but others, such as the President of the Auditors Court or of the European Central Bank, have been created recently. Currently (2025), the President of the European Commission is Ursula von der Leyen and the President of the European Council is António Costa.\n",
- "\n",
- "\n",
- "Agent(thought) 🤖 : I found information about the Vice-Presidents and the current President of the European Commission, Ursula von der Leyen. However, there are also six Executive Vice-Presidents. The user might be asking for a single individual in the role. I need to consolidate this information into a clear answer.\n",
- "Agent(final_answer) 🤖 : The current president of the European Commission is Ursula von der Leyen. There are also six Executive Vice-Presidents and the High Representative who is ex officio one of the Vice-Presidents as well. All members, including the President, represent the general interest of the EU as a whole rather than their home state.\n"
+ "Agent(thought) 🤖 : The Wikipedia search results indicate that the current president of the European Commission is Ursula von der Leyen, who took office in December 2019.\n",
+ "Agent(final_answer) 🤖 : The current president of the European Commission is Ursula von der Leyen.\n"
]
}
],
@@ -186,16 +174,17 @@
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from pydantic import BaseModel, Field\n",
"\n",
- "from beeai_framework.agents.bee.agent import BeeAgent\n",
+ "from beeai_framework.agents.react.agent import ReActAgent\n",
+ "from beeai_framework.context import RunContext\n",
"from beeai_framework.tools import Tool\n",
- "from beeai_framework.tools.tool import StringToolOutput\n",
+ "from beeai_framework.tools.tool import StringToolOutput, ToolRunOptions\n",
"\n",
"\n",
"class LangChainWikipediaToolInput(BaseModel):\n",
" query: str = Field(description=\"The topic or question to search for on Wikipedia.\")\n",
"\n",
"\n",
- "class LangChainWikipediaTool(Tool):\n",
+ "class LangChainWikipediaTool(Tool[LangChainWikipediaToolInput, ToolRunOptions, StringToolOutput]):\n",
" \"\"\"Adapter class to integrate LangChain's Wikipedia tool with our framework\"\"\"\n",
"\n",
" name = \"Wikipedia\"\n",
@@ -212,7 +201,9 @@
" creator=self,\n",
" )\n",
"\n",
- " async def _run(self, input: LangChainWikipediaToolInput, _: Any | None = None) -> StringToolOutput:\n",
+ " async def _run(\n",
+ " self, input: LangChainWikipediaToolInput, options: ToolRunOptions | None, context: RunContext\n",
+ " ) -> StringToolOutput:\n",
" query = input.query\n",
" try:\n",
" result = self._wikipedia.run(query)\n",
@@ -223,9 +214,11 @@
"\n",
"\n",
"chat_model: ChatModel = ChatModel.from_name(\"ollama:granite3.1-dense:8b\")\n",
- "agent = BeeAgent(llm=chat_model, tools=[LangChainWikipediaTool()], memory=UnconstrainedMemory())\n",
+ "agent = ReActAgent(llm=chat_model, tools=[LangChainWikipediaTool()], memory=UnconstrainedMemory())\n",
"\n",
- "result: BeeRunOutput = await agent.run(\"Who is the current president of the European Commission?\").observe(observer)"
+ "result: ReActAgentRunOutput = await agent.run(\"Who is the current president of the European Commission?\").observe(\n",
+ " observer\n",
+ ")"
]
},
{
@@ -237,37 +230,37 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
- "Agent(thought) 🤖 : The user wants to know about the longest living vertebrate. I will use the 'langchain_wikipedia_tool' to search for this information on Wikipedia.\n",
+ "Agent(thought) 🤖 : I need to find factual and historical information about the longest living vertebrate. I will use the langchain_wikipedia_tool for this purpose.\n",
"Agent(tool_name) 🤖 : langchain_wikipedia_tool\n",
"Agent(tool_input) 🤖 : {'query': 'longest living vertebrate'}\n",
"Agent(tool_output) 🤖 : Page: Hákarl\n",
"Summary: Hákarl (an abbreviation of kæstur hákarl [ˈcʰaistʏr ˈhauːˌkʰa(r)tl̥]), referred to as fermented shark in English, is a national dish of Iceland consisting of Greenland shark or other sleeper shark that has been cured with a particular fermentation process and hung to dry for four to five months. It has a strong ammonia-rich smell and fishy taste, making hákarl an acquired taste.\n",
"Fermented shark is readily available in Icelandic stores and may be eaten year-round, but is most often served as part of a Þorramatur, a selection of traditional Icelandic food served at the midwinter festival þorrablót.\n",
"\n",
+ "Page: Greenland shark\n",
+ "Summary: The Greenland shark (Somniosus microcephalus), also known as the gurry shark or grey shark, is a large shark of the family Somniosidae (\"sleeper sharks\"), closely related to the Pacific and southern sleeper sharks. Inhabiting the North Atlantic and Arctic Oceans, they are notable for their exceptional longevity, although they are poorly studied due to the depth and remoteness of their natural habitat.\n",
+ "Greenland sharks have the longest lifespan of any known vertebrate, estimated to be between 250 and 500 years. They are among the largest extant species of shark, reaching a maximum confirmed length of 6.4 m (21 ft) long and weighing over 1,000 kg (2,200 lb). They reach sexual maturity at about 150 years of age, and their pups are born alive after an estimated gestation period of 8 to 18 years.\n",
+ "The shark is a generalist feeder, consuming a variety of available foods, including carrion.\n",
+ "Greenland shark meat is toxic to mammals due to its high levels of trimethylamine N-oxide, although a treated form of it is eaten in Iceland as a delicacy known as kæstur hákarl. Because they live deep in remote parts of the northern oceans, Greenland sharks are not considered a threat to humans. A possible attack occurred in August 1936 on two British fisherman, but the species was never identified.\n",
+ "\n",
+ "\n",
+ "\n",
"Page: List of longest-living organisms\n",
"Summary: This is a list of the longest-living biological organisms: the individual(s) (or in some instances, clones) of a species with the longest natural maximum life spans. For a given species, such a designation may include:\n",
"\n",
"The oldest known individual(s) that are currently alive, with verified ages.\n",
"Verified individual record holders, such as the longest-lived human, Jeanne Calment, or the longest-lived domestic cat, Creme Puff.\n",
"The definition of \"longest-living\" used in this article considers only the observed or estimated length of an individual organism's natural lifespan – that is, the duration of time between its birth or conception, or the earliest emergence of its identity as an individual organism, and its death – and does not consider other conceivable interpretations of \"longest-living\", such as the length of time between the earliest appearance of a species in the fossil record and the present (the historical \"age\" of the species as a whole), the time between a species' first speciation and its extinction (the phylogenetic \"lifespan\" of the species), or the range of possible lifespans of a species' individuals. This list includes long-lived organisms that are currently still alive as well as those that are dead.\n",
- "Determining the length of an organism's natural lifespan is complicated by many problems of definition and interpretation, as well as by practical difficulties in reliably measuring age, particularly for extremely old organisms and for those that reproduce by asexual cloning. In many cases the ages listed below are estimates based on observed present-day growth rates, which may differ significantly from the growth rates experienced thousands of years ago. Identifying the longest-living organisms also depends on defining what constitutes an \"individual\" organism, which can be problematic, since many asexual organisms and clonal colonies defy one or both of the traditional colloquial definitions of individuality (having a distinct genotype and having an independent, physically separate body). Additionally, some organisms maintain the capability to reproduce through very long periods of metabolic dormancy, during which they may not be considered \"alive\" by certain definitions but nonetheless can resume normal metabolism afterward; it is unclear whether the dormant periods should be counted as part of the organism's lifespan.\n",
- "\n",
- "\n",
- "\n",
- "Page: Greenland shark\n",
- "Summary: The Greenland shark (Somniosus microcephalus), also known as the gurry shark or grey shark, is a large shark of the family Somniosidae (\"sleeper sharks\"), closely related to the Pacific and southern sleeper sharks. Inhabiting the North Atlantic and Arctic Oceans, they are notable for their exceptional longevity, although they are poorly studied due to the depth and remoteness of their natural habitat.\n",
- "Greenland sharks have the longest lifespan of any known vertebrate, estimated to be between 250 and 500 years. They are among the largest extant species of shark, reaching a maximum confirmed length of 6.4 m (21 ft) long and weighing over 1,000 kg (2,200 lb). They reach sexual maturity at about 150 years of age, and their pups are born alive after an estimated gestation period of 8 to 18 years.\n",
- "The shark is a generalist feeder, consuming a variety of available foods, including carrion.\n",
- "Greenland shark \n",
- "Agent(thought) 🤖 : I have retrieved information from Wikipedia. The longest-living vertebrate is the Greenland Shark with an estimated lifespan of 250 to 500 years.\n",
- "Agent(final_answer) 🤖 : The longest living vertebrate is the Greenland Shark, which can live for an estimated period of 250 to 500 years.\n"
+ "Determining the length of an organism's natural lifespan is complicated by many problems of definition and interpretation, as well as by practical difficulties in reliably measuring age, particularly for extremely old organisms and for those that reproduce by asexual cloning. In many cases the ages listed below are estimates based on observed present-day growth rates, which may differ significantly from the growth rates experienced thousands of years ago. Identifying the longest-living organisms also depends on defining what constitutes an \"individual\" organism, which can be problematic, since many asexual organisms and clonal colonies defy one or both of the traditional colloquial definitions of individuality (having a distinct genotype and \n",
+ "Agent(thought) 🤖 : The Greenland shark is identified as the longest living vertebrate in the provided information. It has an estimated lifespan between 250 and 500 years, making it the longest-lived vertebrate known to science.\n",
+ "Agent(final_answer) 🤖 : The Greenland shark (Somniosus microcephalus) is the longest living vertebrate, with an estimated lifespan of 250 to 500 years.\n"
]
}
],
@@ -275,7 +268,7 @@
"from langchain_community.tools import WikipediaQueryRun # noqa: F811\n",
"from langchain_community.utilities import WikipediaAPIWrapper # noqa: F811\n",
"\n",
- "from beeai_framework.agents.bee.agent import BeeAgent\n",
+ "from beeai_framework.agents.react.agent import ReActAgent\n",
"from beeai_framework.tools import Tool, tool\n",
"\n",
"\n",
@@ -294,14 +287,14 @@
" The information found via searching Wikipedia.\n",
" \"\"\"\n",
" wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n",
- " return StringToolOutput(wikipedia.run(query))\n",
+ " return wikipedia.run(query)\n",
"\n",
"\n",
"# using the tool in an agent\n",
"chat_model: ChatModel = ChatModel.from_name(\"ollama:granite3.1-dense:8b\")\n",
- "agent = BeeAgent(llm=chat_model, tools=[langchain_wikipedia_tool], memory=UnconstrainedMemory())\n",
+ "agent = ReActAgent(llm=chat_model, tools=[langchain_wikipedia_tool], memory=UnconstrainedMemory())\n",
"\n",
- "result: BeeRunOutput = await agent.run(\"What is the longest living vertebrate?\").observe(observer)"
+ "result: ReActAgentRunOutput = await agent.run(\"What is the longest living vertebrate?\").observe(observer)"
]
}
],
diff --git a/python/examples/notebooks/workflows.ipynb b/python/examples/notebooks/workflows.ipynb
index 0108c124c..ecc2e78e5 100644
--- a/python/examples/notebooks/workflows.ipynb
+++ b/python/examples/notebooks/workflows.ipynb
@@ -99,7 +99,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -182,7 +182,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -249,8 +249,14 @@
" response: ChatModelStructureOutput = await model.create_structure(\n",
" schema=WebSearchQuery, messages=[UserMessage(prompt)]\n",
" )\n",
+ "\n",
" # Run search and store results in state\n",
- " state.search_results = str(search_tool.run(response.object[\"query\"]))\n",
+ " try:\n",
+ " state.search_results = str(search_tool.run(response.object[\"query\"]))\n",
+ " except Exception:\n",
+ " print(\"Search tool failed! Agent will answer from memory.\")\n",
+ " state.search_results = \"No search results available.\"\n",
+ "\n",
" return \"generate_answer\""
]
},
@@ -304,10 +310,11 @@
"output_type": "stream",
"text": [
"Step: web_search\n",
+ "Search tool failed!\n",
"Step: generate_answer\n",
"*****\n",
"Question: What is the term for a baby hedgehog?\n",
- "Answer: The term for a baby hedgehog is hoglet. This information is consistent across multiple search results. Hoglets are born in litters and are covered with a layer of skin and fluid to prevent injury during birth. It's generally recommended to leave hoglets alone if found, as their parent is usually nearby. Rearing hoglets requires constant warmth, regular feeding, and toileting.\n"
+ "Answer: The term for a baby hedgehog is \"hoglet.\"\n"
]
}
],
@@ -346,7 +353,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
diff --git a/python/examples/templates/system_prompt.py b/python/examples/templates/system_prompt.py
index d17ffc084..1f72950a1 100644
--- a/python/examples/templates/system_prompt.py
+++ b/python/examples/templates/system_prompt.py
@@ -1,7 +1,7 @@
import sys
import traceback
-from beeai_framework.agents.runners.default.prompts import (
+from beeai_framework.agents.react.runners.default.prompts import (
SystemPromptTemplate,
ToolDefinition,
)
diff --git a/python/examples/tools/agent.py b/python/examples/tools/agent.py
index 1118c8579..def8e660f 100644
--- a/python/examples/tools/agent.py
+++ b/python/examples/tools/agent.py
@@ -1,6 +1,6 @@
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
-agent = BeeAgent(llm=OllamaChatModel("llama3.1"), tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
+agent = ReActAgent(llm=OllamaChatModel("llama3.1"), tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
diff --git a/python/examples/tools/custom/base.py b/python/examples/tools/custom/base.py
index 971dd3ebe..b450f8dae 100644
--- a/python/examples/tools/custom/base.py
+++ b/python/examples/tools/custom/base.py
@@ -5,16 +5,17 @@
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.errors import FrameworkError
-from beeai_framework.tools.tool import StringToolOutput, Tool
+from beeai_framework.tools.tool import StringToolOutput, Tool, ToolRunOptions
class RiddleToolInput(BaseModel):
riddle_number: int = Field(description="Index of riddle to retrieve.")
-class RiddleTool(Tool[RiddleToolInput]):
+class RiddleTool(Tool[RiddleToolInput, ToolRunOptions, StringToolOutput]):
name = "Riddle"
description = "It selects a riddle to test your knowledge."
input_schema = RiddleToolInput
@@ -38,7 +39,9 @@ def _create_emitter(self) -> Emitter:
creator=self,
)
- async def _run(self, input: RiddleToolInput, _: Any | None = None) -> StringToolOutput:
+ async def _run(
+ self, input: RiddleToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> StringToolOutput:
index = input.riddle_number % (len(self.data))
riddle = self.data[index]
return StringToolOutput(result=riddle)
diff --git a/python/examples/tools/custom/openlibrary.py b/python/examples/tools/custom/openlibrary.py
index f01ee4f0c..b91ff55b3 100644
--- a/python/examples/tools/custom/openlibrary.py
+++ b/python/examples/tools/custom/openlibrary.py
@@ -5,10 +5,11 @@
import httpx
from pydantic import BaseModel, Field
+from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.errors import FrameworkError
from beeai_framework.tools import ToolInputValidationError
-from beeai_framework.tools.tool import Tool
+from beeai_framework.tools.tool import JSONToolOutput, Tool, ToolRunOptions
class OpenLibraryToolInput(BaseModel):
@@ -23,7 +24,7 @@ class OpenLibraryToolResult(BaseModel):
bib_key: str
-class OpenLibraryTool(Tool[OpenLibraryToolInput]):
+class OpenLibraryTool(Tool[OpenLibraryToolInput, ToolRunOptions, JSONToolOutput]):
name = "OpenLibrary"
description = """Provides access to a library of books with information about book titles,
authors, contributors, publication dates, publisher and isbn."""
@@ -38,7 +39,9 @@ def _create_emitter(self) -> Emitter:
creator=self,
)
- async def _run(self, tool_input: OpenLibraryToolInput, _: Any | None = None) -> OpenLibraryToolResult:
+ async def _run(
+ self, tool_input: OpenLibraryToolInput, options: ToolRunOptions | None, context: RunContext
+ ) -> JSONToolOutput:
key = ""
value = ""
input_vars = vars(tool_input)
@@ -60,10 +63,12 @@ async def _run(self, tool_input: OpenLibraryToolInput, _: Any | None = None) ->
json_output = response.json()[f"{key}:{value}"]
- return OpenLibraryToolResult(
- preview_url=json_output.get("preview_url", ""),
- info_url=json_output.get("info_url", ""),
- bib_key=json_output.get("bib_key", ""),
+ return JSONToolOutput(
+ result={
+ "preview_url": json_output.get("preview_url", ""),
+ "info_url": json_output.get("info_url", ""),
+ "bib_key": json_output.get("bib_key", ""),
+ }
)
diff --git a/python/examples/tools/decorator.py b/python/examples/tools/decorator.py
index 998519cc1..80a77546c 100644
--- a/python/examples/tools/decorator.py
+++ b/python/examples/tools/decorator.py
@@ -6,15 +6,15 @@
import requests
-from beeai_framework import BeeAgent, tool
+from beeai_framework import ReActAgent, tool
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
+from beeai_framework.logger import Logger
from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
from beeai_framework.tools.tool import StringToolOutput
-from beeai_framework.utils import BeeLogger
-logger = BeeLogger(__name__)
+logger = Logger(__name__)
# defining a tool using the `tool` decorator
@@ -49,7 +49,7 @@ async def main() -> None:
chat_model = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=chat_model, tools=[basic_calculator], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=chat_model, tools=[basic_calculator], memory=UnconstrainedMemory())
result = await agent.run("What is the square root of 36?", execution=AgentExecutionConfig(total_max_retries=10))
diff --git a/python/examples/tools/duckduckgo.py b/python/examples/tools/duckduckgo.py
index ce52d34e2..659340c4a 100644
--- a/python/examples/tools/duckduckgo.py
+++ b/python/examples/tools/duckduckgo.py
@@ -2,7 +2,7 @@
import sys
import traceback
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
@@ -11,7 +11,7 @@
async def main() -> None:
chat_model = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=chat_model, tools=[DuckDuckGoSearchTool()], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=chat_model, tools=[DuckDuckGoSearchTool()], memory=UnconstrainedMemory())
result = await agent.run("How tall is the mount Everest?")
diff --git a/python/examples/tools/langchain.py b/python/examples/tools/langchain.py
new file mode 100644
index 000000000..db69ae7c7
--- /dev/null
+++ b/python/examples/tools/langchain.py
@@ -0,0 +1,71 @@
+# To run this example, the optional packages:
+#
+# - langchain-core
+# - langchain-community
+#
+# need to be installed, for example (assuming a pip-based environment):
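+#
+#   pip install langchain-core langchain-community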
+
+import asyncio
+import pathlib
+import random
+import sys
+import traceback
+
+import langchain
+from langchain_community.tools.file_management.list_dir import ListDirectoryTool
+from langchain_core.tools import StructuredTool
+from pydantic import BaseModel, Field
+
+from beeai_framework.adapters.langchain.tools import LangChainTool
+from beeai_framework.errors import FrameworkError
+
+
+async def directory_list_tool() -> None:
+ list_dir_tool = ListDirectoryTool()
+ tool = LangChainTool(list_dir_tool)
+ dir_path = str(pathlib.Path(__file__).parent.resolve())
+ response = await tool.run({"dir_path": dir_path})
+ print(f"Listing contents of {dir_path}:\n{response}")
+
+
+async def custom_structured_tool() -> None:
+ class RandomNumberToolArgsSchema(BaseModel):
+ min: int = Field(description="The minimum integer", ge=0)
+ max: int = Field(description="The maximum integer", ge=0)
+
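+ # Note: the parameter names mirror the args_schema fields (min/max), shadowing the builtins only within this function.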
+ def random_number_func(min: int, max: int) -> int:
+ """Generate a random integer between two given integers. The two given integers are inclusive."""
+ return random.randint(min, max)
+
+ generate_random_number = StructuredTool.from_function(
+ func=random_number_func,
+ # coroutine=async_random_number_func, <- if you want to specify an async method instead
+ name="GenerateRandomNumber",
+ description="Generate a random number between a minimum and maximum value.",
+ args_schema=RandomNumberToolArgsSchema,
+ return_direct=True,
+ )
+
+ tool = LangChainTool(generate_random_number)
+ response = await tool.run({"min": 1, "max": 10})
+
+ print(f"Your random number: {response}")
+
+
+async def main() -> None:
+ print("*" * 10, "Using custom StructuredTool")
+ await custom_structured_tool()
+ print("*" * 10, "Using ListDirectoryTool")
+ await directory_list_tool()
+
+
+if __name__ == "__main__":
+ langchain.debug = False
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/tools/mcp_agent.py b/python/examples/tools/mcp_agent.py
new file mode 100644
index 000000000..e7c229f0b
--- /dev/null
+++ b/python/examples/tools/mcp_agent.py
@@ -0,0 +1,119 @@
+import asyncio
+import logging
+import os
+import sys
+import traceback
+from typing import Any
+
+from dotenv import load_dotenv
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.stdio import stdio_client
+
+from beeai_framework import Tool
+from beeai_framework.agents.react.agent import ReActAgent
+from beeai_framework.agents.types import AgentExecutionConfig
+from beeai_framework.backend.chat import ChatModel, ChatModelParameters
+from beeai_framework.emitter.emitter import Emitter, EventMeta
+from beeai_framework.emitter.types import EmitterOptions
+from beeai_framework.errors import FrameworkError
+from beeai_framework.logger import Logger
+from beeai_framework.memory.token_memory import TokenMemory
+from beeai_framework.tools.mcp_tools import MCPTool
+from examples.helpers.io import ConsoleReader
+
+# Load environment variables
+load_dotenv()
+
+reader = ConsoleReader()
+
+# Configure logging - using DEBUG instead of trace
+logger = Logger("app", level=logging.DEBUG)
+
+# Create server parameters for stdio connection
+server_params = StdioServerParameters(
+ command="npx",
+ args=["-y", "@modelcontextprotocol/server-slack"],
+ env={
+ "SLACK_BOT_TOKEN": os.environ["SLACK_BOT_TOKEN"],
+ "SLACK_TEAM_ID": os.environ["SLACK_TEAM_ID"],
+ "PATH": os.getenv("PATH", default=""),
+ },
+)
+
+
+async def slack_tool() -> MCPTool:
+ async with stdio_client(server_params) as (read, write), ClientSession(read, write) as session:
+ await session.initialize()
+ # Discover Slack tools via MCP client
+ slacktools = await MCPTool.from_client(session, server_params)
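+ # Keep only the message-posting tool (slack_post_message) for this example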
+ filter_tool = filter(lambda tool: tool.name == "slack_post_message", slacktools)
+ slack = list(filter_tool)
+ return slack[0]
+
+
+async def create_agent() -> ReActAgent:
+ """Create and configure the agent with tools and LLM"""
+
+ # Other models to try:
+ # "llama3.1"
+ # "granite3.1-dense"
+ # "deepseek-r1"
+ # ensure the model is pulled before running
+ llm = ChatModel.from_name(
+ "ollama:llama3.1",
+ ChatModelParameters(temperature=0),
+ )
+
+ # Configure tools
+ tools: list[Tool] = [await slack_tool()]
+
+ # Create agent with memory and tools
+ agent = ReActAgent(llm=llm, tools=tools, memory=TokenMemory(llm))
+ return agent
+
+
+def process_agent_events(data: dict[str, Any], event: EventMeta) -> None:
+ """Process agent events and log appropriately"""
+
+ if event.name == "error":
+ reader.write("Agent 🤖 : ", FrameworkError.ensure(data["error"]).explain())
+ elif event.name == "retry":
+ reader.write("Agent 🤖 : ", "retrying the action...")
+ elif event.name == "update":
+ reader.write(f"Agent({data['update']['key']}) 🤖 : ", data["update"]["parsedValue"])
+ elif event.name == "start":
+ reader.write("Agent 🤖 : ", "starting new iteration")
+ elif event.name == "success":
+ reader.write("Agent 🤖 : ", "success")
+ else:
+ print(event.path)
+
+
+def observer(emitter: Emitter) -> None:
+ emitter.on("*.*", process_agent_events, EmitterOptions(match_nested=True))
+
+
+async def main() -> None:
+ """Main application loop"""
+
+ # Create agent
+ agent = await create_agent()
+
+ # Main interaction loop with user input
+ for prompt in reader:
+ # Run agent with the prompt
+ response = await agent.run(
+ prompt=prompt,
+ execution=AgentExecutionConfig(max_retries_per_step=3, total_max_retries=10, max_iterations=20),
+ ).observe(observer)
+
+ reader.write("Agent 🤖 : ", response.result.text)
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except FrameworkError as e:
+ traceback.print_exc()
+ sys.exit(e.explain())
diff --git a/python/examples/tools/mcp_tool_creation.py b/python/examples/tools/mcp_tool_creation.py
new file mode 100644
index 000000000..d570559c9
--- /dev/null
+++ b/python/examples/tools/mcp_tool_creation.py
@@ -0,0 +1,38 @@
+import asyncio
+import os
+
+from dotenv import load_dotenv
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.stdio import stdio_client
+
+from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
+from beeai_framework.agents.react import ReActAgent
+from beeai_framework.memory import UnconstrainedMemory
+from beeai_framework.tools.mcp_tools import MCPTool
+
+load_dotenv()
+
+# Create server parameters for stdio connection
+server_params = StdioServerParameters(
+ command="npx",
+ args=["-y", "@modelcontextprotocol/server-slack"],
+ env={
+ "SLACK_BOT_TOKEN": os.environ["SLACK_BOT_TOKEN"],
+ "SLACK_TEAM_ID": os.environ["SLACK_TEAM_ID"],
+ "PATH": os.getenv("PATH", default=""),
+ },
+)
+
+
+async def slack_tool() -> MCPTool:
+ async with stdio_client(server_params) as (read, write), ClientSession(read, write) as session:
+ await session.initialize()
+ # Discover Slack tools via MCP client
+ slacktools = await MCPTool.from_client(session, server_params)
+ filter_tool = filter(lambda tool: tool.name == "slack_post_message", slacktools)
+ slack = list(filter_tool)
+ return slack[0]
+
+
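+# Note: asyncio.run() executes the coroutine to completion, so the Slack tool is resolved once at import time.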
+agent = ReActAgent(llm=OllamaChatModel("llama3.1"), tools=[asyncio.run(slack_tool())], memory=UnconstrainedMemory())
diff --git a/python/examples/tools/openmeteo.py b/python/examples/tools/openmeteo.py
index ad2c5f46a..b37c6afbc 100644
--- a/python/examples/tools/openmeteo.py
+++ b/python/examples/tools/openmeteo.py
@@ -2,7 +2,7 @@
import sys
import traceback
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
@@ -11,7 +11,7 @@
async def main() -> None:
llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
- agent = BeeAgent(llm=llm, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
+ agent = ReActAgent(llm=llm, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
result = await agent.run("What's the current weather in London?")
diff --git a/python/poetry.lock b/python/poetry.lock
index 8eaf5eb55..e54ede81c 100644
--- a/python/poetry.lock
+++ b/python/poetry.lock
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.0.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.0.0 and should not be changed by hand.
[[package]]
name = "aiofiles"
@@ -258,6 +258,46 @@ charset-normalizer = ["charset-normalizer"]
html5lib = ["html5lib"]
lxml = ["lxml"]
+[[package]]
+name = "boto3"
+version = "1.37.7"
+description = "The AWS SDK for Python"
+optional = true
+python-versions = ">=3.8"
+groups = ["dev"]
+files = [
+ {file = "boto3-1.37.7-py3-none-any.whl", hash = "sha256:9758429ebc11ed391249a16406af7175c94140fe99956e319a191805a625b383"},
+ {file = "boto3-1.37.7.tar.gz", hash = "sha256:ac2e022edcd6a94a2adbb21f0ad373a16557ec14a8910366bee0bbc7138fc72a"},
+]
+
+[package.dependencies]
+botocore = ">=1.37.7,<1.38.0"
+jmespath = ">=0.7.1,<2.0.0"
+s3transfer = ">=0.11.0,<0.12.0"
+
+[package.extras]
+crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
+
+[[package]]
+name = "botocore"
+version = "1.37.7"
+description = "Low-level, data-driven core of boto 3."
+optional = true
+python-versions = ">=3.8"
+groups = ["dev"]
+files = [
+ {file = "botocore-1.37.7-py3-none-any.whl", hash = "sha256:047b8a27035832607fd368464dcce58026005d3d7630fcd1d703051370abcb26"},
+ {file = "botocore-1.37.7.tar.gz", hash = "sha256:2faeac11766db912bc444669b04359080b7b83b2f57a3906c77c8105b70ce1e8"},
+]
+
+[package.dependencies]
+jmespath = ">=0.7.1,<2.0.0"
+python-dateutil = ">=2.1,<3.0.0"
+urllib3 = {version = ">=1.25.4,<2.2.0 || >2.2.0,<3", markers = "python_version >= \"3.10\""}
+
+[package.extras]
+crt = ["awscrt (==0.23.8)"]
+
[[package]]
name = "cachetools"
version = "5.5.2"
@@ -282,6 +322,87 @@ files = [
{file = "certifi-2025.1.31.tar.gz", hash = "sha256:3d5da6925056f6f18f119200434a4780a94263f10d1c21d032a6f6b2baa20651"},
]
+[[package]]
+name = "cffi"
+version = "1.17.1"
+description = "Foreign Function Interface for Python calling C code."
+optional = true
+python-versions = ">=3.8"
+groups = ["main"]
+markers = "platform_python_implementation == \"PyPy\""
+files = [
+ {file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
+ {file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
+ {file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
+ {file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
+ {file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
+ {file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
+ {file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
+ {file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
+ {file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
+ {file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
+ {file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
+ {file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
+ {file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
+ {file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
+ {file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
+ {file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
+ {file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
+ {file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
+ {file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
+ {file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
+ {file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
+ {file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
+ {file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
+ {file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
+ {file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
+ {file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
+ {file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
+ {file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
+ {file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
+ {file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
+ {file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
+ {file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
+ {file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
+ {file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
+ {file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
+ {file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
+ {file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
+ {file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
+ {file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
+ {file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
+ {file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
+ {file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
+ {file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
+ {file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
+ {file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
+ {file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
+ {file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
+ {file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
+ {file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
+ {file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
+ {file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
+ {file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
+ {file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
+ {file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
+ {file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
+ {file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
+ {file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
+ {file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
+ {file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
+ {file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
+ {file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
+ {file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
+ {file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
+ {file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
+ {file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
+ {file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
+ {file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
+]
+
+[package.dependencies]
+pycparser = "*"
+
[[package]]
name = "chardet"
version = "5.2.0"
@@ -536,6 +657,22 @@ files = [
[package.extras]
toml = ["tomli"]
+[[package]]
+name = "dataclasses-json"
+version = "0.6.7"
+description = "Easily serialize dataclasses to and from JSON."
+optional = true
+python-versions = "<4.0,>=3.7"
+groups = ["main"]
+files = [
+ {file = "dataclasses_json-0.6.7-py3-none-any.whl", hash = "sha256:0dbf33f26c8d5305befd61b39d2b3414e8a407bedc2834dea9b8d642666fb40a"},
+ {file = "dataclasses_json-0.6.7.tar.gz", hash = "sha256:b6b3e528266ea45b9535223bc53ca645f5208833c29229e847b3f26a1cc55fc0"},
+]
+
+[package.dependencies]
+marshmallow = ">=3.18.0,<4.0.0"
+typing-inspect = ">=0.4.0,<1"
+
[[package]]
name = "decli"
version = "0.6.2"
@@ -782,6 +919,94 @@ test-downstream = ["aiobotocore (>=2.5.4,<3.0.0)", "dask[dataframe,test]", "moto
test-full = ["adlfs", "aiohttp (!=4.0.0a0,!=4.0.0a1)", "cloudpickle", "dask", "distributed", "dropbox", "dropboxdrivefs", "fastparquet", "fusepy", "gcsfs", "jinja2", "kerchunk", "libarchive-c", "lz4", "notebook", "numpy", "ocifs", "pandas", "panel", "paramiko", "pyarrow", "pyarrow (>=1)", "pyftpdlib", "pygit2", "pytest", "pytest-asyncio (!=0.22.0)", "pytest-benchmark", "pytest-cov", "pytest-mock", "pytest-recording", "pytest-rerunfailures", "python-snappy", "requests", "smbprotocol", "tqdm", "urllib3", "zarr", "zstandard"]
tqdm = ["tqdm"]
+[[package]]
+name = "greenlet"
+version = "3.1.1"
+description = "Lightweight in-process concurrent programming"
+optional = true
+python-versions = ">=3.7"
+groups = ["main"]
+markers = "python_version < \"3.14\" and (platform_machine == \"aarch64\" or platform_machine == \"ppc64le\" or platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"AMD64\" or platform_machine == \"win32\" or platform_machine == \"WIN32\")"
+files = [
+ {file = "greenlet-3.1.1-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:0bbae94a29c9e5c7e4a2b7f0aae5c17e8e90acbfd3bf6270eeba60c39fce3563"},
+ {file = "greenlet-3.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fde093fb93f35ca72a556cf72c92ea3ebfda3d79fc35bb19fbe685853869a83"},
+ {file = "greenlet-3.1.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:36b89d13c49216cadb828db8dfa6ce86bbbc476a82d3a6c397f0efae0525bdd0"},
+ {file = "greenlet-3.1.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:94b6150a85e1b33b40b1464a3f9988dcc5251d6ed06842abff82e42632fac120"},
+ {file = "greenlet-3.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:93147c513fac16385d1036b7e5b102c7fbbdb163d556b791f0f11eada7ba65dc"},
+ {file = "greenlet-3.1.1-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:da7a9bff22ce038e19bf62c4dd1ec8391062878710ded0a845bcf47cc0200617"},
+ {file = "greenlet-3.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b2795058c23988728eec1f36a4e5e4ebad22f8320c85f3587b539b9ac84128d7"},
+ {file = "greenlet-3.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:ed10eac5830befbdd0c32f83e8aa6288361597550ba669b04c48f0f9a2c843c6"},
+ {file = "greenlet-3.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:77c386de38a60d1dfb8e55b8c1101d68c79dfdd25c7095d51fec2dd800892b80"},
+ {file = "greenlet-3.1.1-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:e4d333e558953648ca09d64f13e6d8f0523fa705f51cae3f03b5983489958c70"},
+ {file = "greenlet-3.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09fc016b73c94e98e29af67ab7b9a879c307c6731a2c9da0db5a7d9b7edd1159"},
+ {file = "greenlet-3.1.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d5e975ca70269d66d17dd995dafc06f1b06e8cb1ec1e9ed54c1d1e4a7c4cf26e"},
+ {file = "greenlet-3.1.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b2813dc3de8c1ee3f924e4d4227999285fd335d1bcc0d2be6dc3f1f6a318ec1"},
+ {file = "greenlet-3.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e347b3bfcf985a05e8c0b7d462ba6f15b1ee1c909e2dcad795e49e91b152c383"},
+ {file = "greenlet-3.1.1-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9e8f8c9cb53cdac7ba9793c276acd90168f416b9ce36799b9b885790f8ad6c0a"},
+ {file = "greenlet-3.1.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:62ee94988d6b4722ce0028644418d93a52429e977d742ca2ccbe1c4f4a792511"},
+ {file = "greenlet-3.1.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1776fd7f989fc6b8d8c8cb8da1f6b82c5814957264d1f6cf818d475ec2bf6395"},
+ {file = "greenlet-3.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:48ca08c771c268a768087b408658e216133aecd835c0ded47ce955381105ba39"},
+ {file = "greenlet-3.1.1-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:4afe7ea89de619adc868e087b4d2359282058479d7cfb94970adf4b55284574d"},
+ {file = "greenlet-3.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f406b22b7c9a9b4f8aa9d2ab13d6ae0ac3e85c9a809bd590ad53fed2bf70dc79"},
+ {file = "greenlet-3.1.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c3a701fe5a9695b238503ce5bbe8218e03c3bcccf7e204e455e7462d770268aa"},
+ {file = "greenlet-3.1.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2846930c65b47d70b9d178e89c7e1a69c95c1f68ea5aa0a58646b7a96df12441"},
+ {file = "greenlet-3.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:99cfaa2110534e2cf3ba31a7abcac9d328d1d9f1b95beede58294a60348fba36"},
+ {file = "greenlet-3.1.1-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1443279c19fca463fc33e65ef2a935a5b09bb90f978beab37729e1c3c6c25fe9"},
+ {file = "greenlet-3.1.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b7cede291382a78f7bb5f04a529cb18e068dd29e0fb27376074b6d0317bf4dd0"},
+ {file = "greenlet-3.1.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:23f20bb60ae298d7d8656c6ec6db134bca379ecefadb0b19ce6f19d1f232a942"},
+ {file = "greenlet-3.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:7124e16b4c55d417577c2077be379514321916d5790fa287c9ed6f23bd2ffd01"},
+ {file = "greenlet-3.1.1-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:05175c27cb459dcfc05d026c4232f9de8913ed006d42713cb8a5137bd49375f1"},
+ {file = "greenlet-3.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:935e943ec47c4afab8965954bf49bfa639c05d4ccf9ef6e924188f762145c0ff"},
+ {file = "greenlet-3.1.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:667a9706c970cb552ede35aee17339a18e8f2a87a51fba2ed39ceeeb1004798a"},
+ {file = "greenlet-3.1.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b8a678974d1f3aa55f6cc34dc480169d58f2e6d8958895d68845fa4ab566509e"},
+ {file = "greenlet-3.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efc0f674aa41b92da8c49e0346318c6075d734994c3c4e4430b1c3f853e498e4"},
+ {file = "greenlet-3.1.1-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0153404a4bb921f0ff1abeb5ce8a5131da56b953eda6e14b88dc6bbc04d2049e"},
+ {file = "greenlet-3.1.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:275f72decf9932639c1c6dd1013a1bc266438eb32710016a1c742df5da6e60a1"},
+ {file = "greenlet-3.1.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:c4aab7f6381f38a4b42f269057aee279ab0fc7bf2e929e3d4abfae97b682a12c"},
+ {file = "greenlet-3.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:b42703b1cf69f2aa1df7d1030b9d77d3e584a70755674d60e710f0af570f3761"},
+ {file = "greenlet-3.1.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f1695e76146579f8c06c1509c7ce4dfe0706f49c6831a817ac04eebb2fd02011"},
+ {file = "greenlet-3.1.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7876452af029456b3f3549b696bb36a06db7c90747740c5302f74a9e9fa14b13"},
+ {file = "greenlet-3.1.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4ead44c85f8ab905852d3de8d86f6f8baf77109f9da589cb4fa142bd3b57b475"},
+ {file = "greenlet-3.1.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8320f64b777d00dd7ccdade271eaf0cad6636343293a25074cc5566160e4de7b"},
+ {file = "greenlet-3.1.1-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6510bf84a6b643dabba74d3049ead221257603a253d0a9873f55f6a59a65f822"},
+ {file = "greenlet-3.1.1-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:04b013dc07c96f83134b1e99888e7a79979f1a247e2a9f59697fa14b5862ed01"},
+ {file = "greenlet-3.1.1-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:411f015496fec93c1c8cd4e5238da364e1da7a124bcb293f085bf2860c32c6f6"},
+ {file = "greenlet-3.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47da355d8687fd65240c364c90a31569a133b7b60de111c255ef5b606f2ae291"},
+ {file = "greenlet-3.1.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:98884ecf2ffb7d7fe6bd517e8eb99d31ff7855a840fa6d0d63cd07c037f6a981"},
+ {file = "greenlet-3.1.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f1d4aeb8891338e60d1ab6127af1fe45def5259def8094b9c7e34690c8858803"},
+ {file = "greenlet-3.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db32b5348615a04b82240cc67983cb315309e88d444a288934ee6ceaebcad6cc"},
+ {file = "greenlet-3.1.1-cp37-cp37m-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dcc62f31eae24de7f8dce72134c8651c58000d3b1868e01392baea7c32c247de"},
+ {file = "greenlet-3.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1d3755bcb2e02de341c55b4fca7a745a24a9e7212ac953f6b3a48d117d7257aa"},
+ {file = "greenlet-3.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:b8da394b34370874b4572676f36acabac172602abf054cbc4ac910219f3340af"},
+ {file = "greenlet-3.1.1-cp37-cp37m-win32.whl", hash = "sha256:a0dfc6c143b519113354e780a50381508139b07d2177cb6ad6a08278ec655798"},
+ {file = "greenlet-3.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:54558ea205654b50c438029505def3834e80f0869a70fb15b871c29b4575ddef"},
+ {file = "greenlet-3.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:346bed03fe47414091be4ad44786d1bd8bef0c3fcad6ed3dee074a032ab408a9"},
+ {file = "greenlet-3.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dfc59d69fc48664bc693842bd57acfdd490acafda1ab52c7836e3fc75c90a111"},
+ {file = "greenlet-3.1.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d21e10da6ec19b457b82636209cbe2331ff4306b54d06fa04b7c138ba18c8a81"},
+ {file = "greenlet-3.1.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:37b9de5a96111fc15418819ab4c4432e4f3c2ede61e660b1e33971eba26ef9ba"},
+ {file = "greenlet-3.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6ef9ea3f137e5711f0dbe5f9263e8c009b7069d8a1acea822bd5e9dae0ae49c8"},
+ {file = "greenlet-3.1.1-cp38-cp38-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:85f3ff71e2e60bd4b4932a043fbbe0f499e263c628390b285cb599154a3b03b1"},
+ {file = "greenlet-3.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:95ffcf719966dd7c453f908e208e14cde192e09fde6c7186c8f1896ef778d8cd"},
+ {file = "greenlet-3.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:03a088b9de532cbfe2ba2034b2b85e82df37874681e8c470d6fb2f8c04d7e4b7"},
+ {file = "greenlet-3.1.1-cp38-cp38-win32.whl", hash = "sha256:8b8b36671f10ba80e159378df9c4f15c14098c4fd73a36b9ad715f057272fbef"},
+ {file = "greenlet-3.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:7017b2be767b9d43cc31416aba48aab0d2309ee31b4dbf10a1d38fb7972bdf9d"},
+ {file = "greenlet-3.1.1-cp39-cp39-macosx_11_0_universal2.whl", hash = "sha256:396979749bd95f018296af156201d6211240e7a23090f50a8d5d18c370084dc3"},
+ {file = "greenlet-3.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca9d0ff5ad43e785350894d97e13633a66e2b50000e8a183a50a88d834752d42"},
+ {file = "greenlet-3.1.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f6ff3b14f2df4c41660a7dec01045a045653998784bf8cfcb5a525bdffffbc8f"},
+ {file = "greenlet-3.1.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:94ebba31df2aa506d7b14866fed00ac141a867e63143fe5bca82a8e503b36437"},
+ {file = "greenlet-3.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73aaad12ac0ff500f62cebed98d8789198ea0e6f233421059fa68a5aa7220145"},
+ {file = "greenlet-3.1.1-cp39-cp39-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:63e4844797b975b9af3a3fb8f7866ff08775f5426925e1e0bbcfe7932059a12c"},
+ {file = "greenlet-3.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7939aa3ca7d2a1593596e7ac6d59391ff30281ef280d8632fa03d81f7c5f955e"},
+ {file = "greenlet-3.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d0028e725ee18175c6e422797c407874da24381ce0690d6b9396c204c7f7276e"},
+ {file = "greenlet-3.1.1-cp39-cp39-win32.whl", hash = "sha256:5e06afd14cbaf9e00899fae69b24a32f2196c19de08fcb9f4779dd4f004e5e7c"},
+ {file = "greenlet-3.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:3319aa75e0e0639bc15ff54ca327e8dc7a6fe404003496e3c6925cd3142e0e22"},
+ {file = "greenlet-3.1.1.tar.gz", hash = "sha256:4ce3ac6cdb6adf7946475d7ef31777c26d94bccc377e070a7986bd2d5c515467"},
+]
+
+[package.extras]
+docs = ["Sphinx", "furo"]
+test = ["objgraph", "psutil"]
+
[[package]]
name = "h11"
version = "0.14.0"
@@ -1059,6 +1284,18 @@ files = [
{file = "jiter-0.8.2.tar.gz", hash = "sha256:cd73d3e740666d0e639f678adb176fad25c1bcbdae88d8d7b857e1783bb4212d"},
]
+[[package]]
+name = "jmespath"
+version = "1.0.1"
+description = "JSON Matching Expressions"
+optional = true
+python-versions = ">=3.7"
+groups = ["dev"]
+files = [
+ {file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"},
+ {file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
+]
+
[[package]]
name = "json-repair"
version = "0.39.1"
@@ -1071,6 +1308,33 @@ files = [
{file = "json_repair-0.39.1.tar.gz", hash = "sha256:e90a489f247e1a8fc86612a5c719872a3dbf9cbaffd6d55f238ec571a77740fa"},
]
+[[package]]
+name = "jsonpatch"
+version = "1.33"
+description = "Apply JSON-Patches (RFC 6902)"
+optional = true
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
+groups = ["main"]
+files = [
+ {file = "jsonpatch-1.33-py2.py3-none-any.whl", hash = "sha256:0ae28c0cd062bbd8b8ecc26d7d164fbbea9652a1a3693f3b956c1eae5145dade"},
+ {file = "jsonpatch-1.33.tar.gz", hash = "sha256:9fcd4009c41e6d12348b4a0ff2563ba56a2923a7dfee731d004e212e1ee5030c"},
+]
+
+[package.dependencies]
+jsonpointer = ">=1.9"
+
+[[package]]
+name = "jsonpointer"
+version = "3.0.0"
+description = "Identify specific nodes in a JSON document (RFC 6901)"
+optional = true
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "jsonpointer-3.0.0-py2.py3-none-any.whl", hash = "sha256:13e088adc14fca8b6aa8177c044e12701e6ad4b28ff10e65f2267a90109c9942"},
+ {file = "jsonpointer-3.0.0.tar.gz", hash = "sha256:2b2d729f2091522d61c3b31f82e11870f60b68f43fbc705cb76bf4b832af59ef"},
+]
+
[[package]]
name = "jsonschema"
version = "4.23.0"
@@ -1129,6 +1393,137 @@ traitlets = ">=5.3"
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx-autodoc-typehints", "sphinxcontrib-github-alt", "sphinxcontrib-spelling", "traitlets"]
test = ["ipykernel", "pre-commit", "pytest (<8)", "pytest-cov", "pytest-timeout"]
+[[package]]
+name = "langchain"
+version = "0.3.20"
+description = "Building applications with LLMs through composability"
+optional = true
+python-versions = "<4.0,>=3.9"
+groups = ["main"]
+files = [
+ {file = "langchain-0.3.20-py3-none-any.whl", hash = "sha256:273287f8e61ffdf7e811cf8799e6a71e9381325b8625fd6618900faba79cfdd0"},
+ {file = "langchain-0.3.20.tar.gz", hash = "sha256:edcc3241703e1f6557ef5a5c35cd56f9ccc25ff12e38b4829c66d94971737a93"},
+]
+
+[package.dependencies]
+langchain-core = ">=0.3.41,<1.0.0"
+langchain-text-splitters = ">=0.3.6,<1.0.0"
+langsmith = ">=0.1.17,<0.4"
+pydantic = ">=2.7.4,<3.0.0"
+PyYAML = ">=5.3"
+requests = ">=2,<3"
+SQLAlchemy = ">=1.4,<3"
+
+[package.extras]
+anthropic = ["langchain-anthropic"]
+aws = ["langchain-aws"]
+cohere = ["langchain-cohere"]
+community = ["langchain-community"]
+deepseek = ["langchain-deepseek"]
+fireworks = ["langchain-fireworks"]
+google-genai = ["langchain-google-genai"]
+google-vertexai = ["langchain-google-vertexai"]
+groq = ["langchain-groq"]
+huggingface = ["langchain-huggingface"]
+mistralai = ["langchain-mistralai"]
+ollama = ["langchain-ollama"]
+openai = ["langchain-openai"]
+together = ["langchain-together"]
+xai = ["langchain-xai"]
+
+[[package]]
+name = "langchain-community"
+version = "0.3.19"
+description = "Community contributed LangChain integrations."
+optional = true
+python-versions = "<4.0,>=3.9"
+groups = ["main"]
+files = [
+ {file = "langchain_community-0.3.19-py3-none-any.whl", hash = "sha256:268ce7b322c0d1961d7bab1a9419d6ff30c99ad09487dca48d47389b69875b16"},
+ {file = "langchain_community-0.3.19.tar.gz", hash = "sha256:fc100b6d4d6523566a957cdc306b0500e4982d5b221b98f67432da18ba5b2bf5"},
+]
+
+[package.dependencies]
+aiohttp = ">=3.8.3,<4.0.0"
+dataclasses-json = ">=0.5.7,<0.7"
+httpx-sse = ">=0.4.0,<1.0.0"
+langchain = ">=0.3.20,<1.0.0"
+langchain-core = ">=0.3.41,<1.0.0"
+langsmith = ">=0.1.125,<0.4"
+numpy = ">=1.26.2,<3"
+pydantic-settings = ">=2.4.0,<3.0.0"
+PyYAML = ">=5.3"
+requests = ">=2,<3"
+SQLAlchemy = ">=1.4,<3"
+tenacity = ">=8.1.0,<8.4.0 || >8.4.0,<10"
+
+[[package]]
+name = "langchain-core"
+version = "0.3.41"
+description = "Building applications with LLMs through composability"
+optional = true
+python-versions = "<4.0,>=3.9"
+groups = ["main"]
+files = [
+ {file = "langchain_core-0.3.41-py3-none-any.whl", hash = "sha256:1a27cca5333bae7597de4004fb634b5f3e71667a3da6493b94ce83bcf15a23bd"},
+ {file = "langchain_core-0.3.41.tar.gz", hash = "sha256:d3ee9f3616ebbe7943470ade23d4a04e1729b1512c0ec55a4a07bd2ac64dedb4"},
+]
+
+[package.dependencies]
+jsonpatch = ">=1.33,<2.0"
+langsmith = ">=0.1.125,<0.4"
+packaging = ">=23.2,<25"
+pydantic = [
+ {version = ">=2.5.2,<3.0.0", markers = "python_full_version < \"3.12.4\""},
+ {version = ">=2.7.4,<3.0.0", markers = "python_full_version >= \"3.12.4\""},
+]
+PyYAML = ">=5.3"
+tenacity = ">=8.1.0,<8.4.0 || >8.4.0,<10.0.0"
+typing-extensions = ">=4.7"
+
+[[package]]
+name = "langchain-text-splitters"
+version = "0.3.6"
+description = "LangChain text splitting utilities"
+optional = true
+python-versions = "<4.0,>=3.9"
+groups = ["main"]
+files = [
+ {file = "langchain_text_splitters-0.3.6-py3-none-any.whl", hash = "sha256:e5d7b850f6c14259ea930be4a964a65fa95d9df7e1dbdd8bad8416db72292f4e"},
+ {file = "langchain_text_splitters-0.3.6.tar.gz", hash = "sha256:c537972f4b7c07451df431353a538019ad9dadff7a1073ea363946cea97e1bee"},
+]
+
+[package.dependencies]
+langchain-core = ">=0.3.34,<1.0.0"
+
+[[package]]
+name = "langsmith"
+version = "0.3.11"
+description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform."
+optional = true
+python-versions = "<4.0,>=3.9"
+groups = ["main"]
+files = [
+ {file = "langsmith-0.3.11-py3-none-any.whl", hash = "sha256:0cca22737ef07d3b038a437c141deda37e00add56022582680188b681bec095e"},
+ {file = "langsmith-0.3.11.tar.gz", hash = "sha256:ddf29d24352e99de79c9618aaf95679214324e146c5d3d9475a7ddd2870018b1"},
+]
+
+[package.dependencies]
+httpx = ">=0.23.0,<1"
+orjson = {version = ">=3.9.14,<4.0.0", markers = "platform_python_implementation != \"PyPy\""}
+packaging = ">=23.2"
+pydantic = [
+ {version = ">=1,<3", markers = "python_full_version < \"3.12.4\""},
+ {version = ">=2.7.4,<3.0.0", markers = "python_full_version >= \"3.12.4\""},
+]
+requests = ">=2,<3"
+requests-toolbelt = ">=1.0.0,<2.0.0"
+zstandard = ">=0.23.0,<0.24.0"
+
+[package.extras]
+langsmith-pyo3 = ["langsmith-pyo3 (>=0.1.0rc2,<0.2.0)"]
+pytest = ["pytest (>=7.0.0)", "rich (>=13.9.4,<14.0.0)"]
+
[[package]]
name = "litellm"
version = "1.62.1"
@@ -1384,6 +1779,26 @@ files = [
{file = "markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0"},
]
+[[package]]
+name = "marshmallow"
+version = "3.26.1"
+description = "A lightweight library for converting complex datatypes to and from native Python datatypes."
+optional = true
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "marshmallow-3.26.1-py3-none-any.whl", hash = "sha256:3350409f20a70a7e4e11a27661187b77cdcaeb20abca41c1454fe33636bea09c"},
+ {file = "marshmallow-3.26.1.tar.gz", hash = "sha256:e6d8affb6cb61d39d26402096dc0aee12d5a26d490a121f118d2e81dc0719dc6"},
+]
+
+[package.dependencies]
+packaging = ">=17.0"
+
+[package.extras]
+dev = ["marshmallow[tests]", "pre-commit (>=3.5,<5.0)", "tox"]
+docs = ["autodocsumm (==0.2.14)", "furo (==2024.8.6)", "sphinx (==8.1.3)", "sphinx-copybutton (==0.5.2)", "sphinx-issues (==5.0.0)", "sphinxext-opengraph (==0.9.1)"]
+tests = ["pytest", "simplejson"]
+
[[package]]
name = "mccabe"
version = "0.7.0"
@@ -1583,7 +1998,7 @@ version = "1.0.0"
description = "Type system extensions for programs checked with the mypy type checker."
optional = false
python-versions = ">=3.5"
-groups = ["dev"]
+groups = ["main", "dev"]
files = [
{file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
{file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
@@ -1626,16 +2041,81 @@ files = [
[package.dependencies]
nbformat = "*"
+[[package]]
+name = "numpy"
+version = "2.2.3"
+description = "Fundamental package for array computing in Python"
+optional = true
+python-versions = ">=3.10"
+groups = ["main"]
+files = [
+ {file = "numpy-2.2.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cbc6472e01952d3d1b2772b720428f8b90e2deea8344e854df22b0618e9cce71"},
+ {file = "numpy-2.2.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:cdfe0c22692a30cd830c0755746473ae66c4a8f2e7bd508b35fb3b6a0813d787"},
+ {file = "numpy-2.2.3-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:e37242f5324ffd9f7ba5acf96d774f9276aa62a966c0bad8dae692deebec7716"},
+ {file = "numpy-2.2.3-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:95172a21038c9b423e68be78fd0be6e1b97674cde269b76fe269a5dfa6fadf0b"},
+ {file = "numpy-2.2.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d5b47c440210c5d1d67e1cf434124e0b5c395eee1f5806fdd89b553ed1acd0a3"},
+ {file = "numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0391ea3622f5c51a2e29708877d56e3d276827ac5447d7f45e9bc4ade8923c52"},
+ {file = "numpy-2.2.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f6b3dfc7661f8842babd8ea07e9897fe3d9b69a1d7e5fbb743e4160f9387833b"},
+ {file = "numpy-2.2.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:1ad78ce7f18ce4e7df1b2ea4019b5817a2f6a8a16e34ff2775f646adce0a5027"},
+ {file = "numpy-2.2.3-cp310-cp310-win32.whl", hash = "sha256:5ebeb7ef54a7be11044c33a17b2624abe4307a75893c001a4800857956b41094"},
+ {file = "numpy-2.2.3-cp310-cp310-win_amd64.whl", hash = "sha256:596140185c7fa113563c67c2e894eabe0daea18cf8e33851738c19f70ce86aeb"},
+ {file = "numpy-2.2.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:16372619ee728ed67a2a606a614f56d3eabc5b86f8b615c79d01957062826ca8"},
+ {file = "numpy-2.2.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5521a06a3148686d9269c53b09f7d399a5725c47bbb5b35747e1cb76326b714b"},
+ {file = "numpy-2.2.3-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:7c8dde0ca2f77828815fd1aedfdf52e59071a5bae30dac3b4da2a335c672149a"},
+ {file = "numpy-2.2.3-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:77974aba6c1bc26e3c205c2214f0d5b4305bdc719268b93e768ddb17e3fdd636"},
+ {file = "numpy-2.2.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d42f9c36d06440e34226e8bd65ff065ca0963aeecada587b937011efa02cdc9d"},
+ {file = "numpy-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2712c5179f40af9ddc8f6727f2bd910ea0eb50206daea75f58ddd9fa3f715bb"},
+ {file = "numpy-2.2.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c8b0451d2ec95010d1db8ca733afc41f659f425b7f608af569711097fd6014e2"},
+ {file = "numpy-2.2.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d9b4a8148c57ecac25a16b0e11798cbe88edf5237b0df99973687dd866f05e1b"},
+ {file = "numpy-2.2.3-cp311-cp311-win32.whl", hash = "sha256:1f45315b2dc58d8a3e7754fe4e38b6fce132dab284a92851e41b2b344f6441c5"},
+ {file = "numpy-2.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f48ba6f6c13e5e49f3d3efb1b51c8193215c42ac82610a04624906a9270be6f"},
+ {file = "numpy-2.2.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:12c045f43b1d2915eca6b880a7f4a256f59d62df4f044788c8ba67709412128d"},
+ {file = "numpy-2.2.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:87eed225fd415bbae787f93a457af7f5990b92a334e346f72070bf569b9c9c95"},
+ {file = "numpy-2.2.3-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:712a64103d97c404e87d4d7c47fb0c7ff9acccc625ca2002848e0d53288b90ea"},
+ {file = "numpy-2.2.3-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:a5ae282abe60a2db0fd407072aff4599c279bcd6e9a2475500fc35b00a57c532"},
+ {file = "numpy-2.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5266de33d4c3420973cf9ae3b98b54a2a6d53a559310e3236c4b2b06b9c07d4e"},
+ {file = "numpy-2.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b787adbf04b0db1967798dba8da1af07e387908ed1553a0d6e74c084d1ceafe"},
+ {file = "numpy-2.2.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:34c1b7e83f94f3b564b35f480f5652a47007dd91f7c839f404d03279cc8dd021"},
+ {file = "numpy-2.2.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:4d8335b5f1b6e2bce120d55fb17064b0262ff29b459e8493d1785c18ae2553b8"},
+ {file = "numpy-2.2.3-cp312-cp312-win32.whl", hash = "sha256:4d9828d25fb246bedd31e04c9e75714a4087211ac348cb39c8c5f99dbb6683fe"},
+ {file = "numpy-2.2.3-cp312-cp312-win_amd64.whl", hash = "sha256:83807d445817326b4bcdaaaf8e8e9f1753da04341eceec705c001ff342002e5d"},
+ {file = "numpy-2.2.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:7bfdb06b395385ea9b91bf55c1adf1b297c9fdb531552845ff1d3ea6e40d5aba"},
+ {file = "numpy-2.2.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:23c9f4edbf4c065fddb10a4f6e8b6a244342d95966a48820c614891e5059bb50"},
+ {file = "numpy-2.2.3-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:a0c03b6be48aaf92525cccf393265e02773be8fd9551a2f9adbe7db1fa2b60f1"},
+ {file = "numpy-2.2.3-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:2376e317111daa0a6739e50f7ee2a6353f768489102308b0d98fcf4a04f7f3b5"},
+ {file = "numpy-2.2.3-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8fb62fe3d206d72fe1cfe31c4a1106ad2b136fcc1606093aeab314f02930fdf2"},
+ {file = "numpy-2.2.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:52659ad2534427dffcc36aac76bebdd02b67e3b7a619ac67543bc9bfe6b7cdb1"},
+ {file = "numpy-2.2.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1b416af7d0ed3271cad0f0a0d0bee0911ed7eba23e66f8424d9f3dfcdcae1304"},
+ {file = "numpy-2.2.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1402da8e0f435991983d0a9708b779f95a8c98c6b18a171b9f1be09005e64d9d"},
+ {file = "numpy-2.2.3-cp313-cp313-win32.whl", hash = "sha256:136553f123ee2951bfcfbc264acd34a2fc2f29d7cdf610ce7daf672b6fbaa693"},
+ {file = "numpy-2.2.3-cp313-cp313-win_amd64.whl", hash = "sha256:5b732c8beef1d7bc2d9e476dbba20aaff6167bf205ad9aa8d30913859e82884b"},
+ {file = "numpy-2.2.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:435e7a933b9fda8126130b046975a968cc2d833b505475e588339e09f7672890"},
+ {file = "numpy-2.2.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:7678556eeb0152cbd1522b684dcd215250885993dd00adb93679ec3c0e6e091c"},
+ {file = "numpy-2.2.3-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:2e8da03bd561504d9b20e7a12340870dfc206c64ea59b4cfee9fceb95070ee94"},
+ {file = "numpy-2.2.3-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:c9aa4496fd0e17e3843399f533d62857cef5900facf93e735ef65aa4bbc90ef0"},
+ {file = "numpy-2.2.3-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4ca91d61a4bf61b0f2228f24bbfa6a9facd5f8af03759fe2a655c50ae2c6610"},
+ {file = "numpy-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:deaa09cd492e24fd9b15296844c0ad1b3c976da7907e1c1ed3a0ad21dded6f76"},
+ {file = "numpy-2.2.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:246535e2f7496b7ac85deffe932896a3577be7af8fb7eebe7146444680297e9a"},
+ {file = "numpy-2.2.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:daf43a3d1ea699402c5a850e5313680ac355b4adc9770cd5cfc2940e7861f1bf"},
+ {file = "numpy-2.2.3-cp313-cp313t-win32.whl", hash = "sha256:cf802eef1f0134afb81fef94020351be4fe1d6681aadf9c5e862af6602af64ef"},
+ {file = "numpy-2.2.3-cp313-cp313t-win_amd64.whl", hash = "sha256:aee2512827ceb6d7f517c8b85aa5d3923afe8fc7a57d028cffcd522f1c6fd082"},
+ {file = "numpy-2.2.3-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3c2ec8a0f51d60f1e9c0c5ab116b7fc104b165ada3f6c58abf881cb2eb16044d"},
+ {file = "numpy-2.2.3-pp310-pypy310_pp73-macosx_14_0_x86_64.whl", hash = "sha256:ed2cf9ed4e8ebc3b754d398cba12f24359f018b416c380f577bbae112ca52fc9"},
+ {file = "numpy-2.2.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:39261798d208c3095ae4f7bc8eaeb3481ea8c6e03dc48028057d3cbdbdb8937e"},
+ {file = "numpy-2.2.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:783145835458e60fa97afac25d511d00a1eca94d4a8f3ace9fe2043003c678e4"},
+ {file = "numpy-2.2.3.tar.gz", hash = "sha256:dbdc15f0c81611925f382dfa97b3bd0bc2c1ce19d4fe50482cb0ddc12ba30020"},
+]
+
[[package]]
name = "openai"
-version = "1.65.2"
+version = "1.65.3"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
- {file = "openai-1.65.2-py3-none-any.whl", hash = "sha256:27d9fe8de876e31394c2553c4e6226378b6ed85e480f586ccfe25b7193fb1750"},
- {file = "openai-1.65.2.tar.gz", hash = "sha256:729623efc3fd91c956f35dd387fa5c718edd528c4bed9f00b40ef290200fb2ce"},
+ {file = "openai-1.65.3-py3-none-any.whl", hash = "sha256:a155fa5d60eccda516384d3d60d923e083909cc126f383fe4a350f79185c232a"},
+ {file = "openai-1.65.3.tar.gz", hash = "sha256:9b7cd8f79140d03d77f4ed8aeec6009be5dcd79bbc02f03b0e8cd83356004f71"},
]
[package.dependencies]
@@ -1652,6 +2132,96 @@ typing-extensions = ">=4.11,<5"
datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"]
realtime = ["websockets (>=13,<15)"]
+[[package]]
+name = "orjson"
+version = "3.10.15"
+description = "Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy"
+optional = true
+python-versions = ">=3.8"
+groups = ["main"]
+markers = "platform_python_implementation != \"PyPy\""
+files = [
+ {file = "orjson-3.10.15-cp310-cp310-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:552c883d03ad185f720d0c09583ebde257e41b9521b74ff40e08b7dec4559c04"},
+ {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:616e3e8d438d02e4854f70bfdc03a6bcdb697358dbaa6bcd19cbe24d24ece1f8"},
+ {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7c2c79fa308e6edb0ffab0a31fd75a7841bf2a79a20ef08a3c6e3b26814c8ca8"},
+ {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cb85490aa6bf98abd20607ab5c8324c0acb48d6da7863a51be48505646c814"},
+ {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:763dadac05e4e9d2bc14938a45a2d0560549561287d41c465d3c58aec818b164"},
+ {file = "orjson-3.10.15-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a330b9b4734f09a623f74a7490db713695e13b67c959713b78369f26b3dee6bf"},
+ {file = "orjson-3.10.15-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a61a4622b7ff861f019974f73d8165be1bd9a0855e1cad18ee167acacabeb061"},
+ {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:acd271247691574416b3228db667b84775c497b245fa275c6ab90dc1ffbbd2b3"},
+ {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:e4759b109c37f635aa5c5cc93a1b26927bfde24b254bcc0e1149a9fada253d2d"},
+ {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:9e992fd5cfb8b9f00bfad2fd7a05a4299db2bbe92e6440d9dd2fab27655b3182"},
+ {file = "orjson-3.10.15-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f95fb363d79366af56c3f26b71df40b9a583b07bbaaf5b317407c4d58497852e"},
+ {file = "orjson-3.10.15-cp310-cp310-win32.whl", hash = "sha256:f9875f5fea7492da8ec2444839dcc439b0ef298978f311103d0b7dfd775898ab"},
+ {file = "orjson-3.10.15-cp310-cp310-win_amd64.whl", hash = "sha256:17085a6aa91e1cd70ca8533989a18b5433e15d29c574582f76f821737c8d5806"},
+ {file = "orjson-3.10.15-cp311-cp311-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:c4cc83960ab79a4031f3119cc4b1a1c627a3dc09df125b27c4201dff2af7eaa6"},
+ {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ddbeef2481d895ab8be5185f2432c334d6dec1f5d1933a9c83014d188e102cef"},
+ {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9e590a0477b23ecd5b0ac865b1b907b01b3c5535f5e8a8f6ab0e503efb896334"},
+ {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a6be38bd103d2fd9bdfa31c2720b23b5d47c6796bcb1d1b598e3924441b4298d"},
+ {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ff4f6edb1578960ed628a3b998fa54d78d9bb3e2eb2cfc5c2a09732431c678d0"},
+ {file = "orjson-3.10.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0482b21d0462eddd67e7fce10b89e0b6ac56570424662b685a0d6fccf581e13"},
+ {file = "orjson-3.10.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bb5cc3527036ae3d98b65e37b7986a918955f85332c1ee07f9d3f82f3a6899b5"},
+ {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d569c1c462912acdd119ccbf719cf7102ea2c67dd03b99edcb1a3048651ac96b"},
+ {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:1e6d33efab6b71d67f22bf2962895d3dc6f82a6273a965fab762e64fa90dc399"},
+ {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:c33be3795e299f565681d69852ac8c1bc5c84863c0b0030b2b3468843be90388"},
+ {file = "orjson-3.10.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:eea80037b9fae5339b214f59308ef0589fc06dc870578b7cce6d71eb2096764c"},
+ {file = "orjson-3.10.15-cp311-cp311-win32.whl", hash = "sha256:d5ac11b659fd798228a7adba3e37c010e0152b78b1982897020a8e019a94882e"},
+ {file = "orjson-3.10.15-cp311-cp311-win_amd64.whl", hash = "sha256:cf45e0214c593660339ef63e875f32ddd5aa3b4adc15e662cdb80dc49e194f8e"},
+ {file = "orjson-3.10.15-cp312-cp312-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:9d11c0714fc85bfcf36ada1179400862da3288fc785c30e8297844c867d7505a"},
+ {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dba5a1e85d554e3897fa9fe6fbcff2ed32d55008973ec9a2b992bd9a65d2352d"},
+ {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7723ad949a0ea502df656948ddd8b392780a5beaa4c3b5f97e525191b102fff0"},
+ {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6fd9bc64421e9fe9bd88039e7ce8e58d4fead67ca88e3a4014b143cec7684fd4"},
+ {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dadba0e7b6594216c214ef7894c4bd5f08d7c0135f4dd0145600be4fbcc16767"},
+ {file = "orjson-3.10.15-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b48f59114fe318f33bbaee8ebeda696d8ccc94c9e90bc27dbe72153094e26f41"},
+ {file = "orjson-3.10.15-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:035fb83585e0f15e076759b6fedaf0abb460d1765b6a36f48018a52858443514"},
+ {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d13b7fe322d75bf84464b075eafd8e7dd9eae05649aa2a5354cfa32f43c59f17"},
+ {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:7066b74f9f259849629e0d04db6609db4cf5b973248f455ba5d3bd58a4daaa5b"},
+ {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:88dc3f65a026bd3175eb157fea994fca6ac7c4c8579fc5a86fc2114ad05705b7"},
+ {file = "orjson-3.10.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b342567e5465bd99faa559507fe45e33fc76b9fb868a63f1642c6bc0735ad02a"},
+ {file = "orjson-3.10.15-cp312-cp312-win32.whl", hash = "sha256:0a4f27ea5617828e6b58922fdbec67b0aa4bb844e2d363b9244c47fa2180e665"},
+ {file = "orjson-3.10.15-cp312-cp312-win_amd64.whl", hash = "sha256:ef5b87e7aa9545ddadd2309efe6824bd3dd64ac101c15dae0f2f597911d46eaa"},
+ {file = "orjson-3.10.15-cp313-cp313-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:bae0e6ec2b7ba6895198cd981b7cca95d1487d0147c8ed751e5632ad16f031a6"},
+ {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f93ce145b2db1252dd86af37d4165b6faa83072b46e3995ecc95d4b2301b725a"},
+ {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7c203f6f969210128af3acae0ef9ea6aab9782939f45f6fe02d05958fe761ef9"},
+ {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8918719572d662e18b8af66aef699d8c21072e54b6c82a3f8f6404c1f5ccd5e0"},
+ {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f71eae9651465dff70aa80db92586ad5b92df46a9373ee55252109bb6b703307"},
+ {file = "orjson-3.10.15-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e117eb299a35f2634e25ed120c37c641398826c2f5a3d3cc39f5993b96171b9e"},
+ {file = "orjson-3.10.15-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:13242f12d295e83c2955756a574ddd6741c81e5b99f2bef8ed8d53e47a01e4b7"},
+ {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:7946922ada8f3e0b7b958cc3eb22cfcf6c0df83d1fe5521b4a100103e3fa84c8"},
+ {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:b7155eb1623347f0f22c38c9abdd738b287e39b9982e1da227503387b81b34ca"},
+ {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:208beedfa807c922da4e81061dafa9c8489c6328934ca2a562efa707e049e561"},
+ {file = "orjson-3.10.15-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eca81f83b1b8c07449e1d6ff7074e82e3fd6777e588f1a6632127f286a968825"},
+ {file = "orjson-3.10.15-cp313-cp313-win32.whl", hash = "sha256:c03cd6eea1bd3b949d0d007c8d57049aa2b39bd49f58b4b2af571a5d3833d890"},
+ {file = "orjson-3.10.15-cp313-cp313-win_amd64.whl", hash = "sha256:fd56a26a04f6ba5fb2045b0acc487a63162a958ed837648c5781e1fe3316cfbf"},
+ {file = "orjson-3.10.15-cp38-cp38-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:5e8afd6200e12771467a1a44e5ad780614b86abb4b11862ec54861a82d677746"},
+ {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da9a18c500f19273e9e104cca8c1f0b40a6470bcccfc33afcc088045d0bf5ea6"},
+ {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb00b7bfbdf5d34a13180e4805d76b4567025da19a197645ca746fc2fb536586"},
+ {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:33aedc3d903378e257047fee506f11e0833146ca3e57a1a1fb0ddb789876c1e1"},
+ {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd0099ae6aed5eb1fc84c9eb72b95505a3df4267e6962eb93cdd5af03be71c98"},
+ {file = "orjson-3.10.15-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7c864a80a2d467d7786274fce0e4f93ef2a7ca4ff31f7fc5634225aaa4e9e98c"},
+ {file = "orjson-3.10.15-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c25774c9e88a3e0013d7d1a6c8056926b607a61edd423b50eb5c88fd7f2823ae"},
+ {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:e78c211d0074e783d824ce7bb85bf459f93a233eb67a5b5003498232ddfb0e8a"},
+ {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_armv7l.whl", hash = "sha256:43e17289ffdbbac8f39243916c893d2ae41a2ea1a9cbb060a56a4d75286351ae"},
+ {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:781d54657063f361e89714293c095f506c533582ee40a426cb6489c48a637b81"},
+ {file = "orjson-3.10.15-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:6875210307d36c94873f553786a808af2788e362bd0cf4c8e66d976791e7b528"},
+ {file = "orjson-3.10.15-cp38-cp38-win32.whl", hash = "sha256:305b38b2b8f8083cc3d618927d7f424349afce5975b316d33075ef0f73576b60"},
+ {file = "orjson-3.10.15-cp38-cp38-win_amd64.whl", hash = "sha256:5dd9ef1639878cc3efffed349543cbf9372bdbd79f478615a1c633fe4e4180d1"},
+ {file = "orjson-3.10.15-cp39-cp39-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:ffe19f3e8d68111e8644d4f4e267a069ca427926855582ff01fc012496d19969"},
+ {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d433bf32a363823863a96561a555227c18a522a8217a6f9400f00ddc70139ae2"},
+ {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:da03392674f59a95d03fa5fb9fe3a160b0511ad84b7a3914699ea5a1b3a38da2"},
+ {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3a63bb41559b05360ded9132032239e47983a39b151af1201f07ec9370715c82"},
+ {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3766ac4702f8f795ff3fa067968e806b4344af257011858cc3d6d8721588b53f"},
+ {file = "orjson-3.10.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a1c73dcc8fadbd7c55802d9aa093b36878d34a3b3222c41052ce6b0fc65f8e8"},
+ {file = "orjson-3.10.15-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b299383825eafe642cbab34be762ccff9fd3408d72726a6b2a4506d410a71ab3"},
+ {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:abc7abecdbf67a173ef1316036ebbf54ce400ef2300b4e26a7b843bd446c2480"},
+ {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:3614ea508d522a621384c1d6639016a5a2e4f027f3e4a1c93a51867615d28829"},
+ {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:295c70f9dc154307777ba30fe29ff15c1bcc9dfc5c48632f37d20a607e9ba85a"},
+ {file = "orjson-3.10.15-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:63309e3ff924c62404923c80b9e2048c1f74ba4b615e7584584389ada50ed428"},
+ {file = "orjson-3.10.15-cp39-cp39-win32.whl", hash = "sha256:a2f708c62d026fb5340788ba94a55c23df4e1869fec74be455e0b2f5363b8507"},
+ {file = "orjson-3.10.15-cp39-cp39-win_amd64.whl", hash = "sha256:efcf6c735c3d22ef60c4aa27a5238f1a477df85e9b15f2142f9d669beb2d13fd"},
+ {file = "orjson-3.10.15.tar.gz", hash = "sha256:05ca7fe452a2e9d8d9d706a2984c95b9c2ebc5db417ce0b7a49b91d50642a23e"},
+]
+
[[package]]
name = "packaging"
version = "24.2"
@@ -1873,6 +2443,19 @@ files = [
{file = "propcache-0.3.0.tar.gz", hash = "sha256:a8fd93de4e1d278046345f49e2238cdb298589325849b2645d4a94c53faeffc5"},
]
+[[package]]
+name = "pycparser"
+version = "2.22"
+description = "C parser in Python"
+optional = true
+python-versions = ">=3.8"
+groups = ["main"]
+markers = "platform_python_implementation == \"PyPy\""
+files = [
+ {file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
+ {file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
+]
+
[[package]]
name = "pydantic"
version = "2.10.6"
@@ -2134,6 +2717,21 @@ pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
+[[package]]
+name = "python-dateutil"
+version = "2.9.0.post0"
+description = "Extensions to the standard Python datetime module"
+optional = true
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
+groups = ["dev"]
+files = [
+ {file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"},
+ {file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"},
+]
+
+[package.dependencies]
+six = ">=1.5"
+
[[package]]
name = "python-dotenv"
version = "1.0.1"
@@ -2420,6 +3018,21 @@ urllib3 = ">=1.21.1,<3"
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+[[package]]
+name = "requests-toolbelt"
+version = "1.0.0"
+description = "A utility belt for advanced users of python-requests"
+optional = true
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+groups = ["main"]
+files = [
+ {file = "requests-toolbelt-1.0.0.tar.gz", hash = "sha256:7681a0a3d047012b5bdc0ee37d7f8f07ebe76ab08caeccfc3921ce23c88d5bc6"},
+ {file = "requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06"},
+]
+
+[package.dependencies]
+requests = ">=2.0.1,<3.0.0"
+
[[package]]
name = "rpds-py"
version = "0.23.1"
@@ -2561,6 +3174,36 @@ files = [
{file = "ruff-0.9.9.tar.gz", hash = "sha256:0062ed13f22173e85f8f7056f9a24016e692efeea8704d1a5e8011b8aa850933"},
]
+[[package]]
+name = "s3transfer"
+version = "0.11.4"
+description = "An Amazon S3 Transfer Manager"
+optional = true
+python-versions = ">=3.8"
+groups = ["dev"]
+files = [
+ {file = "s3transfer-0.11.4-py3-none-any.whl", hash = "sha256:ac265fa68318763a03bf2dc4f39d5cbd6a9e178d81cc9483ad27da33637e320d"},
+ {file = "s3transfer-0.11.4.tar.gz", hash = "sha256:559f161658e1cf0a911f45940552c696735f5c74e64362e515f333ebed87d679"},
+]
+
+[package.dependencies]
+botocore = ">=1.37.4,<2.0a.0"
+
+[package.extras]
+crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"]
+
+[[package]]
+name = "six"
+version = "1.17.0"
+description = "Python 2 and 3 compatibility utilities"
+optional = true
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
+groups = ["dev"]
+files = [
+ {file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"},
+ {file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"},
+]
+
[[package]]
name = "sniffio"
version = "1.3.1"
@@ -2585,6 +3228,102 @@ files = [
{file = "soupsieve-2.6.tar.gz", hash = "sha256:e2e68417777af359ec65daac1057404a3c8a5455bb8abc36f1a9866ab1a51abb"},
]
+[[package]]
+name = "sqlalchemy"
+version = "2.0.38"
+description = "Database Abstraction Library"
+optional = true
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5e1d9e429028ce04f187a9f522818386c8b076723cdbe9345708384f49ebcec6"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b87a90f14c68c925817423b0424381f0e16d80fc9a1a1046ef202ab25b19a444"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:402c2316d95ed90d3d3c25ad0390afa52f4d2c56b348f212aa9c8d072a40eee5"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6493bc0eacdbb2c0f0d260d8988e943fee06089cd239bd7f3d0c45d1657a70e2"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0561832b04c6071bac3aad45b0d3bb6d2c4f46a8409f0a7a9c9fa6673b41bc03"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:49aa2cdd1e88adb1617c672a09bf4ebf2f05c9448c6dbeba096a3aeeb9d4d443"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-win32.whl", hash = "sha256:64aa8934200e222f72fcfd82ee71c0130a9c07d5725af6fe6e919017d095b297"},
+ {file = "SQLAlchemy-2.0.38-cp310-cp310-win_amd64.whl", hash = "sha256:c57b8e0841f3fce7b703530ed70c7c36269c6d180ea2e02e36b34cb7288c50c7"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bf89e0e4a30714b357f5d46b6f20e0099d38b30d45fa68ea48589faf5f12f62d"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8455aa60da49cb112df62b4721bd8ad3654a3a02b9452c783e651637a1f21fa2"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f53c0d6a859b2db58332e0e6a921582a02c1677cc93d4cbb36fdf49709b327b2"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b3c4817dff8cef5697f5afe5fec6bc1783994d55a68391be24cb7d80d2dbc3a6"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c9cea5b756173bb86e2235f2f871b406a9b9d722417ae31e5391ccaef5348f2c"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:40e9cdbd18c1f84631312b64993f7d755d85a3930252f6276a77432a2b25a2f3"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-win32.whl", hash = "sha256:cb39ed598aaf102251483f3e4675c5dd6b289c8142210ef76ba24aae0a8f8aba"},
+ {file = "SQLAlchemy-2.0.38-cp311-cp311-win_amd64.whl", hash = "sha256:f9d57f1b3061b3e21476b0ad5f0397b112b94ace21d1f439f2db472e568178ae"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:12d5b06a1f3aeccf295a5843c86835033797fea292c60e72b07bcb5d820e6dd3"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:e036549ad14f2b414c725349cce0772ea34a7ab008e9cd67f9084e4f371d1f32"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee3bee874cb1fadee2ff2b79fc9fc808aa638670f28b2145074538d4a6a5028e"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e185ea07a99ce8b8edfc788c586c538c4b1351007e614ceb708fd01b095ef33e"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b79ee64d01d05a5476d5cceb3c27b5535e6bb84ee0f872ba60d9a8cd4d0e6579"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:afd776cf1ebfc7f9aa42a09cf19feadb40a26366802d86c1fba080d8e5e74bdd"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-win32.whl", hash = "sha256:a5645cd45f56895cfe3ca3459aed9ff2d3f9aaa29ff7edf557fa7a23515a3725"},
+ {file = "SQLAlchemy-2.0.38-cp312-cp312-win_amd64.whl", hash = "sha256:1052723e6cd95312f6a6eff9a279fd41bbae67633415373fdac3c430eca3425d"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ecef029b69843b82048c5b347d8e6049356aa24ed644006c9a9d7098c3bd3bfd"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9c8bcad7fc12f0cc5896d8e10fdf703c45bd487294a986903fe032c72201596b"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a0ef3f98175d77180ffdc623d38e9f1736e8d86b6ba70bff182a7e68bed7727"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b0ac78898c50e2574e9f938d2e5caa8fe187d7a5b69b65faa1ea4648925b096"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:9eb4fa13c8c7a2404b6a8e3772c17a55b1ba18bc711e25e4d6c0c9f5f541b02a"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5dba1cdb8f319084f5b00d41207b2079822aa8d6a4667c0f369fce85e34b0c86"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-win32.whl", hash = "sha256:eae27ad7580529a427cfdd52c87abb2dfb15ce2b7a3e0fc29fbb63e2ed6f8120"},
+ {file = "SQLAlchemy-2.0.38-cp313-cp313-win_amd64.whl", hash = "sha256:b335a7c958bc945e10c522c069cd6e5804f4ff20f9a744dd38e748eb602cbbda"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:40310db77a55512a18827488e592965d3dec6a3f1e3d8af3f8243134029daca3"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d3043375dd5bbcb2282894cbb12e6c559654c67b5fffb462fda815a55bf93f7"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70065dfabf023b155a9c2a18f573e47e6ca709b9e8619b2e04c54d5bcf193178"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:c058b84c3b24812c859300f3b5abf300daa34df20d4d4f42e9652a4d1c48c8a4"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:0398361acebb42975deb747a824b5188817d32b5c8f8aba767d51ad0cc7bb08d"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-win32.whl", hash = "sha256:a2bc4e49e8329f3283d99840c136ff2cd1a29e49b5624a46a290f04dff48e079"},
+ {file = "SQLAlchemy-2.0.38-cp37-cp37m-win_amd64.whl", hash = "sha256:9cd136184dd5f58892f24001cdce986f5d7e96059d004118d5410671579834a4"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:665255e7aae5f38237b3a6eae49d2358d83a59f39ac21036413fab5d1e810578"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:92f99f2623ff16bd4aaf786ccde759c1f676d39c7bf2855eb0b540e1ac4530c8"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa498d1392216fae47eaf10c593e06c34476ced9549657fca713d0d1ba5f7248"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9afbc3909d0274d6ac8ec891e30210563b2c8bdd52ebbda14146354e7a69373"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:57dd41ba32430cbcc812041d4de8d2ca4651aeefad2626921ae2a23deb8cd6ff"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:3e35d5565b35b66905b79ca4ae85840a8d40d31e0b3e2990f2e7692071b179ca"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-win32.whl", hash = "sha256:f0d3de936b192980209d7b5149e3c98977c3810d401482d05fb6d668d53c1c63"},
+ {file = "SQLAlchemy-2.0.38-cp38-cp38-win_amd64.whl", hash = "sha256:3868acb639c136d98107c9096303d2d8e5da2880f7706f9f8c06a7f961961149"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07258341402a718f166618470cde0c34e4cec85a39767dce4e24f61ba5e667ea"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0a826f21848632add58bef4f755a33d45105d25656a0c849f2dc2df1c71f6f50"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:386b7d136919bb66ced64d2228b92d66140de5fefb3c7df6bd79069a269a7b06"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2f2951dc4b4f990a4b394d6b382accb33141d4d3bd3ef4e2b27287135d6bdd68"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:8bf312ed8ac096d674c6aa9131b249093c1b37c35db6a967daa4c84746bc1bc9"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:6db316d6e340f862ec059dc12e395d71f39746a20503b124edc255973977b728"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-win32.whl", hash = "sha256:c09a6ea87658695e527104cf857c70f79f14e9484605e205217aae0ec27b45fc"},
+ {file = "SQLAlchemy-2.0.38-cp39-cp39-win_amd64.whl", hash = "sha256:12f5c9ed53334c3ce719155424dc5407aaa4f6cadeb09c5b627e06abb93933a1"},
+ {file = "SQLAlchemy-2.0.38-py3-none-any.whl", hash = "sha256:63178c675d4c80def39f1febd625a6333f44c0ba269edd8a468b156394b27753"},
+ {file = "sqlalchemy-2.0.38.tar.gz", hash = "sha256:e5a4d82bdb4bf1ac1285a68eab02d253ab73355d9f0fe725a97e1e0fa689decb"},
+]
+
+[package.dependencies]
+greenlet = {version = "!=0.4.17", markers = "python_version < \"3.14\" and (platform_machine == \"aarch64\" or platform_machine == \"ppc64le\" or platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"AMD64\" or platform_machine == \"win32\" or platform_machine == \"WIN32\")"}
+typing-extensions = ">=4.6.0"
+
+[package.extras]
+aiomysql = ["aiomysql (>=0.2.0)", "greenlet (!=0.4.17)"]
+aioodbc = ["aioodbc", "greenlet (!=0.4.17)"]
+aiosqlite = ["aiosqlite", "greenlet (!=0.4.17)", "typing_extensions (!=3.10.0.1)"]
+asyncio = ["greenlet (!=0.4.17)"]
+asyncmy = ["asyncmy (>=0.2.3,!=0.2.4,!=0.2.6)", "greenlet (!=0.4.17)"]
+mariadb-connector = ["mariadb (>=1.0.1,!=1.1.2,!=1.1.5,!=1.1.10)"]
+mssql = ["pyodbc"]
+mssql-pymssql = ["pymssql"]
+mssql-pyodbc = ["pyodbc"]
+mypy = ["mypy (>=0.910)"]
+mysql = ["mysqlclient (>=1.4.0)"]
+mysql-connector = ["mysql-connector-python"]
+oracle = ["cx_oracle (>=8)"]
+oracle-oracledb = ["oracledb (>=1.0.1)"]
+postgresql = ["psycopg2 (>=2.7)"]
+postgresql-asyncpg = ["asyncpg", "greenlet (!=0.4.17)"]
+postgresql-pg8000 = ["pg8000 (>=1.29.1)"]
+postgresql-psycopg = ["psycopg (>=3.0.7)"]
+postgresql-psycopg2binary = ["psycopg2-binary"]
+postgresql-psycopg2cffi = ["psycopg2cffi"]
+postgresql-psycopgbinary = ["psycopg[binary] (>=3.0.7)"]
+pymysql = ["pymysql"]
+sqlcipher = ["sqlcipher3_binary"]
+
[[package]]
name = "sse-starlette"
version = "2.2.1"
@@ -2623,6 +3362,22 @@ anyio = ">=3.6.2,<5"
[package.extras]
full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"]
+[[package]]
+name = "tenacity"
+version = "9.0.0"
+description = "Retry code until it succeeds"
+optional = true
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "tenacity-9.0.0-py3-none-any.whl", hash = "sha256:93de0c98785b27fcf659856aa9f54bfbd399e29969b0621bc7f762bd441b4539"},
+ {file = "tenacity-9.0.0.tar.gz", hash = "sha256:807f37ca97d62aa361264d497b0e31e92b8027044942bfa756160d908320d73b"},
+]
+
+[package.extras]
+doc = ["reno", "sphinx"]
+test = ["pytest", "tornado (>=4.5)", "typeguard"]
+
[[package]]
name = "termcolor"
version = "2.5.0"
@@ -2834,13 +3589,29 @@ files = [
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
]
+[[package]]
+name = "typing-inspect"
+version = "0.9.0"
+description = "Runtime inspection utilities for typing module."
+optional = true
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "typing_inspect-0.9.0-py3-none-any.whl", hash = "sha256:9ee6fc59062311ef8547596ab6b955e1b8aa46242d854bfc78f4f6b0eff35f9f"},
+ {file = "typing_inspect-0.9.0.tar.gz", hash = "sha256:b23fc42ff6f6ef6954e4852c1fb512cdd18dbea03134f91f856a95ccc9461f78"},
+]
+
+[package.dependencies]
+mypy-extensions = ">=0.3.0"
+typing-extensions = ">=3.7.4"
+
[[package]]
name = "urllib3"
version = "2.3.0"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = ">=3.9"
-groups = ["main"]
+groups = ["main", "dev"]
files = [
{file = "urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df"},
{file = "urllib3-2.3.0.tar.gz", hash = "sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d"},
@@ -3050,7 +3821,120 @@ enabler = ["pytest-enabler (>=2.2)"]
test = ["big-O", "importlib-resources", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more-itertools", "pytest (>=6,!=8.1.*)", "pytest-ignore-flaky"]
type = ["pytest-mypy"]
+[[package]]
+name = "zstandard"
+version = "0.23.0"
+description = "Zstandard bindings for Python"
+optional = true
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "zstandard-0.23.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bf0a05b6059c0528477fba9054d09179beb63744355cab9f38059548fedd46a9"},
+ {file = "zstandard-0.23.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fc9ca1c9718cb3b06634c7c8dec57d24e9438b2aa9a0f02b8bb36bf478538880"},
+ {file = "zstandard-0.23.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77da4c6bfa20dd5ea25cbf12c76f181a8e8cd7ea231c673828d0386b1740b8dc"},
+ {file = "zstandard-0.23.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b2170c7e0367dde86a2647ed5b6f57394ea7f53545746104c6b09fc1f4223573"},
+ {file = "zstandard-0.23.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c16842b846a8d2a145223f520b7e18b57c8f476924bda92aeee3a88d11cfc391"},
+ {file = "zstandard-0.23.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:157e89ceb4054029a289fb504c98c6a9fe8010f1680de0201b3eb5dc20aa6d9e"},
+ {file = "zstandard-0.23.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:203d236f4c94cd8379d1ea61db2fce20730b4c38d7f1c34506a31b34edc87bdd"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:dc5d1a49d3f8262be192589a4b72f0d03b72dcf46c51ad5852a4fdc67be7b9e4"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:752bf8a74412b9892f4e5b58f2f890a039f57037f52c89a740757ebd807f33ea"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:80080816b4f52a9d886e67f1f96912891074903238fe54f2de8b786f86baded2"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:84433dddea68571a6d6bd4fbf8ff398236031149116a7fff6f777ff95cad3df9"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:ab19a2d91963ed9e42b4e8d77cd847ae8381576585bad79dbd0a8837a9f6620a"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:59556bf80a7094d0cfb9f5e50bb2db27fefb75d5138bb16fb052b61b0e0eeeb0"},
+ {file = "zstandard-0.23.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:27d3ef2252d2e62476389ca8f9b0cf2bbafb082a3b6bfe9d90cbcbb5529ecf7c"},
+ {file = "zstandard-0.23.0-cp310-cp310-win32.whl", hash = "sha256:5d41d5e025f1e0bccae4928981e71b2334c60f580bdc8345f824e7c0a4c2a813"},
+ {file = "zstandard-0.23.0-cp310-cp310-win_amd64.whl", hash = "sha256:519fbf169dfac1222a76ba8861ef4ac7f0530c35dd79ba5727014613f91613d4"},
+ {file = "zstandard-0.23.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:34895a41273ad33347b2fc70e1bff4240556de3c46c6ea430a7ed91f9042aa4e"},
+ {file = "zstandard-0.23.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:77ea385f7dd5b5676d7fd943292ffa18fbf5c72ba98f7d09fc1fb9e819b34c23"},
+ {file = "zstandard-0.23.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:983b6efd649723474f29ed42e1467f90a35a74793437d0bc64a5bf482bedfa0a"},
+ {file = "zstandard-0.23.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:80a539906390591dd39ebb8d773771dc4db82ace6372c4d41e2d293f8e32b8db"},
+ {file = "zstandard-0.23.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:445e4cb5048b04e90ce96a79b4b63140e3f4ab5f662321975679b5f6360b90e2"},
+ {file = "zstandard-0.23.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd30d9c67d13d891f2360b2a120186729c111238ac63b43dbd37a5a40670b8ca"},
+ {file = "zstandard-0.23.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d20fd853fbb5807c8e84c136c278827b6167ded66c72ec6f9a14b863d809211c"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ed1708dbf4d2e3a1c5c69110ba2b4eb6678262028afd6c6fbcc5a8dac9cda68e"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:be9b5b8659dff1f913039c2feee1aca499cfbc19e98fa12bc85e037c17ec6ca5"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:65308f4b4890aa12d9b6ad9f2844b7ee42c7f7a4fd3390425b242ffc57498f48"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:98da17ce9cbf3bfe4617e836d561e433f871129e3a7ac16d6ef4c680f13a839c"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:8ed7d27cb56b3e058d3cf684d7200703bcae623e1dcc06ed1e18ecda39fee003"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:b69bb4f51daf461b15e7b3db033160937d3ff88303a7bc808c67bbc1eaf98c78"},
+ {file = "zstandard-0.23.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:034b88913ecc1b097f528e42b539453fa82c3557e414b3de9d5632c80439a473"},
+ {file = "zstandard-0.23.0-cp311-cp311-win32.whl", hash = "sha256:f2d4380bf5f62daabd7b751ea2339c1a21d1c9463f1feb7fc2bdcea2c29c3160"},
+ {file = "zstandard-0.23.0-cp311-cp311-win_amd64.whl", hash = "sha256:62136da96a973bd2557f06ddd4e8e807f9e13cbb0bfb9cc06cfe6d98ea90dfe0"},
+ {file = "zstandard-0.23.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b4567955a6bc1b20e9c31612e615af6b53733491aeaa19a6b3b37f3b65477094"},
+ {file = "zstandard-0.23.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1e172f57cd78c20f13a3415cc8dfe24bf388614324d25539146594c16d78fcc8"},
+ {file = "zstandard-0.23.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b0e166f698c5a3e914947388c162be2583e0c638a4703fc6a543e23a88dea3c1"},
+ {file = "zstandard-0.23.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12a289832e520c6bd4dcaad68e944b86da3bad0d339ef7989fb7e88f92e96072"},
+ {file = "zstandard-0.23.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d50d31bfedd53a928fed6707b15a8dbeef011bb6366297cc435accc888b27c20"},
+ {file = "zstandard-0.23.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:72c68dda124a1a138340fb62fa21b9bf4848437d9ca60bd35db36f2d3345f373"},
+ {file = "zstandard-0.23.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:53dd9d5e3d29f95acd5de6802e909ada8d8d8cfa37a3ac64836f3bc4bc5512db"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:6a41c120c3dbc0d81a8e8adc73312d668cd34acd7725f036992b1b72d22c1772"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:40b33d93c6eddf02d2c19f5773196068d875c41ca25730e8288e9b672897c105"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:9206649ec587e6b02bd124fb7799b86cddec350f6f6c14bc82a2b70183e708ba"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:76e79bc28a65f467e0409098fa2c4376931fd3207fbeb6b956c7c476d53746dd"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:66b689c107857eceabf2cf3d3fc699c3c0fe8ccd18df2219d978c0283e4c508a"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:9c236e635582742fee16603042553d276cca506e824fa2e6489db04039521e90"},
+ {file = "zstandard-0.23.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a8fffdbd9d1408006baaf02f1068d7dd1f016c6bcb7538682622c556e7b68e35"},
+ {file = "zstandard-0.23.0-cp312-cp312-win32.whl", hash = "sha256:dc1d33abb8a0d754ea4763bad944fd965d3d95b5baef6b121c0c9013eaf1907d"},
+ {file = "zstandard-0.23.0-cp312-cp312-win_amd64.whl", hash = "sha256:64585e1dba664dc67c7cdabd56c1e5685233fbb1fc1966cfba2a340ec0dfff7b"},
+ {file = "zstandard-0.23.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:576856e8594e6649aee06ddbfc738fec6a834f7c85bf7cadd1c53d4a58186ef9"},
+ {file = "zstandard-0.23.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:38302b78a850ff82656beaddeb0bb989a0322a8bbb1bf1ab10c17506681d772a"},
+ {file = "zstandard-0.23.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2240ddc86b74966c34554c49d00eaafa8200a18d3a5b6ffbf7da63b11d74ee2"},
+ {file = "zstandard-0.23.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2ef230a8fd217a2015bc91b74f6b3b7d6522ba48be29ad4ea0ca3a3775bf7dd5"},
+ {file = "zstandard-0.23.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:774d45b1fac1461f48698a9d4b5fa19a69d47ece02fa469825b442263f04021f"},
+ {file = "zstandard-0.23.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f77fa49079891a4aab203d0b1744acc85577ed16d767b52fc089d83faf8d8ed"},
+ {file = "zstandard-0.23.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac184f87ff521f4840e6ea0b10c0ec90c6b1dcd0bad2f1e4a9a1b4fa177982ea"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c363b53e257246a954ebc7c488304b5592b9c53fbe74d03bc1c64dda153fb847"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:e7792606d606c8df5277c32ccb58f29b9b8603bf83b48639b7aedf6df4fe8171"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a0817825b900fcd43ac5d05b8b3079937073d2b1ff9cf89427590718b70dd840"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:9da6bc32faac9a293ddfdcb9108d4b20416219461e4ec64dfea8383cac186690"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:fd7699e8fd9969f455ef2926221e0233f81a2542921471382e77a9e2f2b57f4b"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:d477ed829077cd945b01fc3115edd132c47e6540ddcd96ca169facff28173057"},
+ {file = "zstandard-0.23.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fa6ce8b52c5987b3e34d5674b0ab529a4602b632ebab0a93b07bfb4dfc8f8a33"},
+ {file = "zstandard-0.23.0-cp313-cp313-win32.whl", hash = "sha256:a9b07268d0c3ca5c170a385a0ab9fb7fdd9f5fd866be004c4ea39e44edce47dd"},
+ {file = "zstandard-0.23.0-cp313-cp313-win_amd64.whl", hash = "sha256:f3513916e8c645d0610815c257cbfd3242adfd5c4cfa78be514e5a3ebb42a41b"},
+ {file = "zstandard-0.23.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2ef3775758346d9ac6214123887d25c7061c92afe1f2b354f9388e9e4d48acfc"},
+ {file = "zstandard-0.23.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4051e406288b8cdbb993798b9a45c59a4896b6ecee2f875424ec10276a895740"},
+ {file = "zstandard-0.23.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2d1a054f8f0a191004675755448d12be47fa9bebbcffa3cdf01db19f2d30a54"},
+ {file = "zstandard-0.23.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f83fa6cae3fff8e98691248c9320356971b59678a17f20656a9e59cd32cee6d8"},
+ {file = "zstandard-0.23.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:32ba3b5ccde2d581b1e6aa952c836a6291e8435d788f656fe5976445865ae045"},
+ {file = "zstandard-0.23.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2f146f50723defec2975fb7e388ae3a024eb7151542d1599527ec2aa9cacb152"},
+ {file = "zstandard-0.23.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1bfe8de1da6d104f15a60d4a8a768288f66aa953bbe00d027398b93fb9680b26"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:29a2bc7c1b09b0af938b7a8343174b987ae021705acabcbae560166567f5a8db"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:61f89436cbfede4bc4e91b4397eaa3e2108ebe96d05e93d6ccc95ab5714be512"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:53ea7cdc96c6eb56e76bb06894bcfb5dfa93b7adcf59d61c6b92674e24e2dd5e"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:a4ae99c57668ca1e78597d8b06d5af837f377f340f4cce993b551b2d7731778d"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:379b378ae694ba78cef921581ebd420c938936a153ded602c4fea612b7eaa90d"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:50a80baba0285386f97ea36239855f6020ce452456605f262b2d33ac35c7770b"},
+ {file = "zstandard-0.23.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:61062387ad820c654b6a6b5f0b94484fa19515e0c5116faf29f41a6bc91ded6e"},
+ {file = "zstandard-0.23.0-cp38-cp38-win32.whl", hash = "sha256:b8c0bd73aeac689beacd4e7667d48c299f61b959475cdbb91e7d3d88d27c56b9"},
+ {file = "zstandard-0.23.0-cp38-cp38-win_amd64.whl", hash = "sha256:a05e6d6218461eb1b4771d973728f0133b2a4613a6779995df557f70794fd60f"},
+ {file = "zstandard-0.23.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3aa014d55c3af933c1315eb4bb06dd0459661cc0b15cd61077afa6489bec63bb"},
+ {file = "zstandard-0.23.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0a7f0804bb3799414af278e9ad51be25edf67f78f916e08afdb983e74161b916"},
+ {file = "zstandard-0.23.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb2b1ecfef1e67897d336de3a0e3f52478182d6a47eda86cbd42504c5cbd009a"},
+ {file = "zstandard-0.23.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:837bb6764be6919963ef41235fd56a6486b132ea64afe5fafb4cb279ac44f259"},
+ {file = "zstandard-0.23.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1516c8c37d3a053b01c1c15b182f3b5f5eef19ced9b930b684a73bad121addf4"},
+ {file = "zstandard-0.23.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:48ef6a43b1846f6025dde6ed9fee0c24e1149c1c25f7fb0a0585572b2f3adc58"},
+ {file = "zstandard-0.23.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11e3bf3c924853a2d5835b24f03eeba7fc9b07d8ca499e247e06ff5676461a15"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2fb4535137de7e244c230e24f9d1ec194f61721c86ebea04e1581d9d06ea1269"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8c24f21fa2af4bb9f2c492a86fe0c34e6d2c63812a839590edaf177b7398f700"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:a8c86881813a78a6f4508ef9daf9d4995b8ac2d147dcb1a450448941398091c9"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:fe3b385d996ee0822fd46528d9f0443b880d4d05528fd26a9119a54ec3f91c69"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:82d17e94d735c99621bf8ebf9995f870a6b3e6d14543b99e201ae046dfe7de70"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:c7c517d74bea1a6afd39aa612fa025e6b8011982a0897768a2f7c8ab4ebb78a2"},
+ {file = "zstandard-0.23.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:1fd7e0f1cfb70eb2f95a19b472ee7ad6d9a0a992ec0ae53286870c104ca939e5"},
+ {file = "zstandard-0.23.0-cp39-cp39-win32.whl", hash = "sha256:43da0f0092281bf501f9c5f6f3b4c975a8a0ea82de49ba3f7100e64d422a1274"},
+ {file = "zstandard-0.23.0-cp39-cp39-win_amd64.whl", hash = "sha256:f8346bfa098532bc1fb6c7ef06783e969d87a99dd1d2a5a18a892c1d7a643c58"},
+ {file = "zstandard-0.23.0.tar.gz", hash = "sha256:b2d8c62d08e7255f68f7a740bae85b3c9b8e5466baa9cbf7f57f1cde0ac6bc09"},
+]
+
+[package.dependencies]
+cffi = {version = ">=1.11", markers = "platform_python_implementation == \"PyPy\""}
+
+[package.extras]
+cffi = ["cffi (>=1.11)"]
+
[metadata]
lock-version = "2.1"
python-versions = ">= 3.11,<4.0"
-content-hash = "c8806d374640c77818d21dbda597aa90cbadac06a918ba61aae9c10a5f8635ee"
+content-hash = "b8fbc2dfdf89be41444ce64b1b2172f6b67c1ecc19884fc19d1d79e00f3bb24b"
diff --git a/python/pyproject.toml b/python/pyproject.toml
index bc7fc6899..3a93b14bd 100644
--- a/python/pyproject.toml
+++ b/python/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "beeai-framework"
-version="0.1.3"
+version="0.1.4"
license = "Apache-2.0"
readme = "README.md"
authors = [{ name = "IBM Corp." }]
@@ -40,6 +40,8 @@ duckduckgo-search = "^7.3.2"
json-repair = "^0.39.0"
wikipedia-api = "^0.8.1"
async-generator = "^1.10"
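+# optional extras backing the LangChain tool integration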
+langchain-core = {version = "^0.3.41", optional = true}
+langchain-community = {version = "^0.3.19", optional = true}
[tool.poetry.group.dev.dependencies]
pytest = "^8.3.4"
@@ -52,6 +54,8 @@ pytest-asyncio = "^0.25.3"
nbstripout = "^0.8.1"
pytest-cov = "^6.0.0"
types-chevron = "^0.14.2.20250103"
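+# boto3 is only needed to exercise the Amazon Bedrock adapter in tests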
+boto3 = {version = "^1.37.5", optional = true}
+
[tool.mypy]
mypy_path = "$MYPY_CONFIG_FILE_DIR/beeai_framework"
diff --git a/python/tests/backend/test_chatmodel.py b/python/tests/backend/test_chatmodel.py
index 179c6a3bc..1d59f1191 100644
--- a/python/tests/backend/test_chatmodel.py
+++ b/python/tests/backend/test_chatmodel.py
@@ -19,9 +19,11 @@
import pytest_asyncio
from pydantic import BaseModel
+from beeai_framework.adapters.amazon_bedrock.backend.chat import AmazonBedrockChatModel
from beeai_framework.adapters.groq.backend.chat import GroqChatModel
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
from beeai_framework.adapters.openai.backend.chat import OpenAIChatModel
+from beeai_framework.adapters.vertexai.backend.chat import VertexAIChatModel
from beeai_framework.adapters.watsonx.backend.chat import WatsonxChatModel
from beeai_framework.adapters.xai.backend.chat import XAIChatModel
from beeai_framework.backend.chat import (
@@ -183,3 +185,22 @@ def test_chat_model_from(monkeypatch: pytest.MonkeyPatch) -> None:
xai_chat_model = ChatModel.from_name("xai:grok-2")
assert isinstance(xai_chat_model, XAIChatModel)
+
+    # Vertex AI requires a Google Cloud project to be configured
+    monkeypatch.setenv("VERTEXAI_PROJECT", "myproject")
+    vertexai_chat_model = ChatModel.from_name("vertexai:gemini-2.0-flash-lite-001")
+    assert isinstance(vertexai_chat_model, VertexAIChatModel)
+
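+    # Amazon Bedrock resolves AWS credentials and region from the environment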
+ monkeypatch.setenv("AWS_ACCESS_KEY_ID", "secret1")
+ monkeypatch.setenv("AWS_SECRET_ACCESS_KEY", "secret2")
+ monkeypatch.setenv("AWS_REGION_NAME", "region1")
+ amazon_bedrock_chat_model = ChatModel.from_name("amazon_bedrock:meta.llama3-8b-instruct-v1:0")
+ assert isinstance(amazon_bedrock_chat_model, AmazonBedrockChatModel)
diff --git a/python/tests/examples/test_examples.py b/python/tests/examples/test_examples.py
index 4ac5dff89..8ca5aee5e 100644
--- a/python/tests/examples/test_examples.py
+++ b/python/tests/examples/test_examples.py
@@ -34,6 +34,9 @@
"backend/providers/openai_example.py" if os.getenv("OPENAI_API_KEY") is None else None,
"backend/providers/groq.py" if os.getenv("GROQ_API_KEY") is None else None,
"backend/providers/xai.py" if os.getenv("XAI_API_KEY") is None else None,
+ # Google backend picks up environment variables/google auth credentials directly
+ "backend/providers/vertexai.py" if os.getenv("VERTEXAI_PROJECT") is None else None,
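+    # Amazon Bedrock example likewise needs AWS credentials from the environment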
+ "backend/providers/amazon_bedrock.py" if os.getenv("AWS_ACCESS_KEY_ID") is None else None,
# requires Searx instance
"workflows/searx_agent.py",
],
diff --git a/python/tests/runners/test_default_runner.py b/python/tests/runners/test_default_runner.py
index c240bcb6c..6a4056274 100644
--- a/python/tests/runners/test_default_runner.py
+++ b/python/tests/runners/test_default_runner.py
@@ -15,15 +15,17 @@
import pytest
-from beeai_framework.agents.runners.base import BeeRunnerToolInput
-from beeai_framework.agents.runners.default.runner import DefaultRunner
+from beeai_framework.agents.react.runners.base import ReActAgentRunnerToolInput
+from beeai_framework.agents.react.runners.default.runner import DefaultRunner
+from beeai_framework.agents.react.types import (
+ ReActAgentInput,
+ ReActAgentIterationMeta,
+ ReActAgentIterationResult,
+ ReActAgentRunInput,
+ ReActAgentRunOptions,
+)
from beeai_framework.agents.types import (
AgentExecutionConfig,
- BeeInput,
- BeeIterationResult,
- BeeMeta,
- BeeRunInput,
- BeeRunOptions,
)
from beeai_framework.backend.chat import ChatModel
from beeai_framework.memory.token_memory import TokenMemory
@@ -39,23 +41,23 @@
async def test_runner_init() -> None:
llm: ChatModel = ChatModel.from_name("ollama:granite3.1-dense:8b")
- input = BeeInput(
+ input = ReActAgentInput(
llm=llm,
tools=[OpenMeteoTool()],
memory=TokenMemory(llm),
execution=AgentExecutionConfig(max_iterations=10, max_retries_per_step=3, total_max_retries=10),
)
runner = DefaultRunner(
- input=input, options=BeeRunOptions(execution=input.execution, signal=None), run=None
+ input=input, options=ReActAgentRunOptions(execution=input.execution, signal=None), run=None
) # TODO Figure out run
- await runner.init(BeeRunInput(prompt="What is the current weather in White Plains?"))
+ await runner.init(ReActAgentRunInput(prompt="What is the current weather in White Plains?"))
await runner.tool(
- input=BeeRunnerToolInput(
- state=BeeIterationResult(tool_name="OpenMeteoTool", tool_input={"location_name": "White Plains"}),
+ input=ReActAgentRunnerToolInput(
+ state=ReActAgentIterationResult(tool_name="OpenMeteoTool", tool_input={"location_name": "White Plains"}),
emitter=None,
- meta=BeeMeta(iteration=0),
+ meta=ReActAgentIterationMeta(iteration=0),
signal=None,
)
)
diff --git a/python/tests/utils/test_custom_logger.py b/python/tests/test_custom_logger.py
similarity index 88%
rename from python/tests/utils/test_custom_logger.py
rename to python/tests/test_custom_logger.py
index 5856f2ee3..4fd90fb47 100644
--- a/python/tests/utils/test_custom_logger.py
+++ b/python/tests/test_custom_logger.py
@@ -17,7 +17,8 @@
import pytest
from beeai_framework.backend import Role
-from beeai_framework.utils import BeeLogger, MessageEvent
+from beeai_framework.logger import Logger
+from beeai_framework.utils import MessageEvent
"""
Unit Tests
@@ -26,7 +27,7 @@
@pytest.mark.unit
def test_redefine_logging_methods() -> None:
- logger = BeeLogger("app", level=logging.DEBUG)
+ logger = Logger("app", level=logging.DEBUG)
logger.add_logging_level("TEST1", 1, "test") # adds test log level
logger.add_logging_level("TEST2", 2, "test") # does not redefine test log level
logger.add_logging_level("INFO", logging.INFO) # does not redefine info log level
@@ -35,7 +36,7 @@ def test_redefine_logging_methods() -> None:
@pytest.mark.unit
def test_log_events() -> None:
- logger = BeeLogger("app")
+ logger = Logger("app")
event = MessageEvent(source=Role.USER, message="Test")
logger.log_message_events(event)
logger.info("Test", extra={"is_event_message": False})
diff --git a/python/tests/tools/test_decorator.py b/python/tests/tools/test_decorator.py
index 62a22c14a..3adb7eef7 100644
--- a/python/tests/tools/test_decorator.py
+++ b/python/tests/tools/test_decorator.py
@@ -15,7 +15,7 @@
import pytest
-from beeai_framework.tools import tool
+from beeai_framework.tools import StringToolOutput, tool
"""
Unit Tests
@@ -40,8 +40,8 @@ def test_tool(query: str) -> str:
return query
query = "Hello!"
- result = await test_tool.run({"query": query})
- assert result == query
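+    # the decorated tool now returns a ToolOutput wrapper, so compare its text content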
+ result: StringToolOutput = await test_tool.run({"query": query})
+ assert result.get_text_content() == query
@pytest.mark.unit
@@ -55,8 +55,8 @@ def test_tool() -> str:
"""
return "Hello!"
- result = await test_tool.run({})
- assert result == "Hello!"
+ result: StringToolOutput = await test_tool.run({})
+ assert result.get_text_content() == "Hello!"
@pytest.mark.unit
@@ -67,8 +67,8 @@ def test_tool() -> str:
""""""
return "Hello!"
- result = await test_tool.run({})
- assert result == "Hello!"
+ result: StringToolOutput = await test_tool.run({})
+ assert result.get_text_content() == "Hello!"
@pytest.mark.unit
diff --git a/python/tests/tools/test_emitter.py b/python/tests/tools/test_emitter.py
index bdf6abe26..633406e88 100644
--- a/python/tests/tools/test_emitter.py
+++ b/python/tests/tools/test_emitter.py
@@ -19,7 +19,7 @@
from beeai_framework.emitter.emitter import Emitter, EventMeta
from beeai_framework.emitter.types import EmitterOptions
-from beeai_framework.tools import tool
+from beeai_framework.tools import StringToolOutput, tool
"""
Unit Tests
@@ -51,5 +51,5 @@ def test_tool(query: str) -> str:
return query
query = "Hello!"
- result = await test_tool.run({"query": query}).observe(observer)
- assert result == query
+ result: StringToolOutput = await test_tool.run({"query": query}).observe(observer)
+ assert result.get_text_content() == query
diff --git a/python/tests/tools/test_mcp_tool.py b/python/tests/tools/test_mcp_tool.py
index fd7746baa..1688497f0 100644
--- a/python/tests/tools/test_mcp_tool.py
+++ b/python/tests/tools/test_mcp_tool.py
@@ -14,14 +14,14 @@
from collections.abc import Callable
-from unittest.mock import AsyncMock, MagicMock
+from unittest.mock import AsyncMock, MagicMock, patch
import pytest
-from mcp.client.session import ClientSession
+from mcp import ClientSession, StdioServerParameters
from mcp.types import CallToolResult, TextContent
from mcp.types import Tool as MCPToolInfo
-from beeai_framework.tools.mcp_tools import MCPTool, MCPToolOutput, Tool
+from beeai_framework.tools.mcp_tools import MCPTool, Tool
"""
Utility functions and classes
@@ -34,13 +34,22 @@ def mock_client_session() -> AsyncMock:
return AsyncMock(spec=ClientSession)
+@pytest.fixture
+def mock_server_params() -> AsyncMock:
+ return AsyncMock(spec=StdioServerParameters)
+
+
# Basic Tool Test Fixtures
@pytest.fixture
def mock_tool_info() -> MCPToolInfo:
return MCPToolInfo(
name="test_tool",
description="A test tool",
- inputSchema={},
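+        # non-empty JSON Schema so a concrete input model can be built for the tool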
+ inputSchema={
+ "type": "object",
+ "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
+ "required": ["a", "b"],
+ },
)
@@ -79,45 +88,47 @@ def add_result() -> CallToolResult:
)
-"""
-Unit Tests
-"""
-
-
# Basic Tool Tests
class TestMCPTool:
@pytest.mark.asyncio
@pytest.mark.unit
- async def test_mcp_tool_initialization(self, mock_client_session: ClientSession, mock_tool_info: Tool) -> None:
- tool = MCPTool(client=mock_client_session, tool=mock_tool_info)
+ async def test_mcp_tool_initialization(
+ self, mock_server_params: StdioServerParameters, mock_tool_info: Tool
+ ) -> None:
+        tool = MCPTool(server_params=mock_server_params, tool=mock_tool_info)
assert tool.name == "test_tool"
assert tool.description == "A test tool"
- assert tool.input_schema() == {}
@pytest.mark.asyncio
@pytest.mark.unit
+ @patch.object(MCPTool, "_run")
async def test_mcp_tool_run(
- self, mock_client_session: ClientSession, mock_tool_info: Tool, call_tool_result: MCPToolOutput
+ self,
+        mock__run: AsyncMock,
+ mock_server_params: StdioServerParameters,
+ mock_tool_info: Tool,
+ call_tool_result: str,
) -> None:
- mock_client_session.call_tool = AsyncMock(return_value=call_tool_result)
- tool = MCPTool(client=mock_client_session, tool=mock_tool_info)
- input_data = {"key": "value"}
+ mock__run.return_value = str(call_tool_result)
+ tool = MCPTool(server_params=mock_server_params, tool=mock_tool_info)
+ input_data = {"a": 1, "b": 2}
- result = await tool._run(input_data)
+ result = await tool.run(input_data)
- mock_client_session.call_tool.assert_awaited_once_with(name="test_tool", arguments=input_data)
- assert isinstance(result, MCPToolOutput)
- assert result.result == call_tool_result
+ assert isinstance(result, str)
+ assert result == str(call_tool_result)
@pytest.mark.asyncio
@pytest.mark.unit
- async def test_mcp_tool_from_client(self, mock_client_session: ClientSession, mock_tool_info: Tool) -> None:
+ async def test_mcp_tool_from_client(
+ self, mock_client_session: ClientSession, mock_server_params: StdioServerParameters, mock_tool_info: Tool
+ ) -> None:
tools_result = MagicMock()
tools_result.tools = [mock_tool_info]
mock_client_session.list_tools = AsyncMock(return_value=tools_result)
- tools = await MCPTool.from_client(mock_client_session)
+ tools = await MCPTool.from_client(mock_client_session, server_params=mock_server_params)
mock_client_session.list_tools.assert_awaited_once()
assert len(tools) == 1
@@ -129,30 +140,35 @@ async def test_mcp_tool_from_client(self, mock_client_session: ClientSession, mo
class TestAddNumbersTool:
@pytest.mark.asyncio
@pytest.mark.unit
+ @patch.object(MCPTool, "_run")
async def test_add_numbers_mcp(
- self, mock_client_session: ClientSession, add_numbers_tool_info: MCPToolInfo, add_result: Callable
+ self,
+ mock__run, # noqa: ANN001
+ mock_server_params: StdioServerParameters,
+ add_numbers_tool_info: MCPToolInfo,
+ add_result: Callable,
) -> None:
- mock_client_session.call_tool = AsyncMock(return_value=add_result)
- tool = MCPTool(client=mock_client_session, tool=add_numbers_tool_info)
+ mock__run.return_value = str(add_result)
+ tool = MCPTool(server_params=mock_server_params, tool=add_numbers_tool_info)
input_data = {"a": 5, "b": 3}
- result = await tool._run(input_data)
+ result = await tool.run(input_data)
- mock_client_session.call_tool.assert_awaited_once_with(name="add_numbers", arguments=input_data)
- assert isinstance(result, MCPToolOutput)
- assert result.result.output == "8"
- assert result.result.content[0].text == "8"
+ assert isinstance(result, str)
@pytest.mark.asyncio
@pytest.mark.unit
async def test_add_numbers_from_client(
- self, mock_client_session: ClientSession, add_numbers_tool_info: MCPToolInfo
+ self,
+ mock_client_session: ClientSession,
+ mock_server_params: StdioServerParameters,
+ add_numbers_tool_info: MCPToolInfo,
) -> None:
tools_result = MagicMock()
tools_result.tools = [add_numbers_tool_info]
mock_client_session.list_tools = AsyncMock(return_value=tools_result)
- tools = await MCPTool.from_client(mock_client_session)
+ tools = await MCPTool.from_client(mock_client_session, server_params=mock_server_params)
mock_client_session.list_tools.assert_awaited_once()
assert len(tools) == 1
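These tests track a constructor change: `MCPTool` is now built from `StdioServerParameters` rather than wrapping an already-connected `ClientSession`, which suggests the tool owns its connection parameters and opens a session on demand, and `from_client` now takes the server parameters alongside the session used for listing. A hedged sketch of the new construction; the server command (`my_mcp_server.py`) is a placeholder, and the tool info mirrors the fixture above:

```python
from mcp import StdioServerParameters
from mcp.types import Tool as MCPToolInfo

from beeai_framework.tools.mcp_tools import MCPTool

# Hypothetical stdio server invocation; any MCP server command works here.
server_params = StdioServerParameters(command="python", args=["my_mcp_server.py"])

tool_info = MCPToolInfo(
    name="add_numbers",
    description="Adds two numbers",
    inputSchema={
        "type": "object",
        "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
        "required": ["a", "b"],
    },
)

# The tool holds connection parameters instead of a long-lived ClientSession.
tool = MCPTool(server_params=server_params, tool=tool_info)
```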
diff --git a/python/tests/workflows/test_multi_agents.py b/python/tests/workflows/test_multi_agents.py
index 5dfc75550..418a95d8b 100644
--- a/python/tests/workflows/test_multi_agents.py
+++ b/python/tests/workflows/test_multi_agents.py
@@ -15,7 +15,7 @@
import pytest
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
-from beeai_framework.agents.bee import BeeAgent
+from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.message import UserMessage
from beeai_framework.memory import TokenMemory, UnconstrainedMemory
from beeai_framework.workflows.agent import AgentFactoryInput, AgentWorkflow
@@ -46,8 +46,8 @@ async def test_multi_agents_workflow_creation() -> None:
chat_model = OllamaChatModel()
workflow: AgentWorkflow = AgentWorkflow()
- workflow.add_agent(BeeAgent(llm=chat_model, tools=[], memory=TokenMemory(chat_model)))
- workflow.add_agent(agent=lambda mem: BeeAgent(llm=chat_model, tools=[], memory=mem))
+ workflow.add_agent(ReActAgent(llm=chat_model, tools=[], memory=TokenMemory(chat_model)))
+ workflow.add_agent(agent=lambda mem: ReActAgent(llm=chat_model, tools=[], memory=mem))
assert len(workflow.workflow.step_names) == 2
@@ -63,8 +63,8 @@ async def test_multi_agents_workflow_agent_delete() -> None:
chat_model = OllamaChatModel()
workflow: AgentWorkflow = AgentWorkflow()
- workflow.add_agent(BeeAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory()))
- workflow.del_agent("Bee")
- workflow.add_agent(BeeAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory()))
+ workflow.add_agent(ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory()))
+ workflow.del_agent("ReAct")
+ workflow.add_agent(ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory()))
assert len(workflow.workflow.step_names) == 1
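Two things change in the workflow tests: the class rename itself (`BeeAgent` to `ReActAgent`) and, following from it, the default step name an agent registers under, which is why `del_agent("Bee")` becomes `del_agent("ReAct")`. A minimal sketch of the add/delete round-trip the test exercises, assuming the step name defaults to the agent's short name as the test implies:

```python
from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel
from beeai_framework.agents.react import ReActAgent
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.workflows.agent import AgentWorkflow

chat_model = OllamaChatModel()
workflow = AgentWorkflow()

# An agent added without an explicit name registers under "ReAct" ...
workflow.add_agent(ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory()))
# ... so that is the handle used to remove it again.
workflow.del_agent("ReAct")
workflow.add_agent(ReActAgent(llm=chat_model, tools=[], memory=UnconstrainedMemory()))

assert len(workflow.workflow.step_names) == 1
```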
diff --git a/typescript/examples/vitest.examples.config.ts b/typescript/examples/vitest.examples.config.ts
index 910f160c0..31faaa1e4 100644
--- a/typescript/examples/vitest.examples.config.ts
+++ b/typescript/examples/vitest.examples.config.ts
@@ -1,6 +1,6 @@
import { defineConfig } from "vitest/config";
import tsConfigPaths from "vite-tsconfig-paths";
-import packageJson from "../package.json" assert { type: "json" };
+import packageJson from "../package.json" with { type: "json" };
export default defineConfig({
test: {
diff --git a/typescript/package.json b/typescript/package.json
index 66619e042..e91a3b995 100644
--- a/typescript/package.json
+++ b/typescript/package.json
@@ -176,30 +176,30 @@
"dependencies": {
"@ai-zen/node-fetch-event-source": "^2.1.4",
"@opentelemetry/api": "^1.9.0",
- "@streamparser/json": "^0.0.21",
- "ai": "^4.1.24",
+ "@streamparser/json": "^0.0.22",
+ "ai": "^4.1.54",
"ajv": "^8.17.1",
"ajv-formats": "^3.0.1",
"duck-duck-scrape": "^2.2.7",
- "fast-xml-parser": "^4.5.0",
- "header-generator": "^2.1.57",
+ "fast-xml-parser": "^5.0.8",
+ "header-generator": "^2.1.62",
"joplin-turndown-plugin-gfm": "^1.0.12",
- "jsonrepair": "^3.11.1",
+ "jsonrepair": "^3.12.0",
"mathjs": "^14.0.0",
"mustache": "^4.2.0",
"object-hash": "^3.0.0",
- "p-queue-compat": "^1.0.227",
+ "p-queue-compat": "^1.0.229",
"p-throttle": "^7.0.0",
- "pino": "^9.5.0",
+ "pino": "^9.6.0",
"promise-based-task": "^3.1.1",
- "remeda": "^2.17.4",
- "serialize-error": "^11.0.3",
+ "remeda": "^2.21.0",
+ "serialize-error-cjs": "^0.2.0",
"string-comparison": "^1.3.0",
- "string-strip-html": "^13.4.8",
+ "string-strip-html": "^13.4.12",
"turndown": "^7.2.0",
"wikipedia": "^2.1.2",
- "zod": "^3.23.8",
- "zod-to-json-schema": "^3.23.5"
+ "zod": "~3.23.8",
+ "zod-to-json-schema": "^3.24.3"
},
"peerDependencies": {
"@ai-sdk/amazon-bedrock": "^1.1.5",
@@ -283,9 +283,9 @@
"@eslint/markdown": "^6.2.1",
"@googleapis/customsearch": "^3.2.0",
"@ibm-cloud/watsonx-ai": "^1.5.1",
- "@langchain/community": "~0.3.28",
- "@langchain/core": "~0.3.37",
- "@langchain/langgraph": "^0.2.44",
+ "@langchain/community": "0.3.28",
+ "@langchain/core": "0.3.37",
+ "@langchain/langgraph": "0.2.44",
"@langchain/ollama": "^0.1.5",
"@modelcontextprotocol/sdk": "^1.0.4",
"@opentelemetry/instrumentation": "^0.56.0",
@@ -316,7 +316,7 @@
"eslint-plugin-unused-imports": "^4.1.4",
"glob": "^11.0.0",
"ibm-cloud-sdk-core": "^5.1.3",
- "langchain": "~0.3.6",
+ "langchain": "0.3.6",
"linkinator": "^6.1.2",
"lint-staged": "^15.2.10",
"ollama-ai-provider": "^1.2.0",
@@ -329,9 +329,9 @@
"sequelize": "^6.37.5",
"sqlite3": "^5.1.7",
"strip-ansi": "^7.1.0",
- "tsup": "^8.3.6",
+ "tsup": "^8.4.0",
"tsx": "^4.19.2",
- "typescript": "^5.7.3",
+ "typescript": "^5.8.2",
"typescript-eslint": "^8.18.1",
"vite-tsconfig-paths": "^5.1.4",
"vitest": "^2.1.8",
diff --git a/typescript/src/adapters/vercel/backend/chat.ts b/typescript/src/adapters/vercel/backend/chat.ts
index 716e8cd26..bfbf8c951 100644
--- a/typescript/src/adapters/vercel/backend/chat.ts
+++ b/typescript/src/adapters/vercel/backend/chat.ts
@@ -31,6 +31,8 @@ import {
jsonSchema,
LanguageModelV1,
streamText,
+ TextPart,
+ ToolCallPart,
} from "ai";
import { Emitter } from "@/emitter/emitter.js";
import { AssistantMessage, Message, ToolMessage } from "@/backend/message.js";
@@ -185,9 +187,12 @@ export abstract class VercelChatModel<
protected transformMessages(messages: (CoreAssistantMessage | CoreToolMessage)[]): Message[] {
return messages.flatMap((msg) => {
if (msg.role === "tool") {
- return new ToolMessage(msg.content, msg.experimental_providerMetadata);
+ return new ToolMessage(msg.content, msg.providerOptions);
}
- return new AssistantMessage(msg.content, msg.experimental_providerMetadata);
+ return new AssistantMessage(
+ msg.content as TextPart | ToolCallPart | string,
+ msg.providerOptions,
+ );
});
}
diff --git a/typescript/src/serializer/serializer.ts b/typescript/src/serializer/serializer.ts
index cd7199faf..3252e659b 100644
--- a/typescript/src/serializer/serializer.ts
+++ b/typescript/src/serializer/serializer.ts
@@ -18,7 +18,7 @@ import * as R from "remeda";
import { Serializable, SerializableClass } from "@/internals/serializable.js";
import { AnyConstructable, ClassConstructor, NamedFunction } from "@/internals/types.js";
import { SafeWeakMap, SafeWeakSet } from "@/internals/helpers/weakRef.js";
-import { deserializeError, serializeError } from "serialize-error";
+import { deserializeError, serializeError } from "serialize-error-cjs";
import { Version } from "@/version.js";
import {
extractClassName,
diff --git a/typescript/src/tools/search/duckDuckGoSearch.ts b/typescript/src/tools/search/duckDuckGoSearch.ts
index 5920e494c..7f201389c 100644
--- a/typescript/src/tools/search/duckDuckGoSearch.ts
+++ b/typescript/src/tools/search/duckDuckGoSearch.ts
@@ -15,8 +15,7 @@
*/
import { SearchOptions, search as rawDDGSearch, SafeSearchType } from "duck-duck-scrape";
-import { stripHtml } from "string-strip-html";
-import pThrottle, { Options as ThrottleOptions } from "p-throttle";
+import { Options as ThrottleOptions } from "p-throttle";
import {
SearchToolOptions,
SearchToolOutput,
@@ -83,8 +82,6 @@ export class DuckDuckGoSearchTool extends Tool<
creator: this,
});
- protected readonly client: typeof rawDDGSearch;
-
@Cache()
inputSchema() {
return z.object({
@@ -94,24 +91,25 @@ export class DuckDuckGoSearchTool extends Tool<
public constructor(options: Partial<DuckDuckGoSearchToolOptions> = {}) {
super({ ...options, maxResults: options?.maxResults ?? 15 });
-
- this.client = this._createClient();
}
static {
this.register();
}
- protected _createClient() {
+ @Cache({ enumerable: false })
+ protected async _createClient() {
const { throttle } = this.options;
-
- return throttle === false
- ? rawDDGSearch
- : pThrottle({
- ...throttle,
- limit: throttle?.limit ?? 1,
- interval: throttle?.interval ?? 3000,
- })(rawDDGSearch);
+ if (throttle === false) {
+ return rawDDGSearch;
+ }
+
+ const { default: pThrottle } = await import("p-throttle");
+ return pThrottle({
+ ...throttle,
+ limit: throttle?.limit ?? 1,
+ interval: throttle?.interval ?? 3000,
+ })(rawDDGSearch);
}
protected async _run(
@@ -120,11 +118,12 @@ export class DuckDuckGoSearchTool extends Tool<
run: RunContext<this>,
) {
const headers = new HeaderGenerator().getHeaders();
+ const client = await this._createClient();
const results = await paginate({
size: this.options.maxResults,
handler: async ({ cursor = 0 }) => {
- const { results: data, noResults: done } = await this.client(
+ const { results: data, noResults: done } = await client(
input,
{
safeSearch: SafeSearchType.MODERATE,
@@ -153,6 +152,8 @@ export class DuckDuckGoSearchTool extends Tool<
},
});
+ const { stripHtml } = await import("string-strip-html");
+
return new DuckDuckGoSearchToolOutput(
results.map((result) => ({
title: stripHtml(result.title).result,
@@ -161,9 +162,4 @@ export class DuckDuckGoSearchTool extends Tool<
})),
);
}
-
- loadSnapshot(snapshot: ReturnType<typeof this.createSnapshot>): void {
- super.loadSnapshot(snapshot);
- Object.assign(this, { client: this._createClient() });
- }
}
diff --git a/typescript/tsup.config.ts b/typescript/tsup.config.ts
index b55444146..bb998597b 100644
--- a/typescript/tsup.config.ts
+++ b/typescript/tsup.config.ts
@@ -1,9 +1,9 @@
import { defineConfig } from "tsup";
-import packageJson from "./package.json" assert { type: "json" };
+import packageJson from "./package.json" with { type: "json" };
import swc, { JscConfig } from "@swc/core";
import path from "node:path";
-import tsConfig from "./tsconfig.json" assert { type: "json" };
+import tsConfig from "./tsconfig.json" with { type: "json" };
import { JscTarget } from "@swc/types";
export default defineConfig({
diff --git a/typescript/vitest.config.ts b/typescript/vitest.config.ts
index cd69a14e4..b071485ab 100644
--- a/typescript/vitest.config.ts
+++ b/typescript/vitest.config.ts
@@ -1,6 +1,6 @@
import { defineConfig } from "vitest/config";
import tsConfigPaths from "vite-tsconfig-paths";
-import packageJson from "./package.json" assert { type: "json" };
+import packageJson from "./package.json" with { type: "json" };
export default defineConfig({
test: {
diff --git a/typescript/yarn.lock b/typescript/yarn.lock
index ed714dad7..300e30aa6 100644
--- a/typescript/yarn.lock
+++ b/typescript/yarn.lock
@@ -106,6 +106,23 @@ __metadata:
languageName: node
linkType: hard
+"@ai-sdk/provider-utils@npm:2.1.11":
+ version: 2.1.11
+ resolution: "@ai-sdk/provider-utils@npm:2.1.11"
+ dependencies:
+ "@ai-sdk/provider": "npm:1.0.10"
+ eventsource-parser: "npm:^3.0.0"
+ nanoid: "npm:^3.3.8"
+ secure-json-parse: "npm:^2.7.0"
+ peerDependencies:
+ zod: ^3.0.0
+ peerDependenciesMeta:
+ zod:
+ optional: true
+ checksum: 10c0/e683bbc5cfd3c58b497d3b0e59daf92728d6c24139d90b8ce911bded8c5120b0ff307a24e9f34c8f627058a21473ed09e97550cc3a844b1434142cf69ff48acf
+ languageName: node
+ linkType: hard
+
"@ai-sdk/provider-utils@npm:2.1.2":
version: 2.1.2
resolution: "@ai-sdk/provider-utils@npm:2.1.2"
@@ -157,6 +174,15 @@ __metadata:
languageName: node
linkType: hard
+"@ai-sdk/provider@npm:1.0.10":
+ version: 1.0.10
+ resolution: "@ai-sdk/provider@npm:1.0.10"
+ dependencies:
+ json-schema: "npm:^0.4.0"
+ checksum: 10c0/b18ceff3c105c6f4c432902e1a69e5bb3b6b85718057c39007f7ea9a5a9ecdd9162d0b6aaeda299c45b1080f60ae828168c33c624be80c918f5302647feb6b89
+ languageName: node
+ linkType: hard
+
"@ai-sdk/provider@npm:1.0.6, @ai-sdk/provider@npm:^1.0.0":
version: 1.0.6
resolution: "@ai-sdk/provider@npm:1.0.6"
@@ -175,12 +201,12 @@ __metadata:
languageName: node
linkType: hard
-"@ai-sdk/react@npm:1.1.10":
- version: 1.1.10
- resolution: "@ai-sdk/react@npm:1.1.10"
+"@ai-sdk/react@npm:1.1.21":
+ version: 1.1.21
+ resolution: "@ai-sdk/react@npm:1.1.21"
dependencies:
- "@ai-sdk/provider-utils": "npm:2.1.6"
- "@ai-sdk/ui-utils": "npm:1.1.10"
+ "@ai-sdk/provider-utils": "npm:2.1.11"
+ "@ai-sdk/ui-utils": "npm:1.1.17"
swr: "npm:^2.2.5"
throttleit: "npm:2.1.0"
peerDependencies:
@@ -191,23 +217,23 @@ __metadata:
optional: true
zod:
optional: true
- checksum: 10c0/bdfe767a3c4b9e82fdb3327dd1672922451b5276474be998decad0a6abd89ff7f784dadeffd14db2631c5b5eb34ff7467a391597ca9ff735ba958715c82ab463
+ checksum: 10c0/10cba04edc9cc0412709700ed7a9c3c341054179f2322debe89f90997f202fd42c95b3d70b599f4c5f1dc9a1c39d592d6b860b9a1bb4d3278f9b4329a0335443
languageName: node
linkType: hard
-"@ai-sdk/ui-utils@npm:1.1.10":
- version: 1.1.10
- resolution: "@ai-sdk/ui-utils@npm:1.1.10"
+"@ai-sdk/ui-utils@npm:1.1.17":
+ version: 1.1.17
+ resolution: "@ai-sdk/ui-utils@npm:1.1.17"
dependencies:
- "@ai-sdk/provider": "npm:1.0.7"
- "@ai-sdk/provider-utils": "npm:2.1.6"
+ "@ai-sdk/provider": "npm:1.0.10"
+ "@ai-sdk/provider-utils": "npm:2.1.11"
zod-to-json-schema: "npm:^3.24.1"
peerDependencies:
zod: ^3.0.0
peerDependenciesMeta:
zod:
optional: true
- checksum: 10c0/d48a7d0ae5149792d7607ad43c8be34aa47ac955aaef8b115ea29df4acc7785bae56bee5a7fa326581224022d8f819239035576983270701b72253566793da71
+ checksum: 10c0/81eb188f6e2630f2975e3b6de88759ec155ff9ac46f9733f128e4c6b9f86d5981de0d8f25c5becbcdf9b6000c8f16bba14fbcce2b7100b99d31e8ef70ace57cd
languageName: node
linkType: hard
@@ -1036,9 +1062,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/aix-ppc64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/aix-ppc64@npm:0.24.0"
+"@esbuild/aix-ppc64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/aix-ppc64@npm:0.25.0"
conditions: os=aix & cpu=ppc64
languageName: node
linkType: hard
@@ -1057,9 +1083,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/android-arm64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/android-arm64@npm:0.24.0"
+"@esbuild/android-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/android-arm64@npm:0.25.0"
conditions: os=android & cpu=arm64
languageName: node
linkType: hard
@@ -1078,9 +1104,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/android-arm@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/android-arm@npm:0.24.0"
+"@esbuild/android-arm@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/android-arm@npm:0.25.0"
conditions: os=android & cpu=arm
languageName: node
linkType: hard
@@ -1099,9 +1125,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/android-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/android-x64@npm:0.24.0"
+"@esbuild/android-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/android-x64@npm:0.25.0"
conditions: os=android & cpu=x64
languageName: node
linkType: hard
@@ -1120,9 +1146,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/darwin-arm64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/darwin-arm64@npm:0.24.0"
+"@esbuild/darwin-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/darwin-arm64@npm:0.25.0"
conditions: os=darwin & cpu=arm64
languageName: node
linkType: hard
@@ -1141,9 +1167,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/darwin-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/darwin-x64@npm:0.24.0"
+"@esbuild/darwin-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/darwin-x64@npm:0.25.0"
conditions: os=darwin & cpu=x64
languageName: node
linkType: hard
@@ -1162,9 +1188,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/freebsd-arm64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/freebsd-arm64@npm:0.24.0"
+"@esbuild/freebsd-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/freebsd-arm64@npm:0.25.0"
conditions: os=freebsd & cpu=arm64
languageName: node
linkType: hard
@@ -1183,9 +1209,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/freebsd-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/freebsd-x64@npm:0.24.0"
+"@esbuild/freebsd-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/freebsd-x64@npm:0.25.0"
conditions: os=freebsd & cpu=x64
languageName: node
linkType: hard
@@ -1204,9 +1230,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-arm64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-arm64@npm:0.24.0"
+"@esbuild/linux-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-arm64@npm:0.25.0"
conditions: os=linux & cpu=arm64
languageName: node
linkType: hard
@@ -1225,9 +1251,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-arm@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-arm@npm:0.24.0"
+"@esbuild/linux-arm@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-arm@npm:0.25.0"
conditions: os=linux & cpu=arm
languageName: node
linkType: hard
@@ -1246,9 +1272,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-ia32@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-ia32@npm:0.24.0"
+"@esbuild/linux-ia32@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-ia32@npm:0.25.0"
conditions: os=linux & cpu=ia32
languageName: node
linkType: hard
@@ -1267,9 +1293,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-loong64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-loong64@npm:0.24.0"
+"@esbuild/linux-loong64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-loong64@npm:0.25.0"
conditions: os=linux & cpu=loong64
languageName: node
linkType: hard
@@ -1288,9 +1314,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-mips64el@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-mips64el@npm:0.24.0"
+"@esbuild/linux-mips64el@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-mips64el@npm:0.25.0"
conditions: os=linux & cpu=mips64el
languageName: node
linkType: hard
@@ -1309,9 +1335,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-ppc64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-ppc64@npm:0.24.0"
+"@esbuild/linux-ppc64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-ppc64@npm:0.25.0"
conditions: os=linux & cpu=ppc64
languageName: node
linkType: hard
@@ -1330,9 +1356,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-riscv64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-riscv64@npm:0.24.0"
+"@esbuild/linux-riscv64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-riscv64@npm:0.25.0"
conditions: os=linux & cpu=riscv64
languageName: node
linkType: hard
@@ -1351,9 +1377,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-s390x@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-s390x@npm:0.24.0"
+"@esbuild/linux-s390x@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-s390x@npm:0.25.0"
conditions: os=linux & cpu=s390x
languageName: node
linkType: hard
@@ -1372,13 +1398,20 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/linux-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/linux-x64@npm:0.24.0"
+"@esbuild/linux-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/linux-x64@npm:0.25.0"
conditions: os=linux & cpu=x64
languageName: node
linkType: hard
+"@esbuild/netbsd-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/netbsd-arm64@npm:0.25.0"
+ conditions: os=netbsd & cpu=arm64
+ languageName: node
+ linkType: hard
+
"@esbuild/netbsd-x64@npm:0.21.5":
version: 0.21.5
resolution: "@esbuild/netbsd-x64@npm:0.21.5"
@@ -1393,9 +1426,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/netbsd-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/netbsd-x64@npm:0.24.0"
+"@esbuild/netbsd-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/netbsd-x64@npm:0.25.0"
conditions: os=netbsd & cpu=x64
languageName: node
linkType: hard
@@ -1407,9 +1440,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/openbsd-arm64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/openbsd-arm64@npm:0.24.0"
+"@esbuild/openbsd-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/openbsd-arm64@npm:0.25.0"
conditions: os=openbsd & cpu=arm64
languageName: node
linkType: hard
@@ -1428,9 +1461,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/openbsd-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/openbsd-x64@npm:0.24.0"
+"@esbuild/openbsd-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/openbsd-x64@npm:0.25.0"
conditions: os=openbsd & cpu=x64
languageName: node
linkType: hard
@@ -1449,9 +1482,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/sunos-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/sunos-x64@npm:0.24.0"
+"@esbuild/sunos-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/sunos-x64@npm:0.25.0"
conditions: os=sunos & cpu=x64
languageName: node
linkType: hard
@@ -1470,9 +1503,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/win32-arm64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/win32-arm64@npm:0.24.0"
+"@esbuild/win32-arm64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/win32-arm64@npm:0.25.0"
conditions: os=win32 & cpu=arm64
languageName: node
linkType: hard
@@ -1491,9 +1524,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/win32-ia32@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/win32-ia32@npm:0.24.0"
+"@esbuild/win32-ia32@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/win32-ia32@npm:0.25.0"
conditions: os=win32 & cpu=ia32
languageName: node
linkType: hard
@@ -1512,9 +1545,9 @@ __metadata:
languageName: node
linkType: hard
-"@esbuild/win32-x64@npm:0.24.0":
- version: 0.24.0
- resolution: "@esbuild/win32-x64@npm:0.24.0"
+"@esbuild/win32-x64@npm:0.25.0":
+ version: 0.25.0
+ resolution: "@esbuild/win32-x64@npm:0.25.0"
conditions: os=win32 & cpu=x64
languageName: node
linkType: hard
@@ -1800,7 +1833,7 @@ __metadata:
languageName: node
linkType: hard
-"@langchain/community@npm:~0.3.28":
+"@langchain/community@npm:0.3.28":
version: 0.3.28
resolution: "@langchain/community@npm:0.3.28"
dependencies:
@@ -2188,7 +2221,7 @@ __metadata:
languageName: node
linkType: hard
-"@langchain/core@npm:~0.3.37":
+"@langchain/core@npm:0.3.37":
version: 0.3.37
resolution: "@langchain/core@npm:0.3.37"
dependencies:
@@ -2231,7 +2264,7 @@ __metadata:
languageName: node
linkType: hard
-"@langchain/langgraph@npm:^0.2.44":
+"@langchain/langgraph@npm:0.2.44":
version: 0.2.44
resolution: "@langchain/langgraph@npm:0.2.44"
dependencies:
@@ -3079,13 +3112,6 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-android-arm-eabi@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-android-arm-eabi@npm:4.28.1"
- conditions: os=android & cpu=arm
- languageName: node
- linkType: hard
-
"@rollup/rollup-android-arm-eabi@npm:4.30.1":
version: 4.30.1
resolution: "@rollup/rollup-android-arm-eabi@npm:4.30.1"
@@ -3093,10 +3119,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-android-arm64@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-android-arm64@npm:4.28.1"
- conditions: os=android & cpu=arm64
+"@rollup/rollup-android-arm-eabi@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-android-arm-eabi@npm:4.34.9"
+ conditions: os=android & cpu=arm
languageName: node
linkType: hard
@@ -3107,10 +3133,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-darwin-arm64@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-darwin-arm64@npm:4.28.1"
- conditions: os=darwin & cpu=arm64
+"@rollup/rollup-android-arm64@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-android-arm64@npm:4.34.9"
+ conditions: os=android & cpu=arm64
languageName: node
linkType: hard
@@ -3121,10 +3147,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-darwin-x64@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-darwin-x64@npm:4.28.1"
- conditions: os=darwin & cpu=x64
+"@rollup/rollup-darwin-arm64@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-darwin-arm64@npm:4.34.9"
+ conditions: os=darwin & cpu=arm64
languageName: node
linkType: hard
@@ -3135,10 +3161,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-freebsd-arm64@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-freebsd-arm64@npm:4.28.1"
- conditions: os=freebsd & cpu=arm64
+"@rollup/rollup-darwin-x64@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-darwin-x64@npm:4.34.9"
+ conditions: os=darwin & cpu=x64
languageName: node
linkType: hard
@@ -3149,10 +3175,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-freebsd-x64@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-freebsd-x64@npm:4.28.1"
- conditions: os=freebsd & cpu=x64
+"@rollup/rollup-freebsd-arm64@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-freebsd-arm64@npm:4.34.9"
+ conditions: os=freebsd & cpu=arm64
languageName: node
linkType: hard
@@ -3163,10 +3189,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm-gnueabihf@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-arm-gnueabihf@npm:4.28.1"
- conditions: os=linux & cpu=arm & libc=glibc
+"@rollup/rollup-freebsd-x64@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-freebsd-x64@npm:4.34.9"
+ conditions: os=freebsd & cpu=x64
languageName: node
linkType: hard
@@ -3177,10 +3203,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm-musleabihf@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-arm-musleabihf@npm:4.28.1"
- conditions: os=linux & cpu=arm & libc=musl
+"@rollup/rollup-linux-arm-gnueabihf@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-arm-gnueabihf@npm:4.34.9"
+ conditions: os=linux & cpu=arm & libc=glibc
languageName: node
linkType: hard
@@ -3191,10 +3217,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm64-gnu@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-arm64-gnu@npm:4.28.1"
- conditions: os=linux & cpu=arm64 & libc=glibc
+"@rollup/rollup-linux-arm-musleabihf@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-arm-musleabihf@npm:4.34.9"
+ conditions: os=linux & cpu=arm & libc=musl
languageName: node
linkType: hard
@@ -3205,10 +3231,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm64-musl@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-arm64-musl@npm:4.28.1"
- conditions: os=linux & cpu=arm64 & libc=musl
+"@rollup/rollup-linux-arm64-gnu@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-arm64-gnu@npm:4.34.9"
+ conditions: os=linux & cpu=arm64 & libc=glibc
languageName: node
linkType: hard
@@ -3219,10 +3245,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-loongarch64-gnu@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-loongarch64-gnu@npm:4.28.1"
- conditions: os=linux & cpu=loong64 & libc=glibc
+"@rollup/rollup-linux-arm64-musl@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-arm64-musl@npm:4.34.9"
+ conditions: os=linux & cpu=arm64 & libc=musl
languageName: node
linkType: hard
@@ -3233,10 +3259,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-powerpc64le-gnu@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-powerpc64le-gnu@npm:4.28.1"
- conditions: os=linux & cpu=ppc64 & libc=glibc
+"@rollup/rollup-linux-loongarch64-gnu@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-loongarch64-gnu@npm:4.34.9"
+ conditions: os=linux & cpu=loong64 & libc=glibc
languageName: node
linkType: hard
@@ -3247,10 +3273,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-riscv64-gnu@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-riscv64-gnu@npm:4.28.1"
- conditions: os=linux & cpu=riscv64 & libc=glibc
+"@rollup/rollup-linux-powerpc64le-gnu@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-powerpc64le-gnu@npm:4.34.9"
+ conditions: os=linux & cpu=ppc64 & libc=glibc
languageName: node
linkType: hard
@@ -3261,10 +3287,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-s390x-gnu@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-s390x-gnu@npm:4.28.1"
- conditions: os=linux & cpu=s390x & libc=glibc
+"@rollup/rollup-linux-riscv64-gnu@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-riscv64-gnu@npm:4.34.9"
+ conditions: os=linux & cpu=riscv64 & libc=glibc
languageName: node
linkType: hard
@@ -3275,10 +3301,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-x64-gnu@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-x64-gnu@npm:4.28.1"
- conditions: os=linux & cpu=x64 & libc=glibc
+"@rollup/rollup-linux-s390x-gnu@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-s390x-gnu@npm:4.34.9"
+ conditions: os=linux & cpu=s390x & libc=glibc
languageName: node
linkType: hard
@@ -3289,10 +3315,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-linux-x64-musl@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-linux-x64-musl@npm:4.28.1"
- conditions: os=linux & cpu=x64 & libc=musl
+"@rollup/rollup-linux-x64-gnu@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-x64-gnu@npm:4.34.9"
+ conditions: os=linux & cpu=x64 & libc=glibc
languageName: node
linkType: hard
@@ -3303,10 +3329,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-win32-arm64-msvc@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-win32-arm64-msvc@npm:4.28.1"
- conditions: os=win32 & cpu=arm64
+"@rollup/rollup-linux-x64-musl@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-linux-x64-musl@npm:4.34.9"
+ conditions: os=linux & cpu=x64 & libc=musl
languageName: node
linkType: hard
@@ -3317,10 +3343,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-win32-ia32-msvc@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-win32-ia32-msvc@npm:4.28.1"
- conditions: os=win32 & cpu=ia32
+"@rollup/rollup-win32-arm64-msvc@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-win32-arm64-msvc@npm:4.34.9"
+ conditions: os=win32 & cpu=arm64
languageName: node
linkType: hard
@@ -3331,10 +3357,10 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-win32-x64-msvc@npm:4.28.1":
- version: 4.28.1
- resolution: "@rollup/rollup-win32-x64-msvc@npm:4.28.1"
- conditions: os=win32 & cpu=x64
+"@rollup/rollup-win32-ia32-msvc@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-win32-ia32-msvc@npm:4.34.9"
+ conditions: os=win32 & cpu=ia32
languageName: node
linkType: hard
@@ -3345,6 +3371,13 @@ __metadata:
languageName: node
linkType: hard
+"@rollup/rollup-win32-x64-msvc@npm:4.34.9":
+ version: 4.34.9
+ resolution: "@rollup/rollup-win32-x64-msvc@npm:4.34.9"
+ conditions: os=win32 & cpu=x64
+ languageName: node
+ linkType: hard
+
"@sindresorhus/is@npm:^0.14.0":
version: 0.14.0
resolution: "@sindresorhus/is@npm:0.14.0"
@@ -3907,10 +3940,10 @@ __metadata:
languageName: node
linkType: hard
-"@streamparser/json@npm:^0.0.21":
- version: 0.0.21
- resolution: "@streamparser/json@npm:0.0.21"
- checksum: 10c0/1dc27367e97f8e165f1268c29c4c463d5b644d8776bc5dcc07820dcdf62bae14c5eb3128c34d142b4e92a064a2c3563a127ec0dd3129c6425d06411838bb476a
+"@streamparser/json@npm:^0.0.22":
+ version: 0.0.22
+ resolution: "@streamparser/json@npm:0.0.22"
+ checksum: 10c0/4496e286d607e37552c75e1e63ed7e722392d4d91939401ec11eb63c8c7d1fb7a2cf5733f1a60dc63ad95996f12d62f5639724b2c31584bb8ace51b1f3910ccc
languageName: node
linkType: hard
@@ -4741,14 +4774,14 @@ __metadata:
languageName: node
linkType: hard
-"ai@npm:^4.1.24":
- version: 4.1.24
- resolution: "ai@npm:4.1.24"
+"ai@npm:^4.1.54":
+ version: 4.1.54
+ resolution: "ai@npm:4.1.54"
dependencies:
- "@ai-sdk/provider": "npm:1.0.7"
- "@ai-sdk/provider-utils": "npm:2.1.6"
- "@ai-sdk/react": "npm:1.1.10"
- "@ai-sdk/ui-utils": "npm:1.1.10"
+ "@ai-sdk/provider": "npm:1.0.10"
+ "@ai-sdk/provider-utils": "npm:2.1.11"
+ "@ai-sdk/react": "npm:1.1.21"
+ "@ai-sdk/ui-utils": "npm:1.1.17"
"@opentelemetry/api": "npm:1.9.0"
jsondiffpatch: "npm:0.6.0"
peerDependencies:
@@ -4759,7 +4792,7 @@ __metadata:
optional: true
zod:
optional: true
- checksum: 10c0/3f6890bcb1ea5335db4a0852edab94c5235821b28b8ae78bddd634c808b97b1722ce55ac2e0b9f9d0f4a7386a43f2026d850e27a8ac371fcfcf114fa0a3ecea6
+ checksum: 10c0/5acc99675263605a217f14695509b42886d84e54d25b51967f7689b6f89098cb4d72ce60a195d18a5e05d2262c7a834ca3f7fa54716c841e20e5e2e4e7071143
languageName: node
linkType: hard
@@ -5093,9 +5126,9 @@ __metadata:
"@eslint/markdown": "npm:^6.2.1"
"@googleapis/customsearch": "npm:^3.2.0"
"@ibm-cloud/watsonx-ai": "npm:^1.5.1"
- "@langchain/community": "npm:~0.3.28"
- "@langchain/core": "npm:~0.3.37"
- "@langchain/langgraph": "npm:^0.2.44"
+ "@langchain/community": "npm:0.3.28"
+ "@langchain/core": "npm:0.3.37"
+ "@langchain/langgraph": "npm:0.2.44"
"@langchain/ollama": "npm:^0.1.5"
"@modelcontextprotocol/sdk": "npm:^1.0.4"
"@opentelemetry/api": "npm:^1.9.0"
@@ -5106,7 +5139,7 @@ __metadata:
"@opentelemetry/semantic-conventions": "npm:^1.28.0"
"@release-it/conventional-changelog": "npm:^8.0.2"
"@rollup/plugin-commonjs": "npm:^28.0.1"
- "@streamparser/json": "npm:^0.0.21"
+ "@streamparser/json": "npm:^0.0.22"
"@stylistic/eslint-plugin-js": "npm:^2.12.1"
"@swc/core": "npm:^1.10.0"
"@types/eslint": "npm:^9.6.1"
@@ -5120,7 +5153,7 @@ __metadata:
"@types/object-hash": "npm:^3.0.6"
"@types/turndown": "npm:^5.0.5"
"@zilliz/milvus2-sdk-node": "npm:^2.5.1"
- ai: "npm:^4.1.24"
+ ai: "npm:^4.1.54"
ajv: "npm:^8.17.1"
ajv-formats: "npm:^3.0.1"
docsify-cli: "npm:^4.4.4"
@@ -5130,47 +5163,47 @@ __metadata:
eslint: "npm:^9.17.0"
eslint-config-prettier: "npm:^9.1.0"
eslint-plugin-unused-imports: "npm:^4.1.4"
- fast-xml-parser: "npm:^4.5.0"
+ fast-xml-parser: "npm:^5.0.8"
glob: "npm:^11.0.0"
- header-generator: "npm:^2.1.57"
+ header-generator: "npm:^2.1.62"
ibm-cloud-sdk-core: "npm:^5.1.3"
joplin-turndown-plugin-gfm: "npm:^1.0.12"
- jsonrepair: "npm:^3.11.1"
- langchain: "npm:~0.3.6"
+ jsonrepair: "npm:^3.12.0"
+ langchain: "npm:0.3.6"
linkinator: "npm:^6.1.2"
lint-staged: "npm:^15.2.10"
mathjs: "npm:^14.0.0"
mustache: "npm:^4.2.0"
object-hash: "npm:^3.0.0"
ollama-ai-provider: "npm:^1.2.0"
- p-queue-compat: "npm:^1.0.227"
+ p-queue-compat: "npm:^1.0.229"
p-throttle: "npm:^7.0.0"
picocolors: "npm:^1.1.1"
- pino: "npm:^9.5.0"
+ pino: "npm:^9.6.0"
pino-pretty: "npm:^13.0.0"
pino-test: "npm:^1.1.0"
prettier: "npm:^3.4.2"
promise-based-task: "npm:^3.1.1"
release-it: "npm:^17.10.0"
- remeda: "npm:^2.17.4"
+ remeda: "npm:^2.21.0"
rimraf: "npm:^6.0.1"
sequelize: "npm:^6.37.5"
- serialize-error: "npm:^11.0.3"
+ serialize-error-cjs: "npm:^0.2.0"
sqlite3: "npm:^5.1.7"
string-comparison: "npm:^1.3.0"
- string-strip-html: "npm:^13.4.8"
+ string-strip-html: "npm:^13.4.12"
strip-ansi: "npm:^7.1.0"
- tsup: "npm:^8.3.6"
+ tsup: "npm:^8.4.0"
tsx: "npm:^4.19.2"
turndown: "npm:^7.2.0"
- typescript: "npm:^5.7.3"
+ typescript: "npm:^5.8.2"
typescript-eslint: "npm:^8.18.1"
vite-tsconfig-paths: "npm:^5.1.4"
vitest: "npm:^2.1.8"
wikipedia: "npm:^2.1.2"
yaml: "npm:^2.6.1"
- zod: "npm:^3.23.8"
- zod-to-json-schema: "npm:^3.23.5"
+ zod: "npm:~3.23.8"
+ zod-to-json-schema: "npm:^3.24.3"
peerDependencies:
"@ai-sdk/amazon-bedrock": ^1.1.5
"@ai-sdk/anthropic": ^1.1.6
@@ -5379,14 +5412,14 @@ __metadata:
languageName: node
linkType: hard
-"bundle-require@npm:^5.0.0":
- version: 5.0.0
- resolution: "bundle-require@npm:5.0.0"
+"bundle-require@npm:^5.1.0":
+ version: 5.1.0
+ resolution: "bundle-require@npm:5.1.0"
dependencies:
load-tsconfig: "npm:^0.2.3"
peerDependencies:
esbuild: ">=0.18"
- checksum: 10c0/92c46df02586e0ebd66ee4831c9b5775adb3c32a43fe2b2aaf7bc675135c141f751de6a9a26b146d64c607c5b40f9eef5f10dce3c364f602d4bed268444c32c6
+ checksum: 10c0/8bff9df68eb686f05af952003c78e70ffed2817968f92aebb2af620cc0b7428c8154df761d28f1b38508532204278950624ef86ce63644013dc57660a9d1810f
languageName: node
linkType: hard
@@ -5647,12 +5680,12 @@ __metadata:
languageName: node
linkType: hard
-"chokidar@npm:^4.0.1":
- version: 4.0.1
- resolution: "chokidar@npm:4.0.1"
+"chokidar@npm:^4.0.3":
+ version: 4.0.3
+ resolution: "chokidar@npm:4.0.3"
dependencies:
readdirp: "npm:^4.0.1"
- checksum: 10c0/4bb7a3adc304059810bb6c420c43261a15bb44f610d77c35547addc84faa0374265c3adc67f25d06f363d9a4571962b02679268c40de07676d260de1986efea9
+ checksum: 10c0/a58b9df05bb452f7d105d9e7229ac82fa873741c0c40ddcc7bb82f8a909fbe3f7814c9ebe9bc9a2bef9b737c0ec6e2d699d179048ef06ad3ec46315df0ebe6ad
languageName: node
linkType: hard
@@ -5792,12 +5825,12 @@ __metadata:
languageName: node
linkType: hard
-"codsen-utils@npm:^1.6.4":
- version: 1.6.4
- resolution: "codsen-utils@npm:1.6.4"
+"codsen-utils@npm:^1.6.7":
+ version: 1.6.7
+ resolution: "codsen-utils@npm:1.6.7"
dependencies:
- rfdc: "npm:^1.3.1"
- checksum: 10c0/c2ca709b2c7cade614f9266df8542ecb770cad89dfcf4f17637134d4a8084554f97fbc05bce65b314f5199dfa9fba2f28cf9b704abb4e7525d4d57bec8c6b29a
+ rfdc: "npm:^1.4.1"
+ checksum: 10c0/e1371ff6e9c1f8aa5918b4cfc4366297c0e90986a208da2abdab19d49dbb29ae2ef42010b12c64d897fc8231c421ed8a3f4ec7e6e9f26acb4440df36233f5b72
languageName: node
linkType: hard
@@ -6045,10 +6078,10 @@ __metadata:
languageName: node
linkType: hard
-"consola@npm:^3.2.3":
- version: 3.2.3
- resolution: "consola@npm:3.2.3"
- checksum: 10c0/c606220524ec88a05bb1baf557e9e0e04a0c08a9c35d7a08652d99de195c4ddcb6572040a7df57a18ff38bbc13ce9880ad032d56630cef27bef72768ef0ac078
+"consola@npm:^3.4.0":
+ version: 3.4.0
+ resolution: "consola@npm:3.4.0"
+ checksum: 10c0/bc7f7ad46514375109a80f3ae8330097eb1e5d89232a24eb830f3ac383e22036a62c53d33561cd73d7cda4b3691fba85e3dcf35229ef7721b324aae291ceb40c
languageName: node
linkType: hard
@@ -6371,7 +6404,7 @@ __metadata:
languageName: node
linkType: hard
-"debug@npm:^4.3.7":
+"debug@npm:^4.3.7, debug@npm:^4.4.0":
version: 4.4.0
resolution: "debug@npm:4.4.0"
dependencies:
@@ -6976,34 +7009,35 @@ __metadata:
languageName: node
linkType: hard
-"esbuild@npm:^0.24.0":
- version: 0.24.0
- resolution: "esbuild@npm:0.24.0"
- dependencies:
- "@esbuild/aix-ppc64": "npm:0.24.0"
- "@esbuild/android-arm": "npm:0.24.0"
- "@esbuild/android-arm64": "npm:0.24.0"
- "@esbuild/android-x64": "npm:0.24.0"
- "@esbuild/darwin-arm64": "npm:0.24.0"
- "@esbuild/darwin-x64": "npm:0.24.0"
- "@esbuild/freebsd-arm64": "npm:0.24.0"
- "@esbuild/freebsd-x64": "npm:0.24.0"
- "@esbuild/linux-arm": "npm:0.24.0"
- "@esbuild/linux-arm64": "npm:0.24.0"
- "@esbuild/linux-ia32": "npm:0.24.0"
- "@esbuild/linux-loong64": "npm:0.24.0"
- "@esbuild/linux-mips64el": "npm:0.24.0"
- "@esbuild/linux-ppc64": "npm:0.24.0"
- "@esbuild/linux-riscv64": "npm:0.24.0"
- "@esbuild/linux-s390x": "npm:0.24.0"
- "@esbuild/linux-x64": "npm:0.24.0"
- "@esbuild/netbsd-x64": "npm:0.24.0"
- "@esbuild/openbsd-arm64": "npm:0.24.0"
- "@esbuild/openbsd-x64": "npm:0.24.0"
- "@esbuild/sunos-x64": "npm:0.24.0"
- "@esbuild/win32-arm64": "npm:0.24.0"
- "@esbuild/win32-ia32": "npm:0.24.0"
- "@esbuild/win32-x64": "npm:0.24.0"
+"esbuild@npm:^0.25.0":
+ version: 0.25.0
+ resolution: "esbuild@npm:0.25.0"
+ dependencies:
+ "@esbuild/aix-ppc64": "npm:0.25.0"
+ "@esbuild/android-arm": "npm:0.25.0"
+ "@esbuild/android-arm64": "npm:0.25.0"
+ "@esbuild/android-x64": "npm:0.25.0"
+ "@esbuild/darwin-arm64": "npm:0.25.0"
+ "@esbuild/darwin-x64": "npm:0.25.0"
+ "@esbuild/freebsd-arm64": "npm:0.25.0"
+ "@esbuild/freebsd-x64": "npm:0.25.0"
+ "@esbuild/linux-arm": "npm:0.25.0"
+ "@esbuild/linux-arm64": "npm:0.25.0"
+ "@esbuild/linux-ia32": "npm:0.25.0"
+ "@esbuild/linux-loong64": "npm:0.25.0"
+ "@esbuild/linux-mips64el": "npm:0.25.0"
+ "@esbuild/linux-ppc64": "npm:0.25.0"
+ "@esbuild/linux-riscv64": "npm:0.25.0"
+ "@esbuild/linux-s390x": "npm:0.25.0"
+ "@esbuild/linux-x64": "npm:0.25.0"
+ "@esbuild/netbsd-arm64": "npm:0.25.0"
+ "@esbuild/netbsd-x64": "npm:0.25.0"
+ "@esbuild/openbsd-arm64": "npm:0.25.0"
+ "@esbuild/openbsd-x64": "npm:0.25.0"
+ "@esbuild/sunos-x64": "npm:0.25.0"
+ "@esbuild/win32-arm64": "npm:0.25.0"
+ "@esbuild/win32-ia32": "npm:0.25.0"
+ "@esbuild/win32-x64": "npm:0.25.0"
dependenciesMeta:
"@esbuild/aix-ppc64":
optional: true
@@ -7039,6 +7073,8 @@ __metadata:
optional: true
"@esbuild/linux-x64":
optional: true
+ "@esbuild/netbsd-arm64":
+ optional: true
"@esbuild/netbsd-x64":
optional: true
"@esbuild/openbsd-arm64":
@@ -7055,7 +7091,7 @@ __metadata:
optional: true
bin:
esbuild: bin/esbuild
- checksum: 10c0/9f1aadd8d64f3bff422ae78387e66e51a5e09de6935a6f987b6e4e189ed00fdc2d1bc03d2e33633b094008529c8b6e06c7ad1a9782fb09fec223bf95998c0683
+ checksum: 10c0/5767b72da46da3cfec51661647ec850ddbf8a8d0662771139f10ef0692a8831396a0004b2be7966cecdb08264fb16bdc16290dcecd92396fac5f12d722fa013d
languageName: node
linkType: hard
@@ -7587,14 +7623,14 @@ __metadata:
languageName: node
linkType: hard
-"fast-xml-parser@npm:^4.5.0":
- version: 4.5.0
- resolution: "fast-xml-parser@npm:4.5.0"
+"fast-xml-parser@npm:^5.0.8":
+ version: 5.0.8
+ resolution: "fast-xml-parser@npm:5.0.8"
dependencies:
- strnum: "npm:^1.0.5"
+ strnum: "npm:^2.0.5"
bin:
fxparser: src/cli/cli.js
- checksum: 10c0/71d206c9e137f5c26af88d27dde0108068a5d074401901d643c500c36e95dfd828333a98bda020846c41f5b9b364e2b0e9be5b19b0bdcab5cf31559c07b80a95
+ checksum: 10c0/c99b1df141fd41990246985577f6d94f5604f48049aa620e1d9857c33e428bad8787d6be3f40c9766852c888e9caf4e344bbf7898d43ffdfebfaddf0c597eca0
languageName: node
linkType: hard
@@ -7607,7 +7643,7 @@ __metadata:
languageName: node
linkType: hard
-"fdir@npm:^6.2.0, fdir@npm:^6.4.2":
+"fdir@npm:^6.2.0":
version: 6.4.2
resolution: "fdir@npm:6.4.2"
peerDependencies:
@@ -7619,6 +7655,18 @@ __metadata:
languageName: node
linkType: hard
+"fdir@npm:^6.4.3":
+ version: 6.4.3
+ resolution: "fdir@npm:6.4.3"
+ peerDependencies:
+ picomatch: ^3 || ^4
+ peerDependenciesMeta:
+ picomatch:
+ optional: true
+ checksum: 10c0/d13c10120e9625adf21d8d80481586200759928c19405a816b77dd28eaeb80e7c59c5def3e2941508045eb06d34eb47fad865ccc8bf98e6ab988bb0ed160fb6f
+ languageName: node
+ linkType: hard
+
"fecha@npm:^4.2.0":
version: 4.2.3
resolution: "fecha@npm:4.2.3"
@@ -7957,13 +8005,13 @@ __metadata:
languageName: node
linkType: hard
-"generative-bayesian-network@npm:^2.1.57":
- version: 2.1.57
- resolution: "generative-bayesian-network@npm:2.1.57"
+"generative-bayesian-network@npm:^2.1.62":
+ version: 2.1.62
+ resolution: "generative-bayesian-network@npm:2.1.62"
dependencies:
adm-zip: "npm:^0.5.9"
tslib: "npm:^2.4.0"
- checksum: 10c0/2396e569c3bd1c8823f16a81aeb3039a2a1e5f5c822ecf41827934d53cde1b5a15df51852c9e530da7f851d94c54ed173e3da8d7895eb209f3331a40a054dbeb
+ checksum: 10c0/bf1fee5976aa2001f35c8bed61293aaf126e7339c4dd57dd8f759d8a318289ef0a3fe7a99969f98b2a3c39e213a3ac3ab38b46b921b071ab6a5b5b3616016ae5
languageName: node
linkType: hard
@@ -8443,15 +8491,15 @@ __metadata:
languageName: node
linkType: hard
-"header-generator@npm:^2.1.57":
- version: 2.1.57
- resolution: "header-generator@npm:2.1.57"
+"header-generator@npm:^2.1.62":
+ version: 2.1.62
+ resolution: "header-generator@npm:2.1.62"
dependencies:
browserslist: "npm:^4.21.1"
- generative-bayesian-network: "npm:^2.1.57"
+ generative-bayesian-network: "npm:^2.1.62"
ow: "npm:^0.28.1"
tslib: "npm:^2.4.0"
- checksum: 10c0/f884a06930a1d7f21df85b36d4e3275788f7dfa7086f6844d9292252399ff2d410cb383700aad3ccf821d70eb385bab45201388f752990f0d7e058c001a1e6ad
+ checksum: 10c0/6b6e46c8fa4cd0a5efcce956021f54507c3be32a4b5e5aa82cadd4184daff3c95ee8cae0c94b019460d1d4d0ca7c47f94fcf9ce9304bd6b543291757136fb813
languageName: node
linkType: hard
@@ -9358,12 +9406,12 @@ __metadata:
languageName: node
linkType: hard
-"jsonrepair@npm:^3.11.1":
- version: 3.11.1
- resolution: "jsonrepair@npm:3.11.1"
+"jsonrepair@npm:^3.12.0":
+ version: 3.12.0
+ resolution: "jsonrepair@npm:3.12.0"
bin:
jsonrepair: bin/cli.js
- checksum: 10c0/9bcc739d8eb691d78ebfa04fe565891860cffdcc8970ad37124556e8634d77833dd046b2867eb3758f596754f81932a8f6f0f57a2575a1ea1d6b3eb551076f28
+ checksum: 10c0/f61bea017e9675c888dc8087bec6f868595ab0103a211827f9fc6a327abd9c70fa4d3241ee2846b06e229a628d51e5a5448c516d7be3040862863a2f2c3f78cd
languageName: node
linkType: hard
@@ -9459,16 +9507,16 @@ __metadata:
languageName: node
linkType: hard
-"langchain@npm:>=0.2.3 <0.3.0 || >=0.3.4 <0.4.0":
- version: 0.3.10
- resolution: "langchain@npm:0.3.10"
+"langchain@npm:0.3.6":
+ version: 0.3.6
+ resolution: "langchain@npm:0.3.6"
dependencies:
"@langchain/openai": "npm:>=0.1.0 <0.4.0"
"@langchain/textsplitters": "npm:>=0.0.0 <0.2.0"
js-tiktoken: "npm:^1.0.12"
js-yaml: "npm:^4.1.0"
jsonpointer: "npm:^5.0.1"
- langsmith: "npm:^0.2.8"
+ langsmith: "npm:^0.2.0"
openapi-types: "npm:^12.1.3"
p-retry: "npm:4"
uuid: "npm:^10.0.0"
@@ -9478,12 +9526,10 @@ __metadata:
peerDependencies:
"@langchain/anthropic": "*"
"@langchain/aws": "*"
- "@langchain/cerebras": "*"
"@langchain/cohere": "*"
"@langchain/core": ">=0.2.21 <0.4.0"
"@langchain/google-genai": "*"
"@langchain/google-vertexai": "*"
- "@langchain/google-vertexai-web": "*"
"@langchain/groq": "*"
"@langchain/mistralai": "*"
"@langchain/ollama": "*"
@@ -9497,16 +9543,12 @@ __metadata:
optional: true
"@langchain/aws":
optional: true
- "@langchain/cerebras":
- optional: true
"@langchain/cohere":
optional: true
"@langchain/google-genai":
optional: true
"@langchain/google-vertexai":
optional: true
- "@langchain/google-vertexai-web":
- optional: true
"@langchain/groq":
optional: true
"@langchain/mistralai":
@@ -9523,20 +9565,20 @@ __metadata:
optional: true
typeorm:
optional: true
- checksum: 10c0/4120bc57dfb4d9ca1235e7c6f35227ada5be6e92e818b7a7d7e01f65fe937b1ffc172c8bb58f4861067e1475f72921dbae07ecb0b006077fa52f3cdfe9702d17
+ checksum: 10c0/410f5d6c9b4eb24e6d632182e37e73830f934af9dbed5d4df6056e18cbf2caeedb9d46eec49c950db65b8fde0ea429cd82ac72fdfd0ae837e4a0f889a873b9e4
languageName: node
linkType: hard
-"langchain@npm:~0.3.6":
- version: 0.3.6
- resolution: "langchain@npm:0.3.6"
+"langchain@npm:>=0.2.3 <0.3.0 || >=0.3.4 <0.4.0":
+ version: 0.3.10
+ resolution: "langchain@npm:0.3.10"
dependencies:
"@langchain/openai": "npm:>=0.1.0 <0.4.0"
"@langchain/textsplitters": "npm:>=0.0.0 <0.2.0"
js-tiktoken: "npm:^1.0.12"
js-yaml: "npm:^4.1.0"
jsonpointer: "npm:^5.0.1"
- langsmith: "npm:^0.2.0"
+ langsmith: "npm:^0.2.8"
openapi-types: "npm:^12.1.3"
p-retry: "npm:4"
uuid: "npm:^10.0.0"
@@ -9546,10 +9588,12 @@ __metadata:
peerDependencies:
"@langchain/anthropic": "*"
"@langchain/aws": "*"
+ "@langchain/cerebras": "*"
"@langchain/cohere": "*"
"@langchain/core": ">=0.2.21 <0.4.0"
"@langchain/google-genai": "*"
"@langchain/google-vertexai": "*"
+ "@langchain/google-vertexai-web": "*"
"@langchain/groq": "*"
"@langchain/mistralai": "*"
"@langchain/ollama": "*"
@@ -9563,12 +9607,16 @@ __metadata:
optional: true
"@langchain/aws":
optional: true
+ "@langchain/cerebras":
+ optional: true
"@langchain/cohere":
optional: true
"@langchain/google-genai":
optional: true
"@langchain/google-vertexai":
optional: true
+ "@langchain/google-vertexai-web":
+ optional: true
"@langchain/groq":
optional: true
"@langchain/mistralai":
@@ -9585,7 +9633,7 @@ __metadata:
optional: true
typeorm:
optional: true
- checksum: 10c0/410f5d6c9b4eb24e6d632182e37e73830f934af9dbed5d4df6056e18cbf2caeedb9d46eec49c950db65b8fde0ea429cd82ac72fdfd0ae837e4a0f889a873b9e4
+ checksum: 10c0/4120bc57dfb4d9ca1235e7c6f35227ada5be6e92e818b7a7d7e01f65fe937b1ffc172c8bb58f4861067e1475f72921dbae07ecb0b006077fa52f3cdfe9702d17
languageName: node
linkType: hard
@@ -11638,13 +11686,13 @@ __metadata:
languageName: node
linkType: hard
-"p-queue-compat@npm:^1.0.227":
- version: 1.0.227
- resolution: "p-queue-compat@npm:1.0.227"
+"p-queue-compat@npm:^1.0.229":
+ version: 1.0.229
+ resolution: "p-queue-compat@npm:1.0.229"
dependencies:
eventemitter3: "npm:5.x"
p-timeout-compat: "npm:^1.0.3"
- checksum: 10c0/4b1d241e0734f2dad9669b2d71e28c62218f2f8d29bd575080975154ebf2f9aba9b47ce11714e16763f7c55e89bcf2213ea7a7cd7b90cce5d0246f34fbc8ac8d
+ checksum: 10c0/f9882127cf9a16a33e7b31142aeb73d6c6f1a11c6c7230eb2328bf8dc679dbb35f2b153cbfc49e19090d7c6226f6c2391ee0a93e3d8bd9ef1e88be63f7579f09
languageName: node
linkType: hard
@@ -12017,9 +12065,9 @@ __metadata:
languageName: node
linkType: hard
-"pino@npm:^9.5.0":
- version: 9.5.0
- resolution: "pino@npm:9.5.0"
+"pino@npm:^9.6.0":
+ version: 9.6.0
+ resolution: "pino@npm:9.6.0"
dependencies:
atomic-sleep: "npm:^1.0.0"
fast-redact: "npm:^3.1.1"
@@ -12034,7 +12082,7 @@ __metadata:
thread-stream: "npm:^3.0.0"
bin:
pino: bin.js
- checksum: 10c0/b06590c5f4da43df59905af1aac344432b43154c4c1569ebea168e7ae7fd0a4181ccabb769a6568cf3e781e1d1b9df13d65b3603e25ebb05539bcb02ea04215e
+ checksum: 10c0/bcd1e9d9b301bea13b95689ca9ad7105ae9451928fb6c0b67b3e58c5fe37cea1d40665f3d6641e3da00be0bbc17b89031e67abbc8ea6aac6164f399309fd78e7
languageName: node
linkType: hard
@@ -12307,42 +12355,42 @@ __metadata:
languageName: node
linkType: hard
-"ranges-apply@npm:^7.0.16":
- version: 7.0.16
- resolution: "ranges-apply@npm:7.0.16"
+"ranges-apply@npm:^7.0.19":
+ version: 7.0.19
+ resolution: "ranges-apply@npm:7.0.19"
dependencies:
- ranges-merge: "npm:^9.0.15"
+ ranges-merge: "npm:^9.0.18"
tiny-invariant: "npm:^1.3.3"
- checksum: 10c0/0d8796f6b72170c6c08ecf57b2df8a12ab645416176bea0d0dc3b7cc2aa68142843f25bbc5e256d3569b2e74648e5f0821f88f732a77a5d3483385426428eaa2
+ checksum: 10c0/179335d25334a5d6d9869341f3f89f63ef0a309ac1b5c601ca7028228da9bf5e765356f6765c73e6b682de70731020b5339541bcb15959f9e1e72c0c9a48a479
languageName: node
linkType: hard
-"ranges-merge@npm:^9.0.15":
- version: 9.0.15
- resolution: "ranges-merge@npm:9.0.15"
+"ranges-merge@npm:^9.0.18":
+ version: 9.0.18
+ resolution: "ranges-merge@npm:9.0.18"
dependencies:
- ranges-push: "npm:^7.0.15"
- ranges-sort: "npm:^6.0.11"
- checksum: 10c0/2963c3dcd149cd7c684d1f3ec190f4850fd2d34b7e0611263a87712c45dc92e136fd58c8be5e9136c3c4513a90f22de524d0b9da99bb15354d9f480c5f6409f5
+ ranges-push: "npm:^7.0.18"
+ ranges-sort: "npm:^6.0.13"
+ checksum: 10c0/b4de993f81860f2689e24b10af7098ee351d0599658edc788d354ad2cc5f09e84d6e9cc98dc86fa505aa66bf1eb76c4e3886b67f2243d5459671182a8dbb4bab
languageName: node
linkType: hard
-"ranges-push@npm:^7.0.15":
- version: 7.0.15
- resolution: "ranges-push@npm:7.0.15"
+"ranges-push@npm:^7.0.18":
+ version: 7.0.18
+ resolution: "ranges-push@npm:7.0.18"
dependencies:
- codsen-utils: "npm:^1.6.4"
- ranges-sort: "npm:^6.0.11"
- string-collapse-leading-whitespace: "npm:^7.0.7"
- string-trim-spaces-only: "npm:^5.0.10"
- checksum: 10c0/b83e514243bc21bfd3b80f6757faf9b0850933b8f7ae9130d2bb09dedfc3d57a8721163af57b6d904b402f0f9167cf34f67e36d56b1222ccebb7b20621e277be
+ codsen-utils: "npm:^1.6.7"
+ ranges-sort: "npm:^6.0.13"
+ string-collapse-leading-whitespace: "npm:^7.0.9"
+ string-trim-spaces-only: "npm:^5.0.12"
+ checksum: 10c0/6f0137a79472d5ed31d79e191e1fceccda16e8528db5f899fc4842b7a330d7d0a8553f8650c78dfe9f954fcd929e34a33aed3895c989392e314f3991eecf730f
languageName: node
linkType: hard
-"ranges-sort@npm:^6.0.11":
- version: 6.0.11
- resolution: "ranges-sort@npm:6.0.11"
- checksum: 10c0/fb4f80a29a49e1bbad5cc5ce2c6371f807c82bebd1ca4f8da01b6fd5131aa5cc19ae333b468d5f1c7a3601da150770e1a2995e7036a3d79a4dbace015d4676fd
+"ranges-sort@npm:^6.0.13":
+ version: 6.0.13
+ resolution: "ranges-sort@npm:6.0.13"
+ checksum: 10c0/ff57dff6b25a2e969a2753ebf1c9b469a58e8728b138e3a6f92cb2664520607f59cb09132d39769400e08a98dd02b3ef701adc1d06c6019865223c017e2136ef
languageName: node
linkType: hard
@@ -12524,12 +12572,12 @@ __metadata:
languageName: node
linkType: hard
-"remeda@npm:^2.17.4":
- version: 2.17.4
- resolution: "remeda@npm:2.17.4"
+"remeda@npm:^2.21.0":
+ version: 2.21.0
+ resolution: "remeda@npm:2.21.0"
dependencies:
- type-fest: "npm:^4.27.0"
- checksum: 10c0/055722865b0016e620b6c35e43fac0f5fbd3694f39ffdf44b6c0b6c15bfdc65d66b27217bcb0484a009017918aae94e3494aea2258182a266be5e6d86ac9449a
+ type-fest: "npm:^4.35.0"
+ checksum: 10c0/003de949953de25ddf4b86d50d2dcea992ced9cca10ff163b240bd1165d04ddd2b871893f55a18224f0bb3736c8549145fe5cb89cf73d1c4f8593fa6545c4f30
languageName: node
linkType: hard
@@ -12692,7 +12740,7 @@ __metadata:
languageName: node
linkType: hard
-"rfdc@npm:^1.3.1, rfdc@npm:^1.4.1":
+"rfdc@npm:^1.4.1":
version: 1.4.1
resolution: "rfdc@npm:1.4.1"
checksum: 10c0/4614e4292356cafade0b6031527eea9bc90f2372a22c012313be1dcc69a3b90c7338158b414539be863fa95bfcb2ddcd0587be696841af4e6679d85e62c060c7
@@ -12794,29 +12842,29 @@ __metadata:
languageName: node
linkType: hard
-"rollup@npm:^4.24.0":
- version: 4.28.1
- resolution: "rollup@npm:4.28.1"
- dependencies:
- "@rollup/rollup-android-arm-eabi": "npm:4.28.1"
- "@rollup/rollup-android-arm64": "npm:4.28.1"
- "@rollup/rollup-darwin-arm64": "npm:4.28.1"
- "@rollup/rollup-darwin-x64": "npm:4.28.1"
- "@rollup/rollup-freebsd-arm64": "npm:4.28.1"
- "@rollup/rollup-freebsd-x64": "npm:4.28.1"
- "@rollup/rollup-linux-arm-gnueabihf": "npm:4.28.1"
- "@rollup/rollup-linux-arm-musleabihf": "npm:4.28.1"
- "@rollup/rollup-linux-arm64-gnu": "npm:4.28.1"
- "@rollup/rollup-linux-arm64-musl": "npm:4.28.1"
- "@rollup/rollup-linux-loongarch64-gnu": "npm:4.28.1"
- "@rollup/rollup-linux-powerpc64le-gnu": "npm:4.28.1"
- "@rollup/rollup-linux-riscv64-gnu": "npm:4.28.1"
- "@rollup/rollup-linux-s390x-gnu": "npm:4.28.1"
- "@rollup/rollup-linux-x64-gnu": "npm:4.28.1"
- "@rollup/rollup-linux-x64-musl": "npm:4.28.1"
- "@rollup/rollup-win32-arm64-msvc": "npm:4.28.1"
- "@rollup/rollup-win32-ia32-msvc": "npm:4.28.1"
- "@rollup/rollup-win32-x64-msvc": "npm:4.28.1"
+"rollup@npm:^4.34.8":
+ version: 4.34.9
+ resolution: "rollup@npm:4.34.9"
+ dependencies:
+ "@rollup/rollup-android-arm-eabi": "npm:4.34.9"
+ "@rollup/rollup-android-arm64": "npm:4.34.9"
+ "@rollup/rollup-darwin-arm64": "npm:4.34.9"
+ "@rollup/rollup-darwin-x64": "npm:4.34.9"
+ "@rollup/rollup-freebsd-arm64": "npm:4.34.9"
+ "@rollup/rollup-freebsd-x64": "npm:4.34.9"
+ "@rollup/rollup-linux-arm-gnueabihf": "npm:4.34.9"
+ "@rollup/rollup-linux-arm-musleabihf": "npm:4.34.9"
+ "@rollup/rollup-linux-arm64-gnu": "npm:4.34.9"
+ "@rollup/rollup-linux-arm64-musl": "npm:4.34.9"
+ "@rollup/rollup-linux-loongarch64-gnu": "npm:4.34.9"
+ "@rollup/rollup-linux-powerpc64le-gnu": "npm:4.34.9"
+ "@rollup/rollup-linux-riscv64-gnu": "npm:4.34.9"
+ "@rollup/rollup-linux-s390x-gnu": "npm:4.34.9"
+ "@rollup/rollup-linux-x64-gnu": "npm:4.34.9"
+ "@rollup/rollup-linux-x64-musl": "npm:4.34.9"
+ "@rollup/rollup-win32-arm64-msvc": "npm:4.34.9"
+ "@rollup/rollup-win32-ia32-msvc": "npm:4.34.9"
+ "@rollup/rollup-win32-x64-msvc": "npm:4.34.9"
"@types/estree": "npm:1.0.6"
fsevents: "npm:~2.3.2"
dependenciesMeta:
@@ -12862,7 +12910,7 @@ __metadata:
optional: true
bin:
rollup: dist/bin/rollup
- checksum: 10c0/2d2d0433b7cb53153a04c7b406f342f31517608dc57510e49177941b9e68c30071674b83a0292ef1d87184e5f7c6d0f2945c8b3c74963074de10c75366fe2c14
+ checksum: 10c0/dd0be1f7c4f8a93040026be13ecc39259fb55313db0dac7eafd97a3ac01ab4584e6b1a8afd86b0259dcf391699d5560a678abe6c0729af0aa4f2d5df70f05c8c
languageName: node
linkType: hard

@@ -13038,12 +13086,10 @@ __metadata:
languageName: node
linkType: hard

-"serialize-error@npm:^11.0.3":
- version: 11.0.3
- resolution: "serialize-error@npm:11.0.3"
- dependencies:
- type-fest: "npm:^2.12.2"
- checksum: 10c0/7263603883b8936650819f0fd5150d41427b317432678b21722c54b85367ae15b8552865eb7f3f39ba71a32a003730a2e2e971e6909431eb54db70a3ef8eca17
+"serialize-error-cjs@npm:^0.2.0":
+ version: 0.2.0
+ resolution: "serialize-error-cjs@npm:0.2.0"
+ checksum: 10c0/7e81aa8bf5c40a98345f0e1a2c736168e9ad7403fed3bd713d851595496ffeb7d50b9b0439d7e85042cf4e8522a08b4507939f1944581d539fd3f2c4b6a645e6
languageName: node
linkType: hard

@@ -13449,10 +13495,10 @@ __metadata:
languageName: node
linkType: hard

-"string-collapse-leading-whitespace@npm:^7.0.7":
- version: 7.0.7
- resolution: "string-collapse-leading-whitespace@npm:7.0.7"
- checksum: 10c0/f54c5a650c2d64b9c6d1b8a48366620f242958fbc289f52c94e18d80b7cf63476baa7b03a8d4f8a4e6bffb0916867c9d0cbbcbb91344f69d4b9073f941dba24c
+"string-collapse-leading-whitespace@npm:^7.0.9":
+ version: 7.0.9
+ resolution: "string-collapse-leading-whitespace@npm:7.0.9"
+ checksum: 10c0/d8b6f1fd83be4901e673ce7235b2c55216beab4925e1a005983546952ca41456f57b426fb7e01f4e9b795b18f18df8b303137ce7c3b14457b98ff6e3f599b6b9
languageName: node
linkType: hard

@@ -13463,35 +13509,35 @@ __metadata:
languageName: node
linkType: hard

-"string-left-right@npm:^6.0.17":
- version: 6.0.17
- resolution: "string-left-right@npm:6.0.17"
+"string-left-right@npm:^6.0.20":
+ version: 6.0.20
+ resolution: "string-left-right@npm:6.0.20"
dependencies:
- codsen-utils: "npm:^1.6.4"
- rfdc: "npm:^1.3.1"
- checksum: 10c0/d07830f8027c8bd518fb82ed58bd5fbb2c8acc3d8c8c61b73488ca9126f86e21acfed999e533cf84533fcd7b9f770cb002f7306333380d5deb9a77e2d85bf463
+ codsen-utils: "npm:^1.6.7"
+ rfdc: "npm:^1.4.1"
+ checksum: 10c0/5199343a01709e355704ea3866ca51dc7fb07ef35d66227f2439e7c8689b431bc24ea9d753d582f83460dfa461973eff01857949b604deb1298d66ae91cbf4ca
languageName: node
linkType: hard

-"string-strip-html@npm:^13.4.8":
- version: 13.4.8
- resolution: "string-strip-html@npm:13.4.8"
+"string-strip-html@npm:^13.4.12":
+ version: 13.4.12
+ resolution: "string-strip-html@npm:13.4.12"
dependencies:
"@types/lodash-es": "npm:^4.17.12"
- codsen-utils: "npm:^1.6.4"
+ codsen-utils: "npm:^1.6.7"
html-entities: "npm:^2.5.2"
lodash-es: "npm:^4.17.21"
- ranges-apply: "npm:^7.0.16"
- ranges-push: "npm:^7.0.15"
- string-left-right: "npm:^6.0.17"
- checksum: 10c0/e93f104ce7a86ce5124fbfbd10374728a9488b55c8f16f22fb91800df39eb03bb789eaf41d32c5d01b7f5cea5a3f3d1398bb58cb73be3b6083dec0c6852c328f
+ ranges-apply: "npm:^7.0.19"
+ ranges-push: "npm:^7.0.18"
+ string-left-right: "npm:^6.0.20"
+ checksum: 10c0/21218943908f0ea382091ab04b6ab3307e2ae7db8d43bef2e33bdf40d9f7fa617d75f459e2693a62118022b37d0cd29eda73837eddffd12ea64e9cf832582029
languageName: node
linkType: hard

-"string-trim-spaces-only@npm:^5.0.10":
- version: 5.0.10
- resolution: "string-trim-spaces-only@npm:5.0.10"
- checksum: 10c0/23a1480ab58acd3b5bec20cb5a8a01ab0592304c068cf8438dd45a66633b04feec5099737168fbe1730a504fb40be9d564a35938bef1db325a9c0a20fe0e9ddb
+"string-trim-spaces-only@npm:^5.0.12":
+ version: 5.0.12
+ resolution: "string-trim-spaces-only@npm:5.0.12"
+ checksum: 10c0/c49dc295a0658b2b2b1dc9df241b89b0fd51936a4e5015cf73cb9a468ed99ca31726cafce0bf5f9e41f6a217a2c91d8586b889664abb05d91864a71f429c3b41
languageName: node
linkType: hard

@@ -13608,6 +13654,13 @@ __metadata:
languageName: node
linkType: hard

+"strnum@npm:^2.0.5":
+ version: 2.0.5
+ resolution: "strnum@npm:2.0.5"
+ checksum: 10c0/856026ef65eaf15359d340a313ece25822b6472377b3029201b00f2657a1a3fa1cd7a7ce349dad35afdd00faf451344153dbb3d8478f082b7af8c17a64799ea6
+ languageName: node
+ linkType: hard
+
"strtok3@npm:^6.2.4":
version: 6.3.0
resolution: "strtok3@npm:6.3.0"
@@ -13833,13 +13886,20 @@ __metadata:
languageName: node
linkType: hard

-"tinyglobby@npm:^0.2.9":
- version: 0.2.10
- resolution: "tinyglobby@npm:0.2.10"
+"tinyexec@npm:^0.3.2":
+ version: 0.3.2
+ resolution: "tinyexec@npm:0.3.2"
+ checksum: 10c0/3efbf791a911be0bf0821eab37a3445c2ba07acc1522b1fa84ae1e55f10425076f1290f680286345ed919549ad67527d07281f1c19d584df3b74326909eb1f90
+ languageName: node
+ linkType: hard
+
+"tinyglobby@npm:^0.2.11":
+ version: 0.2.12
+ resolution: "tinyglobby@npm:0.2.12"
dependencies:
- fdir: "npm:^6.4.2"
+ fdir: "npm:^6.4.3"
picomatch: "npm:^4.0.2"
- checksum: 10c0/ce946135d39b8c0e394e488ad59f4092e8c4ecd675ef1bcd4585c47de1b325e61ec6adfbfbe20c3c2bfa6fd674c5b06de2a2e65c433f752ae170aff11793e5ef
+ checksum: 10c0/7c9be4fd3625630e262dcb19015302aad3b4ba7fc620f269313e688f2161ea8724d6cb4444baab5ef2826eb6bed72647b169a33ec8eea37501832a2526ff540f
languageName: node
linkType: hard

@@ -14001,25 +14061,25 @@ __metadata:
languageName: node
linkType: hard

-"tsup@npm:^8.3.6":
- version: 8.3.6
- resolution: "tsup@npm:8.3.6"
+"tsup@npm:^8.4.0":
+ version: 8.4.0
+ resolution: "tsup@npm:8.4.0"
dependencies:
- bundle-require: "npm:^5.0.0"
+ bundle-require: "npm:^5.1.0"
cac: "npm:^6.7.14"
- chokidar: "npm:^4.0.1"
- consola: "npm:^3.2.3"
- debug: "npm:^4.3.7"
- esbuild: "npm:^0.24.0"
+ chokidar: "npm:^4.0.3"
+ consola: "npm:^3.4.0"
+ debug: "npm:^4.4.0"
+ esbuild: "npm:^0.25.0"
joycon: "npm:^3.1.1"
picocolors: "npm:^1.1.1"
postcss-load-config: "npm:^6.0.1"
resolve-from: "npm:^5.0.0"
- rollup: "npm:^4.24.0"
+ rollup: "npm:^4.34.8"
source-map: "npm:0.8.0-beta.0"
sucrase: "npm:^3.35.0"
- tinyexec: "npm:^0.3.1"
- tinyglobby: "npm:^0.2.9"
+ tinyexec: "npm:^0.3.2"
+ tinyglobby: "npm:^0.2.11"
tree-kill: "npm:^1.2.2"
peerDependencies:
"@microsoft/api-extractor": ^7.36.0
@@ -14038,7 +14098,7 @@ __metadata:
bin:
tsup: dist/cli-default.js
tsup-node: dist/cli-node.js
- checksum: 10c0/b8669bba2aafb8831832d7638792a20a101778a07ba971ea36fca47c27e7095fe0c91d29669d2fc0d17941138bc87245de1d0c4e5abf0ce5dfec7bf9eb76a5bd
+ checksum: 10c0/c6636ffd6ade59d3544cd424c7115449f8712eb5c872e1e36d25817436f9ea9424d8ee8f1b6244ac7c9a887b0fcf6cc42c102baa55a9080236afc18ba73871e6
languageName: node
linkType: hard

@@ -14106,7 +14166,7 @@ __metadata:
languageName: node
linkType: hard

-"type-fest@npm:^2.12.2, type-fest@npm:^2.5.1":
+"type-fest@npm:^2.5.1":
version: 2.19.0
resolution: "type-fest@npm:2.19.0"
checksum: 10c0/a5a7ecf2e654251613218c215c7493574594951c08e52ab9881c9df6a6da0aeca7528c213c622bc374b4e0cb5c443aa3ab758da4e3c959783ce884c3194e12cb
@@ -14127,13 +14187,20 @@ __metadata:
languageName: node
linkType: hard

-"type-fest@npm:^4.2.0, type-fest@npm:^4.27.0":
+"type-fest@npm:^4.2.0":
version: 4.30.0
resolution: "type-fest@npm:4.30.0"
checksum: 10c0/9441fbbc971f92a53d7dfdb0db3f9c71a5a33ac3e021ca605cba8ad0b5c0a1e191cc778b4980c534b098ccb4e3322809100baf763be125510c993c9b8361f60e
languageName: node
linkType: hard

+"type-fest@npm:^4.35.0":
+ version: 4.37.0
+ resolution: "type-fest@npm:4.37.0"
+ checksum: 10c0/5bad189f66fbe3431e5d36befa08cab6010e56be68b7467530b7ef94c3cf81ef775a8ac3047c8bbda4dd3159929285870357498d7bc1df062714f9c5c3a84926
+ languageName: node
+ linkType: hard
+
"typed-function@npm:^4.2.1":
version: 4.2.1
resolution: "typed-function@npm:4.2.1"
@@ -14171,23 +14238,23 @@ __metadata:
languageName: node
linkType: hard

-"typescript@npm:^5.7.3":
- version: 5.7.3
- resolution: "typescript@npm:5.7.3"
+"typescript@npm:^5.8.2":
+ version: 5.8.2
+ resolution: "typescript@npm:5.8.2"
bin:
tsc: bin/tsc
tsserver: bin/tsserver
- checksum: 10c0/b7580d716cf1824736cc6e628ab4cd8b51877408ba2be0869d2866da35ef8366dd6ae9eb9d0851470a39be17cbd61df1126f9e211d8799d764ea7431d5435afa
+ checksum: 10c0/5c4f6fbf1c6389b6928fe7b8fcd5dc73bb2d58cd4e3883f1d774ed5bd83b151cbac6b7ecf11723de56d4676daeba8713894b1e9af56174f2f9780ae7848ec3c6
languageName: node
linkType: hard

-"typescript@patch:typescript@npm%3A^5.7.3#optional!builtin":
- version: 5.7.3
- resolution: "typescript@patch:typescript@npm%3A5.7.3#optional!builtin::version=5.7.3&hash=5adc0c"
+"typescript@patch:typescript@npm%3A^5.8.2#optional!builtin":
+ version: 5.8.2
+ resolution: "typescript@patch:typescript@npm%3A5.8.2#optional!builtin::version=5.8.2&hash=5adc0c"
bin:
tsc: bin/tsc
tsserver: bin/tsserver
- checksum: 10c0/3b56d6afa03d9f6172d0b9cdb10e6b1efc9abc1608efd7a3d2f38773d5d8cfb9bbc68dfb72f0a7de5e8db04fc847f4e4baeddcd5ad9c9feda072234f0d788896
+ checksum: 10c0/8a6cd29dfb59bd5a978407b93ae0edb530ee9376a5b95a42ad057a6f80ffb0c410489ccd6fe48d1d0dfad6e8adf5d62d3874bbd251f488ae30e11a1ce6dabd28
languageName: node
linkType: hard

@@ -15110,16 +15177,16 @@ __metadata:
languageName: node
linkType: hard

-"zod-to-json-schema@npm:^3.23.5":
- version: 3.23.5
- resolution: "zod-to-json-schema@npm:3.23.5"
+"zod-to-json-schema@npm:^3.24.3":
+ version: 3.24.3
+ resolution: "zod-to-json-schema@npm:3.24.3"
peerDependencies:
- zod: ^3.23.3
- checksum: 10c0/bf50455f446c96b9a161476347ebab6e3bcae7fdf1376ce0b74248e79db733590164476dac2fc481a921868f705fefdcafd223a98203a700b3f01ba1cda6aa90
+ zod: ^3.24.1
+ checksum: 10c0/5d626fa7a51539236962b1348a7b7e7111bd1722f23ad06dead2f76599e6bd918e4067ffba0695e0acac5a60f217b4953d2ad62ad403a482d034d94f025f3a4c
languageName: node
linkType: hard

-"zod@npm:^3.22.3, zod@npm:^3.22.4, zod@npm:^3.23.8":
+"zod@npm:^3.22.3, zod@npm:^3.22.4, zod@npm:^3.23.8, zod@npm:~3.23.8":
version: 3.23.8
resolution: "zod@npm:3.23.8"
checksum: 10c0/8f14c87d6b1b53c944c25ce7a28616896319d95bc46a9660fe441adc0ed0a81253b02b5abdaeffedbeb23bdd25a0bf1c29d2c12dd919aef6447652dd295e3e69