Commit

Merge branch 'main' into mypy-poe-typecheck

adigidh authored Mar 6, 2025
2 parents f15d9a5 + 1a55864 commit eea2e47
Showing 114 changed files with 3,577 additions and 1,091 deletions.
4 changes: 2 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -150,8 +150,8 @@ To stay up-to-date on our [public roadmap](https://github.com/orgs/i-am-bee/proj
BeeAI framework is open-source and we ❤️ contributions.<br>

To help build BeeAI, take a look at our:
-- [Python contribution guidelines](/python/docs/CONTRIBUTING.md)
-- [TypeScript contribution guidelines](/typescript/docs/CONTRIBUTING.md)
+- [Python contribution guidelines](/python/CONTRIBUTING.md)
+- [TypeScript contribution guidelines](/typescript/CONTRIBUTING.md)

## Bugs

17 changes: 17 additions & 0 deletions python/.env.example
@@ -38,3 +38,20 @@ BEEAI_LOG_LEVEL=INFO

# XAI_API_KEY=your-xai-api-key
# XAI_CHAT_MODEL=grok-2

########################
### Vertex AI specific configuration
########################

# GOOGLE_VERTEX_CHAT_MODEL=gemini-2.0-flash-lite-001
# GOOGLE_VERTEX_PROJECT=""
# GOOGLE_VERTEX_ENDPOINT=""

########################
### Amazon Bedrock specific configuration
########################

# AWS_ACCESS_KEY_ID=
# AWS_SECRET_ACCESS_KEY=
# AWS_REGION_NAME=
# AWS_CHAT_MODEL=
34 changes: 34 additions & 0 deletions python/CHANGELOG.md
@@ -1,3 +1,37 @@
## python_v0.1.4 (2025-03-06)

### Refactor

- rename Bee agent to ReAct agent (#505)
- move logger to the root (#504)
- update user-facing event data to all be dict and add docs (#431)
- **agents**: remove Bee branding from BaseAgent (#440)

### Bug Fixes

- improve decorated tool output (#499)
- **backend**: correctly merge inference parameters (#496)
- **backend**: tool calling, unify message content (#475)
- **backend**: correctly merge inference parameters (#486)
- **tools**: make emitter required (#461)
- **workflows**: handle relative steps (#463)

### Features

- **adapters**: add Amazon Bedrock support (#466)
- **examples**: adds logger examples and updates docs (#494)
- **internals**: construct Pydantic model from JSON Schema (#502)
- **adapters**: Add Google VertexAI support (#469)
- **tools**: add MCP tool (#481)
- langchain tool (#474)
- **examples**: templates examples ts parity (#480)
- **examples**: adds error examples and updates error docs (#490)
- **agents**: simplify variable usage in prompt templates (#484)
- improve PromptTemplate.render API (#476)
- **examples**: Add custom_agent and bee_advanced examples (#462)
- **agents**: handle message formatting (#470)
- **adapters**: Add xAI backend (#445) (#446)

## python_v0.1.3 (2025-03-03)

### Features
2 changes: 1 addition & 1 deletion python/README.md
@@ -73,7 +73,7 @@ import traceback

from pydantic import ValidationError

-from beeai_framework.agents.bee.agent import AgentExecutionConfig
+from beeai_framework.agents.react.agent import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import UserMessage
from beeai_framework.memory import UnconstrainedMemory
4 changes: 2 additions & 2 deletions python/beeai_framework/__init__.py
@@ -14,7 +14,7 @@


from beeai_framework.agents import BaseAgent
-from beeai_framework.agents.bee.agent import BeeAgent
+from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.backend import (
AssistantMessage,
CustomMessage,
@@ -40,13 +40,13 @@
"AssistantMessage",
"BaseAgent",
"BaseMemory",
-"BeeAgent",
"CustomMessage",
"LoggerError",
"Message",
"OpenMeteoTool",
"Prompt",
"PromptTemplateError",
+"ReActAgent",
"ReadOnlyMemory",
"Role",
"Serializable",
@@ -13,6 +13,3 @@
# limitations under the License.


-from beeai_framework.agents.bee.agent import BeeAgent
-
-__all__ = ["BeeAgent"]
54 changes: 54 additions & 0 deletions python/beeai_framework/adapters/amazon_bedrock/backend/README.md
@@ -0,0 +1,54 @@
# Amazon Bedrock

## Configuration

Set the following environment variables:

* AWS_ACCESS_KEY_ID
* AWS_SECRET_ACCESS_KEY
* AWS_REGION_NAME

## Tested Models

Only Meta, Mistral, and Amazon Titan serverless models have been tested.

Other models should work, since beeai_framework uses LiteLLM; see the [LiteLLM Bedrock docs](https://docs.litellm.ai/docs/providers/bedrock) for more information.

## Known issues with tool use and structured output

The following models report that tool use is not supported:

```text
litellm.llms.bedrock.common_utils.BedrockError: {"message":"This model doesn't support tool use."}
```

* `meta.llama3-70b-instruct-v1:0`
* `meta.llama3-8b-instruct-v1:0`

The following models fail to return structured output with beeai_framework. Initial investigation indicates that these models do not respond with structured JSON output when requested.

* `amazon.titan-text-express-v1`
* `amazon.titan-text-lite-v1`
* `mistral.mistral-7b-instruct-v0:2`
* `mistral.mixtral-8x7b-instruct-v0:1`
* `mistral.mistral-large-2402-v1:0`

The following models fail with an exception:

```text
litellm.exceptions.BadRequestError: litellm.BadRequestError: BedrockException - {"message":"This model doesn't support the toolConfig.toolChoice.tool field. Remove toolConfig.toolChoice.tool and try again."}
```

* `mistral.mistral-large-2402-v1:0`

## Quota limits

Default quota limits on Amazon Bedrock are low and can cause even simple examples to fail with:

```text
litellm.exceptions.RateLimitError: litellm.RateLimitError: BedrockException - {"message":"Too many requests, please wait before trying again."}
```

To increase quota limits, see [Amazon Bedrock](https://aws.amazon.com/bedrock/#/) and
[Amazon Bedrock quotas](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html).
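Until a quota increase is approved, a common client-side workaround is retrying with exponential backoff. The sketch below is generic and hypothetical — `with_backoff` is not part of beeai_framework, and in practice the exception to catch would be LiteLLM's `RateLimitError` rather than the stand-in used here:

```python
import time
from typing import Any, Callable


def with_backoff(
    fn: Callable[[], Any],
    retries: int = 5,
    base_delay: float = 1.0,
    exc: type[BaseException] = Exception,
) -> Any:
    """Call fn(), retrying on exc with exponentially growing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except exc:
            if attempt == retries - 1:
                raise  # out of retries, propagate the rate-limit error
            time.sleep(base_delay * (2**attempt))
```

Wrapping a chat-model call in such a helper usually rides out short "Too many requests" windows without changing any framework code.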
15 changes: 15 additions & 0 deletions python/beeai_framework/adapters/amazon_bedrock/backend/__init__.py
@@ -0,0 +1,15 @@
# Copyright 2025 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


63 changes: 63 additions & 0 deletions python/beeai_framework/adapters/amazon_bedrock/backend/chat.py
@@ -0,0 +1,63 @@
# Copyright 2025 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os

from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
from beeai_framework.logger import Logger

logger = Logger(__name__)


class AmazonBedrockChatModel(LiteLLMChatModel):
@property
def provider_id(self) -> ProviderName:
return "amazon_bedrock"

def __init__(self, model_id: str | None = None, settings: dict | None = None) -> None:
_settings = settings.copy() if settings is not None else {}

aws_access_key_id = _settings.get("aws_access_key_id", os.getenv("AWS_ACCESS_KEY_ID"))
if not aws_access_key_id:
raise ValueError(
"Access key is required for Amazon Bedrock model. Specify *aws_access_key_id* "
+ "or set AWS_ACCESS_KEY_ID environment variable"
)

aws_secret_access_key = _settings.get("aws_secret_access_key", os.getenv("AWS_SECRET_ACCESS_KEY"))
if not aws_secret_access_key:
raise ValueError(
"Secret key is required for Amazon Bedrock model. Specify *aws_secret_access_key* "
+ "or set AWS_SECRET_ACCESS_KEY environment variable"
)

aws_region_name = _settings.get("aws_region_name", os.getenv("AWS_REGION_NAME"))
if not aws_region_name:
raise ValueError(
"Region is required for Amazon Bedrock model. Specify *aws_region_name* "
+ "or set AWS_REGION_NAME environment variable"
)

super().__init__(
(model_id if model_id else os.getenv("AWS_CHAT_MODEL", "llama-3.1-8b-instant")),
provider_id="bedrock",
settings=_settings
| {
"aws_access_key_id": aws_access_key_id,
"aws_secret_access_key": aws_secret_access_key,
"aws_region_name": aws_region_name,
},
)
4 changes: 2 additions & 2 deletions python/beeai_framework/adapters/groq/backend/chat.py
@@ -17,9 +17,9 @@

from beeai_framework.adapters.litellm.chat import LiteLLMChatModel
from beeai_framework.backend.constants import ProviderName
-from beeai_framework.utils.custom_logger import BeeLogger
+from beeai_framework.logger import Logger

-logger = BeeLogger(__name__)
+logger = Logger(__name__)


class GroqChatModel(LiteLLMChatModel):
15 changes: 15 additions & 0 deletions python/beeai_framework/adapters/langchain/__init__.py
@@ -0,0 +1,15 @@
# Copyright 2025 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


78 changes: 78 additions & 0 deletions python/beeai_framework/adapters/langchain/tools.py
@@ -0,0 +1,78 @@
# Copyright 2025 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from typing import Any, TypeVar

from langchain_core.callbacks import AsyncCallbackManagerForToolRun
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import StructuredTool
from langchain_core.tools import Tool as LangChainSimpleTool
from pydantic import BaseModel, ConfigDict

from beeai_framework.context import RunContext
from beeai_framework.emitter.emitter import Emitter
from beeai_framework.tools.tool import StringToolOutput, Tool, ToolRunOptions
from beeai_framework.utils.strings import to_safe_word


class LangChainToolRunOptions(ToolRunOptions):
langchain_runnable_config: RunnableConfig | None = None
model_config = ConfigDict(extra="allow", arbitrary_types_allowed=True)


T = TypeVar("T", bound=BaseModel)


class LangChainTool(Tool[T, LangChainToolRunOptions, StringToolOutput]):
@property
def name(self) -> str:
return self._tool.name

@property
def description(self) -> str:
return self._tool.description

@property
def input_schema(self) -> type[T]:
return self._tool.input_schema

def _create_emitter(self) -> Emitter:
return Emitter.root().child(
namespace=["tool", "langchain", to_safe_word(self._tool.name)],
creator=self,
)

def __init__(self, tool: StructuredTool | LangChainSimpleTool, options: dict[str, Any] | None = None) -> None:
super().__init__(options)
self._tool = tool

async def _run(self, input: T, options: LangChainToolRunOptions | None, context: RunContext) -> StringToolOutput:
langchain_runnable_config = options.langchain_runnable_config or {} if options else {}
args = (
input if isinstance(input, dict) else input.model_dump(),
{
**langchain_runnable_config,
"signal": context.signal or None if context else None,
},
)
is_async = (isinstance(self._tool, StructuredTool) and self._tool.coroutine) or (
isinstance(args[0].get("run_manager"), AsyncCallbackManagerForToolRun)
)
if is_async:
response = await self._tool.ainvoke(*args)
else:
response = self._tool.invoke(*args)

return StringToolOutput(result=str(response))
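The `is_async` branch in `_run` — await the tool when it is coroutine-backed, call it synchronously otherwise — can be illustrated with a minimal standalone sketch. The names `invoke_either`, `sync_tool`, and `async_tool` are hypothetical, and the real class inspects `StructuredTool.coroutine` rather than the callable itself:

```python
import asyncio
import inspect
from typing import Any, Callable


async def invoke_either(fn: Callable[..., Any], *args: Any) -> Any:
    """Await coroutine functions, call plain ones directly --
    mirroring LangChainTool's choice between ainvoke and invoke."""
    if inspect.iscoroutinefunction(fn):
        return await fn(*args)
    return fn(*args)


def sync_tool(x: int) -> int:
    return x + 1


async def async_tool(x: int) -> int:
    return x * 2
```

Dispatching this way lets one wrapper expose a single async entry point over both kinds of tool without forcing sync tools onto the event loop's executor.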