
feat: improve error handling of Agent component, solves Empty ExceptionWithMessageError #6097

Merged
merged 11 commits from fix-agent-errors into main on Feb 7, 2025

Conversation

edwinjosechittilappilly
Collaborator

This pull request improves error handling and logging in the langflow backend. The most important changes: adding logging for exceptions, introducing a custom error class, updating the initialization of an existing error class, and refining validation and error handling in the Agent component.

Error Handling and Logging Improvements:

  - Custom Error Class
  - Existing Error Class Update
  - Agent Component Enhancements
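The "Empty ExceptionWithMessageError" in the PR title refers to exceptions that surfaced with no message at all. As a hedged sketch of the kind of custom error class involved (the class name matches the title, but this body is illustrative and not langflow's actual implementation):

```python
class ExceptionWithMessageError(Exception):
    """Illustrative sketch: an error that always carries a non-empty message."""

    def __init__(self, message=None):
        # Fall back to a default so str(exc) is never empty, even when the
        # raiser forgot to supply a message.
        self.message = message or "An unknown error occurred in the Agent component."
        super().__init__(self.message)

    def __str__(self):
        return self.message


# An argument-less construction still yields a usable message.
print(str(ExceptionWithMessageError()))
print(str(ExceptionWithMessageError("provider lookup failed")))
```

The key design point is that `str(exc)` can never be the empty string, so downstream error displays always have something to render.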

@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Feb 3, 2025
@edwinjosechittilappilly edwinjosechittilappilly marked this pull request as ready for review February 3, 2025 20:59
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Feb 3, 2025
Comment on lines +125 to +139
    if not isinstance(self.agent_llm, str):
        return self.agent_llm, None

    try:
        provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
        if not provider_info:
            msg = f"Invalid model provider: {self.agent_llm}"
            raise ValueError(msg)

        component_class = provider_info.get("component_class")
        display_name = component_class.display_name
        inputs = provider_info.get("inputs")
        prefix = provider_info.get("prefix", "")

        return self._build_llm_model(component_class, inputs, prefix), display_name

Suggested change (original):

    if not isinstance(self.agent_llm, str):
        return self.agent_llm, None
    try:
        provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
        if not provider_info:
            msg = f"Invalid model provider: {self.agent_llm}"
            raise ValueError(msg)
        component_class = provider_info.get("component_class")
        display_name = component_class.display_name
        inputs = provider_info.get("inputs")
        prefix = provider_info.get("prefix", "")
        return self._build_llm_model(component_class, inputs, prefix), display_name

Suggested change (proposed replacement):

    agent_llm = self.agent_llm
    if not isinstance(agent_llm, str):
        return agent_llm, None
    provider_info = MODEL_PROVIDERS_DICT.get(agent_llm)
    if provider_info is None:
        msg = f"Invalid model provider: {agent_llm}"
        raise ValueError(msg)
    component_class = provider_info["component_class"]
    display_name = component_class.display_name
    inputs = provider_info["inputs"]
    prefix = provider_info.get("prefix", "")
    try:
        llm_model = self._build_llm_model(component_class, inputs, prefix)
        return llm_model, display_name
    except Exception as e:
        error_message = f"Error building {agent_llm} language model: {e!s}"
        logger.error(error_message)
        raise ValueError(f"Failed to initialize language model: {e!s}") from e

Contributor

Looks like the diff is messed up here. I suppose there was a real optimization, but the rendered output seems incorrect. This is happening because we depend on a diffing library that is buggy; we are looking to fix this.

codeflash-ai bot commented Feb 3, 2025

⚡️ Codeflash found optimizations for this PR

📄 5,023% (50.23x) speedup for AgentComponent.get_llm in src/backend/base/langflow/components/agents/agent.py

⏱️ Runtime: 555 microseconds → 10.8 microseconds (best of 38 runs)

📝 Explanation and details

To optimize this Python program for both runtime and memory, we will focus on a few key areas.

  1. We will use dictionary methods that are faster and more memory-efficient where applicable.
  2. We will reduce nesting and simplify exception handling where possible to improve performance.
  3. We will minimize repeated attribute lookups to improve speed.

Explanation of Changes

  1. Reduced Deep Nesting: The provider check and exception handling are separated, making the control flow clearer and reducing nested code paths.
  2. Removed Unnecessary Re-assignments: Directly accessed the agent_llm and stored in a local variable for slightly improved lookup performance.
  3. Optimized Exception Handling: Moved string interpolation inside the try block only where exceptions are expected.
  4. In-place Dictionary Access and Assignments: Leveraged dictionary methods for fast access and retrieval directly with fewer intermediate steps.

These changes should help in improving runtime efficiency and reduce memory overheads by streamlining the control flow and optimizing the dictionary operations.
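The correctness difference behind points 1 and 4 can be seen in isolation. This standalone sketch (not langflow code) contrasts `.get()`, which defers a missing-key failure until a later attribute access, with direct indexing, which fails immediately and names the missing key:

```python
provider_info = {"inputs": [], "prefix": ""}  # note: no "component_class" key

# .get() defers the failure: the lookup "succeeds" with None...
component_class = provider_info.get("component_class")
try:
    component_class.display_name  # ...and only blows up here, far from the cause
except AttributeError as e:
    print(f"deferred failure: {e}")

# Direct indexing fails immediately and names the missing key.
try:
    provider_info["component_class"]
except KeyError as e:
    print(f"immediate failure: missing key {e}")

# .get() with a default remains the right tool for genuinely optional keys.
prefix = provider_info.get("prefix", "")
```

For required keys like `component_class` and `inputs`, the proposed code's `provider_info["component_class"]` gives an earlier and more precise failure than the original `.get()` chain.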

Correctness verification report:

Test                            Status
⚙️ Existing Unit Tests          🔘 None Found
🌀 Generated Regression Tests   6 Passed
⏪ Replay Tests                 🔘 None Found
🔎 Concolic Coverage Tests      🔘 None Found
📊 Tests Coverage               undefined
🌀 Generated Regression Tests Details
from unittest.mock import MagicMock, patch

# imports
import pytest  # used for our unit tests
from langflow.base.models.model_input_constants import MODEL_PROVIDERS_DICT
from langflow.components.agents.agent import AgentComponent
# function to test
from langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent
from langflow.logging import logger

MODEL_PROVIDERS_DICT: dict[str, dict] = {}


# unit tests

# Mock classes and inputs for testing
class MockInput:
    def __init__(self, name):
        self.name = name

class MockComponentClass:
    display_name = "MockComponentClass"
    
    def set(self, **kwargs):
        self.kwargs = kwargs
        return self
    
    def build_model(self):
        return "mock_model"

class MockComponentClassThatRaisesException:
    display_name = "MockComponentClass"
    
    def set(self, **kwargs):
        raise Exception("Mock exception during set")

@pytest.fixture
def setup_model_providers_dict():
    global MODEL_PROVIDERS_DICT
    MODEL_PROVIDERS_DICT = {
        "valid_provider": {
            "component_class": MockComponentClass,
            "inputs": [MockInput(name="param1"), MockInput(name="param2")],
            "prefix": "test_"
        },
        "incomplete_provider": {
            "inputs": [MockInput(name="param1"), MockInput(name="param2")]
            # Missing "component_class" and "prefix"
        },
        "large_input_provider": {
            "component_class": MockComponentClass,
            "inputs": [MockInput(name=f"param{i}") for i in range(1000)],
            "prefix": "test_"
        }
    }

@pytest.fixture
def agent_component():
    return AgentComponent()



def test_invalid_model_provider(setup_model_providers_dict, agent_component):
    agent_component.agent_llm = "invalid_provider"
    with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
        agent_component.get_llm()


def test_empty_model_providers_dict(agent_component):
    global MODEL_PROVIDERS_DICT
    MODEL_PROVIDERS_DICT = {}
    agent_component.agent_llm = "any_provider"
    with pytest.raises(ValueError, match="Invalid model provider: any_provider"):
        agent_component.get_llm()




def test_logging_on_error(setup_model_providers_dict, agent_component):
    agent_component.agent_llm = "invalid_provider"
    with patch.object(logger, "error") as mock_logger_error:
        with pytest.raises(ValueError):
            agent_component.get_llm()
    mock_logger_error.assert_called_with("Error building invalid_provider language model: Invalid model provider: invalid_provider")

def test_state_modification(setup_model_providers_dict, agent_component):
    original_dict = MODEL_PROVIDERS_DICT.copy()
    agent_component.agent_llm = "some_provider"
    with pytest.raises(ValueError):
        agent_component.get_llm()
    # A failed lookup must not mutate the providers dict.
    assert MODEL_PROVIDERS_DICT == original_dict
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

from unittest.mock import Mock, patch

# imports
import pytest  # used for our unit tests
from langflow.base.models.model_input_constants import MODEL_PROVIDERS_DICT
from langflow.components.agents.agent import AgentComponent
# function to test
from langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent
from langflow.logging import logger

MODEL_PROVIDERS_DICT: dict[str, dict] = {}


# unit tests

# Mock classes for testing
class MockComponentClass:
    display_name = "MockComponent"

    def set(self, **kwargs):
        return self

    def build_model(self):
        return "MockModel"

class MockInput:
    def __init__(self, name):
        self.name = name

class MockModelObject:
    pass

@pytest.fixture
def agent_component():
    return AgentComponent()



def test_invalid_model_provider(agent_component):
    # Edge case: invalid model provider string
    agent_component.agent_llm = "invalid_provider"
    with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
        agent_component.get_llm()

def test_empty_model_providers_dict(agent_component):
    # Edge case: empty MODEL_PROVIDERS_DICT
    agent_component.agent_llm = "valid_provider"
    MODEL_PROVIDERS_DICT.clear()
    with pytest.raises(ValueError, match="Invalid model provider: valid_provider"):
        agent_component.get_llm()





def test_logging_errors(agent_component):
    # Side effects: logging errors
    agent_component.agent_llm = "invalid_provider"
    with patch.object(logger, 'error') as mock_logger_error:
        with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
            agent_component.get_llm()
        mock_logger_error.assert_called_once_with("Error building invalid_provider language model: Invalid model provider: invalid_provider")
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

Codeflash

@anovazzi1 anovazzi1 left a comment


nice improvement, just make sure to address what @misrasaurabh1 said

@edwinjosechittilappilly
Collaborator Author

nice improvement, just make sure to address what @misrasaurabh1 said

I believe it is a bug from codeflash!

…emote protocol error caused by OpenAI LLM in Agents (#6118)
@dosubot dosubot bot added size:XXL This PR changes 1000+ lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Feb 6, 2025

@anovazzi1 anovazzi1 left a comment


lgtm

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 6, 2025
@dosubot dosubot bot removed the lgtm This PR has been approved by a maintainer label Feb 6, 2025
Comment on lines +125 to +143
    if not isinstance(self.agent_llm, str):
        return self.agent_llm, None

    try:
        provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
        if not provider_info:
            msg = f"Invalid model provider: {self.agent_llm}"
            raise ValueError(msg)

        component_class = provider_info.get("component_class")
        display_name = component_class.display_name
        inputs = provider_info.get("inputs")
        prefix = provider_info.get("prefix", "")

        return self._build_llm_model(component_class, inputs, prefix), display_name

    except Exception as e:
        logger.error(f"Error building {self.agent_llm} language model: {e!s}")
        msg = f"Failed to initialize language model: {e!s}"
Suggested change (original):

    if not isinstance(self.agent_llm, str):
        return self.agent_llm, None
    try:
        provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
        if not provider_info:
            msg = f"Invalid model provider: {self.agent_llm}"
            raise ValueError(msg)
        component_class = provider_info.get("component_class")
        display_name = component_class.display_name
        inputs = provider_info.get("inputs")
        prefix = provider_info.get("prefix", "")
        return self._build_llm_model(component_class, inputs, prefix), display_name
    except Exception as e:
        logger.error(f"Error building {self.agent_llm} language model: {e!s}")
        msg = f"Failed to initialize language model: {e!s}"

Suggested change (proposed replacement):

    if isinstance(self.agent_llm, str):
        provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
        if provider_info:
            try:
                component_class = provider_info["component_class"]
                return self._build_llm_model(
                    component_class, provider_info["inputs"], provider_info.get("prefix", "")
                ), component_class.display_name
            except Exception as e:
                logger.error(f"Error building {self.agent_llm} language model: {e}")
                raise ValueError(f"Failed to initialize language model: {e}") from e
        else:
            raise ValueError(f"Invalid model provider: {self.agent_llm}")
    return self.agent_llm, None

codeflash-ai bot commented Feb 6, 2025

⚡️ Codeflash found optimizations for this PR

📄 4,205% (42.05x) speedup for AgentComponent.get_llm in src/backend/base/langflow/components/agents/agent.py

⏱️ Runtime: 512 microseconds → 11.9 microseconds (best of 35 runs)

📝 Explanation and details

To optimize this program for better performance, the focus will be on eliminating unnecessary variable assignments and improving the efficiency of dictionary lookups. We'll also add inline exception handling to minimize the impact of each step. Here's a revised version of the program aimed at running faster.

Changes Made:

  1. Inline Check for agent_llm Type: Instead of nesting too deep, we handle the type check for agent_llm right away.
  2. Remove Unnecessary Variable Assignments: Avoid redundant assignments by using provider_info dictionary directly.
  3. Combined Exception Handling: Merge the try block and related exception logic to handle exceptions right where they occur, ensuring faster failure and debugging.
  4. Directly Use provider_info in Method Arguments: Minimize dictionary key lookups by using values directly in the method call.

These enhancements aim to streamline the code, reduce overhead from unnecessary steps, and make the program's logic flow more direct.
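Both the original and the proposed code re-raise with `raise ... from e`, which attaches the underlying failure to the new exception's `__cause__` so logs and tracebacks show the root cause. A self-contained sketch of that pattern (illustrative names, not the langflow implementation):

```python
import logging

logger = logging.getLogger("agent_sketch")


def build_llm_model():
    # Stand-in for the real model builder; always fails in this demo.
    raise KeyError("inputs")


def get_llm():
    try:
        return build_llm_model()
    except Exception as e:
        logger.error(f"Error building language model: {e!s}")
        # "from e" chains the original error onto __cause__, so the full
        # traceback shows both the wrapper and the root cause.
        raise ValueError(f"Failed to initialize language model: {e!s}") from e


try:
    get_llm()
except ValueError as err:
    print(f"wrapped: {err}")
    print(f"root cause: {err.__cause__!r}")
```

Without `from e`, the original `KeyError` would only appear as implicit context; explicit chaining makes the causal link part of the exception itself, which is what the regression tests' `match="Failed to initialize language model"` checks rely on.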

Correctness verification report:

Test                            Status
⚙️ Existing Unit Tests          🔘 None Found
🌀 Generated Regression Tests   7 Passed
⏪ Replay Tests                 🔘 None Found
🔎 Concolic Coverage Tests      🔘 None Found
📊 Tests Coverage               undefined
🌀 Generated Regression Tests Details
from unittest.mock import MagicMock, patch

# imports
import pytest  # used for our unit tests
from langflow.base.models.model_input_constants import MODEL_PROVIDERS_DICT
from langflow.components.agents.agent import AgentComponent
# function to test
from langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent
from langflow.logging import logger

MODEL_PROVIDERS_DICT: dict[str, dict] = {}


# unit tests

# Helper classes for mocking
class MockComponent:
    display_name = "MockComponent"

    def set(self, **kwargs):
        return self

    def build_model(self):
        return "MockModel"

class MockInput:
    def __init__(self, name):
        self.name = name

# Basic Test Cases



def test_invalid_model_provider():
    agent = AgentComponent()
    agent.agent_llm = "invalid_provider"
    with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
        agent.get_llm()

def test_non_string_agent_llm():
    agent = AgentComponent()
    agent.agent_llm = 12345
    model, display_name = agent.get_llm()
    # A non-string agent_llm is returned unchanged, with no display name.
    assert model == 12345
    assert display_name is None



def test_missing_component_class():
    agent = AgentComponent()
    agent.agent_llm = "provider_missing_component_class"
    MODEL_PROVIDERS_DICT["provider_missing_component_class"] = {
        "inputs": [MockInput("input1"), MockInput("input2")],
        "prefix": ""
    }
    with pytest.raises(ValueError):
        agent.get_llm()

def test_missing_inputs():
    agent = AgentComponent()
    agent.agent_llm = "provider_missing_inputs"
    MODEL_PROVIDERS_DICT["provider_missing_inputs"] = {
        "component_class": MockComponent,
        "prefix": ""
    }
    with pytest.raises(ValueError):
        agent.get_llm()






from unittest.mock import MagicMock

# imports
import pytest  # used for our unit tests
from langflow.base.models.model_input_constants import MODEL_PROVIDERS_DICT
from langflow.components.agents.agent import AgentComponent
# function to test
from langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent
from langflow.logging import logger

MODEL_PROVIDERS_DICT: dict[str, dict] = {}


# unit tests

class MockComponentClass:
    display_name = "Mock Display Name"
    
    def set(self, **kwargs):
        self.kwargs = kwargs
        return self
    
    def build_model(self):
        return "Mock Model"

class MockInput:
    def __init__(self, name):
        self.name = name

class MockLLMObject:
    pass

@pytest.fixture
def agent_component():
    return AgentComponent()


def test_agent_llm_is_already_llm_object(agent_component):
    llm = MockLLMObject()
    agent_component.agent_llm = llm
    result, display_name = agent_component.get_llm()
    # An already-instantiated LLM object passes through untouched.
    assert result is llm
    assert display_name is None

def test_invalid_agent_llm_string(agent_component):
    agent_component.agent_llm = "invalid_provider"
    with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
        agent_component.get_llm()

def test_missing_component_class_in_provider_info(agent_component):
    agent_component.agent_llm = "incomplete_provider"
    MODEL_PROVIDERS_DICT["incomplete_provider"] = {
        "inputs": [MockInput("param1")]
    }
    with pytest.raises(ValueError, match="Failed to initialize language model"):
        agent_component.get_llm()

def test_empty_model_providers_dict(agent_component):
    agent_component.agent_llm = "any_provider"
    MODEL_PROVIDERS_DICT.clear()
    with pytest.raises(ValueError, match="Invalid model provider: any_provider"):
        agent_component.get_llm()


def test_logging_on_error(agent_component, caplog):
    agent_component.agent_llm = "provider_with_error"
    MODEL_PROVIDERS_DICT["provider_with_error"] = {
        "component_class": MockComponentClass,
        "inputs": [MockInput("param1")]
    }
    with pytest.raises(ValueError, match="Failed to initialize language model"):
        agent_component.get_llm()
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

Codeflash

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 7, 2025
@ogabrielluiz ogabrielluiz added this pull request to the merge queue Feb 7, 2025
Merged via the queue into main with commit f9e41f9 Feb 7, 2025
44 of 45 checks passed
@ogabrielluiz ogabrielluiz deleted the fix-agent-errors branch February 7, 2025 13:06
Labels
enhancement New feature or request lgtm This PR has been approved by a maintainer size:XXL This PR changes 1000+ lines, ignoring generated files.