Pre/beta - Unit Tests #965

Merged

merged 1 commit into from
Apr 15, 2025

Conversation

codebeaver-ai[bot]
Contributor

@codebeaver-ai codebeaver-ai bot commented Apr 14, 2025

CodeBeaver Report

I started working from Pre/beta

πŸ”„ 2 test files added.
🐛 Found 1 bug
πŸ› οΈ 108/156 tests passed

πŸ”„ Test Updates

I've added 2 tests. They all pass β˜‘οΈ
New Tests:

  β€’ tests/test_chromium.py
  β€’ tests/test_scrape_do.py

No existing tests required updates.

πŸ› Bug Detection

Potential issues:

  β€’ scrapegraphai/graphs/abstract_graph.py
    The error occurs in the _create_llm method of the AbstractGraph class while it creates a Bedrock model instance: the method pops a 'temperature' key from the llm_params dictionary, but the key does not exist.
    This happens because the code expects every Bedrock configuration to include a 'temperature' parameter, which the test case does not provide.
    The issue is not with the test itself but with how _create_llm handles the Bedrock configuration: it assumes 'temperature' is always present, which is not the case.
    To fix this, the code should handle Bedrock configurations that omit 'temperature', either by popping the key with a default value or by checking that the key exists before popping it.
    For example, the code could be changed to:
if llm_params["model_provider"] == "bedrock":
    llm_params["model_kwargs"] = {
        "temperature": llm_params.pop("temperature", None)  # Use None as default if not provided
    }

This change would allow the code to work correctly even when the 'temperature' parameter is not provided in the test configuration.
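
An alternative, shown below only as a rough sketch rather than the patch merged in this PR, is the "check before pop" approach: forward 'temperature' to Bedrock only when the caller actually supplied it, so no explicit None is passed along. The helper name _bedrock_model_kwargs is hypothetical and used purely for illustration.

# Hypothetical helper illustrating the "check before pop" variant; not the code merged in this PR.
def _bedrock_model_kwargs(llm_params: dict) -> dict:
    """Move 'temperature' into model_kwargs only if the caller provided it."""
    model_kwargs = {}
    if "temperature" in llm_params:
        model_kwargs["temperature"] = llm_params.pop("temperature")
    return model_kwargs

# Usage sketch: the failing test's Bedrock config has no 'temperature',
# so model_kwargs stays empty and no KeyError is raised.
params = {
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "region_name": "IDK",
    "model_provider": "bedrock",
}
kwargs = _bedrock_model_kwargs(params)
if kwargs:
    params["model_kwargs"] = kwargs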

Test Error Log
tests.graphs.abstract_graph_test.TestAbstractGraph#test_create_llm[llm_config5-ChatBedrock]: self = <abstract_graph_test.TestGraph object at 0x7fa2b6a70d90>
llm_config = {'model': 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'region_name': 'IDK'}
    def _create_llm(self, llm_config: dict) -> object:
        """
        Create a large language model instance based on the configuration provided.
    
        Args:
            llm_config (dict): Configuration parameters for the language model.
    
        Returns:
            object: An instance of the language model client.
    
        Raises:
            KeyError: If the model is not supported.
        """
    
        llm_defaults = {"streaming": False}
        llm_params = {**llm_defaults, **llm_config}
        rate_limit_params = llm_params.pop("rate_limit", {})
    
        if rate_limit_params:
            requests_per_second = rate_limit_params.get("requests_per_second")
            max_retries = rate_limit_params.get("max_retries")
            if requests_per_second is not None:
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    llm_params["rate_limiter"] = InMemoryRateLimiter(
                        requests_per_second=requests_per_second
                    )
            if max_retries is not None:
                llm_params["max_retries"] = max_retries
    
        if "model_instance" in llm_params:
            try:
                self.model_token = llm_params["model_tokens"]
            except KeyError as exc:
                raise KeyError("model_tokens not specified") from exc
            return llm_params["model_instance"]
    
        known_providers = {
            "openai",
            "azure_openai",
            "google_genai",
            "google_vertexai",
            "ollama",
            "oneapi",
            "nvidia",
            "groq",
            "anthropic",
            "bedrock",
            "mistralai",
            "hugging_face",
            "deepseek",
            "ernie",
            "fireworks",
            "clod",
            "togetherai",
        }
    
        if "/" in llm_params["model"]:
            split_model_provider = llm_params["model"].split("/", 1)
            llm_params["model_provider"] = split_model_provider[0]
            llm_params["model"] = split_model_provider[1]
        else:
            possible_providers = [
                provider
                for provider, models_d in models_tokens.items()
                if llm_params["model"] in models_d
            ]
            if len(possible_providers) <= 0:
                raise ValueError(
                    f"""Provider {llm_params["model_provider"]} is not supported.
                                If possible, try to use a model instance instead."""
                )
            llm_params["model_provider"] = possible_providers[0]
            print(
                (
                    f"Found providers {possible_providers} for model {llm_params['model']}, using {llm_params['model_provider']}.\n"
                    "If it was not intended please specify the model provider in the graph configuration"
                )
            )
    
        if llm_params["model_provider"] not in known_providers:
            raise ValueError(
                f"""Provider {llm_params["model_provider"]} is not supported.
                             If possible, try to use a model instance instead."""
            )
    
        if llm_params.get("model_tokens", None) is None:
            try:
                self.model_token = models_tokens[llm_params["model_provider"]][
                    llm_params["model"]
                ]
            except KeyError:
                print(
                    f"""Max input tokens for model {llm_params["model_provider"]}/{llm_params["model"]} not found,
                    please specify the model_tokens parameter in the llm section of the graph configuration.
                    Using default token size: 8192"""
                )
                self.model_token = 8192
        else:
            self.model_token = llm_params["model_tokens"]
    
        try:
            if llm_params["model_provider"] not in {
                "oneapi",
                "nvidia",
                "ernie",
                "deepseek",
                "togetherai",
                "clod",
            }:
                if llm_params["model_provider"] == "bedrock":
                    llm_params["model_kwargs"] = {
>                       "temperature": llm_params.pop("temperature")
                    }
E                   KeyError: 'temperature'
scrapegraphai/graphs/abstract_graph.py:223: KeyError
During handling of the above exception, another exception occurred:
self = <abstract_graph_test.TestAbstractGraph object at 0x7fa2b6be8210>
llm_config = {'model': 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'region_name': 'IDK'}
expected_model = <class 'langchain_aws.chat_models.bedrock.ChatBedrock'>
    @pytest.mark.parametrize(
        "llm_config, expected_model",
        [
            (
                {"model": "openai/gpt-3.5-turbo", "openai_api_key": "sk-randomtest001"},
                ChatOpenAI,
            ),
            (
                {
                    "model": "azure_openai/gpt-3.5-turbo",
                    "api_key": "random-api-key",
                    "api_version": "no version",
                    "azure_endpoint": "https://www.example.com/",
                },
                AzureChatOpenAI,
            ),
            ({"model": "ollama/llama2"}, ChatOllama),
            ({"model": "oneapi/qwen-turbo", "api_key": "oneapi-api-key"}, OneApi),
            (
                {"model": "deepseek/deepseek-coder", "api_key": "deepseek-api-key"},
                DeepSeek,
            ),
            (
                {
                    "model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
                    "region_name": "IDK",
                },
                ChatBedrock,
            ),
        ],
    )
    def test_create_llm(self, llm_config, expected_model):
>       graph = TestGraph("Test prompt", {"llm": llm_config})
tests/graphs/abstract_graph_test.py:87: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/graphs/abstract_graph_test.py:19: in __init__
    super().__init__(prompt, config)
scrapegraphai/graphs/abstract_graph.py:60: in __init__
    self.llm_model = self._create_llm(config["llm"])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <abstract_graph_test.TestGraph object at 0x7fa2b6a70d90>
llm_config = {'model': 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'region_name': 'IDK'}
    def _create_llm(self, llm_config: dict) -> object:
        """
        Create a large language model instance based on the configuration provided.
    
        Args:
            llm_config (dict): Configuration parameters for the language model.
    
        Returns:
            object: An instance of the language model client.
    
        Raises:
            KeyError: If the model is not supported.
        """
    
        llm_defaults = {"streaming": False}
        llm_params = {**llm_defaults, **llm_config}
        rate_limit_params = llm_params.pop("rate_limit", {})
    
        if rate_limit_params:
            requests_per_second = rate_limit_params.get("requests_per_second")
            max_retries = rate_limit_params.get("max_retries")
            if requests_per_second is not None:
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    llm_params["rate_limiter"] = InMemoryRateLimiter(
                        requests_per_second=requests_per_second
                    )
            if max_retries is not None:
                llm_params["max_retries"] = max_retries
    
        if "model_instance" in llm_params:
            try:
                self.model_token = llm_params["model_tokens"]
            except KeyError as exc:
                raise KeyError("model_tokens not specified") from exc
            return llm_params["model_instance"]
    
        known_providers = {
            "openai",
            "azure_openai",
            "google_genai",
            "google_vertexai",
            "ollama",
            "oneapi",
            "nvidia",
            "groq",
            "anthropic",
            "bedrock",
            "mistralai",
            "hugging_face",
            "deepseek",
            "ernie",
            "fireworks",
            "clod",
            "togetherai",
        }
    
        if "/" in llm_params["model"]:
            split_model_provider = llm_params["model"].split("/", 1)
            llm_params["model_provider"] = split_model_provider[0]
            llm_params["model"] = split_model_provider[1]
        else:
            possible_providers = [
                provider
                for provider, models_d in models_tokens.items()
                if llm_params["model"] in models_d
            ]
            if len(possible_providers) <= 0:
                raise ValueError(
                    f"""Provider {llm_params["model_provider"]} is not supported.
                                If possible, try to use a model instance instead."""
                )
            llm_params["model_provider"] = possible_providers[0]
            print(
                (
                    f"Found providers {possible_providers} for model {llm_params['model']}, using {llm_params['model_provider']}.\n"
                    "If it was not intended please specify the model provider in the graph configuration"
                )
            )
    
        if llm_params["model_provider"] not in known_providers:
            raise ValueError(
                f"""Provider {llm_params["model_provider"]} is not supported.
                             If possible, try to use a model instance instead."""
            )
    
        if llm_params.get("model_tokens", None) is None:
            try:
                self.model_token = models_tokens[llm_params["model_provider"]][
                    llm_params["model"]
                ]
            except KeyError:
                print(
                    f"""Max input tokens for model {llm_params["model_provider"]}/{llm_params["model"]} not found,
                    please specify the model_tokens parameter in the llm section of the graph configuration.
                    Using default token size: 8192"""
                )
                self.model_token = 8192
        else:
            self.model_token = llm_params["model_tokens"]
    
        try:
            if llm_params["model_provider"] not in {
                "oneapi",
                "nvidia",
                "ernie",
                "deepseek",
                "togetherai",
                "clod",
            }:
                if llm_params["model_provider"] == "bedrock":
                    llm_params["model_kwargs"] = {
                        "temperature": llm_params.pop("temperature")
                    }
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    return init_chat_model(**llm_params)
            else:
                model_provider = llm_params.pop("model_provider")
    
                if model_provider == "clod":
                    return CLoD(**llm_params)
    
                if model_provider == "deepseek":
                    return DeepSeek(**llm_params)
    
                if model_provider == "ernie":
                    from langchain_community.chat_models import ErnieBotChat
    
                    return ErnieBotChat(**llm_params)
    
                elif model_provider == "oneapi":
                    return OneApi(**llm_params)
    
                elif model_provider == "togetherai":
                    try:
                        from langchain_together import ChatTogether
                    except ImportError:
                        raise ImportError(
                            """The langchain_together module is not installed.
                                          Please install it using 'pip install langchain-together'."""
                        )
                    return ChatTogether(**llm_params)
    
                elif model_provider == "nvidia":
                    try:
                        from langchain_nvidia_ai_endpoints import ChatNVIDIA
                    except ImportError:
                        raise ImportError(
                            """The langchain_nvidia_ai_endpoints module is not installed.
                                          Please install it using 'pip install langchain-nvidia-ai-endpoints'."""
                        )
                    return ChatNVIDIA(**llm_params)
    
        except Exception as e:
>           raise Exception(f"Error instancing model: {e}")
E           Exception: Error instancing model: 'temperature'
scrapegraphai/graphs/abstract_graph.py:266: Exception
tests.graphs.abstract_graph_test.TestAbstractGraph#test_create_llm_with_rate_limit[llm_config5-ChatBedrock]: self = <abstract_graph_test.TestGraph object at 0x7fa2b6a27810>
llm_config = {'model': 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'rate_limit': {'requests_per_second': 1}, 'region_name': 'IDK'}
    def _create_llm(self, llm_config: dict) -> object:
        """
        Create a large language model instance based on the configuration provided.
    
        Args:
            llm_config (dict): Configuration parameters for the language model.
    
        Returns:
            object: An instance of the language model client.
    
        Raises:
            KeyError: If the model is not supported.
        """
    
        llm_defaults = {"streaming": False}
        llm_params = {**llm_defaults, **llm_config}
        rate_limit_params = llm_params.pop("rate_limit", {})
    
        if rate_limit_params:
            requests_per_second = rate_limit_params.get("requests_per_second")
            max_retries = rate_limit_params.get("max_retries")
            if requests_per_second is not None:
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    llm_params["rate_limiter"] = InMemoryRateLimiter(
                        requests_per_second=requests_per_second
                    )
            if max_retries is not None:
                llm_params["max_retries"] = max_retries
    
        if "model_instance" in llm_params:
            try:
                self.model_token = llm_params["model_tokens"]
            except KeyError as exc:
                raise KeyError("model_tokens not specified") from exc
            return llm_params["model_instance"]
    
        known_providers = {
            "openai",
            "azure_openai",
            "google_genai",
            "google_vertexai",
            "ollama",
            "oneapi",
            "nvidia",
            "groq",
            "anthropic",
            "bedrock",
            "mistralai",
            "hugging_face",
            "deepseek",
            "ernie",
            "fireworks",
            "clod",
            "togetherai",
        }
    
        if "/" in llm_params["model"]:
            split_model_provider = llm_params["model"].split("/", 1)
            llm_params["model_provider"] = split_model_provider[0]
            llm_params["model"] = split_model_provider[1]
        else:
            possible_providers = [
                provider
                for provider, models_d in models_tokens.items()
                if llm_params["model"] in models_d
            ]
            if len(possible_providers) <= 0:
                raise ValueError(
                    f"""Provider {llm_params["model_provider"]} is not supported.
                                If possible, try to use a model instance instead."""
                )
            llm_params["model_provider"] = possible_providers[0]
            print(
                (
                    f"Found providers {possible_providers} for model {llm_params['model']}, using {llm_params['model_provider']}.\n"
                    "If it was not intended please specify the model provider in the graph configuration"
                )
            )
    
        if llm_params["model_provider"] not in known_providers:
            raise ValueError(
                f"""Provider {llm_params["model_provider"]} is not supported.
                             If possible, try to use a model instance instead."""
            )
    
        if llm_params.get("model_tokens", None) is None:
            try:
                self.model_token = models_tokens[llm_params["model_provider"]][
                    llm_params["model"]
                ]
            except KeyError:
                print(
                    f"""Max input tokens for model {llm_params["model_provider"]}/{llm_params["model"]} not found,
                    please specify the model_tokens parameter in the llm section of the graph configuration.
                    Using default token size: 8192"""
                )
                self.model_token = 8192
        else:
            self.model_token = llm_params["model_tokens"]
    
        try:
            if llm_params["model_provider"] not in {
                "oneapi",
                "nvidia",
                "ernie",
                "deepseek",
                "togetherai",
                "clod",
            }:
                if llm_params["model_provider"] == "bedrock":
                    llm_params["model_kwargs"] = {
>                       "temperature": llm_params.pop("temperature")
                    }
E                   KeyError: 'temperature'
scrapegraphai/graphs/abstract_graph.py:223: KeyError
During handling of the above exception, another exception occurred:
self = <abstract_graph_test.TestAbstractGraph object at 0x7fa2b6bea3d0>
llm_config = {'model': 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'rate_limit': {'requests_per_second': 1}, 'region_name': 'IDK'}
expected_model = <class 'langchain_aws.chat_models.bedrock.ChatBedrock'>
    @pytest.mark.parametrize(
        "llm_config, expected_model",
        [
            (
                {
                    "model": "openai/gpt-3.5-turbo",
                    "openai_api_key": "sk-randomtest001",
                    "rate_limit": {"requests_per_second": 1},
                },
                ChatOpenAI,
            ),
            (
                {
                    "model": "azure_openai/gpt-3.5-turbo",
                    "api_key": "random-api-key",
                    "api_version": "no version",
                    "azure_endpoint": "https://www.example.com/",
                    "rate_limit": {"requests_per_second": 1},
                },
                AzureChatOpenAI,
            ),
            (
                {"model": "ollama/llama2", "rate_limit": {"requests_per_second": 1}},
                ChatOllama,
            ),
            (
                {
                    "model": "oneapi/qwen-turbo",
                    "api_key": "oneapi-api-key",
                    "rate_limit": {"requests_per_second": 1},
                },
                OneApi,
            ),
            (
                {
                    "model": "deepseek/deepseek-coder",
                    "api_key": "deepseek-api-key",
                    "rate_limit": {"requests_per_second": 1},
                },
                DeepSeek,
            ),
            (
                {
                    "model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
                    "region_name": "IDK",
                    "rate_limit": {"requests_per_second": 1},
                },
                ChatBedrock,
            ),
        ],
    )
    def test_create_llm_with_rate_limit(self, llm_config, expected_model):
>       graph = TestGraph("Test prompt", {"llm": llm_config})
tests/graphs/abstract_graph_test.py:146: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/graphs/abstract_graph_test.py:19: in __init__
    super().__init__(prompt, config)
scrapegraphai/graphs/abstract_graph.py:60: in __init__
    self.llm_model = self._create_llm(config["llm"])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <abstract_graph_test.TestGraph object at 0x7fa2b6a27810>
llm_config = {'model': 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'rate_limit': {'requests_per_second': 1}, 'region_name': 'IDK'}
    def _create_llm(self, llm_config: dict) -> object:
        """
        Create a large language model instance based on the configuration provided.
    
        Args:
            llm_config (dict): Configuration parameters for the language model.
    
        Returns:
            object: An instance of the language model client.
    
        Raises:
            KeyError: If the model is not supported.
        """
    
        llm_defaults = {"streaming": False}
        llm_params = {**llm_defaults, **llm_config}
        rate_limit_params = llm_params.pop("rate_limit", {})
    
        if rate_limit_params:
            requests_per_second = rate_limit_params.get("requests_per_second")
            max_retries = rate_limit_params.get("max_retries")
            if requests_per_second is not None:
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    llm_params["rate_limiter"] = InMemoryRateLimiter(
                        requests_per_second=requests_per_second
                    )
            if max_retries is not None:
                llm_params["max_retries"] = max_retries
    
        if "model_instance" in llm_params:
            try:
                self.model_token = llm_params["model_tokens"]
            except KeyError as exc:
                raise KeyError("model_tokens not specified") from exc
            return llm_params["model_instance"]
    
        known_providers = {
            "openai",
            "azure_openai",
            "google_genai",
            "google_vertexai",
            "ollama",
            "oneapi",
            "nvidia",
            "groq",
            "anthropic",
            "bedrock",
            "mistralai",
            "hugging_face",
            "deepseek",
            "ernie",
            "fireworks",
            "clod",
            "togetherai",
        }
    
        if "/" in llm_params["model"]:
            split_model_provider = llm_params["model"].split("/", 1)
            llm_params["model_provider"] = split_model_provider[0]
            llm_params["model"] = split_model_provider[1]
        else:
            possible_providers = [
                provider
                for provider, models_d in models_tokens.items()
                if llm_params["model"] in models_d
            ]
            if len(possible_providers) <= 0:
                raise ValueError(
                    f"""Provider {llm_params["model_provider"]} is not supported.
                                If possible, try to use a model instance instead."""
                )
            llm_params["model_provider"] = possible_providers[0]
            print(
                (
                    f"Found providers {possible_providers} for model {llm_params['model']}, using {llm_params['model_provider']}.\n"
                    "If it was not intended please specify the model provider in the graph configuration"
                )
            )
    
        if llm_params["model_provider"] not in known_providers:
            raise ValueError(
                f"""Provider {llm_params["model_provider"]} is not supported.
                             If possible, try to use a model instance instead."""
            )
    
        if llm_params.get("model_tokens", None) is None:
            try:
                self.model_token = models_tokens[llm_params["model_provider"]][
                    llm_params["model"]
                ]
            except KeyError:
                print(
                    f"""Max input tokens for model {llm_params["model_provider"]}/{llm_params["model"]} not found,
                    please specify the model_tokens parameter in the llm section of the graph configuration.
                    Using default token size: 8192"""
                )
                self.model_token = 8192
        else:
            self.model_token = llm_params["model_tokens"]
    
        try:
            if llm_params["model_provider"] not in {
                "oneapi",
                "nvidia",
                "ernie",
                "deepseek",
                "togetherai",
                "clod",
            }:
                if llm_params["model_provider"] == "bedrock":
                    llm_params["model_kwargs"] = {
                        "temperature": llm_params.pop("temperature")
                    }
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    return init_chat_model(**llm_params)
            else:
                model_provider = llm_params.pop("model_provider")
    
                if model_provider == "clod":
                    return CLoD(**llm_params)
    
                if model_provider == "deepseek":
                    return DeepSeek(**llm_params)
    
                if model_provider == "ernie":
                    from langchain_community.chat_models import ErnieBotChat
    
                    return ErnieBotChat(**llm_params)
    
                elif model_provider == "oneapi":
                    return OneApi(**llm_params)
    
                elif model_provider == "togetherai":
                    try:
                        from langchain_together import ChatTogether
                    except ImportError:
                        raise ImportError(
                            """The langchain_together module is not installed.
                                          Please install it using 'pip install langchain-together'."""
                        )
                    return ChatTogether(**llm_params)
    
                elif model_provider == "nvidia":
                    try:
                        from langchain_nvidia_ai_endpoints import ChatNVIDIA
                    except ImportError:
                        raise ImportError(
                            """The langchain_nvidia_ai_endpoints module is not installed.
                                          Please install it using 'pip install langchain-nvidia-ai-endpoints'."""
                        )
                    return ChatNVIDIA(**llm_params)
    
        except Exception as e:
>           raise Exception(f"Error instancing model: {e}")
E           Exception: Error instancing model: 'temperature'
scrapegraphai/graphs/abstract_graph.py:266: Exception

β˜‚οΈ Coverage Improvements

Coverage improvements by file:

  β€’ tests/test_chromium.py

    New coverage: 17.24%
    Improvement: +0.00%

  β€’ tests/test_scrape_do.py

    New coverage: 100.00%
    Improvement: +29.41%

🎨 Final Touches

  β€’ I ran the hooks included in the pre-commit config.

Settings | Logs | CodeBeaver

@codebeaver-ai codebeaver-ai bot mentioned this pull request Apr 14, 2025
@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. bug Something isn't working tests Improvements or additions to test labels Apr 14, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Apr 15, 2025
@VinciGit00 VinciGit00 merged commit b64f5fe into pre/beta Apr 15, 2025
3 checks passed
@VinciGit00 VinciGit00 deleted the codebeaver/pre/beta-963 branch April 15, 2025 10:12

πŸŽ‰ This PR is included in version 1.47.0-beta.1 πŸŽ‰

The release is available on:

Your semantic-release bot πŸ“¦πŸš€


πŸŽ‰ This PR is included in version 1.47.0 πŸŽ‰

The release is available on:

Your semantic-release bot πŸ“¦πŸš€
