TestRail AI Validator

This repository falls under the larger Project Calliope.

Overview

This project integrates TestRail with OpenAI to improve how test cases are managed and reviewed. By leveraging OpenAI’s language models, the integration aims to:

  • Enhance Testing Efficiency: Automate the review of test cases to ensure they align with best practices and cover critical scenarios effectively.
  • Identify Gaps in Coverage: Analyze test cases against user stories in TestRail to detect gaps or weaknesses in testing strategies.
  • Streamline Feedback: Provide actionable insights for improving test cases, reducing human error, and accelerating the quality assurance lifecycle.
  • Improve Collaboration: Enable seamless collaboration across teams by offering AI-driven suggestions and solutions based on the data in TestRail.

This integration is designed to support teams in building a robust, efficient, and scalable QA process that minimizes defects and enhances software quality.

Key Functionalities

This Python notebook contains tools and utilities designed to integrate with TestRail and OpenAI APIs. The main functionalities include:

  1. Logging and debugging for tracking operations and issues.
  2. Utility functions for date conversion between human-readable formats and UNIX timestamps.
  3. Interaction with TestRail APIs using credentials from a .env file for secure management of secrets.
  4. OpenAI API integration for advanced capabilities powered by AI models.
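
As a rough illustration of item 3 above, the sketch below loads the credentials with python-dotenv and prepares an authenticated requests session for TestRail. The variable and session names are illustrative and may not match the notebook's actual code.

```python
# Illustrative only: load TestRail credentials from .env and build an
# authenticated requests session for the TestRail API.
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads the variables from the project's .env file

TESTRAIL_BASE_URL = os.getenv("TESTRAIL_BASE_URL")  # e.g. https://<your-org>.testrail.io
TESTRAIL_USERNAME = os.getenv("TESTRAIL_USERNAME")
TESTRAIL_API_KEY = os.getenv("TESTRAIL_API_KEY")

session = requests.Session()
session.auth = (TESTRAIL_USERNAME, TESTRAIL_API_KEY)  # TestRail uses HTTP basic auth with an API key
session.headers.update({"Content-Type": "application/json"})
```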

Architecture Overview

The integration follows a modular and scalable architecture designed to handle interactions between TestRail and OpenAI effectively. Below is a high-level overview of the workflow:

  1. Environment Setup:

    • Credentials for TestRail and OpenAI are stored in a .env file so that secrets stay out of the code and API communication remains secure.
    • The notebook initializes these credentials for API authentication.
  2. Data Extraction from TestRail:

    • The script communicates with the TestRail API to retrieve user stories, associated test cases, and other metadata.
    • This data forms the foundation for analysis.
  3. Test Case Analysis:

    • Test cases are processed and fed into OpenAI’s gpt-4o-mini model.
    • The AI analyzes the test cases for:
      • Coverage of user stories.
      • Consistency with best practices.
      • Potential gaps or redundancies.
  4. Actionable Suggestions:

    • Based on the analysis, OpenAI generates:
      • Recommendations for improving test cases.
      • Suggestions for additional test scenarios to enhance coverage.
    • The results are formatted for easy understanding and actionable follow-up.
  5. Logging and Debugging:

    • Detailed logs are maintained to track all operations, ensuring smooth debugging and auditability.
  6. Output and Reporting:

    • AI-generated suggestions and insights are compiled into a structured format.
    • These outputs can be exported for review and incorporated into TestRail for iterative improvements.
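
The workflow above can be condensed into a short sketch. Everything here is illustrative rather than the notebook's actual code: the project ID is hypothetical, the TestRail endpoint follows its public API v2 (get_cases), and the OpenAI call uses the current openai Python SDK interface.

```python
# Condensed, illustrative version of steps 1-6; not the notebook's actual code.
import json
import os

import requests
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # step 1: environment setup
base_url = os.getenv("TESTRAIL_BASE_URL")
auth = (os.getenv("TESTRAIL_USERNAME"), os.getenv("TESTRAIL_API_KEY"))

PROJECT_ID = 1  # hypothetical TestRail project ID

# Step 2: pull test cases for the project from TestRail.
resp = requests.get(
    f"{base_url}/index.php?/api/v2/get_cases/{PROJECT_ID}",
    auth=auth,
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
payload = resp.json()
# Recent TestRail releases paginate under a "cases" key; older ones return a plain list.
cases = payload["cases"] if isinstance(payload, dict) else payload

# Steps 3-4: ask gpt-4o-mini to review coverage, consistency, and gaps.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a QA reviewer. Flag coverage gaps, inconsistencies, "
                       "and redundancies in these test cases and suggest improvements.",
        },
        {"role": "user", "content": json.dumps(cases)[:50000]},  # truncate to stay within context limits
    ],
)

# Steps 5-6: the suggestions can then be logged and compiled into a report.
print(completion.choices[0].message.content)
```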

Prerequisites

To run this notebook, ensure you have the following installed and configured:

  • Python 3.8+
  • Required Python packages (listed in requirements.txt or in the import statements):
    • requests
    • python-dotenv
    • openai
    • pprint and logging (part of the Python standard library; no separate installation needed)
  • A .env file at the project root containing:
    • TESTRAIL_BASE_URL
    • TESTRAIL_USERNAME
    • TESTRAIL_API_KEY
    • OPENAI_API_KEY
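
For reference, a .env file with these variables might look like the following (all values are placeholders):

```
TESTRAIL_BASE_URL=https://your-org.testrail.io
TESTRAIL_USERNAME=you@example.com
TESTRAIL_API_KEY=your-testrail-api-key
OPENAI_API_KEY=your-openai-api-key
```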

Key Features

  1. Logging Setup:

    • Logs debug messages to a file named with the current date.
    • Enables robust tracking of application events and errors.
  2. Utility Functions:

    • get_unix_time(date_str: str): Converts mm/dd/yyyy formatted dates to UNIX timestamps.
    • get_human_time(seconds: str): Converts UNIX timestamps back to human-readable dates.
  3. TestRail and OpenAI API Integration:

    • Utilizes OpenAI’s gpt-4o-mini model for analysis.
    • API configurations are securely handled using python-dotenv.
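
Below is a minimal sketch of features 1 and 2, matching the signatures documented above; the notebook's actual implementation may differ in details such as log format or time zone handling.

```python
# Illustrative sketch matching the documented helpers; not the notebook's exact code.
import logging
from datetime import datetime

# Feature 1: debug-level logging to a file named with the current date
# (written into the logs/ directory created during setup).
logging.basicConfig(
    filename=f"logs/{datetime.now():%Y-%m-%d}.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)

# Feature 2: date utilities with the signatures listed above.
def get_unix_time(date_str: str) -> int:
    """Convert an mm/dd/yyyy date string to a UNIX timestamp."""
    return int(datetime.strptime(date_str, "%m/%d/%Y").timestamp())

def get_human_time(seconds: str) -> str:
    """Convert a UNIX timestamp back to an mm/dd/yyyy date string."""
    return datetime.fromtimestamp(int(seconds)).strftime("%m/%d/%Y")
```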

Setup Instructions

  1. Clone the repository containing this notebook.
  2. Install dependencies: pip install -r requirements.txt
  3. Set up your .env file with required credentials.
  4. Create a logs directory in the project root for logging.

Execution

  1. Run the notebook in your preferred Python environment (e.g., Jupyter Notebook, VSCode, or Google Colab).
  2. Follow the markdown cells for detailed instructions and examples.

Future Enhancements

  • Expand date utility functions for more robust time zone support.
  • Add API examples for pulling and processing data from TestRail.
  • Extend OpenAI integration for automated report generation.


Authors

Created by Adam Satterfield and Mike McDermott

License

This project is licensed under the MIT License.
