This framework (a complex prompt) guides Reasoning AI systems to process unstructured AI feedback and feedback-on-feedback and to generate a prioritized list of software improvements. The output is a human-reviewable, Markdown-formatted document that helps developers implement changes efficiently.
- Parses free-form text feedback from multiple AI sources.
- Extracts key information: issues, solutions, evidence, and reasoning.
- Categorizes and prioritizes improvements based on severity.
- Calculates confidence scores using a multi-factor algorithm, including weighted AI support based on feedback quality (an illustrative sketch follows this list).
- Flags ambiguities, conflicts, and potential false positives.
- Analyzes conflicting code suggestions, attempting to provide a reasoned recommendation.
- Evaluates feedback quality with detailed assessment metrics (numerical and categorical).
- Uses feedback-on-feedback to critically evaluate initial AI analyses.
- Formats output as a structured Markdown document.
- Suggests names for new code components when refactoring.
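
The confidence scoring mentioned above is carried out by the Reasoning AI following the prompt's instructions, not by code shipped with this project. The Python sketch below is only a rough illustration of what "weighted AI support plus other factors" can mean; the specific factors, weights, and bonuses are assumptions chosen for the example, not the framework's exact algorithm.

```python
# Illustrative sketch only: the real scoring is performed by the Reasoning AI
# following the prompt; the factors and weights here are assumptions.

def confidence_score(supporting_quality, total_sources, has_code_reference, severity):
    """Combine several factors into a 0-1 confidence score.

    supporting_quality: quality ratings (0-1) for the AIs that raised or endorsed the issue.
    total_sources: number of AI feedback sources consulted.
    has_code_reference: whether the feedback points to concrete code locations.
    severity: "low" | "medium" | "high".
    """
    # Weighted AI support: higher-quality feedback counts for more.
    support = sum(supporting_quality) / max(total_sources, 1)

    # Evidence bonus for feedback that cites specific files or lines.
    evidence = 0.2 if has_code_reference else 0.0

    # Severity nudges the score slightly so critical issues surface first.
    severity_bonus = {"low": 0.0, "medium": 0.05, "high": 0.1}[severity]

    return min(1.0, support + evidence + severity_bonus)


# Example: two of three AIs flag an issue, with quality ratings 0.9 and 0.6,
# concrete code references, and high severity -> 0.8.
print(confidence_score([0.9, 0.6], total_sources=3, has_code_reference=True, severity="high"))
```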
- Code review and refactoring projects.
- Technical debt management.
- Quality assurance processes.
- AI-augmented development workflows.
- Meta-analysis of AI code suggestions.
- A Reasoning AI system capable of following complex instructions (Claude 3 Opus, GPT-4, etc.).
- Unstructured AI feedback files from code analysis.
- Optional: Feedback-on-feedback files (meta-analysis).
A human developer will use this framework as follows:
- Gather Unstructured Feedback: Collect unstructured text feedback on your code from several different AI systems. This feedback should analyze your code and suggest improvements.
- "Feedback on Feedback" or "Meta-Analysis" (Optional but Highly Recommended): Gather additional unstructured text feedback where AIs analyze and critique the initial feedback. This helps identify conflicting opinions and improve the overall quality of the analysis.
- Prepare Input Files:
  - Create separate text files for each AI's initial feedback (e.g., `feedback_ai1.txt`, `feedback_ai2.txt`).
  - Create separate text files for each AI's feedback-on-feedback (e.g., `meta_feedback_ai1.txt`, `meta_feedback_ai2.txt`). These files should clearly indicate which initial feedback they are referencing.
  - Ensure your code files are accessible to the Reasoning AI.
- Framework as Prompt: Copy the entire framework (the prompt) from prompt.md.
- Send to Reasoning AI: Input the following to your Reasoning AI, in this order (a minimal assembly sketch appears after this list):
  - The complete framework (prompt) from prompt.md.
  - The initial AI feedback files, one after another.
  - The feedback-on-feedback files, one after another.
  - An instruction to begin processing, referencing your code files as context, something like: "Begin processing the feedback, using the provided code files as context."
- AI-Generated List: The Reasoning AI, guided by the framework, will process the input and produce a draft Actionable Improvement List (in Markdown format).
- Human Review and Refinement: Critically review and manually adjust the AI-generated list. Apply your domain knowledge and judgment. You may add, modify, or remove entries. The AI-generated list is a starting point, not a final product.
- Iterative Improvement: This process can be repeated with updated code and feedback.
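
As a convenience for the "Send to Reasoning AI" step, the sketch below shows one way to assemble the prompt and feedback files into a single input in the order listed above. It is a minimal illustration under assumptions, not part of the framework: the file names follow the examples in this README, and how you actually deliver the combined text (chat interface, API call, etc.) depends on the Reasoning AI you use.

```python
from pathlib import Path

# Assemble the input in the order described above: prompt first, then initial
# feedback, then feedback-on-feedback, then the instruction to begin.
# File names follow the examples in this README; adjust them to your own layout.
prompt = Path("prompt.md").read_text()
initial_feedback = [Path(p).read_text() for p in ["feedback_ai1.txt", "feedback_ai2.txt"]]
meta_feedback = [Path(p).read_text() for p in ["meta_feedback_ai1.txt", "meta_feedback_ai2.txt"]]

full_input = "\n\n".join(
    [prompt, *initial_feedback, *meta_feedback,
     "Begin processing the feedback, using the provided code files as context."]
)

# How you deliver `full_input` is up to you (chat interface, API call, etc.);
# writing it to a file you can paste or upload is one simple option.
Path("combined_input.txt").write_text(full_input)
```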
Goal: The framework aims to produce the most useful and accurate draft Actionable Improvement List possible, minimizing the manual effort required for post-processing and validation.
<Provide the AI with the prompt>
AI Feedback File 1:
The OptimizedATS class is handling too many responsibilities, which violates the Single Responsibility Principle. This class currently manages configuration, NLP processing, keyword extraction, parallel processing, and output formatting. This makes the code harder to maintain and test.
Feedback-on-Feedback File 1:
I've reviewed the initial feedback about the OptimizedATS class. The analysis is accurate - the class does handle multiple responsibilities. The feedback provides specific file references and code locations, which makes it highly actionable.
See example_output.md for a complete example of the generated improvement list. (Note: You would create this example file.)
The framework employs a structured approach to processing feedback (an illustrative sketch of the kind of record these stages pass along follows the list):
- Parsing Stage: Extracts key information from unstructured text.
- Consolidation Stage: Groups related feedback, identifies contradictions, and flags potential false positives.
- Prioritization Stage: Assigns initial priority based on severity and weighted consensus.
- Assessment Stage: Evaluates the quality of each feedback source (using numerical and categorical ratings) and analyzes conflicting suggestions.
- Output Generation Stage: Creates a standardized document with all findings.
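
The stages above are expressed as instructions in the prompt rather than as code. As a purely illustrative assumption, the record below shows the kind of per-finding information the parsing stage could extract and the later stages enrich; the field names mirror the items listed under the framework's features and are not an actual schema shipped with the prompt.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record illustrating what the parsing stage extracts and the
# later stages enrich; not an actual schema defined by the framework.
@dataclass
class FeedbackItem:
    issue: str                       # what the AI says is wrong
    solution: str                    # the suggested fix
    evidence: str                    # file/line references or reasoning cited
    source: str                      # which AI produced the feedback
    severity: str = "medium"         # set during prioritization
    quality: str = "unrated"         # categorical rating from the assessment stage
    quality_score: float = 0.0       # numerical rating from the assessment stage
    confidence: float = 0.0          # multi-factor confidence score
    conflicts_with: List[str] = field(default_factory=list)  # contradictory findings
    flags: List[str] = field(default_factory=list)           # e.g. "ambiguous", "possible false positive"
```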
For a detailed explanation of the algorithms and logic, see technical_details.md. (Note: You would create this file if you wanted to provide more technical depth.)
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository.
- Create your feature branch (`git checkout -b feature/amazing-feature`).
- Commit your changes (`git commit -m 'Add some amazing feature'`).
- Push to the branch (`git push origin feature/amazing-feature`).
- Open a Pull Request.
If you have questions or feedback, please open an issue on this repository.
- The open-source AI community for inspiration and feedback.
- All contributors who help improve this framework.