Toolkit for Building Smart AI Assistants
Hey there, developer! 👋 Ever wanted to make your own AI friend that can chat, think, and help you out? Well, you're in luck! Welcome to the LLM Agents PHP ecosystem—your perfect place to build cool AI assistants that will wow your friends. Let's dive in and start creating something awesome together! 🚀🤖
Here's a UML sequence diagram that shows you exactly what goes down when a user asks their smart home agent to turn on the kitchen light. It's like a behind-the-scenes tour of your code in action!
This sequence demonstrates how the different packages in the LLM Agents ecosystem work together to process a user's request, from natural language input to executing a real-world action and providing a response. It showcases the role of each major component in the process, including prompt generation, LLM interaction, tool execution, and data mapping.
```mermaid
sequenceDiagram
actor User
participant CLI as CLI Chat (cli-chat)
participant CS as ChatService (agents)
participant AE as AgentExecutor (agents)
participant PG as PromptGenerator (prompt-generator)
participant OAI as OpenAIClient (openai-client)
participant SHC as SmartHomeControl (agent-specific)
participant JSM as JSONSchemaMapper (json-schema-mapper)
User->>CLI: "Turn on the light in the kitchen"
activate CLI
CLI->>CS: ask(sessionUuid, message)
activate CS
CS->>AE: execute(agent, prompt)
activate AE
AE->>PG: generate(agent, userPrompt, context)
activate PG
PG-->>AE: generatedPrompt
deactivate PG
AE->>OAI: generate(context, prompt, options)
activate OAI
OAI-->>AE: LLM Response (tool call)
deactivate OAI
AE->>JSM: toObject(toolCallJson, ToolCallInput::class)
activate JSM
JSM-->>AE: toolCallInput
deactivate JSM
AE->>SHC: execute(toolCallInput)
activate SHC
SHC-->>AE: actionResult
deactivate SHC
AE->>JSM: toJsonSchema(actionResult)
activate JSM
JSM-->>AE: resultSchema
deactivate JSM
AE->>OAI: generate(context, updatedPrompt, options)
activate OAI
OAI-->>AE: Final LLM Response
deactivate OAI
AE-->>CS: executionResult
deactivate AE
CS-->>CLI: response
deactivate CS
CLI-->>User: "I've turned on the kitchen light for you."
deactivate CLI
```
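In that flow, SmartHomeControl is an agent-specific tool the LLM can call. Here's a minimal sketch of what such a tool could look like, assuming a `PhpTool`-style base class from llm-agents/agents with a typed input object and an `execute()` method. The namespace, constructor arguments, and the `TurnOnLightInput` class are illustrative assumptions, not the definitive API.

```php
use LLM\Agents\Tool\PhpTool;

// Hypothetical input object that the JSON Schema mapper hydrates
// from the LLM's tool-call arguments.
final class TurnOnLightInput
{
    public function __construct(
        public readonly string $room,
    ) {}
}

// Illustrative smart-home tool: it receives the mapped input, performs the
// real-world action, and returns a result the executor feeds back to the LLM.
final class SmartHomeControl extends PhpTool
{
    public function __construct()
    {
        parent::__construct(
            name: 'turn_on_light',
            inputSchema: TurnOnLightInput::class,
            description: 'Turns on the light in the given room.',
        );
    }

    public function execute(object $input): string
    {
        // Call your real smart-home API here instead of this stub.
        return \json_encode(['status' => 'ok', 'room' => $input->room]);
    }
}
```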
Here's a quick look at our main packages and how they fit together:
```mermaid
graph TD
A[llm-agents/agents] --> B[llm-agents/prompt-generator]
A --> C[llm-agents/openai-client]
A --> D[llm-agents/json-schema-mapper]
E[llm-agents/cli-chat] --> A
E --> B
E --> C
```
llm-agents/agents is your main toolkit for creating smart AI agents. Think of it as the command center for your AI assistants; a short agent-definition sketch follows the feature list below.
Key Features:
- Agent Creation: Build custom AI agents with specific skills and knowledge.
- Tool Integration: Give your agents superpowers by adding tools they can use.
- Memory Management: Help your agents remember important stuff from conversations.
- Decision Making: Let your agents figure out the best way to handle tasks.
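Here's that hedged sketch of defining an agent with a model and a tool attached. The Agent, AgentAggregate, Model, and ToolLink names follow the llm-agents/agents conventions, but treat the import paths and constructor signatures as assumptions rather than the exact API.

```php
use LLM\Agents\Agent\Agent;
use LLM\Agents\Agent\AgentAggregate;
use LLM\Agents\Solution\Model;
use LLM\Agents\Solution\ToolLink;

final class SmartHomeAgent extends AgentAggregate
{
    public const NAME = 'smart_home';

    public static function create(): self
    {
        // Core identity and system instruction of the agent.
        $agent = new Agent(
            key: self::NAME,
            name: 'Smart Home Assistant',
            description: 'Controls lights and other devices around the house.',
            instruction: 'You are a smart home assistant. Use the available tools to control devices.',
        );

        $aggregate = new self($agent);

        // Which LLM the agent should use (model name is illustrative).
        $aggregate->addAssociation(new Model(model: 'gpt-4o-mini'));

        // Give the agent a tool it can call, e.g. the SmartHomeControl sketch above.
        $aggregate->addAssociation(new ToolLink(name: 'turn_on_light'));

        return $aggregate;
    }
}
```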
Under the hood, the agent executor runs a pipeline of interceptors. This diagram shows how they work together to process your request, interact with the AI model, and handle the response.
```mermaid
sequenceDiagram
participant CS as ChatService
participant AEP as AgentExecutorPipeline
participant GPI as GeneratePromptInterceptor
participant IMI as InjectModelInterceptor
participant ITI as InjectToolsInterceptor
participant IOI as InjectOptionsInterceptor
participant LLM as LLMInterface
participant IRIP as InjectResponseIntoPromptInterceptor
participant P as Prompt
CS->>AEP: execute(agent, prompt, context, options)
activate AEP
AEP->>GPI: execute(input, next)
activate GPI
GPI->>P: generate(agent, userPrompt, context)
P-->>GPI: generatedPrompt
GPI-->>AEP: updatedInput
deactivate GPI
AEP->>IMI: execute(input, next)
activate IMI
IMI->>P: injectModel(agent.getModel())
P-->>IMI: updatedPrompt
IMI-->>AEP: updatedInput
deactivate IMI
AEP->>ITI: execute(input, next)
activate ITI
ITI->>P: injectTools(agent.getTools())
P-->>ITI: updatedPrompt
ITI-->>AEP: updatedInput
deactivate ITI
AEP->>IOI: execute(input, next)
activate IOI
IOI->>P: injectOptions(agent.getConfiguration())
P-->>IOI: updatedPrompt
IOI-->>AEP: updatedInput
deactivate IOI
AEP->>LLM: generate(context, prompt, options)
activate LLM
LLM-->>AEP: LLMResponse
deactivate LLM
AEP->>IRIP: execute(input, next)
activate IRIP
IRIP->>P: injectResponse(LLMResponse)
P-->>IRIP: updatedPrompt
IRIP-->>AEP: updatedInput
deactivate IRIP
AEP-->>CS: executionResult
deactivate AEP
```
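Here's a hedged sketch of wiring that pipeline up. The interceptor class names come from the diagram above; the executor interface, its withInterceptor() method, the namespaces, and the use of a PSR-11 container to resolve constructor dependencies are all assumptions.

```php
use LLM\Agents\AgentExecutor\ExecutorInterface;
use LLM\Agents\AgentExecutor\Interceptor\GeneratePromptInterceptor;
use LLM\Agents\AgentExecutor\Interceptor\InjectModelInterceptor;
use LLM\Agents\AgentExecutor\Interceptor\InjectOptionsInterceptor;
use LLM\Agents\AgentExecutor\Interceptor\InjectResponseIntoPromptInterceptor;
use LLM\Agents\AgentExecutor\Interceptor\InjectToolsInterceptor;
use Psr\Container\ContainerInterface;

/** @var ContainerInterface $container */

// Resolve the executor and its interceptors from the container so their
// dependencies (prompt generator, model/tool providers, ...) are injected.
$executor = $container->get(ExecutorInterface::class)->withInterceptor(
    $container->get(GeneratePromptInterceptor::class),
    $container->get(InjectModelInterceptor::class),
    $container->get(InjectToolsInterceptor::class),
    $container->get(InjectOptionsInterceptor::class),
    $container->get(InjectResponseIntoPromptInterceptor::class),
);

// Matches the call shown in the diagram: execute(agent, prompt, context, options).
$execution = $executor->execute($agent, $prompt, $context, $options);
```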
llm-agents/prompt-generator helps your agents form their thoughts and responses in a structured way.
Key Features:
- Dynamic Prompt Creation: Generate prompts based on the conversation context.
- Interceptor System: Customize how prompts are built using a flexible interceptor approach.
- Context Awareness: Include relevant information in prompts for more accurate responses.
Example Usage:
```php
use LLM\Agents\PromptGenerator\PromptGeneratorPipeline;
use LLM\Agents\PromptGenerator\Interceptors\InstructionGenerator;
use LLM\Agents\PromptGenerator\Interceptors\AgentMemoryInjector;

$promptGenerator = new PromptGeneratorPipeline();

// withInterceptor() returns the configured pipeline, so keep the result.
$promptGenerator = $promptGenerator->withInterceptor(
    new InstructionGenerator(),
    new AgentMemoryInjector(),
);

$prompt = $promptGenerator->generate($agent, $userMessage, $context);
```
This sequence diagram shows how the prompt generator pipeline assembles the final prompt, one interceptor at a time.
```mermaid
sequenceDiagram
participant AE as AgentExecutor
participant PGP as PromptGeneratorPipeline
participant IG as InstructionGenerator
participant AMI as AgentMemoryInjector
participant LAI as LinkedAgentsInjector
participant UPI as UserPromptInjector
participant P as Prompt
AE->>PGP: generate(agent, userPrompt, context)
activate PGP
PGP->>IG: generate(input, next)
activate IG
IG->>P: add system instruction message to prompt
P-->>IG: updatedPrompt
IG-->>PGP: updatedInput
deactivate IG
PGP->>AMI: generate(input, next)
activate AMI
AMI->>P: add agent memory message to prompt
P-->>AMI: updatedPrompt
AMI-->>PGP: updatedInput
deactivate AMI
PGP->>LAI: generate(input, next)
activate LAI
LAI->>P: add linked agents info message to prompt
P-->>LAI: updatedPrompt
LAI-->>PGP: updatedInput
deactivate LAI
PGP->>UPI: generate(input, next)
activate UPI
UPI->>P: add user message to prompt
P-->>UPI: updatedPrompt
UPI-->>PGP: updatedInput
deactivate UPI
PGP-->>AE: finalGeneratedPrompt
deactivate PGP
```
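The earlier example registered only the first two interceptors. To reproduce the full pipeline from the diagram, you would also add the linked-agents and user-prompt injectors; their class names are taken from the diagram above, while the LinkedAgentsInjector constructor dependencies shown here are assumptions.

```php
use LLM\Agents\PromptGenerator\Interceptors\AgentMemoryInjector;
use LLM\Agents\PromptGenerator\Interceptors\InstructionGenerator;
use LLM\Agents\PromptGenerator\Interceptors\LinkedAgentsInjector;
use LLM\Agents\PromptGenerator\Interceptors\UserPromptInjector;
use LLM\Agents\PromptGenerator\PromptGeneratorPipeline;

$pipeline = new PromptGeneratorPipeline();

// Interceptors run in registration order, matching the sequence above:
// system instructions, agent memory, linked agents info, then the user message.
$pipeline = $pipeline->withInterceptor(
    new InstructionGenerator(),
    new AgentMemoryInjector(),
    new LinkedAgentsInjector($agentRegistry, $schemaMapper), // assumed dependencies
    new UserPromptInjector(),
);

$prompt = $pipeline->generate($agent, $userMessage, $context);
```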
llm-agents/openai-client is your agents' direct line to OpenAI's powerful language models; a short usage sketch follows the feature list.
Key Features:
- Easy API Integration: Simplified way to connect with OpenAI's services.
- Model Selection: Choose which AI model your agents should use.
- Response Streaming: Get responses in real-time for a more dynamic interaction.
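In code, the client sits behind the same generate(context, prompt, options) call the executor makes in the diagrams above. Here's a hedged sketch of calling it directly; the LLMInterface import path and the shape of the response object are assumptions, and in a real application the client would normally be resolved from your DI container.

```php
use LLM\Agents\LLM\LLMInterface;

/** @var LLMInterface $llm The OpenAI client from llm-agents/openai-client. */

// Same call the AgentExecutor makes: context, prompt, and per-request options
// such as model choice or streaming settings.
$response = $llm->generate($context, $prompt, $options);

// Assumed response shape: a content payload you can print or post-process.
echo $response->content;
```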
llm-agents/json-schema-mapper helps your agents understand and create structured data easily; a short sketch follows the feature list.
Key Features:
- PHP to JSON Schema: Convert PHP classes to JSON schemas for validation.
- JSON to PHP Objects: Turn JSON data into PHP objects for easy manipulation.
- Flexible Mapping: Handle complex data structures with ease.
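Here's a hedged sketch of both directions, using the toJsonSchema() and toObject() calls from the first sequence diagram. The SchemaMapper class name, its namespace, and its constructor are assumptions; TurnOnLightInput is the illustrative input class from the tool sketch earlier.

```php
use LLM\Agents\JsonSchema\Mapper\SchemaMapper;

$mapper = new SchemaMapper(/* underlying (de)serializer, if one is required */);

// PHP class -> JSON Schema, e.g. to describe a tool's input to the LLM.
$schema = $mapper->toJsonSchema(TurnOnLightInput::class);

// JSON from the LLM's tool call -> typed PHP object.
$input = $mapper->toObject('{"room": "kitchen"}', TurnOnLightInput::class);

echo $input->room; // "kitchen"
```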
llm-agents/cli-chat gives you a ready-to-use command-line interface for chatting with your AI agents; a wiring sketch follows the feature list.
Key Features:
- Interactive CLI: Easy-to-use interface for chatting with your agents.
- Session Management: Keep track of conversation history and context.
- Tool Call Handling: Manage and display results when your agent uses tools.
- Customizable Output: Adjust how responses and information are displayed.
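Here's a hedged sketch of exposing the chat in a Symfony Console application. The ChatCommand class name and namespace are hypothetical stand-ins for whatever command class the package ships, and resolving it from a PSR-11 container is an assumption; the Symfony Console calls themselves are standard.

```php
use Psr\Container\ContainerInterface;
use Symfony\Component\Console\Application;

/** @var ContainerInterface $container */

// Hypothetical: the chat command shipped by llm-agents/cli-chat, with its
// ChatService and session storage dependencies injected by the container.
$chatCommand = $container->get(\LLM\Agents\Chat\Console\ChatCommand::class);

$app = new Application('My AI Assistant');
$app->add($chatCommand);
$app->run();
```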
By combining these packages, you can create a powerful and flexible AI assistant system.