---
title: First Agent Template
emoji: 🤖
colorFrom: pink
colorTo: yellow
sdk: gradio
sdk_version: 5.15.0
app_file: app.py
pinned: false
---
# First Agent Template

A conversational agent built with smolagents that can connect to various language models, perform web searches, create visualizations, execute code, and much more.

## Overview

This project provides a flexible and powerful conversational agent that can:
- Connect to different types of language models (local or cloud-based)
- Perform web searches to retrieve up-to-date information
- Visit and extract content from webpages
- Execute shell commands with appropriate security measures
- Create and modify files
- Generate data visualizations based on natural language requests
- Execute Python code within the chat interface
The agent is available through two interfaces:
- A Gradio interface (original)
- A Streamlit interface (new) with enhanced features and configuration options
## Prerequisites

- Python 3.8+
- A language model, which can be one of:
  - A local model served by LM Studio (or another OpenAI-compatible server)
  - An OpenRouter API key
  - A Hugging Face API endpoint
## Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/yourusername/smolagents-conversational-agent.git
   cd smolagents-conversational-agent
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```
## Model Setup

You have several options for the language model:

### Option 1: LM Studio (local)

1. Download and install LM Studio
2. Launch LM Studio and download a model (e.g., Mistral 7B, Llama 2)
3. Start the local server by clicking "Start Server"
4. Note the server URL (typically `http://localhost:1234/v1`)
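Once the server is running, any OpenAI-compatible client can talk to it. As a rough sketch (the endpoint path and payload shape follow the standard OpenAI chat-completions convention; the model name is a placeholder):

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:1234/v1", "local-model", "Hello!")
# Send with request.urlopen(req) once LM Studio's server is running.
```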
### Option 2: OpenRouter

1. Create an account on OpenRouter
2. Get your API key from the dashboard
3. Use the OpenRouter URL and your API key in the agent configuration

### Option 3: Hugging Face API Endpoints

If you have access to Hugging Face API endpoints, you can use them directly. Configure the URL and parameters in the agent interface.
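All three options boil down to a base URL plus an optional API key. A hypothetical helper (the function name and default URLs are illustrative, not the project's actual API) might normalize the settings like this:

```python
def model_config(model_type: str, api_key: str = "") -> dict:
    """Map a UI model type to connection settings (illustrative defaults only)."""
    defaults = {
        "OpenAI Server": {"api_base": "http://localhost:1234/v1"},
        "Hugging Face API": {"api_base": "https://api-inference.huggingface.co"},
    }
    if model_type not in defaults:
        raise ValueError(f"Unknown model type: {model_type}")
    cfg = dict(defaults[model_type])
    if api_key:
        cfg["api_key"] = api_key
    return cfg

print(model_config("OpenAI Server"))
```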
## Usage

### Streamlit Interface

The Streamlit interface offers a more user-friendly experience with additional features:

1. Launch the Streamlit application:

   ```bash
   streamlit run streamlit_app.py
   ```

2. Access the interface in your web browser at `http://localhost:8501`

3. Configure your model in the sidebar:
   - Select the model type (OpenAI Server, Hugging Face API, or Hugging Face Cloud)
   - Enter the required configuration parameters
   - Click "Apply Configuration"

4. Start chatting with the agent in the main interface
### Gradio Interface

The original Gradio interface is still available:

1. Launch the Gradio application:

   ```bash
   python app.py
   ```

2. Access the interface in your web browser at the URL displayed in the terminal (typically `http://localhost:7860`)
## Features

- Interactive Chat Interface: Engage in natural conversations with the agent
- Multiple Model Support:
- OpenAI Server (LM Studio or other OpenAI-compatible servers)
- Hugging Face API
- Hugging Face Cloud
- Real-time Agent Reasoning: See the agent's thought process as it works on your request
- Customizable Configuration: Adjust model parameters without modifying code
- Data Visualization: Request and generate charts directly in the chat
- Code Execution: Run Python code generated by the agent within the chat interface
- Timezone Display: Check current time in different time zones
- Custom Icon: Uses a custom ico.webp icon for the application and sidebar
## Available Tools

The agent comes equipped with several powerful tools:
- Web Search: Search the web via DuckDuckGo to get up-to-date information
- Webpage Visiting: Visit and extract content from specific webpages
- Shell Command Execution: Run commands on your system (with appropriate security)
- File Operations: Create and modify files on your system
- Data Visualization: Generate charts and graphs based on your requests
- Code Execution: Run Python code within the chat interface
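For the shell tool, "appropriate security" typically means gating commands behind an allowlist before anything runs. A minimal sketch of that idea (the allowlist contents and function name are illustrative, not the project's actual implementation):

```python
import shlex
import subprocess

ALLOWED = {"ls", "echo", "date", "pwd"}  # illustrative allowlist

def run_shell(command: str) -> str:
    """Run a command only if its executable is on the allowlist."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED:
        raise PermissionError(f"Command not allowed: {command!r}")
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout

print(run_shell("echo hello"))
```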
## Adding Custom Tools

You can extend the agent with your own custom tools by modifying the `app.py` file:

```python
from smolagents import tool

@tool
def my_custom_tool(arg1: str, arg2: int) -> str:
    """Description of what the tool does

    Args:
        arg1: description of the first argument
        arg2: description of the second argument
    """
    # Your tool implementation
    return "Tool result"
```
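Conceptually, the `@tool` decorator reads the function's signature and docstring to describe the tool to the model, which is why the type hints and `Args:` section matter. A stripped-down stand-in (not smolagents' actual implementation) behaves roughly like this:

```python
import inspect

def describe_tool(fn):
    """Extract the kind of metadata a tool decorator exposes to the model."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip().splitlines()[0],
        "inputs": {name: p.annotation.__name__
                   for name, p in sig.parameters.items()},
        "output": sig.return_annotation.__name__,
    }

def my_custom_tool(arg1: str, arg2: int) -> str:
    """Description of what the tool does"""
    return "Tool result"

print(describe_tool(my_custom_tool))
```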
## Customizing Prompts

The agent's behavior can be customized by modifying the prompt templates in the `prompts.yaml` file.
## Data Visualization

The agent can generate visualizations based on natural language requests. Try asking:
- "Show me a line chart of temperature trends over the past year"
- "Create a bar chart of sales by region"
- "Display a scatter plot of age vs. income"
## Troubleshooting

- Agent not responding: Verify that your LLM server is running and accessible
- Connection errors: Check the URL and API key in your configuration
- Slow responses: Consider using a smaller or more efficient model
- Missing dependencies: Ensure all requirements are installed via `pip install -r requirements.txt`
## Example Queries

Here are some example queries you can try with the agent:
- "What's the current time in Tokyo?"
- "Can you summarize the latest news about AI?"
- "Create a Python function to sort a list of dictionaries by a specific key"
- "Explain how transformer models work in AI"
- "Show me a bar chart of population by continent"
- "Write a simple web scraper to extract headlines from a news website"
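For instance, the dictionary-sorting request above might produce a function along these lines (a sketch of one reasonable answer, not the agent's guaranteed output):

```python
def sort_dicts(items: list, key: str, reverse: bool = False) -> list:
    """Sort a list of dictionaries by the given key.

    Entries missing the key sort last (or first when reverse=True).
    """
    return sorted(items, key=lambda d: (key not in d, d.get(key)), reverse=reverse)

people = [{"name": "Bo", "age": 35}, {"name": "Al", "age": 22}]
print(sort_dicts(people, "age"))  # youngest first
```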
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
For more information on Hugging Face Spaces configuration, visit https://huggingface.co/docs/hub/spaces-config-reference