# Streaming ReAct Agent in TypeScript - Bring Your Own LLM

A lightweight, streaming-first ReAct (Reasoning + Acting) agent that works with any LangChain-compatible model. Focus on your agent logic while LangChain handles the provider complexity.
## Features

- 🚀 **Streaming-first**: real-time chunked response processing
- 🔧 **Tool integration**: seamless function calling with automatic iteration
- 🎯 **Provider agnostic**: works with any LangChain model (Anthropic, OpenAI, Groq, local, etc.)
- 💡 **Minimal**: ~200 LOC focused on the ReAct pattern only
- 🔒 **Type safe**: full TypeScript support with proper interfaces
- ⚡ **Zero config**: just pass your model and tools
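To make the "automatic iteration" idea concrete, here is a minimal sketch of a ReAct loop: the model alternates between requesting tool calls and answering, and each tool result is fed back as an observation until the model stops asking for tools. This is an illustrative toy (with a stubbed model), not the package's actual implementation.

```typescript
// Illustrative sketch of a ReAct loop. `stubModel`, `ModelTurn`, and
// `reactLoop` are invented here for demonstration; the real agent drives
// a LangChain chat model instead.

type ToolFn = (input: string) => string;

interface ModelTurn {
  toolCall?: { name: string; input: string };
  text?: string;
}

// Stub "model": asks for the calculator once, then answers using the observation.
function stubModel(history: string[]): ModelTurn {
  const observation = history.find((m) => m.startsWith("observation:"));
  if (!observation) {
    return { toolCall: { name: "calculator", input: "15 * 24 + 100" } };
  }
  return { text: `The answer is ${observation.slice("observation:".length)}` };
}

function reactLoop(question: string, tools: Record<string, ToolFn>, maxIterations = 10): string {
  const history: string[] = [`user:${question}`];
  for (let i = 0; i < maxIterations; i++) {
    const turn = stubModel(history);
    if (turn.toolCall) {
      // Act: run the requested tool, feed the result back as an observation.
      const result = tools[turn.toolCall.name](turn.toolCall.input);
      history.push(`observation:${result}`);
      continue;
    }
    // No tool requested: the model's text is the final answer.
    return turn.text ?? "";
  }
  throw new Error("maxIterations reached");
}

const answer = reactLoop("What's 15 * 24 + 100?", {
  calculator: (input) => String(Function(`"use strict"; return (${input});`)()),
});
console.log(answer); // The answer is 460
```

The `maxIterations` guard mirrors the constructor parameter described in the API reference below: it bounds the reason/act cycle so a confused model cannot loop forever.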
## Installation

```bash
npm install @lamemind/react-agent-ts
```
You'll also need a LangChain model provider:
```bash
# Choose your provider
npm install @langchain/anthropic     # for Claude
npm install @langchain/openai        # for GPT
npm install @langchain/google-genai  # for Gemini
```
## Quick Start

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { DynamicTool } from "@langchain/core/tools";
import { ReActAgent } from "@lamemind/react-agent-ts";

// Create your model
const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-sonnet-20240229"
});

// Define tools
const tools = [
  new DynamicTool({
    name: "calculator",
    description: "Performs basic math operations",
    func: async (input: string) => {
      // Note: eval is used here for brevity only; never pass it untrusted input.
      return eval(input).toString();
    }
  })
];

// Create and use the agent
const agent = new ReActAgent(model, tools);
const response = await agent.invoke("What's 15 * 24 + 100?");
console.log(response);
```
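Since `eval` will happily execute anything the model emits, a tool used outside a demo should validate its input first. A minimal sketch (not part of the package) that whitelists arithmetic characters before evaluating:

```typescript
// Illustrative safer calculator: reject anything that is not plain arithmetic
// before evaluating. `safeCalculator` is an example name, not a package export.
function safeCalculator(input: string): string {
  // Allow only digits, whitespace, parentheses, and basic operators.
  if (!/^[\d\s+\-*\/().]+$/.test(input)) {
    throw new Error(`Refusing to evaluate: ${input}`);
  }
  // Evaluate in strict mode with no access to the local scope.
  return String(Function(`"use strict"; return (${input});`)());
}

console.log(safeCalculator("15 * 24 + 100")); // "460"
```

Swap this in as the `func` of the `DynamicTool` above if the agent will ever see untrusted prompts.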
## Switching Providers

Swap the model without touching the agent code:

```typescript
// OpenAI
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4"
});
const agent = new ReActAgent(model, tools);
```

```typescript
// Google Gemini
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
  model: "gemini-pro"
});
const agent = new ReActAgent(model, tools);
```
## Tools with Validated Input

```typescript
import { z } from "zod";
import { DynamicTool } from "@langchain/core/tools";
import { validateAndParseInput } from "@lamemind/react-agent-ts";

const weatherTool = new DynamicTool({
  name: "get_weather",
  description: "Get current weather for a city",
  func: async (input: string) => {
    const schema = z.object({
      city: z.string(),
      country: z.string().optional()
    });
    const parsed = validateAndParseInput(JSON.parse(input), schema);
    // Your weather API call here
    return `Weather in ${parsed.city}: 22°C, sunny`;
  }
});

const agent = new ReActAgent(model, [weatherTool]);
await agent.invoke("What's the weather like in Rome?");
```
## Working with Conversation History

```typescript
import { systemPrompt, userMessage } from "@lamemind/react-agent-ts";

// Start with a system prompt
const messages = [
  systemPrompt("You are a helpful data analyst assistant"),
  userMessage("Analyze this sales data...")
];

const response = await agent.invoke(messages);

// Extract the final text response
const textResponse = await agent.extractAiTextResponse();
console.log("AI said:", textResponse);
```
## Streaming Updates

```typescript
const agent = new ReActAgent(model, tools);

// Get notified after each iteration
agent.onMessage((messages) => {
  console.log(`Agent completed iteration, ${messages.length} messages so far`);
});

const response = await agent.invoke("Complex multi-step task...");
```
## API Reference

### Constructor

```typescript
new ReActAgent(model: BaseChatModel, tools: any[], maxIterations?: number)
```

- `model`: any LangChain-compatible chat model
- `tools`: array of LangChain tools
- `maxIterations`: maximum reasoning iterations (default: 10)

### Methods

`invoke(messages: string | any[]): Promise<any>`
Execute the agent with a message or conversation history.

`extractAiTextResponse(): Promise<string>`
Extract the final text response from the agent, excluding tool calls.

`onMessage(callback: (messages: any[]) => void): void`
Set a callback for streaming updates during execution.

`dumpConversation(): void`
Debug method that logs the complete conversation history.
## Model Configuration

The agent accepts any LangChain `BaseChatModel`. Configure your model according to the LangChain documentation:
```typescript
// Anthropic with custom settings
const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-sonnet-20240229",
  temperature: 0.7,
  maxTokens: 4000
});
```

```typescript
// OpenAI with streaming enabled
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4",
  streaming: true
});
```
```typescript
const agent = new ReActAgent(
  model,
  tools,
  5 // max iterations - prevents infinite loops
);
```
## Design Philosophy

- **Separation of concerns**: you handle model configuration and API keys; the agent handles the ReAct logic.
- **No vendor lock-in**: switch between providers without changing agent code.
- **Minimal dependencies**: only LangChain core, no provider-specific dependencies.
- **Production ready**: built for real applications with proper error handling and streaming.
## Development

```bash
# Clone the repo
git clone https://github.com/lamemind/react-agent-ts
cd react-agent-ts

# Install dependencies
npm install

# Build
npm run build

# Development mode
npm run dev
```
## Contributing

Contributions are welcome! This project focuses on the ReAct pattern implementation. For provider-specific issues, please refer to the LangChain documentation.

- Fork the repo
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

ISC © lamemind
## Acknowledgments

- Built on top of LangChain for model abstraction
- Inspired by the ReAct paper: *ReAct: Synergizing Reasoning and Acting in Language Models*
⭐ If this helped you, consider giving it a star on GitHub!