Agents
Multi-agent orchestration for AI SDK v5. Build intelligent workflows with specialized agents, automatic handoffs, and seamless coordination. Works with any AI provider and includes a built-in memory system for persistent context.
npm install @ai-sdk-tools/agents @ai-sdk-tools/memory ai zod
Why Multi-Agent Systems?
Complex tasks benefit from specialized expertise. Instead of a single model handling everything, break work into focused agents:
Customer Support
Triage → Technical Support → Billing
Content Pipeline
Research → Writing → Editing → Publishing
Code Development
Planning → Implementation → Testing → Documentation
Data Analysis
Collection → Processing → Visualization → Insights
Specialization
Each agent focuses on its domain with optimized instructions and tools
Context Preservation
Full conversation history maintained across handoffs
Provider Flexibility
Use different models for different tasks (GPT-4 for analysis, Claude for writing)
Programmatic Routing
Pattern matching and automatic agent selection
Built-in Memory System
Every agent includes a powerful memory system that maintains context across conversations. Memory is a required dependency that provides:
Working Memory
Persistent context that agents can read and update during conversations
Conversation History
Automatic message persistence and retrieval across chat sessions
Chat Management
Automatic title generation and chat organization
Flexible Scopes
Chat-level or user-level memory with multiple storage backends
import { Agent } from '@ai-sdk-tools/agents'
import { InMemoryProvider } from '@ai-sdk-tools/memory'
import { openai } from '@ai-sdk/openai'

const agent = new Agent({
  name: 'Assistant',
  model: openai('gpt-4o'),
  instructions: 'You are a helpful assistant.',
  memory: {
    provider: new InMemoryProvider(),
    workingMemory: {
      enabled: true,
      scope: 'chat', // or 'user'
    },
    history: {
      enabled: true,
      limit: 10,
    },
    chats: {
      enabled: true,
      generateTitle: true,
    },
  },
})
Quick Start
Basic: Single Agent
import { Agent } from '@ai-sdk-tools/agents'
import { openai } from '@ai-sdk/openai'

const agent = new Agent({
  name: 'Assistant',
  model: openai('gpt-4o'),
  instructions: 'You are a helpful assistant.',
})

// Generate response
const result = await agent.generate({
  prompt: 'What is 2+2?',
})

console.log(result.text) // "4"
Handoffs: Two Specialists
import { Agent } from '@ai-sdk-tools/agents'
import { openai } from '@ai-sdk/openai'

// Create specialized agents
const mathAgent = new Agent({
  name: 'Math Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with math problems. Show step-by-step solutions.',
})

const historyAgent = new Agent({
  name: 'History Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with history questions. Provide context and dates.',
})

// Create orchestrator with handoff capability
const orchestrator = new Agent({
  name: 'Triage',
  model: openai('gpt-4o'),
  instructions: 'Route questions to the appropriate specialist.',
  handoffs: [mathAgent, historyAgent],
})

// LLM decides which specialist to use
const result = await orchestrator.generate({
  prompt: 'What is the quadratic formula?',
})

console.log(`Handled by: ${result.finalAgent}`) // "Math Tutor"
console.log(`Handoffs: ${result.handoffs.length}`) // 1
Orchestration: Auto-Routing
Use programmatic routing for instant agent selection without LLM overhead:
const mathAgent = new Agent({
  name: 'Math Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with math problems.',
  matchOn: ['calculate', 'math', 'equation', /\d+\s*[\+\-\*\/]\s*\d+/],
})

const historyAgent = new Agent({
  name: 'History Tutor',
  model: openai('gpt-4o'),
  instructions: 'You help with history questions.',
  matchOn: ['history', 'war', 'civilization', /\d{4}/], // Years
})

const orchestrator = new Agent({
  name: 'Smart Router',
  model: openai('gpt-4o-mini'), // Efficient for routing
  instructions: 'Route to specialists. Fall back to handling general questions.',
  handoffs: [mathAgent, historyAgent],
})

// Automatically routes to mathAgent based on pattern match
const result = await orchestrator.generate({
  prompt: 'What is 15 * 23?',
})
Streaming with UI
For Next.js route handlers and real-time UI updates:
// app/api/chat/route.ts
import { Agent } from '@ai-sdk-tools/agents'
import { openai } from '@ai-sdk/openai'

// technicalAgent and billingAgent are specialized Agent instances defined elsewhere
const supportAgent = new Agent({
  name: 'Support',
  model: openai('gpt-4o'),
  instructions: 'Handle customer support inquiries.',
  handoffs: [technicalAgent, billingAgent],
})

export async function POST(req: Request) {
  const { messages } = await req.json()

  return supportAgent.toUIMessageStream({
    messages,
    maxRounds: 5, // Max handoffs
    maxSteps: 10, // Max tool calls per agent
    onEvent: async (event) => {
      if (event.type === 'agent-handoff') {
        console.log(`Handoff: ${event.from} → ${event.to}`)
      }
    },
  })
}
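On the client, the UI message stream can be consumed with the standard AI SDK v5 useChat hook. A minimal sketch, assuming the /api/chat route above and the @ai-sdk/react package; adapt the markup to your own components:

// app/chat/page.tsx
'use client'

import { useChat } from '@ai-sdk/react'
import { DefaultChatTransport } from 'ai'
import { useState } from 'react'

export default function Chat() {
  const [input, setInput] = useState('')
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  })

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault()
          sendMessage({ text: input }) // Posts messages to the agent route
          setInput('')
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
      </form>
    </div>
  )
}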
Tools and Context
Adding Tools
import { tool } from 'ai'
import { z } from 'zod'

const calculatorTool = tool({
  description: 'Perform calculations',
  inputSchema: z.object({
    expression: z.string(),
  }),
  execute: async ({ expression }) => {
    return eval(expression) // Use safe-eval in production
  },
})

const agent = new Agent({
  name: 'Calculator Agent',
  model: openai('gpt-4o'),
  instructions: 'Help with math using the calculator tool.',
  tools: {
    calculator: calculatorTool,
  },
  maxTurns: 20, // Max tool call iterations
})
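The agent invokes the tool as needed during a normal generate() call, iterating up to maxTurns times before returning the final text. A short usage sketch, reusing the agent defined above:

// The agent decides when to call the calculator tool
const result = await agent.generate({
  prompt: 'What is (12 + 7) * 3?',
})

console.log(result.text)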
Context-Aware Agents
Use typed context for team/user-specific behavior:
interface TeamContext {
  teamId: string
  userId: string
  preferences: Record<string, string>
}

const agent = new Agent<TeamContext>({
  name: 'Team Assistant',
  model: openai('gpt-4o'),
  instructions: (context) => {
    return `You are helping team ${context.teamId}. User preferences: ${JSON.stringify(context.preferences)}`
  },
})

// Pass context when streaming
agent.toUIMessageStream({
  messages,
  context: {
    teamId: 'team-123',
    userId: 'user-456',
    preferences: { theme: 'dark', language: 'en' },
  },
})
Multi-Provider Setup
Use the best model for each task:
import { openai } from '@ai-sdk/openai'
import { anthropic } from '@ai-sdk/anthropic'
import { google } from '@ai-sdk/google'

const researchAgent = new Agent({
  name: 'Researcher',
  model: anthropic('claude-3-5-sonnet-20241022'), // Excellent reasoning
  instructions: 'Research topics thoroughly.',
})

const writerAgent = new Agent({
  name: 'Writer',
  model: openai('gpt-4o'), // Great at creative writing
  instructions: 'Create engaging content.',
})

const editorAgent = new Agent({
  name: 'Editor',
  model: google('gemini-1.5-pro'), // Strong at review
  instructions: 'Review and improve content.',
  handoffs: [writerAgent], // Can send back for rewrites
})

const pipeline = new Agent({
  name: 'Content Manager',
  model: openai('gpt-4o-mini'), // Efficient orchestrator
  instructions: 'Coordinate content creation.',
  handoffs: [researchAgent, writerAgent, editorAgent],
})
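The pipeline is driven from the top-level orchestrator like any other agent; handoffs between the specialists happen automatically. A short usage sketch:

// Kick off the pipeline from the orchestrator
const result = await pipeline.generate({
  prompt: 'Write a short article about multi-agent systems.',
})

console.log(result.finalAgent) // Whichever agent produced the final answer
console.log(result.text)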
Guardrails
Control agent behavior with input/output validation:
const agent = new Agent({
  name: 'Moderated Agent',
  model: openai('gpt-4o'),
  instructions: 'Answer questions helpfully.',
  inputGuardrails: [
    async (input) => {
      // containsProfanity is an app-specific helper
      if (containsProfanity(input)) {
        return {
          pass: false,
          action: 'block',
          message: 'Input violates content policy',
        }
      }
      return { pass: true }
    },
  ],
  outputGuardrails: [
    async (output) => {
      // containsSensitiveInfo / redactSensitiveInfo are app-specific helpers
      if (containsSensitiveInfo(output)) {
        return {
          pass: false,
          action: 'modify',
          modifiedOutput: redactSensitiveInfo(output),
        }
      }
      return { pass: true }
    },
  ],
})
API Reference
Agent Constructor Options
name: string
- Unique agent identifier
model: LanguageModel
- AI SDK language model
instructions: string | ((context: TContext) => string)
- System prompt
tools?: Record<string, Tool>
- Available tools
handoffs?: Agent[]
- Agents this agent can hand off to
maxTurns?: number
- Maximum tool call iterations (default: 10)
temperature?: number
- Model temperature
matchOn?: (string | RegExp)[] | ((message: string) => boolean)
- Routing patterns
onEvent?: (event: AgentEvent) => void
- Lifecycle event handler
inputGuardrails?: InputGuardrail[]
- Pre-execution validation
outputGuardrails?: OutputGuardrail[]
- Post-execution validation
permissions?: ToolPermissions
- Tool access control
Methods
generate(options)
Generate response (non-streaming)
stream(options)
Stream response (AI SDK stream)
toUIMessageStream(options)
Stream as UI messages (Next.js route handler)
getHandoffs()
Get handoff agents
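generate() and toUIMessageStream() are shown throughout this page; for stream(), here is a minimal sketch under the assumption that the result exposes an AI SDK-style textStream async iterable:

// Assumes stream() returns an AI SDK-style streaming result with textStream
const result = await agent.stream({
  prompt: 'Summarize the latest report.',
})

for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}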
Event Types
agent-start
- Agent starts execution
agent-step
- Agent completes a step
agent-finish
- Agent finishes a round
agent-handoff
- Agent hands off to another agent
agent-complete
- All execution complete
agent-error
- Error occurred
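A single onEvent handler (see Constructor Options above) can branch on these types. A short sketch, reusing only the from/to fields shown in the streaming example earlier; other payload fields are not assumed:

const observedAgent = new Agent({
  name: 'Observed Agent',
  model: openai('gpt-4o'),
  instructions: 'Answer questions.',
  onEvent: (event) => {
    switch (event.type) {
      case 'agent-handoff':
        console.log(`Handoff: ${event.from} → ${event.to}`)
        break
      case 'agent-error':
        console.error('[agent error]', event)
        break
      default:
        console.log('[agent event]', event.type)
    }
  },
})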
Integration with Other Packages
With @ai-sdk-tools/memory
Add persistent working memory and conversation history to agents:
import { DrizzleProvider } from '@ai-sdk-tools/memory'

const agent = new Agent({
  name: 'Assistant',
  model: openai('gpt-4o'),
  instructions: 'You are a helpful assistant.',
  memory: {
    provider: new DrizzleProvider(db),
    workingMemory: {
      enabled: true,
      scope: 'user', // or 'chat'
    },
    history: {
      enabled: true,
      limit: 10,
    },
    chats: {
      enabled: true,
      generateTitle: true,
    },
  },
})

// The agent automatically:
// - Loads working memory into the system prompt
// - Injects the updateWorkingMemory tool
// - Loads conversation history
// - Persists messages and generates titles
Learn more: Memory Documentation
With @ai-sdk-tools/cache
Cache expensive tool calls across agents:
import { cached } from '@ai-sdk-tools/cache'

// expensiveAnalysisTool is any AI SDK tool defined elsewhere
const agent = new Agent({
  name: 'Data Agent',
  model: openai('gpt-4o'),
  instructions: 'Analyze data.',
  tools: {
    analyze: cached(expensiveAnalysisTool),
  },
})
With @ai-sdk-tools/artifacts
Stream structured artifacts from agents:
import { artifact } from '@ai-sdk-tools/artifacts'
import { tool } from 'ai'
import { z } from 'zod'

const reportAgent = new Agent({
  name: 'Report Generator',
  model: openai('gpt-4o'),
  instructions: 'Generate structured reports.',
  tools: {
    createReport: tool({
      description: 'Create a structured report',
      inputSchema: z.object({ title: z.string() }),
      execute: async function* ({ title }) {
        const report = artifact.stream({ title, sections: [] })
        yield { text: 'Report complete', forceStop: true }
      },
    }),
  },
})
With @ai-sdk-tools/devtools
Debug agent execution in development:
import { AIDevTools } from '@ai-sdk-tools/devtools'

const agent = new Agent({
  name: 'Debug Agent',
  model: openai('gpt-4o'),
  instructions: 'Test agent.',
  onEvent: (event) => {
    console.log('[Agent Event]', event)
  },
})

// In your app
export default function App() {
  return (
    <>
      <YourChatInterface />
      <AIDevTools />
    </>
  )
}
Examples
Real-world implementations can be found in /apps/example/src/ai/agents/:
Triage Agent
Route customer questions to specialists
Financial Agent
Multi-step analysis with artifacts
Code Review
Analyze → Test → Document workflow
Multi-Provider
Use different models for different tasks