AI SDK Tools

Make your AI agents faster and cheaper with your own cache.

Agents call the same tools repeatedly across conversation turns, burning money and time. Cache expensive operations once, reuse instantly. Transform slow, costly agent flows into lightning-fast experiences.

npm install @ai-sdk-tools/cache
◇ Universal Tool Caching
import { tool } from 'ai'
import { z } from 'zod'
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

const expensiveWeatherTool = tool({
  description: 'Get weather data',
  parameters: z.object({
    location: z.string()
  }),
  execute: async ({ location }) => {
    // Expensive API call - 2s response time
    return await weatherAPI.get(location)
  }
})

// LRU cache (zero config)
const cached = createCached()

// Or Redis (just pass the client!):
// const cached = createCached({ cache: Redis.fromEnv() })

const weatherTool = cached(expensiveWeatherTool)

// First call: 2s API request
// Next calls: <1ms from cache ⚡

Universal Compatibility

Works with any AI SDK tool - regular functions, streaming generators, and complex artifact tools. One caching solution for all patterns.

Complete Data Preservation

Caches everything - return values, yielded chunks, and writer messages. Streaming tools with artifacts work perfectly on cache hits.
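To see why this matters for streaming tools, here is a minimal sketch (not the library's implementation) of the record-and-replay idea: on the first run every yielded chunk is recorded; on a cache hit the chunks are replayed without re-executing the generator. The `recordOrReplay` helper and its `Map` store are hypothetical names for illustration.

```typescript
// Sketch: caching a streaming tool means recording every yielded
// chunk on the first run, then replaying those chunks on cache hits.
async function* recordOrReplay<T>(
  key: string,
  store: Map<string, T[]>,
  source: () => AsyncGenerator<T>
): AsyncGenerator<T> {
  const cached = store.get(key)
  if (cached) {
    // Cache hit: replay recorded chunks, never run the generator again
    for (const chunk of cached) yield chunk
    return
  }
  // Cache miss: stream through while recording
  const chunks: T[] = []
  for await (const chunk of source()) {
    chunks.push(chunk)
    yield chunk
  }
  store.set(key, chunks)
}
```

The same recording approach extends to writer messages and artifact updates, which is why streaming tools behave identically on cache hits.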

Zero Configuration

Just wrap your tool with cached() and it works. React Query-style key generation, smart defaults, and automatic type inference.
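"React Query style" key generation means deriving a deterministic key from the tool name and its serialized parameters, so identical calls map to the same cache entry. A minimal sketch of the idea (the `cacheKey` helper is hypothetical, not the package's actual internals):

```typescript
// Sketch of deterministic cache-key generation: sort parameter keys
// so { a, b } and { b, a } serialize identically, then prefix with
// the tool name to keep tools from colliding.
function cacheKey(toolName: string, params: Record<string, unknown>): string {
  const stable = JSON.stringify(
    Object.keys(params)
      .sort()
      .map((k) => [k, params[k]])
  )
  return `${toolName}:${stable}`
}
```

With keys like this, two agent turns asking for the same location hit the same entry regardless of property order in the arguments object.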

Multiple Backends

LRU cache for single instances, Redis for distributed apps. Environment-aware configuration with seamless switching.

Production Performance

Up to 10x faster responses for repeated requests, and up to 80% lower cost by avoiding duplicate API calls and expensive computations.

Agent Flow Optimization

Agents naturally call the same tools across conversation turns. Transform expensive repeated operations into instant responses for smoother, faster, and cheaper agent experiences.
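The core mechanism behind this speedup can be sketched in a few lines, assuming an in-memory store and an async execute function (this is an illustrative wrapper, not the package's implementation):

```typescript
// Minimal sketch: wrap an async execute function so repeated calls
// with the same arguments return the stored result instead of
// re-running the expensive operation.
type Execute<P, R> = (params: P) => Promise<R>

function withCache<P, R>(execute: Execute<P, R>): Execute<P, R> {
  const store = new Map<string, R>()
  return async (params: P) => {
    const key = JSON.stringify(params)
    const hit = store.get(key)
    if (hit !== undefined) return hit // instant on repeat calls
    const result = await execute(params)
    store.set(key, result)
    return result
  }
}
```

Across a multi-turn conversation, an agent re-requesting the same weather data or the same analysis pays the real cost once; every later turn is a cache hit.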

Examples

◇ Basic Usage
import { tool, streamText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'
import { createCached } from '@ai-sdk-tools/cache'

// Any AI SDK tool
const weatherTool = tool({
  description: 'Get weather',
  parameters: z.object({
    location: z.string()
  }),
  execute: async ({ location }) => {
    return await api.getWeather(location)
  }
})

// Create cached function (LRU by default)
const cached = createCached()

// Cache with zero config
const cachedWeatherTool = cached(weatherTool)

// Use in your AI application
const result = streamText({
  model: openai('gpt-4o'),
  tools: { weather: cachedWeatherTool },
  messages
})
◇ Redis Configuration
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

// Just pass your Redis client - that's it!
export const cached = createCached({
  cache: Redis.fromEnv(), // Upstash Redis
  keyPrefix: 'ai-tools:',
  ttl: 30 * 60 * 1000, // 30 minutes
})

// Or standard Redis (node-redis):
// import { createClient } from 'redis'
// export const cached = createCached({
//   cache: createClient({ url: process.env.REDIS_URL }),
//   keyPrefix: 'ai-tools:',
//   ttl: 30 * 60 * 1000,
// })

// All tools use your chosen backend
const weatherTool = cached(expensiveWeatherTool)
const analysisTools = cached(burnRateAnalysis)
◇ Streaming Tools with Artifacts
import { tool } from 'ai'
import { z } from 'zod'
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

const burnRateAnalysis = tool({
  description: 'Generate burn rate analysis',
  parameters: z.object({
    companyId: z.string(),
    months: z.number()
  }),
  execute: async function* ({ companyId, months }) {
    // Create streaming artifact
    const analysis = burnRateArtifact.stream({
      stage: "loading",
      // ... artifact data
    })

    yield { text: "Starting analysis..." }
    
    // Update artifact with charts, metrics
    await analysis.update({
      chart: { monthlyData: [/* ... */] },
      metrics: { burnRate: 50000, runway: 18 }
    })
    
    yield { text: "Analysis complete", forceStop: true }
  }
})

// Create cached with Redis
const cached = createCached({ cache: Redis.fromEnv() })
const cachedAnalysis = cached(burnRateAnalysis)

// ✅ Streaming text cached
// ✅ Artifact data cached  
// ✅ Charts & metrics restored on cache hit
◇ Environment-Aware Setup
// src/lib/cache.ts - Smart environment setup
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

// Clean environment-based selection
export const cached = process.env.UPSTASH_REDIS_REST_URL
  ? createCached({
      cache: Redis.fromEnv(), // Production: Upstash Redis
      ttl: 60 * 60 * 1000, // 1 hour
    })
  : createCached({
      // Development: LRU cache
      debug: true,
      ttl: 5 * 60 * 1000, // 5 minutes
    })

// Throughout your app
import { cached } from '@/lib/cache'
const weatherTool = cached(expensiveWeatherTool)

// Production: Redis with 1hr TTL
// Development: LRU with 5min TTL + debug
// Same code, different backends
10x faster responses · 80% cost reduction · 0 configuration required

Start Caching Your AI Tools

Reduce costs and improve performance with universal AI tool caching.