Universal Compatibility
Works with any AI SDK tool - regular functions, streaming generators, and complex artifact tools. One caching solution for all patterns.
Agents call the same tools repeatedly across conversation turns, burning money and time. Cache expensive operations once, reuse instantly. Transform slow, costly agent flows into lightning-fast experiences.
npm install @ai-sdk-tools/cache
import { tool } from 'ai'
import { z } from 'zod'
import { createCached } from '@ai-sdk-tools/cache'

const expensiveWeatherTool = tool({
  description: 'Get weather data',
  parameters: z.object({ location: z.string() }),
  execute: async ({ location }) => {
    // Expensive API call - 2s response time
    return await weatherAPI.get(location)
  }
})

// LRU cache (zero config)
const cached = createCached()

// Or Redis (just pass the client!):
// import { Redis } from '@upstash/redis'
// const cached = createCached({ cache: Redis.fromEnv() })

const weatherTool = cached(expensiveWeatherTool)

// First call: 2s API request
// Next calls: <1ms from cache ⚡
Caches everything - return values, yielded chunks, and writer messages. Streaming tools with artifacts work perfectly on cache hits.
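As a rough mental model (not the library's actual internals), a cache entry for a streaming tool has to record everything the run produced so it can be replayed on a hit:

// Conceptual sketch only - the real entry shape inside
// @ai-sdk-tools/cache may differ.
interface CachedToolRun {
  returnValue: unknown      // what execute() returned
  chunks: unknown[]         // everything the generator yielded
  writerMessages: unknown[] // artifact/writer updates emitted while streaming
}

// On a cache hit, the recorded run is replayed instead of re-executed:
async function* replay(entry: CachedToolRun) {
  for (const chunk of entry.chunks) {
    yield chunk // same chunks, same order, no API calls
  }
  return entry.returnValue
}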
Just wrap your tool with cached() and it works. React Query-style key generation, smart defaults, and automatic type inference.
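"React Query-style" here means keys are derived from the tool plus its serialized arguments, so identical calls hit the same entry. A conceptual sketch of the idea (hypothetical helper, not the library's implementation):

// Hypothetical illustration of parameter-based key derivation.
function toCacheKey(toolName: string, params: Record<string, unknown>): string {
  // Sort top-level keys so { a, b } and { b, a } produce the same key
  const stable = JSON.stringify(params, Object.keys(params).sort())
  return `${toolName}:${stable}`
}

toCacheKey('weather', { location: 'Berlin' })
// => 'weather:{"location":"Berlin"}'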
LRU cache for single instances, Redis for distributed apps. Environment-aware configuration with seamless switching.
Up to 10x faster responses for repeated requests. Up to 80% cost reduction by avoiding duplicate API calls and expensive computations.
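A back-of-envelope example with hypothetical numbers: an agent that repeats the same 2-second, $0.01 tool call five times in a conversation pays for it only once with caching:

// Hypothetical numbers for illustration, not benchmarks
const calls = 5
const apiLatencyMs = 2000
const apiCostUsd = 0.01
const cacheHitMs = 1

const uncachedTimeMs = calls * apiLatencyMs                  // 10,000ms, $0.05
const cachedTimeMs = apiLatencyMs + (calls - 1) * cacheHitMs // ~2,004ms, $0.01
const costSaved = (calls - 1) / calls                        // 80% fewer API calls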
Agents naturally call the same tools across conversation turns. Transform expensive repeated operations into instant responses for smoother, faster, and cheaper agent experiences.
import { openai } from '@ai-sdk/openai'
import { streamText, tool } from 'ai'
import { z } from 'zod'
import { createCached } from '@ai-sdk-tools/cache'

// Any AI SDK tool
const weatherTool = tool({
  description: 'Get weather',
  parameters: z.object({ location: z.string() }),
  execute: async ({ location }) => {
    return await api.getWeather(location)
  }
})

// Create cached function (LRU by default)
const cached = createCached()

// Cache with zero config
const cachedWeatherTool = cached(weatherTool)

// Use in your AI application
const result = streamText({
  model: openai('gpt-4o'),
  tools: { weather: cachedWeatherTool },
  messages
})
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

// Just pass your Redis client - that's it!
export const cached = createCached({
  cache: Redis.fromEnv(), // Upstash Redis
  keyPrefix: 'ai-tools:',
  ttl: 30 * 60 * 1000, // 30 minutes
})

// Or standard Redis:
// import { createClient } from 'redis'
// export const cached = createCached({
//   cache: createClient({ url: process.env.REDIS_URL }),
//   keyPrefix: 'ai-tools:',
//   ttl: 30 * 60 * 1000,
// })

// All tools use your chosen backend
const weatherTool = cached(expensiveWeatherTool)
const analysisTool = cached(burnRateAnalysis)
import { tool } from 'ai'
import { z } from 'zod'
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

const burnRateAnalysis = tool({
  description: 'Generate burn rate analysis',
  parameters: z.object({
    companyId: z.string(),
    months: z.number()
  }),
  execute: async function* ({ companyId, months }) {
    // Create streaming artifact
    // (burnRateArtifact comes from your artifacts setup, e.g. @ai-sdk-tools/artifacts)
    const analysis = burnRateArtifact.stream({
      stage: "loading",
      // ... artifact data
    })

    yield { text: "Starting analysis..." }

    // Update artifact with charts, metrics
    await analysis.update({
      chart: { monthlyData: [/* ... */] },
      metrics: { burnRate: 50000, runway: 18 }
    })

    yield { text: "Analysis complete", forceStop: true }
  }
})

// Create cached with Redis
const cached = createCached({ cache: Redis.fromEnv() })
const cachedAnalysis = cached(burnRateAnalysis)

// ✅ Streaming text cached
// ✅ Artifact data cached
// ✅ Charts & metrics restored on cache hit
// src/lib/cache.ts - Smart environment setup
import { createCached } from '@ai-sdk-tools/cache'
import { Redis } from '@upstash/redis'

// Clean environment-based selection
export const cached = process.env.UPSTASH_REDIS_REST_URL
  ? createCached({
      cache: Redis.fromEnv(), // Production: Upstash Redis
      ttl: 60 * 60 * 1000, // 1 hour
    })
  : createCached({
      // Development: LRU cache
      debug: true,
      ttl: 5 * 60 * 1000, // 5 minutes
    })

// Throughout your app (in other modules):
import { cached } from '@/lib/cache'

const weatherTool = cached(expensiveWeatherTool)

// Production: Redis with 1hr TTL
// Development: LRU with 5min TTL + debug
// Same code, different backends
Reduce costs and improve performance with universal AI tool caching.