# Vercel AI SDK

Use `@codespar/vercel` to integrate CodeSpar tools with the Vercel AI SDK for streaming agent interactions.

## Vercel AI SDK Adapter
The `@codespar/vercel` adapter integrates CodeSpar tools with the Vercel AI SDK, giving you streaming-first agent interactions with built-in tool execution. The Vercel AI SDK handles the tool-call loop automatically via `maxSteps`, so you do not need to write a manual loop. This is the recommended adapter for Next.js applications.
## Installation

```bash
npm install @codespar/sdk @codespar/vercel ai @ai-sdk/anthropic
```

```bash
pnpm add @codespar/sdk @codespar/vercel ai @ai-sdk/anthropic
```

```bash
yarn add @codespar/sdk @codespar/vercel ai @ai-sdk/anthropic
```

> [!NOTE]
> You can use any Vercel AI SDK provider -- `@ai-sdk/anthropic`, `@ai-sdk/openai`, `@ai-sdk/google`, `@ai-sdk/mistral`, etc. The CodeSpar adapter is provider-agnostic. Install whichever provider you prefer.
## API Reference

### `getTools(session): Promise<Record<string, CoreTool>>`
Fetches all tools from the session and returns them in the Vercel AI SDK `tools` format. Each tool includes a `description`, `parameters` (Zod schema), and an `execute` function already wired to the session. This means the Vercel AI SDK can call tools automatically -- you do not need `handleToolUse` or `handleToolCall` like with the Claude and OpenAI adapters.
```typescript
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/vercel";

const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

const session = await codespar.sessions.create({
  servers: ["stripe", "mercadopago"],
});

const tools = await getTools(session);

// tools is a Record<string, CoreTool>, keyed by tool name
// {
//   codespar_discover: { description: "...", parameters: ZodSchema, execute: fn },
//   codespar_checkout: { description: "...", parameters: ZodSchema, execute: fn },
//   codespar_pay:      { description: "...", parameters: ZodSchema, execute: fn },
//   ...
// }
```

> [!WARNING]
> `getTools` is async because it calls `session.tools()` under the hood. Always `await` it. The `execute` functions are pre-bound to the session, so tool calls are routed automatically when the Vercel AI SDK invokes them.
### `toVercelTool(session, tool): CoreTool`

Converts a single CodeSpar tool definition to a Vercel AI SDK `CoreTool`. Use this when you want to convert tools individually -- for example, to filter or augment the tool set before passing it to `generateText` or `streamText`.
```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { toVercelTool } from "@codespar/vercel";

const allTools = await session.tools();

// Filter to only payment tools and convert
const paymentTools = Object.fromEntries(
  allTools
    .filter((t) => ["codespar_checkout", "codespar_pay"].includes(t.name))
    .map((t) => [t.name, toVercelTool(session, t)])
);

const { text } = await generateText({
  model: anthropic("claude-sonnet-4-20250514"),
  tools: paymentTools,
  maxSteps: 5,
  prompt: "Create a R$49.90 Pix checkout link",
});
```

## generateText example
Use `generateText` for simple request-response interactions where you do not need to stream output. The Vercel AI SDK handles the full tool-call loop automatically via `maxSteps`:
```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/vercel";

const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

const session = await codespar.sessions.create({
  servers: ["stripe", "mercadopago", "correios"],
});

const tools = await getTools(session);

const { text, steps } = await generateText({
  model: anthropic("claude-sonnet-4-20250514"),
  tools,
  maxSteps: 5,
  system:
    "You are a commerce assistant for a Brazilian e-commerce store. " +
    "Use the available tools for payments, invoicing, and shipping. " +
    "Always confirm amounts before processing.",
  prompt: "Create a R$79.90 checkout link for 'Starter Plan' via Stripe",
});

console.log("Response:", text);
console.log("Steps taken:", steps.length);

// Inspect tool calls made during the interaction
for (const step of steps) {
  if (step.toolCalls) {
    for (const call of step.toolCalls) {
      console.log(`  Tool: ${call.toolName}`, call.args);
    }
  }
}

await session.close();
```

A typical tool call recorded in `steps` looks like this:

```json
{
  "toolName": "codespar_checkout",
  "args": {
    "provider": "stripe",
    "amount": 7990,
    "currency": "BRL",
    "description": "Starter Plan",
    "payment_methods": ["pix", "card"]
  },
  "result": {
    "checkout_id": "chk_3m4n5o6p7q8r",
    "url": "https://checkout.stripe.com/c/pay/cs_live_...",
    "amount": 7990,
    "currency": "BRL",
    "status": "open",
    "expires_at": "2026-04-16T14:30:00Z"
  }
}
```

> [!TIP]
> `maxSteps` controls how many tool-call rounds the SDK will automatically execute. Set it to match your expected tool-chain depth. For most commerce operations, 3-5 is sufficient. If the agent needs to discover, then checkout, then notify, that is 3 steps.
## streamText example

Use `streamText` for real-time streaming in a Next.js API route. This is the recommended approach for chat interfaces:
```typescript
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/vercel";

const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();

  const session = await codespar.sessions.create({
    servers: ["stripe", "correios", "twilio"],
  });
  const tools = await getTools(session);

  const result = streamText({
    model: anthropic("claude-sonnet-4-20250514"),
    tools,
    maxSteps: 5,
    system:
      "You are a commerce assistant for a Brazilian e-commerce store. " +
      "Use tools to process payments, create shipping labels, and send notifications. " +
      "Always confirm amounts and details before processing.",
    messages,
    onFinish: async () => {
      // Clean up the session after the stream completes
      await session.close();
    },
  });

  return result.toDataStreamResponse();
}
```

> [!NOTE]
> Use the `onFinish` callback to close the session after the stream completes. This ensures the session is cleaned up even if the client disconnects mid-stream.
## Client-side usage with useChat

Pair the streaming API route with the `useChat` hook for a complete chat experience:
```tsx
"use client";

import { useChat } from "ai/react";

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: "/api/chat",
    });

  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto space-y-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={`p-3 rounded-lg ${
              m.role === "user"
                ? "bg-blue-100 ml-auto max-w-xs"
                : "bg-gray-100 mr-auto max-w-md"
            }`}
          >
            <p className="text-sm font-medium">
              {m.role === "user" ? "You" : "Assistant"}
            </p>
            <p>{m.content}</p>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2 pt-4">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about payments, shipping, or invoicing..."
          className="flex-1 border rounded-lg px-4 py-2"
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="bg-blue-600 text-white px-4 py-2 rounded-lg disabled:opacity-50"
        >
          Send
        </button>
      </form>
    </div>
  );
}
```

## Using with different providers
The CodeSpar adapter is provider-agnostic. Swap the model provider to use GPT, Gemini, or Mistral:
```typescript
import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools,
  maxSteps: 5,
  prompt: "Check shipping rates from Sao Paulo to Rio de Janeiro for 2kg",
});
```

```typescript
import { google } from "@ai-sdk/google";

const { text } = await generateText({
  model: google("gemini-1.5-pro"),
  tools,
  maxSteps: 5,
  prompt: "Create a boleto for R$500 due in 10 days",
});
```

```typescript
import { mistral } from "@ai-sdk/mistral";

const { text } = await generateText({
  model: mistral("mistral-large-latest"),
  tools,
  maxSteps: 5,
  prompt: "Send an order confirmation via WhatsApp to +5511999887766",
});
```

## Error handling
The Vercel AI SDK surfaces tool execution errors through the `steps` array. You can also add error handling in the `onFinish` callback:
```typescript
const result = streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  tools,
  maxSteps: 5,
  messages,
  onFinish: async ({ finishReason, steps }) => {
    // Log any tool errors
    for (const step of steps) {
      if (step.toolResults) {
        for (const result of step.toolResults) {
          if (result.result?.error) {
            console.error(
              `Tool ${result.toolName} failed:`,
              result.result.error
            );
          }
        }
      }
    }
    await session.close();
  },
});
```

For more control over error handling, wrap `getTools` and add error-aware `execute` functions using `toVercelTool`:
```typescript
import { toVercelTool } from "@codespar/vercel";

const allTools = await session.tools();

const tools = Object.fromEntries(
  allTools.map((t) => {
    const vercelTool = toVercelTool(session, t);
    return [
      t.name,
      {
        ...vercelTool,
        execute: async (args: unknown) => {
          try {
            return await vercelTool.execute(args);
          } catch (error) {
            return {
              error: error instanceof Error ? error.message : "Tool call failed",
              tool_name: t.name,
            };
          }
        },
      },
    ];
  })
);
```

> [!TIP]
> The Vercel AI SDK will pass error results back to the model automatically. The model can then reason about the error, retry, or ask the user for clarification -- just like with the Claude and OpenAI adapters.
## Best practices

- **Use `maxSteps` wisely.** Start with 5 and adjust based on your use case. Too low and the agent cannot complete multi-step operations. Too high and you risk runaway tool calls and increased latency.
- **Close sessions in `onFinish`.** For streaming routes, always clean up the session in the `onFinish` callback, not after the response is returned.
- **Scope servers narrowly.** Only connect the MCP servers your agent needs. Fewer servers means fewer tools, which improves model accuracy.
- **Use `streamText` for chat UIs.** It provides a better user experience than `generateText` because users see the response as it is generated.
- **Inspect steps for debugging.** The `steps` array from `generateText` contains the full trace of tool calls and results. Use it for logging, debugging, and monitoring.
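As a sketch of the last point, the `steps` trace can be flattened into compact log lines with a small helper. Only the `toolCalls`/`toolName`/`args` fields shown earlier on this page are assumed; `summarizeToolCalls` is an illustrative name, not an SDK export:

```typescript
// Sketch: flatten a generateText `steps` array into readable log lines.
// summarizeToolCalls is an illustrative helper, not part of any SDK.
type ToolCall = { toolName: string; args: unknown };
type Step = { toolCalls?: ToolCall[] };

function summarizeToolCalls(steps: Step[]): string[] {
  return steps.flatMap((step, i) =>
    (step.toolCalls ?? []).map(
      (call) => `step ${i + 1}: ${call.toolName}(${JSON.stringify(call.args)})`
    )
  );
}

// Example trace shaped like the checkout run above
const trace: Step[] = [
  { toolCalls: [{ toolName: "codespar_checkout", args: { amount: 7990 } }] },
  {}, // final text-only step with no tool calls
];
console.log(summarizeToolCalls(trace));
// → ['step 1: codespar_checkout({"amount":7990})']
```

Feeding these lines to your logger gives you a per-request audit trail without dumping full tool results.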
## Next steps
- Sessions -- Session lifecycle and server configuration
- Tools and Meta-Tools -- The 6 meta-tools explained
- Claude Adapter -- Direct Anthropic SDK integration with manual tool loop
- OpenAI Adapter -- Direct OpenAI SDK integration
- MCP -- Use CodeSpar tools in Claude Desktop and Cursor
- Quickstart -- End-to-end setup in under 5 minutes