OpenAI
Use @codespar/openai to give GPT agents commerce capabilities in Latin America.
OpenAI Adapter
The @codespar/openai adapter converts CodeSpar session tools into OpenAI's function-calling format and provides helpers to handle tool calls in the response loop. It works with the official openai Node.js SDK and supports both gpt-4o and gpt-4-turbo models.
Installation
npm install @codespar/sdk @codespar/openai openai
pnpm add @codespar/sdk @codespar/openai openai
yarn add @codespar/sdk @codespar/openai openai

> [!NOTE]
> `@codespar/openai` has peer dependencies on `@codespar/sdk@^0.2.0` and `openai@^4.0.0`. Make sure both are installed.
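For reference, a matching `package.json` dependencies block might look like this (the adapter's own version number is illustrative; use whatever version you actually install):

```json
{
  "dependencies": {
    "@codespar/sdk": "^0.2.0",
    "@codespar/openai": "^0.2.0",
    "openai": "^4.0.0"
  }
}
```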
API Reference
getTools(session): Promise<OpenAI.ChatCompletionTool[]>
Fetches all tools from the session and converts them to OpenAI's ChatCompletionTool[] format. Each tool is wrapped as a function type with name, description, and parameters (JSON Schema).
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/openai";
const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });
const session = await codespar.sessions.create({
servers: ["stripe", "mercadopago"],
});
const tools = await getTools(session);
console.log(JSON.stringify(tools[0], null, 2));

{
"type": "function",
"function": {
"name": "codespar_checkout",
"description": "Create a checkout session for a product or service",
"parameters": {
"type": "object",
"properties": {
"provider": {
"type": "string",
"description": "Payment provider to use (e.g., stripe, mercadopago)"
},
"amount": {
"type": "number",
"description": "Amount in cents (e.g., 4990 for R$49.90)"
},
"currency": {
"type": "string",
"description": "ISO 4217 currency code (e.g., BRL)"
},
"description": {
"type": "string",
"description": "Product or service description"
},
"payment_methods": {
"type": "array",
"items": { "type": "string" },
"description": "Accepted payment methods (pix, card, boleto)"
}
},
"required": ["provider", "amount", "currency"]
}
}
}

> [!WARNING]
> `getTools` is async because it calls `session.tools()` under the hood. Always `await` it. Forgetting to `await` will pass a Promise instead of an array to `openai.chat.completions.create`, causing a runtime error.
toOpenAITool(tool): OpenAI.ChatCompletionTool
Converts a single CodeSpar tool definition to OpenAI's ChatCompletionTool format. Use this when you have already fetched tools via session.tools() and want to convert them individually -- for example, to filter or transform tools before passing them to GPT.
import { toOpenAITool } from "@codespar/openai";
const allTools = await session.tools();
// Filter to only shipping tools
const shippingTools = allTools
.filter((t) => t.name === "codespar_ship")
.map(toOpenAITool);
const response = await openai.chat.completions.create({
model: "gpt-4o",
tools: shippingTools,
messages,
});

The function maps CodeSpar's `input_schema` to OpenAI's `parameters` field:
// Input: CodeSpar Tool
{ name: string; description: string; input_schema: JSONSchema }
// Output: OpenAI.ChatCompletionTool
{ type: "function", function: { name: string; description: string; parameters: JSONSchema } }

handleToolCall(session, toolCall): Promise<string>
Executes an OpenAI tool call against the CodeSpar session. It extracts the function name and arguments from the ChatCompletionMessageToolCall, calls session.execute(), and returns the result as a JSON string (ready to be used as a tool message content).
import { handleToolCall } from "@codespar/openai";
// toolCall comes from response.choices[0].message.tool_calls
// {
// id: "call_abc123",
// type: "function",
// function: { name: "codespar_checkout", arguments: "{...}" }
// }
const result = await handleToolCall(session, toolCall);
console.log(result);

"{\"checkout_id\":\"chk_7f8g9h0i1j2k\",\"url\":\"https://checkout.stripe.com/c/pay/cs_live_...\",\"amount\":4990,\"currency\":\"BRL\",\"status\":\"open\"}"

> [!NOTE]
> Unlike the Claude adapter's `handleToolUse`, which returns `unknown`, `handleToolCall` returns a `string`. This matches OpenAI's expectation that tool message `content` is always a string.
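Conceptually, the helper behaves like the sketch below. This is a hypothetical reimplementation for illustration only; `SessionLike` and `ToolCallLike` are stand-ins for the real SDK types:

```typescript
// Illustrative stand-ins for the real @codespar/sdk and openai types.
interface SessionLike {
  execute(name: string, args: Record<string, unknown>): Promise<unknown>;
}

interface ToolCallLike {
  id: string;
  function: { name: string; arguments: string };
}

// Sketch of what handleToolCall does: parse the arguments JSON,
// execute the tool against the session, and return a string result.
async function handleToolCallSketch(
  session: SessionLike,
  toolCall: ToolCallLike
): Promise<string> {
  const args = toolCall.function.arguments
    ? JSON.parse(toolCall.function.arguments)
    : {};
  const result = await session.execute(toolCall.function.name, args);
  // OpenAI requires tool message content to be a string.
  return typeof result === "string" ? result : JSON.stringify(result);
}
```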
Full agent loop
This is a complete, end-to-end example of a GPT agent that processes commerce operations in Latin America:
import OpenAI from "openai";
import { CodeSpar } from "@codespar/sdk";
import { getTools, handleToolCall } from "@codespar/openai";
const openai = new OpenAI();
const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });
async function run(userMessage: string) {
  // 1. Create a session with the servers you need
  const session = await codespar.sessions.create({
    servers: ["stripe", "asaas", "correios"],
  });
  try {
    // 2. Get tools in OpenAI format
    const tools = await getTools(session);
    // 3. Build the initial messages
    const messages: OpenAI.ChatCompletionMessageParam[] = [
      {
        role: "system",
        content:
          "You are a commerce assistant for a Brazilian e-commerce store. " +
          "Use the available tools to handle payments, invoicing, and shipping. " +
          "Always confirm amounts and details before processing payments. " +
          "Respond in the same language the user writes in.",
      },
      { role: "user", content: userMessage },
    ];
    // 4. First completion
    let response = await openai.chat.completions.create({
      model: "gpt-4o",
      tools,
      messages,
    });
    let message = response.choices[0].message;
    // 5. Tool call loop, capped to prevent infinite tool-call cycles
    const MAX_ITERATIONS = 10;
    let iterations = 0;
    while (
      message.tool_calls &&
      message.tool_calls.length > 0 &&
      iterations < MAX_ITERATIONS
    ) {
      // Add the assistant message that carries the tool calls
      messages.push(message);
      // Execute each tool call and add its result as a tool message
      for (const toolCall of message.tool_calls) {
        let content: string;
        try {
          content = await handleToolCall(session, toolCall);
        } catch (error) {
          content = JSON.stringify({
            error: error instanceof Error ? error.message : "Tool call failed",
          });
        }
        messages.push({
          role: "tool",
          tool_call_id: toolCall.id,
          content,
        });
      }
      // Next completion
      response = await openai.chat.completions.create({
        model: "gpt-4o",
        tools,
        messages,
      });
      message = response.choices[0].message;
      iterations++;
    }
    return message.content ?? "";
  } finally {
    // 6. Clean up -- runs even if a completion or tool call throws
    await session.close();
  }
}
// Usage
const reply = await run("Generate a boleto for R$250 due in 7 days");
console.log(reply);

Handling parallel tool calls
GPT-4o may return multiple tool calls in a single response. The OpenAI protocol requires you to return results for all tool calls before making the next completion request:
// GPT returns multiple tool calls
// message.tool_calls = [
// { id: "call_1", function: { name: "codespar_checkout", arguments: "..." } },
// { id: "call_2", function: { name: "codespar_notify", arguments: "..." } }
// ]
messages.push(message);
// Execute all in parallel for better performance
const results = await Promise.all(
message.tool_calls.map(async (toolCall) => {
const content = await handleToolCall(session, toolCall);
return { toolCall, content };
})
);
// Add all results to messages
for (const { toolCall, content } of results) {
messages.push({
role: "tool",
tool_call_id: toolCall.id,
content,
});
}

> [!WARNING]
> You must return a tool result for every tool call in the response. Omitting a result will cause OpenAI to return a 400 error on the next completion request.
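One way to satisfy that requirement even when individual calls fail is to catch errors per call inside the `Promise.all`, so every tool call still yields a tool message. A sketch, where `ToolCallLike` is an illustrative stand-in for OpenAI's tool-call type and `exec` stands in for `handleToolCall`:

```typescript
// Illustrative stand-in for OpenAI.ChatCompletionMessageToolCall.
interface ToolCallLike {
  id: string;
  function: { name: string; arguments: string };
}

// Run every tool call in parallel; a failing call resolves to a structured
// error payload instead of rejecting the whole Promise.all.
async function executeAll(
  calls: ToolCallLike[],
  exec: (call: ToolCallLike) => Promise<string>
): Promise<{ tool_call_id: string; content: string }[]> {
  return Promise.all(
    calls.map(async (call) => {
      try {
        return { tool_call_id: call.id, content: await exec(call) };
      } catch (error) {
        return {
          tool_call_id: call.id,
          content: JSON.stringify({
            error: error instanceof Error ? error.message : "Tool call failed",
          }),
        };
      }
    })
  );
}
```

Each entry can then be pushed as a `{ role: "tool", tool_call_id, content }` message, guaranteeing exactly one result per call.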
Streaming
The adapter works with streaming responses. Use openai.chat.completions.create with stream: true, accumulate the streamed chunks, then handle tool calls after the stream completes:
import OpenAI from "openai";
import { CodeSpar } from "@codespar/sdk";
import { getTools, handleToolCall } from "@codespar/openai";
const openai = new OpenAI();
const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });
async function runStreaming(userMessage: string) {
const session = await codespar.sessions.create({
servers: ["stripe", "mercadopago"],
});
const tools = await getTools(session);
const messages: OpenAI.ChatCompletionMessageParam[] = [
{
role: "system",
content: "You are a commerce assistant for a Brazilian store.",
},
{ role: "user", content: userMessage },
];
let continueLoop = true;
while (continueLoop) {
const stream = await openai.chat.completions.create({
model: "gpt-4o",
tools,
messages,
stream: true,
});
// Accumulate the streamed response
let assistantContent = "";
const toolCalls: OpenAI.ChatCompletionMessageToolCall[] = [];
const toolCallArgs: Record<number, string> = {};
for await (const chunk of stream) {
const delta = chunk.choices[0]?.delta;
if (delta?.content) {
process.stdout.write(delta.content);
assistantContent += delta.content;
}
if (delta?.tool_calls) {
for (const tc of delta.tool_calls) {
if (tc.id) {
toolCalls[tc.index] = {
id: tc.id,
type: "function",
function: { name: tc.function?.name ?? "", arguments: "" },
};
}
if (tc.function?.arguments) {
toolCallArgs[tc.index] =
(toolCallArgs[tc.index] ?? "") + tc.function.arguments;
}
}
}
}
// Finalize tool call arguments
for (const [index, args] of Object.entries(toolCallArgs)) {
if (toolCalls[Number(index)]) {
toolCalls[Number(index)].function.arguments = args;
}
}
const validToolCalls = toolCalls.filter(Boolean);
if (validToolCalls.length > 0) {
messages.push({
role: "assistant",
content: assistantContent || null,
tool_calls: validToolCalls,
});
for (const toolCall of validToolCalls) {
let content: string;
try {
content = await handleToolCall(session, toolCall);
} catch (error) {
content = JSON.stringify({
error: error instanceof Error ? error.message : "Tool call failed",
});
}
messages.push({
role: "tool",
tool_call_id: toolCall.id,
content,
});
}
} else {
continueLoop = false;
}
}
await session.close();
}
await runStreaming("Create a Pix payment for R$150");

Error handling
Tool execution errors
Wrap handleToolCall in a try-catch and return errors as tool message content. This lets GPT reason about the failure and decide what to do next:
for (const toolCall of message.tool_calls) {
let content: string;
try {
content = await handleToolCall(session, toolCall);
} catch (error) {
content = JSON.stringify({
error: error instanceof Error ? error.message : "Tool call failed",
tool_name: toolCall.function.name,
});
}
messages.push({
role: "tool",
tool_call_id: toolCall.id,
content,
});
}

> [!TIP]
> Returning errors as tool results (instead of throwing) lets GPT reason about the failure. It may retry with different parameters, ask the user for clarification, or suggest an alternative approach.
API errors
Handle OpenAI-specific errors like rate limits and context length:
for (let attempt = 0; attempt < 3; attempt++) {
  try {
    response = await openai.chat.completions.create({
      model: "gpt-4o",
      tools,
      messages,
    });
    break; // success -- stop retrying
  } catch (error) {
    if (error instanceof OpenAI.RateLimitError && attempt < 2) {
      // Exponential backoff: wait 1s, then 2s, before retrying
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    } else if (error instanceof OpenAI.BadRequestError) {
      // Context length exceeded -- truncate messages before retrying
      console.error("Context too long:", error.message);
      throw error;
    } else {
      throw error; // don't swallow unknown errors
    }
  }
}

Best practices
- **Always close sessions.** Use `try/finally` to ensure `session.close()` runs even if the loop throws an exception.
- **Scope servers narrowly.** Only connect the MCP servers your agent actually needs. Fewer servers means fewer tools, which improves GPT's tool selection accuracy.
- **Use `gpt-4o` for tool calling.** It has the best function-calling accuracy. `gpt-4-turbo` works but may be less reliable with complex tool schemas.
- **Set a descriptive system prompt.** Tell GPT what domain it operates in and what tools to prefer. This reduces unnecessary `codespar_discover` calls.
- **Return errors as tool results.** Never let `handleToolCall` exceptions crash the loop. Return them as structured JSON so GPT can self-correct.
- **Limit loop iterations.** Add a maximum iteration count (10 is a good default) to prevent infinite tool-call loops.
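The first practice can be sketched as a small reusable helper. The names here are illustrative, not part of the adapter's API:

```typescript
// Run an agent body against a session and guarantee cleanup: the session
// is always closed, even when the body throws.
async function withSession<T>(
  create: () => Promise<{ close(): Promise<void> }>,
  body: (session: { close(): Promise<void> }) => Promise<T>
): Promise<T> {
  const session = await create();
  try {
    return await body(session);
  } finally {
    await session.close();
  }
}
```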
Next steps
- Sessions -- Session lifecycle and configuration
- Tools and Meta-Tools -- Understand the 6 meta-tools and routing
- Claude Adapter -- If you prefer Anthropic models
- Vercel AI SDK -- Framework-agnostic with automatic tool execution
- Quickstart -- End-to-end setup in under 5 minutes