LangChain
Use @codespar/langchain to give LangChain.js agents commerce capabilities in Latin America.
LangChain Adapter
The @codespar/langchain adapter converts CodeSpar session tools into LangChain-compatible tool objects with Zod schemas and execution methods. It works with any LangChain.js agent — createToolCallingAgent, createReactAgent, or custom loops — using any LLM provider (@langchain/openai, @langchain/anthropic, @langchain/google-genai).
Installation
```bash
# npm
npm install @codespar/sdk @codespar/langchain zod

# pnpm
pnpm add @codespar/sdk @codespar/langchain zod

# yarn
yarn add @codespar/sdk @codespar/langchain zod
```

@codespar/langchain has peer dependencies on @codespar/sdk@^0.2.0 and zod@>=3.0.0. You also need a LangChain LLM package such as @langchain/openai or @langchain/anthropic.
API Reference
getTools(session): Promise<CodeSparLangChainTool[]>
Fetches all tools from the session and converts them to LangChain-compatible tool objects. Each tool has a name, description, Zod schema, and an invoke method that routes through the CodeSpar session.
```ts
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/langchain";

const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

const session = await codespar.sessions.create({
  servers: ["stripe", "mercadopago"],
});

const tools = await getTools(session);

console.log(tools[0].name); // "codespar_checkout"
console.log(tools[0].schema); // ZodObject
```

getTools is async because it calls session.tools() under the hood. Always await it.
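For orientation, the objects getTools returns have roughly the shape sketched below. This is an illustrative interface based on the fields described on this page (name, description, schema, invoke) — not the adapter's actual type declaration, which may differ.

```typescript
// Illustrative sketch of the tool shape described above. Field names follow
// this page's prose; the real CodeSparLangChainTool type may differ.
interface CodeSparLangChainToolSketch {
  name: string;                                         // e.g. "codespar_checkout"
  description: string;                                  // shown to the LLM for tool selection
  schema: unknown;                                      // a Zod object schema in the real adapter
  invoke(args: Record<string, unknown>): Promise<string>;
}

// A hypothetical stub conforming to that shape:
const exampleTool: CodeSparLangChainToolSketch = {
  name: "codespar_checkout",
  description: "Create a checkout with a connected payment provider",
  schema: {},
  invoke: async (args) => JSON.stringify({ ok: true, args }),
};
```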
toLangChainTool(tool, session): CodeSparLangChainTool
Converts a single CodeSpar tool to LangChain format. Use this when you want to filter or transform tools individually.
```ts
import { toLangChainTool } from "@codespar/langchain";

const allTools = await session.tools();
const paymentTools = allTools
  .filter((t) => t.name.includes("pay"))
  .map((t) => toLangChainTool(t, session));
```

handleToolCall(session, toolName, args): Promise<ToolResult>
Convenience executor that routes a tool call through the CodeSpar session. Returns the raw ToolResult object.
```ts
import { handleToolCall } from "@codespar/langchain";

const result = await handleToolCall(session, "codespar_checkout", {
  provider: "stripe",
  amount: 4990,
  currency: "BRL",
});
```

jsonSchemaToZod(schema): z.ZodObject
Utility that converts a JSON Schema object to a Zod object schema. Handles string, number, integer, boolean, array, and object types, plus required fields.
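To make the mapping concrete, here is a dependency-free sketch of the same idea: converting a small JSON Schema fragment into a validator function. It only illustrates the kind of conversion jsonSchemaToZod performs — the real utility returns a z.ZodObject and handles more cases.

```typescript
// Dependency-free sketch: map a JSON Schema subset to a validator function.
// jsonSchemaToZod performs the analogous mapping onto Zod types.
type JsonSchema = {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
};

function schemaToValidator(schema: JsonSchema) {
  return (value: Record<string, unknown>): boolean => {
    // every required field must be present
    for (const key of schema.required ?? []) {
      if (!(key in value)) return false;
    }
    // present fields must match their declared primitive type
    for (const [key, def] of Object.entries(schema.properties)) {
      if (!(key in value)) continue;
      const expected =
        def.type === "integer" ? "number" :
        def.type === "array" ? "object" :
        def.type;
      if (typeof value[key] !== expected) return false;
    }
    return true;
  };
}

const validate = schemaToValidator({
  type: "object",
  properties: { amount: { type: "integer" }, currency: { type: "string" } },
  required: ["amount"],
});

console.log(validate({ amount: 4990, currency: "BRL" })); // true
console.log(validate({ currency: "BRL" }));               // false (missing amount)
```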
Full agent loop
This is a complete example using LangChain's tool-calling agent with OpenAI:
```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/langchain";

const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

async function run(userMessage: string) {
  // 1. Create a session with the servers you need
  const session = await codespar.sessions.create({
    servers: ["stripe", "asaas", "correios"],
  });

  // 2. Get tools in LangChain format
  const tools = await getTools(session);

  // 3. Create the LLM and prompt
  const llm = new ChatOpenAI({ model: "gpt-4o" });
  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      "You are a commerce assistant for a Brazilian e-commerce store. " +
        "Use the available tools to handle payments, invoicing, and shipping. " +
        "Respond in the same language the user writes in.",
    ],
    ["human", "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);

  // 4. Create and run the agent
  const agent = await createToolCallingAgent({ llm, tools, prompt });
  const executor = new AgentExecutor({ agent, tools, maxIterations: 10 });
  const result = await executor.invoke({ input: userMessage });

  // 5. Clean up
  await session.close();

  return result.output;
}

const reply = await run("Generate a boleto for R$250 due in 7 days");
console.log(reply);
```

Handling parallel tool calls
LangChain's AgentExecutor handles parallel tool calls automatically when the LLM returns multiple tool invocations. If you're building a custom loop, use Promise.all:
```ts
const toolCalls = response.tool_calls ?? [];

const results = await Promise.all(
  toolCalls.map(async (tc) => {
    const tool = tools.find((t) => t.name === tc.name);
    if (!tool) throw new Error(`Unknown tool: ${tc.name}`);
    return tool.invoke(tc.args);
  })
);
```

Streaming
LangChain supports streaming via .stream() on the agent executor:
```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { CodeSpar } from "@codespar/sdk";
import { getTools } from "@codespar/langchain";

const codespar = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

async function runStreaming(userMessage: string) {
  const session = await codespar.sessions.create({
    servers: ["stripe", "mercadopago"],
  });

  const tools = await getTools(session);
  const llm = new ChatOpenAI({ model: "gpt-4o", streaming: true });
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a commerce assistant for a Brazilian store."],
    ["human", "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);

  const agent = await createToolCallingAgent({ llm, tools, prompt });
  const executor = new AgentExecutor({ agent, tools });

  const stream = await executor.stream({ input: userMessage });
  for await (const event of stream) {
    if (event.output) {
      process.stdout.write(event.output);
    }
  }

  await session.close();
}

await runStreaming("Create a Pix payment for R$150");
```

Error handling
Wrap tool invocations in try-catch and return errors as structured data so the LLM can reason about failures:
```ts
for (const tc of toolCalls) {
  try {
    const result = await handleToolCall(session, tc.name, tc.args);
    // Feed result back to the agent
  } catch (error) {
    const errorResult = JSON.stringify({
      error: error instanceof Error ? error.message : "Tool call failed",
      tool_name: tc.name,
    });
    // Feed error back to the agent as tool output
  }
}
```

When using AgentExecutor, error handling is built in: the executor catches tool errors and feeds them back to the LLM automatically.
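In a custom loop, "feeding the error back" usually means appending a tool-role message whose content is the structured error string. The sketch below is dependency-free; the message shape is the generic OpenAI-style { role, tool_call_id, content } triple used for illustration, not a specific LangChain class.

```typescript
// Minimal sketch: turn a tool failure into a tool-role message the LLM can
// read on its next turn. The { role, tool_call_id, content } shape is the
// generic OpenAI-style triple, used here for illustration only.
type ToolMessage = { role: "tool"; tool_call_id: string; content: string };

function toolErrorMessage(callId: string, toolName: string, error: unknown): ToolMessage {
  return {
    role: "tool",
    tool_call_id: callId,
    content: JSON.stringify({
      error: error instanceof Error ? error.message : "Tool call failed",
      tool_name: toolName,
    }),
  };
}

const msg = toolErrorMessage("call_1", "codespar_checkout", new Error("card declined"));
console.log(msg.content); // {"error":"card declined","tool_name":"codespar_checkout"}
```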
Best practices
- Always close sessions. Use `try/finally` to ensure `session.close()` runs even if the agent throws an exception.
- Scope servers narrowly. Only connect the MCP servers your agent actually needs. Fewer servers means fewer tools, which improves tool selection accuracy.
- Set `maxIterations`. Pass `maxIterations: 10` to `AgentExecutor` to prevent infinite tool-call loops.
- Use a descriptive system prompt. Tell the LLM what domain it operates in and which tools to prefer.
- Pick the right LLM. GPT-4o and Claude have the best tool-calling accuracy; smaller models may struggle with complex tool schemas.
- Filter tools when possible. If your agent only needs payment tools, filter with `session.findTools("payments")` before converting.
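The first practice can be sketched as a small wrapper. The session object below is a hypothetical stub standing in for a real codespar.sessions.create() result, so the pattern runs without credentials.

```typescript
// try/finally cleanup sketch. `fakeSession` is a hypothetical stub standing
// in for a real CodeSpar session so the pattern runs without credentials.
const fakeSession = {
  closed: false,
  async close() {
    this.closed = true;
  },
};

async function withSession<T>(
  session: typeof fakeSession,
  work: () => Promise<T>
): Promise<T> {
  try {
    return await work();   // run the agent
  } finally {
    await session.close(); // always runs, even if the agent throws
  }
}

// Even when the agent fails, the session is closed:
try {
  await withSession(fakeSession, async () => {
    throw new Error("agent blew up");
  });
} catch {
  // the error still propagates to the caller
}
console.log(fakeSession.closed); // true
```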
Next steps
- Sessions -- Session lifecycle and configuration
- Tools and Meta-Tools -- Understand the 6 meta-tools and routing
- OpenAI Adapter -- Direct OpenAI SDK integration
- Vercel AI SDK -- Framework-agnostic with automatic tool execution
- Quickstart -- End-to-end setup in under 5 minutes