
Building the Largest MCP Catalog for Latin America

The MCP ecosystem has 5,800+ servers and zero covering Pix, CFDI, Melhor Envio, SPEI, or any LatAm commercial API. We've built dozens of servers across 4 countries: Brazil, Mexico, Argentina, and Colombia. Payments, fiscal, logistics, messaging, banking, ERP. Here's what we learned.
Fabiano Cruz
Co-founder, CodeSpar
2026.03.18
26 min
50+ MCP servers published
400+ tools total
4 countries covered

We started in January 2026 with a simple question: what commercial APIs do businesses in Latin America actually use every day? Not what's trendy. Not what's well-documented. What do real businesses, processing real transactions, depend on?

The answer became the catalog. And the process of building it taught us more about the state of LatAm fintech infrastructure than any analyst report ever could.

As of April 2026, we have published over 50 MCP servers covering 9 verticals across 4 countries. These servers expose more than 400 tools that let AI agents perform real commerce operations - charge customers, generate invoices, ship products, send WhatsApp notifications - in a region that processes over $200 billion in digital payments annually but has near-zero AI agent infrastructure.

Why MCP, not REST or GraphQL

Before we built a single server, we had to make an architectural decision that would define everything downstream: what protocol should AI agents use to interact with commerce APIs?

The candidates were obvious: REST, GraphQL, or MCP (Model Context Protocol). We chose MCP, and it wasn't a close call.

REST is designed for human developers, not AI agents. A REST API gives you endpoints, HTTP methods, request/response schemas, and authentication headers. A human developer reads the docs, writes a client, handles errors, and ships. An AI agent, on the other hand, needs to discover available operations dynamically, understand parameter schemas at runtime, and execute tools through a standardized protocol. REST doesn't provide any of this. Every REST API is a snowflake. Every integration is bespoke.

GraphQL solves the wrong problem. GraphQL is excellent for frontend developers who need to fetch exactly the data they want with a single query. But AI agents don't query data - they execute operations. An agent doesn't want to compose a GraphQL mutation to create a Pix payment. It wants to call a tool called "create_pix_payment" with typed parameters and get a structured result. GraphQL adds complexity (schema introspection, query parsing, resolver chains) without adding value for tool-use workflows.

MCP was designed for exactly this use case. Model Context Protocol defines a standard for exposing tools to AI models. Each tool has a name, a description (that the model reads to understand what it does), and a JSON Schema for input parameters (that the model uses to generate valid arguments). The model discovers tools, selects the right one, generates arguments, and the MCP client executes the call. It's the right abstraction for the right problem.

protocol-comparison
// REST: Human developer writes this
const response = await fetch("https://api.asaas.com/v3/payments", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "access_token": process.env.ASAAS_API_KEY,
  },
  body: JSON.stringify({
    customer: "cus_abc123",
    billingType: "PIX",
    value: 150.00,
    dueDate: "2026-04-20",
  }),
});
// Developer must know: endpoint, auth pattern, field names,
// date format, value format (decimal vs centavos), error codes

// MCP: AI agent calls this
// Tool: asaas_create_pix_charge
// Input: { amount: 15000, payer_cpf: "123.456.789-00" }
// The MCP server handles: auth, field mapping, value conversion,
// error normalization, retry logic

The DX difference is not marginal. It's categorical. With REST, an agent integration with Asaas requires the agent to understand Asaas's specific API design, authentication model, error format, and field naming conventions. With MCP, the agent calls a tool with a descriptive name and typed parameters. The MCP server handles everything else.

We measured the integration effort for a payment operation across all three approaches:

dx-comparison
Integration effort: "Create a Pix payment"

REST API (direct):
  - Read API docs: 30-60 min
  - Write auth handler: 15 min
  - Write request builder: 20 min
  - Write error handler: 30 min
  - Write response parser: 15 min
  - Write tests: 45 min
  - Total: 2.5-3 hours per provider
  - Per 16 payment providers: 40-48 hours

MCP Server (our servers):
  - npm install @codespar/mcp-asaas
  - Total: 30 seconds
  - Per 16 payment providers: via preset, 30 seconds total

CodeSpar SDK (meta tools):
  - npm install @codespar/sdk
  - session.tools() → codespar_pay
  - Total: 30 seconds, all providers included

The MCP server doesn't just wrap the API. It encodes the domain knowledge that makes the API usable. That's the value that took months to build and takes seconds to consume.

The breakdown by category

Category              Servers  Tools  Example APIs
Agentic Protocols     4        59     Stripe ACP, Google UCP, AP2, x402
Payments              16       145    Zoop, Asaas, PagSeguro, Cielo, Mercado Pago
Communication         5        59     Z-API, Evolution API, Zenvia, Take Blip
Crypto/Stablecoins    5        50     Circle, Mercado Bitcoin, Bitso
ERP                   3        35     Omie, Bling, Tiny
E-commerce/Logistics  3        44     Melhor Envio, VTEX, Correios
Fiscal/Invoicing      3        33     Nuvem Fiscal, Conta Azul, Focus NFe
Banking               2        23     Stark Bank, Open Finance
Identity              1        15     BrasilAPI (free, no auth)
Mexico                6        -      FacturAPI, Conekta, SPEI, Skydropx, Bind ERP, Belvo
Argentina             5        -      AFIP, Andreani, Tienda Nube, Colppy, BCRA
Colombia              4        -      Wompi, Siigo, Nequi, Coordinadora

Nine verticals. Four countries. Over 400 tools. Every single one MIT licensed, published on npm, testable in isolation.

The build process: from REST API to MCP server

Building an MCP server for a LatAm commercial API is not a weekend hackathon project. Each server goes through a rigorous build process that typically takes 3-5 days for a simple API (BrasilAPI, 15 tools, no auth) to 2-3 weeks for a complex one (Nuvem Fiscal, 12 tools, certificate-based auth, SEFAZ XML schemas, state-specific tax rules).

Step 1: API audit

We start by auditing the provider's REST API. Not just reading the docs - actually hitting every endpoint in a sandbox environment. We document every field, every error code, every undocumented behavior. This step alone takes 1-3 days because LatAm API documentation is, to put it diplomatically, inconsistent.

Asaas has excellent documentation. Zoop has good documentation with some gaps in error codes. Nuvem Fiscal has documentation that covers the happy path but omits half the SEFAZ rejection codes. Cielo has documentation in Portuguese only, with some endpoints documented in PDF files rather than on the website. PagSeguro has documentation spread across three different developer portals because they've rebranded twice.

Step 2: Tool design

We map API endpoints to MCP tools. This is not a 1:1 mapping. A single MCP tool might combine multiple API calls (creating a customer and then charging them), and a single API endpoint might spawn multiple tools (a Pix payment and a boleto payment use the same endpoint but have different parameter requirements and different behaviors).
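To make the composite case concrete, here's a sketch of a tool handler that hides two provider calls behind one tool. The names (`createCustomer`, `createCharge`, `chargeNewCustomer`) are illustrative stand-ins, not our actual Asaas client:

```javascript
// Hypothetical composite tool: one MCP tool, two provider API calls.
// The two inner functions stand in for real HTTP calls to the provider.
async function createCustomer(api, { name, cpf }) {
  // In production: POST to the provider's /customers endpoint.
  return { id: `cus_${cpf.slice(0, 4)}`, name };
}

async function createCharge(api, { customerId, amountCentavos }) {
  // In production: POST to the provider's /payments endpoint.
  return { id: "pay_001", customer: customerId, amount: amountCentavos, status: "pending" };
}

// The single tool the agent actually sees, e.g. "charge_new_customer".
async function chargeNewCustomer(api, { name, cpf, amountCentavos }) {
  const customer = await createCustomer(api, { name, cpf });
  const charge = await createCharge(api, {
    customerId: customer.id,
    amountCentavos,
  });
  // One structured result for the agent, hiding the two-step sequence.
  return { customer_id: customer.id, charge_id: charge.id, status: charge.status };
}
```

The agent never learns that "charge a new customer" is two requests under the hood; it calls one tool and gets one structured result.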

The tool design is where domain knowledge matters most. We name tools so that an AI model can understand what they do from the name alone. We write descriptions that tell the model when to use this tool versus a similar one. We design parameter schemas that are strict enough to prevent invalid requests but flexible enough to handle the variations between providers.

tool-design
// BAD: mirrors the API
{
  name: "post_v3_payments",
  description: "Creates a payment via POST /v3/payments",
  parameters: { /* mirrors API body schema exactly */ }
}

// GOOD: designed for AI agents
{
  name: "create_pix_charge",
  description: "Creates a Pix charge and returns a QR code for payment.
    Use this when the customer wants to pay via Pix (instant bank transfer).
    The QR code expires after the specified duration (default: 30 minutes).
    Amount is in centavos (BRL). Example: 15000 = R$150.00.",
  parameters: {
    amount: { type: "integer", description: "Amount in centavos" },
    payer_cpf: { type: "string", description: "Payer CPF (11 digits)" },
    payer_name: { type: "string", description: "Payer full name" },
    expiry_minutes: { type: "integer", default: 30 },
  }
}

The description is written for the model, not for a human developer reading API docs. It tells the model exactly when this tool is appropriate, what the parameters mean in context, and what to expect in the response. These descriptions are the most important part of the MCP server. They're the interface between human intent and machine execution.

Step 3: Auth abstraction

Every provider has its own authentication model. This is, without exaggeration, the hardest part of building MCP servers for LatAm APIs.

auth-patterns
// Auth patterns across our 50+ servers:

OAuth2 + PKCE:     Stripe ACP, Google UCP, Mercado Pago, VTEX
OAuth2 (basic):    Cielo, Bling
API Key (header):  Asaas, Zoop, Z-API, Melhor Envio, Focus NFe
API Key (query):   PagSeguro (legacy), BrasilAPI (optional)
API Key + Secret:  Conekta, Wompi
HMAC-SHA256:       Stark Bank (signed requests)
Certificate (A1):  Nuvem Fiscal (NF-e), SEFAZ direct
Certificate (A3):  NF-e in some states (hardware token)
Basic Auth:        Omie, Tiny, Bind ERP
Bearer Token:      Circle, Bitso, Evolution API
mTLS:              Open Finance Brasil (FAPI compliance)
Custom:            SPEI (STP digital signature), AFIP (WSAA)

The MCP server must abstract all of this behind the standard MCP auth flow. When a developer connects an MCP server to their agent, they should provide credentials once and never think about auth again. Token refresh, certificate rotation, HMAC signing - all handled transparently.

For OAuth2 providers, we implement the full authorization code flow with PKCE, including token storage, refresh token rotation, and graceful degradation when refresh fails. For certificate-based auth (the Brazilian NF-e case), we accept PFX/P12 files and handle the SSL context internally. For HMAC-signed providers like Stark Bank, we compute the signature on every request using the provider's documented algorithm.

Step 4: Error normalization

This is where the real domain knowledge lives. Every provider returns errors differently, and the error messages range from helpful to cryptic to outright misleading.

error-examples
// Asaas error for invalid CPF:
{ "errors": [{ "code": "invalid_value", "description": "CPF inválido" }] }
// Clear enough.

// Zoop error for deactivated Pix key:
{ "error": { "status_code": 422, "message": "pix_key_not_found" } }
// Misleading. The key exists but was deactivated by the bank.

// SEFAZ rejection for suspended state registration:
{ "cStat": "539", "xMotivo": "Rejeição: Duplicidade de NF-e..." }
// Wrong message. Code 539 actually means the CNPJ's state
// registration (IE) is suspended, not that the NF-e is duplicated.
// The real message depends on the state.

// Cielo error for expired card:
{ "Code": 77, "Message": "Cartão cancelado" }
// "Cancelado" here means expired, not cancelled. The card is
// still valid but past its expiration date.

// Our normalized MCP error:
{
  "error": "CARD_EXPIRED",
  "message": "The card has expired. Ask the customer for updated
    card details or suggest an alternative payment method.",
  "provider_code": 77,
  "recoverable": true,
  "suggested_action": "request_new_card"
}

We normalize every error into a structured format that tells the AI agent not just what went wrong, but what to do about it. The recoverable flag tells the agent whether to retry or escalate. The suggested_action field gives the agent a concrete next step. This turns cryptic provider errors into actionable agent intelligence.
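A minimal version of that mapping layer is a per-provider lookup table. The entries below just echo the examples above; real maps are much larger, and names like `ERROR_MAPS` are ours for illustration:

```javascript
// Per-provider error maps, keyed by the provider's raw code.
// Entries mirror the examples in the text; production maps are far larger.
const ERROR_MAPS = {
  cielo: {
    77: {
      error: "CARD_EXPIRED",
      message: "The card has expired. Ask the customer for updated card details.",
      recoverable: true,
      suggested_action: "request_new_card",
    },
  },
  sefaz: {
    539: {
      error: "STATE_REGISTRATION_SUSPENDED",
      message: "The CNPJ's state registration (IE) is suspended.",
      recoverable: false,
      suggested_action: "escalate_to_human",
    },
  },
};

function normalizeError(provider, rawCode) {
  const mapped = ERROR_MAPS[provider]?.[rawCode];
  if (mapped) return { ...mapped, provider_code: rawCode };
  // Unknown codes still come back structured, just unclassified.
  return {
    error: "UNKNOWN_PROVIDER_ERROR",
    message: `Provider ${provider} returned unmapped code ${rawCode}.`,
    provider_code: rawCode,
    recoverable: false,
    suggested_action: "escalate_to_human",
  };
}
```

The fallback branch matters as much as the table: even a code we've never seen reaches the agent as a structured, actionable error rather than raw provider output.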

Every server we publish encodes domain knowledge that would take a developer weeks to acquire independently. That's the value. Not the API wrapper. The knowledge embedded in it.

The registry: how servers are discovered

All published MCP servers are indexed in a registry JSON that describes each server's capabilities, auth requirements, geographic coverage, and tool manifest. The registry is the source of truth that the SDK uses to build presets and the dashboard uses to display available integrations.

registry-schema
{
  "id": "asaas",
  "name": "Asaas",
  "package": "@codespar/mcp-asaas",
  "version": "0.2.1",
  "protocol": "mcp-streamable-http",
  "category": "payments",
  "countries": ["BR"],
  "auth": {
    "type": "api_key",
    "location": "header",
    "header_name": "access_token",
    "sandbox_url": "https://sandbox.asaas.com/api/v3",
    "production_url": "https://api.asaas.com/v3"
  },
  "tools": [
    {
      "name": "create_pix_charge",
      "description": "Creates a Pix charge...",
      "input_schema": { /* JSON Schema */ }
    },
    // ... 17 more tools
  ],
  "health": {
    "uptime_30d": 0.9987,
    "avg_latency_ms": 340,
    "last_checked": "2026-04-15T00:00:00Z"
  }
}

The registry is updated automatically by a CI pipeline that runs every 6 hours. It pings every server's health endpoint, updates uptime metrics, and publishes the updated registry to our CDN. When the SDK creates a session, it fetches the latest registry to determine which servers are available and healthy.
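On the consuming side, the "available and healthy" check reduces to a pure filter over registry entries. A sketch, with illustrative thresholds (the SDK's real defaults may differ):

```javascript
// Hypothetical health filter over registry entries. The 99.5% uptime
// floor and 12-hour staleness cutoff are illustrative, not the SDK's
// actual defaults.
function filterHealthy(registry, { minUptime = 0.995, maxAgeHours = 12, now = Date.now() } = {}) {
  return registry.filter((server) => {
    const { uptime_30d, last_checked } = server.health;
    const ageHours = (now - Date.parse(last_checked)) / 3_600_000;
    return uptime_30d >= minUptime && ageHours <= maxAgeHours;
  });
}
```

The staleness check is deliberate: a server whose health data hasn't been refreshed is treated as unknown, not as healthy.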

Geographic strategy: why Brazil first

We could have started anywhere. We chose Brazil for three reasons, and the order matters.

1. Market size demands local infrastructure. Brazil is the largest economy in Latin America. $2.1 trillion GDP. 215 million people. Over 150 million Pix users (as of 2025, per Central Bank data). The country processed R$17.2 trillion in Pix transactions in 2024. Any commerce infrastructure play in LatAm that doesn't start with Brazil is building on the wrong foundation.

2. Regulatory complexity is a moat when you solve it. Brazil has the most complex fiscal system in Latin America. NF-e (product invoices), NFS-e (service invoices), NFC-e (consumer invoices), CT-e (transport invoices), MDF-e (transport manifests). Each has different XML schemas, different SEFAZ endpoints per state, different digital certificate requirements. This complexity is why no one has built MCP servers for Brazilian fiscal APIs. It's also why whoever does it first creates a durable competitive advantage. We did it first.

3. Pix is the most advanced instant payment system in the world. Not hyperbole. Pix processes more transactions than Visa and Mastercard combined in Brazil. It's free for individuals, nearly free for merchants, settles in under 10 seconds, and works 24/7/365. Any AI agent doing commerce in Brazil must support Pix natively. There's no MCP server for Pix on any public registry. We built servers for every major Pix provider: Asaas, Zoop, PagSeguro, Cielo, Mercado Pago, and Stark Bank.

Then Mexico

Mexico is the second-largest economy in LatAm ($1.3 trillion GDP) and the fastest-growing e-commerce market. CFDI (Comprobantes Fiscales Digitales por Internet) is Mexico's mandatory electronic invoicing system - every business must issue CFDIs for every transaction. SPEI (Sistema de Pagos Electrónicos Interbancarios) is Mexico's real-time payment rail, similar to Pix but older and with different technical requirements.

We've published 6 MCP servers for Mexico: FacturAPI (CFDI generation), Conekta (payments), SPEI via STP (real-time transfers), Skydropx (logistics), Bind ERP (enterprise resource planning), and Belvo (open banking). These cover the core commerce stack for any business operating in Mexico.

Then Argentina and Colombia

Argentina presents unique challenges: capital controls, multiple exchange rates, and AFIP (the federal tax authority) requiring electronic invoicing for all transactions. We built 5 servers covering AFIP integration, Andreani logistics, Tienda Nube e-commerce, Colppy accounting, and BCRA (Central Bank) reference data.

Colombia is the third-largest economy in the region and has been aggressively modernizing its payment infrastructure. Wompi (payments), Siigo (ERP/invoicing), Nequi (digital wallet), and Coordinadora (logistics) form the core of our Colombian coverage.

Country expansion roadmap

Chile, Peru, and Ecuador are on the roadmap for H2 2026. Chile has Transbank (payments), SII (electronic invoicing), and Chilexpress (logistics). Peru has Niubiz (payments), SUNAT (invoicing), and Olva Courier (logistics). We're looking for contributors who know these markets. If you've built integrations with these APIs, we want to talk.

What makes a production-ready MCP server

We reject the premise that an MCP server is just a wrapper around a REST API. A wrapper calls the API and returns the response. A production-ready MCP server is an intelligent intermediary that handles everything between the agent's intent and the provider's response.

Our quality standards for shipping a server to npm:

1. Complete error normalization. Every known error code from the provider is mapped to a structured error with a human-readable message, a machine-readable code, a recoverability flag, and a suggested action. We maintain error code databases for each provider, updated from production incidents.

2. Retry with exponential backoff. Transient failures (429, 502, 503, 504) are retried automatically with exponential backoff and jitter. The default is 3 retries with a base delay of 200ms. Idempotent operations (creating a payment with an idempotency key) are safe to retry. Non-idempotent operations (refunding a payment) are not retried automatically.

3. Auth lifecycle management. OAuth tokens are refreshed proactively (before expiry, not after failure). API keys are validated on server startup. Certificates are checked for expiration on every request. If auth fails despite refresh attempts, the server returns a structured error telling the agent to re-authenticate, not a generic 401.

4. Input validation before API call. CPF/CNPJ format validation. Amount range checks (negative amounts, amounts exceeding provider limits). Required field validation. Date format normalization. This prevents unnecessary API calls and gives the agent immediate, clear error messages instead of waiting for the provider to reject the request.

5. Response normalization. Provider responses are transformed into consistent, predictable structures. Amounts are always in centavos (integer). Dates are always ISO 8601. Status codes are always from a standardized enum (pending, confirmed, failed, refunded). The agent should never need to know which provider was called to understand the response.

6. Sandbox and production parity. Every server works in both sandbox and production mode with the same tool interface. The only difference is the base URL and credentials. Developers can test against sandbox environments with real tool calls and real responses (including realistic error scenarios) before going to production.
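The retry policy in point 2 can be sketched in a few lines. The delays and transient-status list match the numbers above; the actual implementation handles more, such as the idempotency-key distinction, than this sketch shows:

```javascript
// Sketch of point 2: transient HTTP failures retried up to 3 times
// with exponential backoff plus jitter.
const TRANSIENT = new Set([429, 502, 503, 504]);

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, { retries = 3, baseDelayMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retriable = TRANSIENT.has(err.status) && attempt < retries;
      if (!retriable) throw err;
      // Exponential backoff: 200ms, 400ms, 800ms... plus up to 100ms jitter.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await sleep(delay);
    }
  }
}
```

Non-transient errors (or exhausted retries) propagate immediately, which is what lets the error-normalization layer take over.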
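Point 4's CPF check is a concrete example of pre-call validation. The check-digit algorithm (mod-11 over weighted digits) is public and provider-independent:

```javascript
// CPF check-digit validation (standard mod-11 scheme), run before any
// API call so the agent gets an immediate, clear error.
function isValidCpf(cpf) {
  const digits = cpf.replace(/\D/g, "");
  if (digits.length !== 11) return false;
  if (/^(\d)\1{10}$/.test(digits)) return false; // reject e.g. 111.111.111-11

  // Check digit over the first `len` digits, weights len+1 down to 2.
  const checkDigit = (len) => {
    let sum = 0;
    for (let i = 0; i < len; i++) {
      sum += Number(digits[i]) * (len + 1 - i);
    }
    const r = (sum * 10) % 11;
    return r === 10 ? 0 : r;
  };

  return checkDigit(9) === Number(digits[9]) && checkDigit(10) === Number(digits[10]);
}
```

Rejecting a malformed CPF locally costs microseconds; sending it to the provider costs a round trip and a cryptic error.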

quality-checklist
// Our internal checklist before npm publish:
□ All tools have descriptions written for AI models (not humans)
□ All error codes documented and normalized
□ Retry logic with exponential backoff + jitter
□ Auth refresh handles token expiration gracefully
□ Input validation prevents invalid API calls
□ Response normalization to standard schema
□ Sandbox mode with realistic test data
□ Integration tests against sandbox API
□ Health check endpoint for registry monitoring
□ npm package size < 50KB (no bundled dependencies)
□ Zero runtime dependencies beyond @modelcontextprotocol/sdk
□ MIT LICENSE file present
□ README with tool list, auth setup, and examples

What we learned building this catalog

API documentation is rarely enough. Most LatAm commercial APIs have documentation that's incomplete, outdated, or only available in Portuguese/Spanish. The real behavior of the API is discovered through testing, production errors, and conversations with the provider's engineering team. We've filed over 40 documentation bug reports to providers since January 2026. About half were acknowledged and fixed. The other half were met with silence.

Auth is the hardest part. Each API has its own auth model. OAuth2 with PKCE. API keys in headers. API keys in query params. HMAC-signed requests. Certificate-based auth (A1/A3 for NF-e). The MCP server must abstract all of this behind the standard MCP auth flow. We spent more time on auth handling than on tool implementation across the entire catalog.

Error handling is where domain knowledge lives. A SEFAZ rejection code 539 means the CNPJ's state registration is suspended. A Zoop error 422 with code "pix_key_not_found" means the Pix key was registered but deactivated. A Cielo code 77 says "cancelled" but means "expired." These aren't in the docs. They're in the code of people who've processed millions of transactions.

Provider sandbox quality varies wildly. Asaas has an excellent sandbox that mirrors production behavior closely. Zoop's sandbox returns realistic errors but doesn't support all payment methods. Cielo's sandbox requires a special test card number that's documented in a PDF buried three levels deep in their developer portal. PagSeguro's sandbox was down for 11 days in February 2026. Nuvem Fiscal's sandbox connects to SEFAZ homologation (the tax authority's test environment), which has its own availability issues independent of Nuvem Fiscal.

The 80/20 rule is real but the last 20% matters. Building the first 80% of a server (happy path for the main tools) takes 2-3 days. The last 20% (edge cases, error handling, auth refresh, retry logic) takes 2-3 weeks. The temptation is to ship the 80% version and iterate. We tried this with our first 5 servers. Every single one came back with production issues within a week. Now we ship 100% or we don't ship.

The flywheel

The catalog is not a static artifact. It's a flywheel.

More servers mean more developers using the SDK. More developers mean more feedback - bug reports, feature requests, edge cases we didn't anticipate. More feedback means better servers. Better servers attract more developers. The flywheel spins.

flywheel
More MCP servers published on npm
  → More developers install @codespar/sdk
    → More agents process real transactions
      → More edge cases discovered in production
        → More bug fixes and error handling improvements
          → Higher quality servers
            → More developers trust the SDK
              → More demand for new servers
                → We build more servers
                  → (repeat)

We've seen this play out concretely. In February, a developer in São Paulo reported that our Asaas server didn't handle split payments (where a payment is divided between a marketplace and a seller). We added split payment support in 3 days. That developer then requested Zoop support because their marketplace uses Zoop for sub-merchant onboarding. We built the Zoop server in 10 days. Two other developers saw the Zoop server and requested Cielo and PagSeguro. The flywheel is real.

As of April 2026, we're receiving an average of 4 new server requests per week through GitHub issues. We can ship approximately 2 servers per week with our current team. The demand is outpacing supply, which is exactly the dynamic you want in an open source project.

The catalog is alive. Every server we publish makes the next server easier to build, and every developer who uses a server makes every other server better.

How to use them

Every server is published on npm under the @codespar scope:

install
# Install a single server
npm install @codespar/mcp-zoop

# Or use the SDK to get all servers via preset
npm install @codespar/sdk

Each server implements the MCP Streamable HTTP protocol. You can use them with any MCP-compatible client, any AI framework, or through the CodeSpar SDK's meta tool layer.

sdk-usage
import { CodeSpar } from "@codespar/sdk"; // v0.2.0

const cs = new CodeSpar({ apiKey: process.env.CODESPAR_API_KEY });

// Option 1: Use a preset (recommended)
const session = await cs.session("brazil-full");
const tools = await session.tools(); // 6 meta tools, 50+ servers

// Option 2: Use specific servers
const customSession = await cs.session({
  servers: ["asaas", "nuvem-fiscal", "melhor-envio"],
});

// Option 3: Connect directly via MCP
const { url, headers } = session.mcp;
// Use with Claude Desktop, Cursor, or any MCP client

They're MIT licensed. Every line is readable. Every tool is testable. Use them, fork them, improve them, and tell us what's broken.

The catalog is alive

The catalog is growing. We add servers based on developer requests and market demand. If there's a LatAm commercial API your agents need access to, open an issue on GitHub or email us. We prioritize based on how many developers ask. The most-requested server that doesn't exist yet is Mercado Libre (the largest e-commerce platform in LatAm). It's being built now.