
BatchData MCP Server - Developer Integration Guide

Written by Charles Parra


Target Audience: Software developers, AI application builders, integration engineers

Last Updated: December 2025


Quick Start

The following section will get you started in less than 5 minutes using the Official MCP SDK or mcp-remote.

Understanding MCP Usage Costs

MCP tool calls for Property Search and Property Lookup operations are billed identically to direct BatchData API calls.

Core Billing Principle: Request costs are determined by the authentication token's provisioned dataset access, not by the datasets or fields requested in individual MCP tool calls.

Why Token-Based Billing?: This model provides predictable cost estimation and transparent usage reporting. By tying costs to token provisioning rather than dynamic dataset selection, you can accurately forecast API expenses based on expected record volumes. Usage reports directly correlate charges to property records retrieved, making it simple to validate billing against actual usage.

Per-Record Costs: Each property record returned from Property Search and Property Lookup operations through MCP tools incurs the full per-record rate based on all datasets your token is provisioned to access, regardless of which specific datasets you request in the tool call.

Runtime Parameter Effects: The datasets and customProjection parameters in MCP property search tool calls filter which data is returned in responses but do not reduce the per-record billing cost.

Cost Control Strategies:

  • Provision tokens with only datasets needed for your specific MCP use case

  • Use the take parameter appropriately to limit the number of records returned per search

  • Implement sandbox tokens during development to avoid charges while building and testing MCP integrations

  • Work with your account manager to optimize token provisioning based on your application's actual data needs

For comprehensive billing information and dataset details, see Datasets and Custom Projections.
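To make the billing model concrete, here is a sketch of the cost arithmetic. The dataset names and per-record rates below are hypothetical placeholders, not real BatchData pricing:

```typescript
// Hypothetical per-dataset rates (illustrative only -- see your
// BatchData pricing agreement for real figures).
const RATE_PER_RECORD: Record<string, number> = {
  assessment: 0.002,
  valuation: 0.004,
  liens: 0.003,
};

// Cost is driven by the datasets the TOKEN is provisioned for,
// not the datasets a single tool call requests.
function estimateSearchCost(
  provisionedDatasets: string[],
  recordsReturned: number,
): number {
  const perRecord = provisionedDatasets.reduce(
    (sum, ds) => sum + (RATE_PER_RECORD[ds] ?? 0),
    0,
  );
  return perRecord * recordsReturned;
}

// A `take` of 25 caps records returned (and therefore cost) per search.
console.log(estimateSearchCost(['assessment', 'valuation', 'liens'], 25));
```

Because the rate depends on token provisioning, narrowing the datasets parameter in a single call changes the response payload but not the cost; only re-provisioning the token (or limiting records with take) does.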

1. Get Your API Token

Create a server-side token at app.batchdata.com → API Tokens → Server Side Token

Grant the token permissions for the endpoints you need:

| Category | Permissions |
| --- | --- |
| Address | address-geocode, address-reverse-geocode, address-verify |
| Phone | phone-verification |
| Property | property-lookup-all-attributes, property-search, property-skip-trace-v3* |

* Contact your Account Manager to enable V3 Skip Trace API access.

2. Connect to the MCP Server

Choose the method that best fits your use case:

Option A: Official MCP SDK (TypeScript/Node.js)

The official @modelcontextprotocol/sdk provides full MCP protocol support.

npm install @modelcontextprotocol/sdk

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

// Create the MCP client
const client = new Client({
  name: 'my-app',
  version: '1.0.0',
});

// Connect using Streamable HTTP transport
const transport = new StreamableHTTPClientTransport(
  new URL('https://mcp.batchdata.com'),
  {
    requestInit: {
      headers: {
        Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
      },
    },
  }
);

await client.connect(transport);

// List available tools
const { tools } = await client.listTools();
console.log('Available tools:', tools.map(t => t.name));

// Call a tool
const result = await client.callTool({
  name: 'lookup_property',
  arguments: {
    property_street: '123 Main St',
    property_city: 'Phoenix',
    property_state: 'AZ',
    property_zip: '85001',
  },
});

console.log(result);

// Clean up
await client.close();

Option B: mcp-remote (CLI / Claude Desktop)

The mcp-remote package connects stdio-based MCP clients (like Claude Desktop) to remote MCP servers.

# Run directly with npx
npx -y mcp-remote https://mcp.batchdata.com \
  --header "Authorization: Bearer YOUR_API_TOKEN"

Claude Desktop Configuration (claude_desktop_config.json):

{
  "mcpServers": {
    "BatchData": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.batchdata.com",
        "--header",
        "Authorization:${AUTH_HEADER}"
      ],
      "env": {
        "AUTH_HEADER": "Bearer YOUR_API_TOKEN_HERE"
      }
    }
  }
}

Tip: Replace the YOUR_API_TOKEN placeholders with your actual BatchData API token. For security, consider using environment variables.

3. For AI/LLM Integration

For building AI-powered applications, we recommend the Vercel AI SDK which provides seamless MCP integration with any LLM provider:

npm install ai @ai-sdk/openai @ai-sdk/mcp

import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { generateText } from 'ai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const mcpClient = await createMCPClient({
  transport: {
    type: 'http',
    url: 'https://mcp.batchdata.com',
    headers: {
      Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
    },
  },
});

const tools = await mcpClient.tools();

const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    { role: 'user', content: 'Look up the property at 123 Main St, Phoenix, AZ 85001' }
  ],
  tools,
});

console.log(result.text);

// Clean up
await mcpClient.close();

Full Guide: See Vercel AI SDK Integration for complete examples with Express.js and Next.js.

Available Tools

| Tool | Purpose |
| --- | --- |
| lookup_property | Property details, valuation, liens, history |
| comparable_property | CMA with comparable properties and valuations |
| skip_trace_property | Owner contact info (phones, emails) with TCPA/DNC flags |
| verify_phone | Full phone verification (carrier, DNC, TCPA) |
| check_dnc_status | Quick DNC registry check |
| check_tcpa_status | Quick TCPA litigator check |
| verify_address | USPS address normalization |
| geocode_address | Address to coordinates |
| reverse_geocode_address | Coordinates to address |

Key Points

  • Server-side only: Keep API tokens on your backend, never in client-side code

  • TCPA/DNC compliance: Filter out contacts flagged tcpa: true or dnc: true before calling

  • For AI integration: See Vercel AI SDK Integration


BatchData MCP Demo

We have also prepared a reference implementation, a simple AI chatbot, that demonstrates a secure server-side integration with the BatchData MCP Server using the Vercel AI SDK. The repository is located at

Features

| Feature | Description |
| --- | --- |
| Multi-Provider LLM Support | Choose from OpenAI, Anthropic, or Google models |
| Real-Time Progress Feedback | See live updates as AI tools are called via Server-Sent Events |
| Secure Token Management | API tokens are kept server-side, never exposed to the browser |
| Property Analysis | AI-powered investment analysis using BatchData property intelligence |
| Skip Tracing | Find owner contact information with TCPA/DNC compliance filtering |
| Property Comparison | Compare multiple properties for investment decisions |
| Free Chat | General AI chat with access to all BatchData MCP tools |


BatchData MCP Server Overview

The following section is a more comprehensive overview of the BatchData MCP server.

The BatchData MCP Server is a Model Context Protocol-compliant server that exposes BatchData's real estate property intelligence through a standardized interface. This guide covers how to integrate the MCP server into custom applications, AI workflows, and LLM-powered tools.

What You Can Build

  • AI-powered property research tools

  • Automated investment analysis platforms

  • Real estate CRM integrations with skip tracing

  • Property valuation and comparable analysis tools

  • Compliance-aware contact verification systems

  • Custom AI agents with real estate expertise

Key Technical Details

  • Protocol: Model Context Protocol (MCP)

  • Transport: Streamable HTTP

  • Authentication: Bearer token (BatchData API Token)

  • API Version: 0.0.1


Technical Architecture

📚 What is MCP?

MCP Protocol Basics

The Model Context Protocol (MCP) is a standardized way for AI assistants and applications to interact with external tools and data sources. The BatchData MCP Server implements this protocol using Streamable HTTP transport to expose 9 specialized tools for real estate intelligence.

Communication Flow:

  1. Client connects to MCP server using Streamable HTTP transport

  2. Server validates authentication and parameters

  3. Server calls appropriate BatchData API endpoint

  4. Server returns formatted response with property data

  5. Client processes response or passes to LLM for analysis

Architecture Diagram

┌─────────────────┐
│  Your App/LLM   │
└────────┬────────┘
         │ Streamable HTTP
         ▼
┌─────────────────────┐
│ BatchData MCP Server│ ← Bearer Auth
│  mcp.batchdata.com  │
└────────┬────────────┘
         │ API Calls
         ▼
┌─────────────────────┐
│    BatchData API    │
│  140M+ Properties   │
└─────────────────────┘

Authentication & Connection

API Token

See Quick Start: Get Your API Token for token creation and required permissions.

All requests require the token in the Authorization header:

Authorization: Bearer YOUR_API_TOKEN

Connection Methods

Method 1: Vercel AI SDK (Recommended for AI Apps)

The Vercel AI SDK provides the most seamless integration with MCP servers and LLMs:

import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';

const mcpClient = await createMCPClient({
  transport: {
    type: 'http',
    url: 'https://mcp.batchdata.com',
    headers: {
      Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
    },
  },
});

const tools = await mcpClient.tools();

See Vercel AI SDK Integration for complete examples.

Method 2: Official MCP SDK (TypeScript/Node.js)

The official @modelcontextprotocol/sdk provides full protocol support:

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const client = new Client({
  name: 'my-app',
  version: '1.0.0',
});

const transport = new StreamableHTTPClientTransport(
  new URL('https://mcp.batchdata.com'),
  {
    requestInit: {
      headers: {
        Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
      },
    },
  }
);

await client.connect(transport);

const result = await client.callTool({
  name: 'lookup_property',
  arguments: {
    property_street: '123 Main St',
    property_city: 'San Francisco',
    property_state: 'CA',
    property_zip: '94103',
  },
});

await client.close();

Method 3: mcp-remote (CLI / Claude Desktop)

The mcp-remote package connects stdio-based MCP clients to remote servers:

npx -y mcp-remote https://mcp.batchdata.com \
  --header "Authorization: Bearer YOUR_API_TOKEN"

Claude Desktop Configuration (claude_desktop_config.json):

{
  "mcpServers": {
    "batchdata": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.batchdata.com",
        "--header",
        "Authorization:${AUTH_HEADER}"
      ],
      "env": {
        "AUTH_HEADER": "Bearer YOUR_API_TOKEN_HERE"
      }
    }
  }
}

Method 4: Official MCP SDK (Python)

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    async with streamablehttp_client(
        url='https://mcp.batchdata.com',
        headers={'Authorization': 'Bearer YOUR_API_TOKEN'}
    ) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            result = await session.call_tool(
                'lookup_property',
                arguments={
                    'property_street': '123 Main St',
                    'property_city': 'San Francisco',
                    'property_state': 'CA',
                    'property_zip': '94103'
                }
            )

asyncio.run(main())

Available Tools

The BatchData MCP Server provides 9 specialized tools across 4 categories:

1. Property Intelligence Tools

lookup_property

Get comprehensive property details including characteristics, valuation, liens, and history.

Lookup Methods:

  • By property ID: property_id

  • By parcel: property_county_fips + property_apn

  • By full address: property_street + property_city + property_state + property_zip

  • By partial address: property_street + property_zip

Example:

{
  "name": "lookup_property",
  "arguments": {
    "property_street": "123 Main St",
    "property_city": "Phoenix",
    "property_state": "AZ",
    "property_zip": "85001"
  }
}

Returns: Lot details, building characteristics, sale history, valuation, liens, mortgages, tax assessments, permits, and market data.
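The same tool accepts the other lookup methods listed above. As a sketch, a parcel-based lookup might look like this (the APN below is a placeholder, not a real parcel):

```typescript
// lookup_property by parcel (FIPS + APN) instead of by address.
// 04013 is the FIPS code for Maricopa County, AZ; the APN is a placeholder.
const parcelLookupArgs = {
  name: 'lookup_property',
  arguments: {
    property_county_fips: '04013',
    property_apn: '123-45-678',
  },
};

console.log(JSON.stringify(parcelLookupArgs, null, 2));
```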

comparable_property

Perform comparative market analysis (CMA) to find similar properties and get automated valuations.

Required Parameters:

  • property_street, property_city, property_state, property_zip (subject property)

Optional Filters:

  • distance_miles (default: 1.0)

  • bedrooms_min, bedrooms_max

  • bathrooms_min, bathrooms_max

  • square_feet_min, square_feet_max

  • year_built_min, year_built_max

  • sale_date_start, sale_date_end

Returns:

  • aggregatedMetrics: Estimated value, average price, average price per square foot

  • comparableCount: Number of comparable properties found

  • comparableProperties: Array of comparable property details
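As a sketch, a CMA request that tightens the comp set with a few of the optional filters above might look like this (all values are illustrative):

```typescript
// comparable_property call: subject property plus optional comp filters.
const cmaArgs = {
  name: 'comparable_property',
  arguments: {
    // Subject property (required)
    property_street: '123 Main St',
    property_city: 'Phoenix',
    property_state: 'AZ',
    property_zip: '85001',
    // Optional filters to narrow the comparables
    distance_miles: 0.5,
    bedrooms_min: 3,
    bathrooms_min: 2,
    square_feet_min: 1200,
    square_feet_max: 2200,
    sale_date_start: '2024-01-01',
  },
};

console.log(JSON.stringify(cmaArgs, null, 2));
```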

2. Skip Tracing Tool

skip_trace_property

Find contact information for property owners. Powered by the V3 Skip Trace API, which returns up to 3 persons per property.

Lookup Methods:

  • By FIPS + APN: property_county_fips + property_apn

  • By parcel: property_state + property_county + property_apn

  • By full address: property_street + property_city + property_state + property_zip

  • By partial address: property_street + property_zip

Optional Filters:

  • contact_first_name: Filter for specific person

  • contact_last_name: Filter for specific person

Returns (per person):

  • Full name and ownership status

  • Deceased status and date of birth

  • TCPA litigator list status

  • Multiple addresses with validity indicators

  • Phone numbers with type, carrier, reachability, DNC status, and TCPA status

  • Email addresses with verification status

Important TCPA/DNC Flags:

{
  "phoneNumbers": [
    {
      "number": "5551234567",
      "type": "Mobile",
      "carrier": "AT&T",
      "reachable": true,
      "tcpa": false,  // ⚠️ true = TCPA litigator, DO NOT CALL
      "dnc": false,   // ⚠️ true = Do Not Call registry, DO NOT CALL
      "score": 100
    }
  ]
}
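Before any outreach, filter the returned numbers on these flags. A minimal sketch (the PhoneNumber shape mirrors the example above):

```typescript
interface PhoneNumber {
  number: string;
  type: string;
  carrier: string;
  reachable: boolean;
  tcpa: boolean; // true = TCPA litigator
  dnc: boolean;  // true = on the Do Not Call registry
  score: number;
}

// Keep only numbers that are safe and useful to call: reachable,
// not a TCPA litigator, and not on the DNC registry; best score first.
function callablePhones(phones: PhoneNumber[]): PhoneNumber[] {
  return phones
    .filter(p => p.reachable && !p.tcpa && !p.dnc)
    .sort((a, b) => b.score - a.score);
}

const phones: PhoneNumber[] = [
  { number: '5551234567', type: 'Mobile', carrier: 'AT&T', reachable: true, tcpa: false, dnc: false, score: 100 },
  { number: '5559876543', type: 'Land Line', carrier: 'Verizon', reachable: true, tcpa: false, dnc: true, score: 90 },
];

// Only '5551234567' survives; the second number is on the DNC registry.
console.log(callablePhones(phones).map(p => p.number));
```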

3. Phone Verification Tools

verify_phone

Complete phone verification with carrier, type, reachability, DNC, and TCPA status.

Parameters:

  • phone_number: 10-digit phone number (no country code)

Example: "phone_number": "8005551234"

Returns: Complete verification details including carrier, type, state, DNC status, and TCPA status.
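Since the phone tools expect a 10-digit number with no country code, it can help to normalize user input first. A small sketch:

```typescript
// Strip formatting and a leading US country code ("1" or "+1"),
// returning a bare 10-digit string, or null if it can't be normalized.
function toTenDigits(input: string): string | null {
  const digits = input.replace(/\D/g, '');
  if (digits.length === 10) return digits;
  if (digits.length === 11 && digits.startsWith('1')) return digits.slice(1);
  return null;
}

console.log(toTenDigits('+1 (800) 555-1234')); // '8005551234'
console.log(toTenDigits('800-555-1234'));      // '8005551234'
```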

check_dnc_status

Quick Do Not Call registry lookup (use when you only need DNC status).

Parameters:

  • phone_number: 10-digit phone number

Returns: DNC status only.

check_tcpa_status

Quick TCPA litigator list verification (use when you only need TCPA status).

Parameters:

  • phone_number: 10-digit phone number

Returns: TCPA litigator status only.

4. Address Verification Tools

verify_address

Normalize and verify addresses to USPS standards with detailed components.

Parameters:

  • street (required)

  • city (optional)

  • state (optional)

  • zip (optional)

Verification Methods:

  • Full: street + city + state + zip

  • Without ZIP: street + city + state

  • Without city/state: street + zip

Returns: Normalized address, ZIP+4, county, FIPS code, coordinates, USPS validity status, delivery codes, time zone, carrier route, and congressional district.

geocode_address

Convert address to geographic coordinates.

Parameters:

  • address: Full address string

Example: "address": "123 Main St, San Francisco, CA 94103"

Returns: Latitude, longitude, and GeoJSON location.

reverse_geocode_address

Convert coordinates to street address.

Parameters:

  • latitude: Decimal latitude

  • longitude: Decimal longitude

Example: "latitude": 37.7749, "longitude": -122.4194

Returns: Street, city, state, ZIP code.


Integration Patterns

Pattern 1: AI-Powered Integration (Recommended)

Use the Vercel AI SDK for seamless integration with LLMs and automatic tool execution:

import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { generateText } from 'ai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const mcpClient = await createMCPClient({
  transport: {
    type: 'http',
    url: 'https://mcp.batchdata.com',
    headers: {
      Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
    },
  },
});

const tools = await mcpClient.tools();

async function getPropertyDetails(address: string) {
  const result = await generateText({
    model: openai('gpt-4o'),
    messages: [
      { role: 'user', content: `Look up the property at ${address}` }
    ],
    tools,
  });

  return result.text;
}

Pattern 2: LLM Enhancement (AI Analysis)

Use the AI to fetch data and provide analysis in one call:

async function analyzePropertyForInvestment(address: string) {
  const result = await generateText({
    model: openai('gpt-4o'),
    system: `You are a real estate investment analyst. Use the available tools to look up
property data and provide detailed investment analysis.`,
    messages: [
      { role: 'user', content: `Analyze ${address} for investment potential` }
    ],
    tools,
  });

  return result.text;
}

Pattern 3: Multi-Tool Workflow

The AI automatically determines which tools to use based on your request:

async function comprehensivePropertyResearch(address: string) {
  const result = await generateText({
    model: openai('gpt-4o'),
    system: `You are a real estate research assistant. When analyzing a property:
1. Look up the property details
2. Find comparable properties
3. Skip trace for owner contact information
4. Filter out any contacts with TCPA or DNC flags

Provide a comprehensive analysis.`,
    messages: [
      { role: 'user', content: `Research ${address} thoroughly for investment` }
    ],
    tools,
  });

  return result.text;
}

Recommended: Vercel AI SDK Integration

For developers building AI-powered applications that integrate with the BatchData MCP Server, we recommend using the Vercel AI SDK as the primary abstraction layer for LLM interactions.

Why Vercel AI SDK?

The Vercel AI SDK has become the informal industry standard for integrating GenAI in JavaScript/TypeScript/Node.js applications. It provides significant advantages over using vendor-specific SDKs directly:

| Benefit | Description |
| --- | --- |
| Vendor Agnostic | Abstracts away differences between OpenAI, Anthropic, Google, and other providers |
| Easy Model Switching | Switch between models (GPT-4, Claude, Gemini) with minimal code changes |
| Mix & Match | Use different models for different tasks within the same application |
| Built-in MCP Support | Native support for Model Context Protocol, eliminating the need for the official MCP SDK |
| UI Components | Provides framework-specific UI libraries (React, Vue, Svelte, Angular) for chat interfaces |
| Streaming Support | Built-in streaming with performance optimizations |

Best Practice: Avoid Vendor-Specific SDKs

Do not use vendor-specific SDKs (OpenAI SDK, Anthropic SDK, Google SDK) directly in production applications. While they work, switching models or mixing providers becomes cumbersome. The Vercel AI SDK handles all of this uniformly.

// ❌ Vendor-specific (avoid)
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: '...' });

// ✅ Vercel AI SDK (recommended)
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
const openai = createOpenAI({ apiKey: '...' });

Connecting to BatchData MCP Server with Vercel AI SDK

The BatchData MCP Server uses Streamable HTTP transport, which is fully supported by the Vercel AI SDK's MCP client.

Installation

npm install ai @ai-sdk/openai @ai-sdk/mcp

Note: Use the stable @ai-sdk/mcp@^0.0.11 version. This version works correctly with generateText() without any workarounds. Avoid beta versions (1.0.0-beta.x), which have a known bug preventing MCP tools from executing.

Basic Setup

import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { generateText } from 'ai';

// 1. Create the LLM provider
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// 2. Create MCP client for BatchData (Streamable HTTP transport)
const mcpClient = await createMCPClient({
  transport: {
    type: 'http', // Streamable HTTP transport
    url: 'https://mcp.batchdata.com',
    headers: {
      Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
    },
  },
});

// 3. Get tools from BatchData MCP Server
const tools = await mcpClient.tools();

// 4. Use with generateText - tools are automatically available to the LLM
const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: 'Look up the property at 123 Main St, Phoenix, AZ 85001 and analyze it for investment potential',
    },
  ],
  tools,
});

console.log(result.text);

Security: Keep API Tokens Server-Side

⚠️ IMPORTANT: Never expose API tokens in client-side code. Both the BatchData API token and LLM provider API keys (OpenAI, Anthropic, etc.) must be kept on the server.

Why this matters:

  • Client-side JavaScript is visible to anyone using browser dev tools

  • Exposed tokens can be stolen and abused, incurring charges on your account

  • Tokens in frontend code get bundled and are easily extractable

The secure pattern:

  1. Frontend sends requests to your backend API

  2. Backend stores tokens in environment variables

  3. Backend makes calls to BatchData MCP Server and LLM providers

  4. Backend returns results to frontend

Server-Side Example: Express.js API

// server/ai-service.ts - Runs on your backend server
import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { generateText, LanguageModelUsage } from 'ai';

export interface AIResponse {
  text: string;
  toolCalls?: ToolCall[];
  usage?: LanguageModelUsage;
}

export interface ToolCall {
  name: string;
  args: Record<string, unknown>;
  response?: unknown;
}

// Singleton MCP client
let mcpClient: Awaited<ReturnType<typeof createMCPClient>> | null = null;

async function getMCPClient() {
  if (!mcpClient) {
    mcpClient = await createMCPClient({
      transport: {
        type: 'http', // Streamable HTTP transport
        url: 'https://mcp.batchdata.com',
        headers: {
          // ✅ Token from server environment variable - never exposed to client
          Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
        },
      },
    });
  }
  return mcpClient;
}

// ✅ API key from server environment variable - never exposed to client
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export async function chat(
  messages: Array<{ role: 'user' | 'assistant'; content: string }>,
  options?: {
    systemPrompt?: string;
    temperature?: number;
  }
): Promise<AIResponse> {
  const { systemPrompt, temperature = 0.7 } = options ?? {};

  const client = await getMCPClient();
  const tools = await client.tools();

  const result = await generateText({
    model: openai('gpt-4o-mini'),
    system: systemPrompt,
    messages,
    tools,
    temperature,
  });

  // Extract tool calls from response steps
  const toolCalls: ToolCall[] = [];
  for (const step of result.steps) {
    for (const toolCall of step.toolCalls) {
      toolCalls.push({
        name: toolCall.toolName,
        args: toolCall.args as Record<string, unknown>,
        response: step.toolResults.find(
          (r: any) => r.toolCallId === toolCall.toolCallId
        )?.result,
      });
    }
  }

  return {
    text: result.text,
    toolCalls: toolCalls.length > 0 ? toolCalls : undefined,
    usage: result.usage,
  };
}

export async function analyzeProperty(address: string): Promise<AIResponse> {
  return chat(
    [{ role: 'user', content: `Analyze the property at ${address} for investment potential` }],
    {
      systemPrompt: `You are a real estate investment analyst. Use the available tools to look up property data and provide detailed investment analysis including wholesale, fix-and-flip, and buy-and-hold strategies.`,
    }
  );
}

// server/routes/ai.ts - Express.js route handler
import express from 'express';
import { chat, analyzeProperty } from '../ai-service';

const router = express.Router();

// POST /api/ai/chat
router.post('/chat', async (req, res) => {
  try {
    const { messages, systemPrompt, temperature } = req.body;
    const result = await chat(messages, { systemPrompt, temperature });
    res.json(result);
  } catch (error) {
    console.error('AI chat error:', error);
    res.status(500).json({ error: 'Failed to process request' });
  }
});

// POST /api/ai/analyze-property
router.post('/analyze-property', async (req, res) => {
  try {
    const { address } = req.body;
    const result = await analyzeProperty(address);
    res.json(result);
  } catch (error) {
    console.error('Property analysis error:', error);
    res.status(500).json({ error: 'Failed to analyze property' });
  }
});

export default router;

Server-Side Example: Next.js API Route

// app/api/ai/chat/route.ts - Next.js App Router
import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { generateText } from 'ai';
import { NextRequest, NextResponse } from 'next/server';

// ✅ Tokens from server environment - never sent to browser
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

let mcpClient: Awaited<ReturnType<typeof createMCPClient>> | null = null;

async function getMCPClient() {
  if (!mcpClient) {
    mcpClient = await createMCPClient({
      transport: {
        type: 'http', // Streamable HTTP transport
        url: 'https://mcp.batchdata.com',
        headers: {
          Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
        },
      },
    });
  }
  return mcpClient;
}

export async function POST(request: NextRequest) {
  try {
    const { messages, systemPrompt } = await request.json();

    const client = await getMCPClient();
    const tools = await client.tools();

    const result = await generateText({
      model: openai('gpt-4o-mini'),
      system: systemPrompt,
      messages,
      tools,
    });

    return NextResponse.json({
      text: result.text,
      usage: result.usage,
    });
  } catch (error) {
    console.error('AI error:', error);
    return NextResponse.json(
      { error: 'Failed to process request' },
      { status: 500 }
    );
  }
}

Frontend: Calling Your Backend API

Your frontend (React, Angular, Vue, etc.) calls your backend API - not the LLM or MCP servers directly:

// Frontend code - no API tokens here!
async function analyzeProperty(address: string) {
  const response = await fetch('/api/ai/analyze-property', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ address }),
  });

  if (!response.ok) {
    throw new Error('Failed to analyze property');
  }

  return response.json();
}

// Usage
const result = await analyzeProperty('123 Main St, Phoenix, AZ 85001');
console.log(result.text);

Switching Between LLM Providers

One of the key benefits of the Vercel AI SDK is easy provider switching:

// OpenAI
import { createOpenAI } from '@ai-sdk/openai';
const provider = createOpenAI({ apiKey: '...' });
const model = provider('gpt-4o');

// Anthropic
import { createAnthropic } from '@ai-sdk/anthropic';
const provider = createAnthropic({ apiKey: '...' });
const model = provider('claude-sonnet-4-20250514');

// Google
import { createGoogleGenerativeAI } from '@ai-sdk/google';
const provider = createGoogleGenerativeAI({ apiKey: '...' });
const model = provider('gemini-1.5-pro');

// The rest of the code remains unchanged!
const result = await generateText({
  model, // Works with any provider
  messages,
  tools, // BatchData MCP tools work with all providers
});

UI Libraries for Chat Interfaces

The Vercel AI SDK provides UI libraries with built-in streaming and performance optimizations:

npm install @ai-sdk/angular  # or @ai-sdk/react, @ai-sdk/vue, @ai-sdk/svelte

These libraries can significantly reduce the effort needed to build AI chat interfaces compared to custom implementations.

Note on @ai-sdk/mcp Versions

✅ Recommended: Use the stable @ai-sdk/mcp@^0.0.11 version. This version works correctly with generateText() and MCP tools execute as expected.

Version Guidance

| Version | Status | Notes |
| --- | --- | --- |
| @ai-sdk/mcp@^0.0.11 (stable) | ✅ Recommended | Works correctly with generateText() |
| 1.0.0-beta.x (beta) | ⚠️ Avoid | Has a bug preventing MCP tool execution |

Beta Version Bug (For Reference)

If you're using a beta version (1.0.0-beta.24 or similar), you may encounter this error when tools are called:

Cannot read properties of undefined (reading 'validate')

This causes tool calls to be marked as invalid: true and never executed. The bug is in how beta versions of @ai-sdk/mcp package tool schemas: they are emitted without a validate() function.

Solution: Downgrade to the stable version:

npm install @ai-sdk/mcp@^0.0.11


Using Responses with LLMs

Creating Effective Prompts

When passing MCP server responses to LLMs (Claude, OpenAI, etc.), structure your prompts effectively:

Example 1: Property Investment Analysis

const propertyData = await callMCPTool('lookup_property', {...});
const comparables = await callMCPTool('comparable_property', {...});

const prompt = `
Role: You are an experienced real estate investment analyst.

Task: Analyze this property for wholesale investment potential.

Property Data:
${JSON.stringify(propertyData, null, 2)}

Comparable Properties:
${JSON.stringify(comparables, null, 2)}

Analysis Required:
1. Property Condition Assessment
   - Based on year built, last sale date, and building characteristics
   - Estimate repair needs

2. Market Value Analysis
   - Use comparable properties to estimate current market value
   - Calculate average price per square foot
   - Determine ARV (After Repair Value)

3. Investment Metrics
   - Recommended wholesale offer (70% of ARV minus repairs)
   - Assignment fee potential
   - Holding time estimate

4. Risk Factors
   - Liens or encumbrances
   - Market trends (based on sale dates)
   - Property-specific issues

Format your response as JSON with the following structure:
{
  "estimatedMarketValue": number,
  "estimatedRepairs": number,
  "afterRepairValue": number,
  "recommendedOffer": number,
  "assignmentFee": number,
  "riskLevel": "low" | "medium" | "high",
  "analysis": "detailed analysis text"
}
`;

const llmResponse = await callLLM(prompt);
const investmentAnalysis = JSON.parse(llmResponse);
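LLMs sometimes wrap JSON output in markdown fences or surrounding prose, so a plain JSON.parse can throw. A defensive parsing sketch:

```typescript
// Extract the first JSON object from an LLM response that may be
// wrapped in markdown code fences or surrounded by prose.
function parseLLMJson<T>(text: string): T {
  const fenced = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const candidate = fenced ? fenced[1] : text;
  const start = candidate.indexOf('{');
  const end = candidate.lastIndexOf('}');
  if (start === -1 || end === -1) throw new Error('No JSON object found');
  return JSON.parse(candidate.slice(start, end + 1)) as T;
}

// Example: a reply wrapped in a ```json fence plus leading prose.
const fence = '`'.repeat(3);
const reply = `Here is my analysis:\n${fence}json\n{ "riskLevel": "low" }\n${fence}`;
console.log(parseLLMJson<{ riskLevel: string }>(reply).riskLevel); // 'low'
```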

Example 2: Skip Trace Contact Prioritization

const skipTraceData = await callMCPTool('skip_trace_property', {...});

const prompt = `
Role: You are a real estate marketing expert.

Task: Prioritize these contacts for outreach and suggest communication strategy.

Contact Data:
${JSON.stringify(skipTraceData, null, 2)}

Requirements:
1. Filter contacts based on compliance:
   - Exclude TCPA litigators (tcpa: true)
   - Exclude Do Not Call numbers (dnc: true)
   - Only include reachable numbers

2. Prioritize contacts by:
   - Phone score (higher is better)
   - Phone type (Mobile preferred)
   - Email verification status

3. Suggest outreach sequence:
   - Which contact method to try first
   - Timing recommendations
   - Message templates

Provide your response as JSON:
{
  "prioritizedContacts": [
    {
      "name": string,
      "contactMethod": "phone" | "email",
      "contactValue": string,
      "priority": number,
      "reason": string
    }
  ],
  "outreachStrategy": string
}
`;

const strategy = await callLLM(prompt);

Example 3: Market Analysis with Multiple Properties

const properties = await Promise.all([
callMCPTool('lookup_property', address1),
callMCPTool('lookup_property', address2),
callMCPTool('lookup_property', address3)
]);

const prompt = `
Role: You are a market analyst specializing in real estate.

Task: Compare these properties and identify the best investment opportunity.

Properties:
${JSON.stringify(properties, null, 2)}

Analysis:
1. Compare key metrics:
- Price per square foot
- Equity percentage
- Days on market (if active listing)
- Neighborhood characteristics

2. Identify best opportunity based on:
- Highest equity potential
- Best price relative to market
- Lowest risk profile
- Strongest exit strategy potential

3. Provide ranking with justification

Return JSON:
{
  "rankings": [
    {
      "propertyAddress": string,
      "rank": number,
      "score": number,
      "pros": string[],
      "cons": string[],
      "recommendedStrategy": string
    }
  ],
  "topPick": {
    "address": string,
    "reasoning": string
  }
}
`;

Prompt Engineering Tips

  1. Be Specific About Role: Define the LLM's expertise (analyst, marketer, etc.)

  2. Provide Context: Include relevant property data as JSON

  3. Structure Output: Request JSON responses for easy parsing

  4. Include Constraints: Specify compliance requirements (TCPA, DNC)

  5. Request Reasoning: Ask for justification to validate AI decisions

  6. Use Examples: Provide example outputs when needed


Error Handling

HTTP Status Codes

  • 200: Success - request processed

  • 400: Bad Request - invalid request format or parameters

  • 401: Unauthorized - invalid or missing API token

  • 403: Forbidden - token lacks required permissions

  • 422: Unprocessable Entity - validation errors

  • 429: Too Many Requests - rate limit exceeded, retry after delay

  • 500+: Server Error - temporary service issue

Handling Errors with Vercel AI SDK

When using the Vercel AI SDK, errors are thrown as exceptions:

try {
  const result = await generateText({
    model: openai('gpt-4o'),
    messages: [{ role: 'user', content: 'Look up property at...' }],
    tools,
  });
  console.log(result.text);
} catch (error) {
  if (error.message.includes('401')) {
    console.error('Authentication failed - check your API token');
  } else if (error.message.includes('429')) {
    console.error('Rate limited - retry after delay');
  } else {
    console.error('Error:', error.message);
  }
}

Best Practices

Security

  1. Protect API Tokens:

    • Store tokens in environment variables or secure vaults

    • Never commit tokens to version control

    • Use different tokens for development and production

    • Rotate tokens periodically

  2. Validate Input:

    • Sanitize user inputs before sending to MCP server

    • Validate address formats

    • Check phone number formats (10 digits, US only)

  3. Handle Sensitive Data:

    • Comply with TCPA regulations - never call numbers with tcpa: true

    • Respect DNC registry - never call numbers with dnc: true

    • Implement proper data retention policies

    • Log access for compliance auditing
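The input-validation rules above can be sketched as two small helpers. These are illustrative assumptions, not BatchData SDK functions; adapt the field names and rules to the address shape your application actually uses:

```javascript
// Hypothetical validation helpers for user-supplied input.
function normalizePhone(input) {
  const digits = String(input).replace(/\D/g, ''); // keep digits only
  // Accept an optional leading US country code, then require exactly 10 digits.
  const national = digits.length === 11 && digits.startsWith('1')
    ? digits.slice(1)
    : digits;
  if (national.length !== 10) {
    throw new Error('Phone numbers must be 10-digit US numbers');
  }
  return national;
}

function validateAddress(address) {
  const required = ['street', 'city', 'state', 'zip'];
  for (const field of required) {
    if (!address[field] || !String(address[field]).trim()) {
      throw new Error(`Missing required address field: ${field}`);
    }
  }
  if (!/^\d{5}(-\d{4})?$/.test(address.zip)) {
    throw new Error('ZIP code must be 5 digits (optionally ZIP+4)');
  }
  return address;
}
```

Running these checks before calling the MCP server turns malformed input into a fast local error instead of a billable or failed API round trip.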

Performance

  1. Batch Requests Wisely:

    • Don't send parallel requests for the same property

    • Space out requests to avoid rate limits

    • Cache results when appropriate

  2. Optimize Tool Selection:

    • Use check_dnc_status instead of verify_phone if you only need DNC status

    • Use check_tcpa_status instead of verify_phone if you only need TCPA status

    • Request only the data you need

  3. Implement Caching:

    const cache = new Map();
    const CACHE_TTL = 3600000; // 1 hour

    async function getCachedPropertyData(address) {
      const cacheKey = JSON.stringify(address);
      const cached = cache.get(cacheKey);

      if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
        return cached.data;
      }

      const data = await callMCPTool('lookup_property', address);
      cache.set(cacheKey, { data, timestamp: Date.now() });

      return data;
    }

Error Handling

  1. Graceful Degradation:

    • Provide fallback behavior when MCP server is unavailable

    • Show partial results if some tools fail

    • Queue requests for retry during outages

  2. User-Friendly Errors:

    try {
      const data = await callMCPTool('lookup_property', address);
      return { data };
    } catch (error) {
      if (error.status === 404) {
        return { error: 'Property not found in database' };
      } else if (error.status === 401) {
        return { error: 'Authentication failed - check API token' };
      } else {
        return { error: 'Service temporarily unavailable - please try again' };
      }
    }
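Combining graceful degradation with caching, a fallback wrapper might serve the last successful result when the MCP server is unreachable. This is a sketch, not a BatchData API: `callTool` stands in for whatever wrapper you use around MCP tool calls, and the cache is an ordinary Map like the one shown under Performance:

```javascript
// Graceful-degradation sketch: prefer fresh data, fall back to the last
// good response during outages, and only fail when nothing is cached.
async function lookupWithFallback(callTool, address, cache) {
  const cacheKey = JSON.stringify(address);
  try {
    const data = await callTool('lookup_property', address);
    cache.set(cacheKey, data); // remember the last good response
    return { data, stale: false };
  } catch (error) {
    const cached = cache.get(cacheKey);
    if (cached !== undefined) {
      return { data: cached, stale: true }; // serve stale data during outages
    }
    throw error; // nothing cached: surface the failure
  }
}
```

Surfacing the `stale` flag lets your UI label fallback results honestly instead of presenting outdated data as current.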

Compliance

  1. TCPA Compliance:

    function filterCompliantPhones(persons) {
      return persons.flatMap(person =>
        person.phoneNumbers.filter(phone =>
          !phone.tcpa && // Not a TCPA litigator
          !phone.dnc && // Not on the DNC registry
          phone.reachable // Number is active
        )
      );
    }
  2. Data Usage:

    • Use property data only for authorized purposes

    • Respect FCRA regulations for credit decisions

    • Implement proper consent mechanisms for contact


Code Examples

Complete Integration Example (TypeScript/Node.js)

import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createMCPClient as createMCPClient, generateText } from 'ai';

// Initialize providers
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Connect to BatchData MCP Server
const mcpClient = await createMCPClient({
  transport: {
    type: 'http',
    url: 'https://mcp.batchdata.com',
    headers: {
      Authorization: `Bearer ${process.env.BATCHDATA_API_TOKEN}`,
    },
  },
});

const tools = await mcpClient.tools();

// Property Lookup
const propertyResult = await generateText({
  model: openai('gpt-4o'),
  messages: [
    { role: 'user', content: 'Look up 123 Main St, Phoenix, AZ 85001' }
  ],
  tools,
});

console.log(propertyResult.text);

// Investment Analysis
const analysisResult = await generateText({
  model: openai('gpt-4o'),
  system: `You are a real estate investment analyst. Analyze properties for
wholesale, fix-and-flip, and buy-and-hold strategies.`,
  messages: [
    { role: 'user', content: 'Analyze 123 Main St, Phoenix, AZ 85001 for investment' }
  ],
  tools,
});

console.log(analysisResult.text);

// Skip Trace with Compliance
const skipTraceResult = await generateText({
  model: openai('gpt-4o'),
  system: `You are a real estate marketing specialist. When returning contact info:
- Filter out any contacts with TCPA or DNC flags
- Only include reachable phone numbers
- Prioritize by phone score`,
  messages: [
    { role: 'user', content: 'Skip trace 123 Main St, Phoenix, AZ 85001' }
  ],
  tools,
});

console.log(skipTraceResult.text);

// Clean up
await mcpClient.close();

Express.js Server Example

See the Vercel AI SDK Integration section for complete server-side examples with Express.js and Next.js.


Rate Limiting & Performance

Rate Limits

The BatchData MCP Server is subject to the same rate limits as the BatchData API:

  • Standard tier: Varies by subscription plan

  • Rate limit headers: Check response headers for limit info

  • 429 responses: Indicates rate limit exceeded

Best Practices:

  • Implement exponential backoff for 429 errors

  • Cache frequently accessed data

  • Batch operations when possible

  • Monitor your usage through the BatchData platform
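Exponential backoff for 429 responses can be sketched as a small wrapper. The schedule here (500 ms base delay, doubling, four retries) is an assumption; tune it to your plan's limits. `callMCPTool` is the illustrative helper used throughout this guide:

```javascript
// Retry sketch: back off and retry on 429 and 5xx, rethrow everything else.
async function callWithBackoff(fn, maxRetries = 4, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const retryable = error.status === 429 || error.status >= 500;
      if (!retryable || attempt >= maxRetries) throw error;
      const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

Usage: `const data = await callWithBackoff(() => callMCPTool('lookup_property', address));`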

Performance Optimization

  1. Minimize Round Trips:

    • Request all needed data in single tool calls when possible

    • Use comparable property filters to reduce result size

  2. Implement Request Queuing:

    class RequestQueue {
      constructor(maxConcurrent = 5) {
        this.active = 0;
        this.maxConcurrent = maxConcurrent;
      }

      async add(fn) {
        // Wait for a slot to open before starting the next request
        while (this.active >= this.maxConcurrent) {
          await new Promise(resolve => setTimeout(resolve, 100));
        }

        this.active++;
        try {
          return await fn();
        } finally {
          this.active--;
        }
      }
    }
  3. Monitor Performance:

    • Track request latency

    • Log slow requests for investigation

    • Set up alerts for errors or rate limiting
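Latency tracking can start as simply as a timing wrapper around each call. This is a sketch; the threshold and the use of `console.warn` as the logger are assumptions, so wire it into whatever monitoring you already run:

```javascript
// Log any call that exceeds a slow-request threshold, without changing
// the call's result or error behavior.
async function timedCall(label, fn, slowMs = 2000) {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    const elapsed = Date.now() - start;
    if (elapsed > slowMs) {
      console.warn(`[slow] ${label} took ${elapsed}ms`);
    }
  }
}
```

Usage: `const data = await timedCall('lookup_property', () => callMCPTool('lookup_property', address));`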


Additional Resources


Support

For technical support, questions, or integration assistance:

When contacting support, please include:

  • BatchData Account name

  • Tool name and parameters used

  • Error messages or unexpected behavior


Last Updated: December 2025

Version: 1.1

Changelog:

  • Initial release: Developer integration guide with comprehensive tool documentation, LLM integration patterns, and code examples in JavaScript and Python
