Overview

ApiTraffic exposes a built-in MCP (Model Context Protocol) server that allows AI assistants to securely query your API traffic data. This enables AI-powered debugging, analytics, and monitoring workflows directly from tools like Claude Desktop, Cursor, Windsurf, VS Code, and any other MCP-compatible client.
MCP tokens are scoped to a specific environment and set of buckets — AI assistants can only access the data you explicitly allow.

Available Tools

When connected, the MCP server exposes the following tools to AI assistants:

Request & Metrics Tools

  • search_requests — Search API request logs with filtering by method, status code, time range, and query string
  • get_metrics — Get summary metrics: total requests, average response time, error count, and blocked count
  • get_errors — List requests that returned 4xx or 5xx status codes
  • get_request_detail — Get full details for a single request, including headers, body, and response
  • get_throughput — Get request throughput time series broken down by status code range (2xx, 3xx, 4xx, 5xx)
  • get_response_times — Get response time percentiles (p50, p95, p99) and averages over time

Configuration Tools

  • list_buckets — List the buckets the MCP token has access to
  • list_environments — List environments configured for the account (e.g. production, staging, development)
  • list_redactions — List data redaction rules that mask sensitive data in captured requests
  • list_exclusions — List exclusion rules that prevent matching requests from being captured

Event Tools

  • list_events — List event markers (deployments, incidents, config changes) to correlate with traffic data
  • create_event — Create an event marker to correlate with API traffic (e.g. deployments, incidents)
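Under the hood, MCP clients invoke these tools with the standard JSON-RPC tools/call method. As a rough illustration, here is what a search_requests call might look like on the wire. The argument names (method, status, query) are assumptions for illustration; the authoritative argument schema is the one the server advertises via tools/list.

```python
import json

# Hypothetical tools/call payload for the search_requests tool.
# Argument names here are illustrative, not documented -- check the
# tool schema returned by the server's tools/list method.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_requests",
        "arguments": {
            "method": "POST",
            "status": 500,
            "query": "/api/users",
        },
    },
}

# This JSON body is what gets POSTed to the MCP endpoint.
body = json.dumps(payload)
print(body)
```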

Quick Start

1. Create an MCP Token

Navigate to Settings → MCP Tokens in the ApiTraffic dashboard and create a new token. Select the environment and buckets you want the AI assistant to access.
The token value is only shown once at creation. Copy it immediately and store it securely.

2. Configure Your AI Client

Add the ApiTraffic MCP server to your client's MCP configuration file:

{
  "mcpServers": {
    "apitraffic": {
      "serverUrl": "https://api.apitraffic.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MCP_TOKEN"
      }
    }
  }
}
Config file locations:
  • Windsurf: ~/.codeium/windsurf/mcp_config.json
  • Cursor: ~/.cursor/mcp.json (global) or .cursor/mcp.json (project)
  • Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
  • VS Code: .vscode/mcp.json (project)
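If you script your development setup, the same configuration can be generated programmatically. A minimal sketch (writing to a temp directory here; in practice you would target one of the paths listed above, e.g. ~/.cursor/mcp.json for Cursor):

```python
import json
import os
import tempfile

# The same MCP client config shown above, built as a Python dict.
config = {
    "mcpServers": {
        "apitraffic": {
            "serverUrl": "https://api.apitraffic.io/mcp",
            "headers": {"Authorization": "Bearer YOUR_MCP_TOKEN"},
        }
    }
}

# Write it out; swap this temp path for your client's real config path.
path = os.path.join(tempfile.mkdtemp(), "mcp.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)
```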

3. Start Querying

Once connected, you can ask your AI assistant questions like:
  • “Show me the most recent API errors in production”
  • “What’s the average response time for the last hour?”
  • “Search for all POST requests to /api/users that returned 500”
  • “Give me throughput metrics for the past 24 hours”
  • “List my environments and redaction rules”
  • “Create a deployment event for v2.3.1”

Transport Protocols

ApiTraffic supports two MCP transport protocols.

Streamable HTTP

The primary transport, with stateful session management. Sessions persist across requests using the mcp-session-id header and are backed by Redis for multi-instance reliability.
  • Endpoint: POST https://api.apitraffic.io/mcp
  • Auth: Authorization: Bearer YOUR_MCP_TOKEN header
  • Session: Automatic via mcp-session-id response header
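The handshake can be sketched with nothing but the standard library: POST an MCP initialize request with your bearer token, then echo back the mcp-session-id header the server returns on every subsequent call. The client name/version and the protocolVersion value below are illustrative; use whatever your client actually sends.

```python
import json
import urllib.request

# Build (but don't send) the Streamable HTTP "initialize" request.
def build_initialize(token: str) -> urllib.request.Request:
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # illustrative MCP spec revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }).encode()
    return urllib.request.Request(
        "https://api.apitraffic.io/mcp",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )

req = build_initialize("YOUR_MCP_TOKEN")
# To actually connect:
#   resp = urllib.request.urlopen(req)
#   session_id = resp.headers["mcp-session-id"]
# ...then send session_id as a request header on later tool calls.
```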

SSE (Server-Sent Events)

Legacy transport maintained for backward compatibility with older MCP clients.
  • Endpoint: GET https://api.apitraffic.io/mcp/sse
  • Message Endpoint: POST https://api.apitraffic.io/mcp/sse/message
  • Auth: Authorization: Bearer YOUR_MCP_TOKEN header

Security

  • Scoped access — MCP tokens are read-only for traffic data; the only write operation is create_event
  • Environment scoped — Each token is locked to a single environment
  • Bucket scoped — Tokens can be restricted to specific buckets or granted access to all buckets in the environment
  • Instant revocation — Revoking a token immediately terminates access
  • No cross-tenant leakage — Token context is enforced on every tool call
  • Session isolation — Each MCP session is isolated with its own server instance

Example Interactions

Searching for Errors

Ask your AI assistant:
“Find all 500 errors in the last 2 hours and show me the request details”
The assistant will use get_errors to find matching requests, then get_request_detail to retrieve full request/response bodies for debugging.

Monitoring Performance

“What are the p95 response times for my API over the last day? Are there any anomalies?”
The assistant will use get_response_times and get_throughput to analyze performance patterns and identify spikes.

Investigating Issues

“Search for all requests from user-agent containing ‘bot’ that returned 429 status codes”
The assistant will use search_requests with the appropriate filters to find rate-limited bot traffic.

Tracking Deployments

“Create a deployment event for version 2.3.1 released from GitHub Actions”
The assistant will use create_event to create a marker that correlates with your traffic metrics, making it easy to spot regressions.

Reviewing Configuration

“Show me all my redaction rules and exclusions”
The assistant will use list_redactions and list_exclusions to display your data protection and traffic filtering configuration.