## Documentation Index

Fetch the complete documentation index at https://docs.apitraffic.io/llms.txt to discover all available pages before exploring further.
## Overview

ApiTraffic exposes a built-in MCP (Model Context Protocol) server that allows AI assistants to securely query your API traffic data. This enables AI-powered debugging, analytics, and monitoring workflows directly from tools like Claude Desktop, Cursor, Windsurf, VS Code, and any other MCP-compatible client.

MCP tokens are scoped to a specific environment and set of buckets — AI assistants can only access the data you explicitly allow.
## Available Tools

When connected, the MCP server exposes the following tools to AI assistants:

### Request & Metrics Tools

| Tool | Description |
|---|---|
| `search_requests` | Search API request logs with filtering by method, status code, time range, and query string |
| `get_metrics` | Get summary metrics: total requests, average response time, error count, and blocked count |
| `get_errors` | List requests that returned 4xx or 5xx status codes |
| `get_request_detail` | Get full details for a single request including headers, body, and response |
| `get_throughput` | Get request throughput time series broken down by status code range (2xx, 3xx, 4xx, 5xx) |
| `get_response_times` | Get response time percentiles (p50, p95, p99) and averages over time |
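Under the hood, MCP clients invoke these tools with the standard JSON-RPC `tools/call` method. Here is a minimal sketch of such a call for `search_requests`; note that the argument names (`method`, `status_code`, `time_range`) are illustrative assumptions, not documented parameters — a real client should read each tool's input schema from `tools/list`.

```python
import json

# Hypothetical MCP tools/call payload for search_requests.
# The argument names below are illustrative guesses; the actual
# parameter names come from the tool's published input schema.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_requests",
        "arguments": {
            "method": "POST",      # assumed filter: HTTP method
            "status_code": 500,    # assumed filter: response status
            "time_range": "2h",    # assumed filter: lookback window
        },
    },
}

# Serialized body an MCP client would POST to the server.
body = json.dumps(call)
```

In practice your AI client handles this for you; the sketch only shows what travels over the wire.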
### Configuration Tools

| Tool | Description |
|---|---|
| `list_buckets` | List the buckets the MCP token has access to |
| `list_environments` | List environments configured for the account (e.g. production, staging, development) |
| `list_redactions` | List data redaction rules that mask sensitive data from captured requests |
| `list_exclusions` | List exclusion rules that prevent matching requests from being captured |
### Event Tools

| Tool | Description |
|---|---|
| `list_events` | List event markers (deployments, incidents, config changes) to correlate with traffic data |
| `create_event` | Create an event marker to correlate with API traffic (e.g. deployments, incidents) |
## Quick Start

### 1. Create an MCP Token

Navigate to Settings → MCP Tokens in the ApiTraffic dashboard and create a new token. Select the environment and buckets you want the AI assistant to access.

### 2. Configure Your AI Client
Config file locations:

- Windsurf: `~/.codeium/windsurf/mcp_config.json`
- Cursor: `~/.cursor/mcp.json` (global) or `.cursor/mcp.json` (project)
- Claude Desktop: `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS)
- VS Code: `.vscode/mcp.json` (project)
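As a rough illustration, a config entry might look like the following. This is a sketch, not a definitive schema — it assumes a client that supports remote HTTP MCP servers with custom headers, and the server name `apitraffic` is arbitrary:

```json
{
  "mcpServers": {
    "apitraffic": {
      "url": "https://api.apitraffic.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MCP_TOKEN"
      }
    }
  }
}
```

Key names vary between clients (VS Code's `.vscode/mcp.json`, for instance, uses a top-level `servers` key), so check your client's MCP documentation for the exact format.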
### 3. Start Querying

Once connected, you can ask your AI assistant questions like:

- “Show me the most recent API errors in production”
- “What’s the average response time for the last hour?”
- “Search for all POST requests to /api/users that returned 500”
- “Give me throughput metrics for the past 24 hours”
- “List my environments and redaction rules”
- “Create a deployment event for v2.3.1”
## Transport Protocols

ApiTraffic supports two MCP transport protocols:

### Streamable HTTP (Recommended)

The primary transport, with stateful session management. Sessions persist across requests using the `mcp-session-id` header and are backed by Redis for multi-instance reliability.

- Endpoint: `POST https://api.apitraffic.io/mcp`
- Auth: `Authorization: Bearer YOUR_MCP_TOKEN` header
- Session: Automatic via `mcp-session-id` response header
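The handshake can be sketched as follows: this builds the standard JSON-RPC `initialize` message an MCP client would POST to the endpoint with the headers above. The token is a placeholder, and the client name, version, and protocol version shown are illustrative:

```python
import json

# Endpoint and auth header from the docs above; the token is a placeholder.
ENDPOINT = "https://api.apitraffic.io/mcp"
HEADERS = {
    "Authorization": "Bearer YOUR_MCP_TOKEN",
    "Content-Type": "application/json",
    # Streamable HTTP clients accept both plain JSON and SSE responses.
    "Accept": "application/json, text/event-stream",
}

# Standard MCP initialize request (JSON-RPC 2.0). Client name/version
# and protocolVersion are illustrative values for this sketch.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# POST this body to ENDPOINT with HEADERS; the response carries an
# mcp-session-id header to echo back on all subsequent requests.
body = json.dumps(initialize)
```

Every MCP-compatible client performs this exchange automatically when it connects.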
### SSE (Server-Sent Events)

Legacy transport maintained for backward compatibility with older MCP clients.

- Endpoint: `GET https://api.apitraffic.io/mcp/sse`
- Message Endpoint: `POST https://api.apitraffic.io/mcp/sse/message`
- Auth: `Authorization: Bearer YOUR_MCP_TOKEN` header
## Security

- Scoped access — MCP tokens are read-only for traffic data; the only write operation is `create_event`
- Environment scoped — each token is locked to a single environment
- Bucket scoped — tokens can be restricted to specific buckets or granted access to all buckets in the environment
- Instant revocation — revoking a token immediately terminates access
- No cross-tenant leakage — token context is enforced on every tool call
- Session isolation — each MCP session is isolated with its own server instance
## Example Interactions

### Searching for Errors

Ask your AI assistant: “Find all 500 errors in the last 2 hours and show me the request details”

The assistant will use `get_errors` to find matching requests, then `get_request_detail` to retrieve full request/response bodies for debugging.

### Monitoring Performance

“What are the p95 response times for my API over the last day? Are there any anomalies?”

The assistant will use `get_response_times` and `get_throughput` to analyze performance patterns and identify spikes.

### Investigating Issues

“Search for all requests from user-agent containing ‘bot’ that returned 429 status codes”

The assistant will use `search_requests` with the appropriate filters to find rate-limited bot traffic.

### Tracking Deployments

“Create a deployment event for version 2.3.1 released from GitHub Actions”

The assistant will use `create_event` to create a marker that correlates with your traffic metrics, making it easy to spot regressions.

### Reviewing Configuration

“Show me all my redaction rules and exclusions”

The assistant will use `list_redactions` and `list_exclusions` to display your data protection and traffic filtering configuration.