How to Check API Status from Your AI Coding Assistant (MCP Server Guide)

Originally published on API Status Check

You're deep in a checkout flow integration. The Stripe API call fails with a generic timeout. You spend 20 minutes adding debug logs, checking your API keys, reviewing your code—only to eventually Google "is Stripe down?" and discover there's a documented outage.

That debugging time? Completely wasted.

What if your AI coding assistant could check API status for you, right in the middle of solving the problem? No context switching, no browser tabs, no breaking your flow.

That's exactly what the API Status Check MCP server does. It's the first Model Context Protocol tool built specifically for API status monitoring, and it works with Claude Desktop, Cursor, Windsurf, and any other MCP-compatible assistant.

Here's how to set it up and why it matters for AI-assisted development.

The Problem: Debugging Ghost Issues

The scenario every developer knows:

// Your perfectly fine code
const payment = await stripe.charges.create({
  amount: 2000,
  currency: 'usd',
  source: token
})
// ❌ Error: connect ETIMEDOUT

You waste time on:

  • ✅ Checking your API key (it's fine)
  • ✅ Testing network connectivity (it's fine)
  • ✅ Reading Stripe docs (your code is fine)
  • ✅ Adding retry logic (doesn't help)
  • Realizing Stripe's API is having issues (20 minutes later, via Twitter)

If you knew Stripe was experiencing degraded performance before you started debugging, you'd skip straight to the workaround or communicate the issue to stakeholders.

AI assistants are supposed to help you code faster. They should also help you debug faster—by checking if the problem is even yours to solve.

What is Model Context Protocol (MCP)?

The 30-second version: MCP is an open protocol that lets AI assistants use external tools and data sources.

Think of it like this:

  • Your AI assistant is smart but isolated—it only knows what's in its training data
  • MCP gives it "hands"—it can call APIs, read files, query databases, check system status
  • You define which tools your assistant can access

Why it matters:
Anthropic (Claude), Cursor, Windsurf, and others are adopting MCP as the standard way to extend AI assistants. Instead of building custom plugins for each tool, developers build one MCP server that works everywhere.

API Status Check is the first API monitoring tool to ship an MCP server. That means checking API status becomes a native capability for any MCP-compatible assistant—no browser, no separate dashboard, no context switching.

Introducing the API Status Check MCP Server

The apistatuscheck-mcp-server wraps our public API endpoints and exposes five tools:

  1. check_status — Check a specific API by slug
  2. list_apis — See all monitored APIs
  3. list_categories — Browse by category (Payments, Cloud, AI, etc.)
  4. check_category — Check all APIs in a category
  5. check_url — Check any public URL's HTTP status

Your AI assistant can now answer questions like:

"Is Stripe down?"

"Check the status of all payment APIs before I deploy."

"What's the current uptime of OpenAI's API?"

And it gets real-time data from our monitoring network—not stale training data, not guesses.
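Curious what that looks like under the hood? When your assistant invokes one of these tools, it sends a standard MCP tools/call request over JSON-RPC 2.0. Here's a rough sketch of the shape of that message in TypeScript (the slug argument name is an assumption for illustration; the server's tool schema defines the real inputs):

// Rough sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
// The "slug" argument name is an assumption; check the server's tool schema for real inputs.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "check_status",
    arguments: { slug: "stripe" },
  },
};

// The server replies with a result the assistant reads back to you, roughly:
// { content: [{ type: "text", text: "Stripe is operational..." }] }
console.log(JSON.stringify(toolCallRequest, null, 2));

You never write this by hand; the assistant and the MCP server exchange these messages for you.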

Setup Guide: Claude Desktop

Claude Desktop is the easiest way to start using MCP tools.

Installation

No npm install required—just configure Claude Desktop to run the server on demand:

1. Open your Claude Desktop config file:

macOS:

open ~/Library/Application\ Support/Claude/claude_desktop_config.json

Windows:

notepad %APPDATA%\Claude\claude_desktop_config.json

2. Add the MCP server:

{
  "mcpServers": {
    "apistatuscheck": {
      "command": "npx",
      "args": ["-y", "apistatuscheck-mcp-server"]
    }
  }
}

3. Restart Claude Desktop

That's it. Claude now has access to real-time API status data.

Using It

Start a conversation and ask:

You:

"Is OpenAI down?"

Claude:

Using tool: check_status("openai")

OpenAI's API is currently operational. Status: ✅ No issues reported.
Last checked: 2 minutes ago.

Or check your entire dependency stack before deploying:

You:

"Check status of Stripe, AWS, and Cloudflare"

Claude:

Using tools: check_status("stripe"), check_status("aws"), check_status("cloudflare")

Stripe: ✅ Operational

AWS: ⚠️ Degraded Performance (us-east-1)

Cloudflare: ✅ Operational

Note: AWS is reporting elevated error rates in us-east-1. You may want to delay deployment.

This is the power of MCP: Your assistant becomes situationally aware of infrastructure health, in the context of your work.

Setup Guide: Cursor

Cursor is a fork of VS Code with built-in AI assistance. MCP support is newer but works great.

Installation

1. Install the MCP server globally:

npm install -g apistatuscheck-mcp-server

2. Open Cursor settings:

  • macOS: Cmd+Shift+P → "Preferences: Open Settings (JSON)"
  • Windows/Linux: Ctrl+Shift+P → "Preferences: Open Settings (JSON)"

3. Add the MCP configuration:

{
  "mcp.servers": {
    "apistatuscheck": {
      "command": "apistatuscheck-mcp-server"
    }
  }
}

4. Restart Cursor

Using It in Your Workflow

The killer feature: Cursor's AI can proactively check dependencies during code review.

Scenario: You're writing code that calls the OpenAI API.

# You write this code:
from openai import OpenAI

client = OpenAI()

def generate_summary(text):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}]
    )
    return response.choices[0].message.content

You ask Cursor:

"Review this function. Is the OpenAI API currently stable?"

Cursor:

Using tool: check_status("openai")

Your code looks good. OpenAI's API is currently operational with no reported issues.

One suggestion: Add a retry mechanism in case of transient failures:

from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def generate_summary(text):
    # ... your code

See what happened? Cursor combined code review with real-time API status. It checked if OpenAI was down before suggesting architectural changes.

This is the future of AI-assisted development: tools that understand both your code and the world it runs in.

Setup Guide: Windsurf

Windsurf (Codeium's AI IDE) supports MCP through a similar config-driven approach.

Installation

1. Install the MCP server:

npm install -g apistatuscheck-mcp-server

2. Create or edit Windsurf's MCP config:

macOS/Linux:

mkdir -p ~/.windsurf/mcp
nano ~/.windsurf/mcp/config.json

Windows:

mkdir %USERPROFILE%\.windsurf\mcp
notepad %USERPROFILE%\.windsurf\mcp\config.json

3. Add the server configuration:

{
  "servers": {
    "apistatuscheck": {
      "command": "apistatuscheck-mcp-server",
      "description": "Check real-time API status from apistatuscheck.com"
    }
  }
}

4. Restart Windsurf

Using It

Windsurf's AI agent (called "Cascade") can now check API status during planning and execution.

You:

"I need to build a payment flow using Stripe. Check if Stripe is currently stable."

Cascade:

Checking API status...

Stripe is operational. Let's proceed with the integration. I'll scaffold:

  • Payment intent creation endpoint
  • Webhook handler for payment confirmations
  • Error handling with exponential backoff

Would you like me to start with the backend API route?

The benefit: Cascade can factor infrastructure health into architectural decisions. If Stripe were experiencing issues, it might suggest adding a fallback payment provider or implementing a queuing system.

Available Tools: What Your Assistant Can Do

Here's the complete reference for the five MCP tools. Your assistant uses these automatically—you don't need to memorize the syntax.

1. check_status — Check a Specific API

What it does: Returns real-time status for a single API.

Example prompts:

  • "Is GitHub down?"
  • "Check Stripe status"
  • "What's the current state of the OpenAI API?"

Response format:

{
  "api": "stripe",
  "status": "operational",
  "lastCheck": "2026-02-06T15:23:00Z",
  "incidents": []
}

Available slugs: See apistatuscheck.com for the full list (100+ APIs).
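You'll normally never call this tool by hand, but if you want to script against the server directly, here's a rough sketch using the MCP TypeScript SDK. The slug argument name is an assumption based on the response format above; list the server's tools first to confirm the real input schema.

// Sketch: call check_status programmatically via the MCP TypeScript SDK.
// Assumes @modelcontextprotocol/sdk is installed; the "slug" argument is an assumption.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "apistatuscheck-mcp-server"],
});
const client = new Client({ name: "status-script", version: "1.0.0" }, { capabilities: {} });

await client.connect(transport);
const result = await client.callTool({
  name: "check_status",
  arguments: { slug: "stripe" },
});
console.log(result.content); // the tool's response, wrapped as MCP content items
await client.close();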

2. list_apis — See All Monitored APIs

What it does: Returns the complete list of APIs we monitor.

Example prompts:

  • "What APIs do you monitor?"
  • "Show me all available services"
  • "List all APIs in your database"

Response format:

{
  "total": 127,
  "apis": [
    { "slug": "stripe", "name": "Stripe", "category": "Payments" },
    { "slug": "openai", "name": "OpenAI", "category": "AI/ML" },
    // ... 125 more
  ]
}

Use case: Discovery—see what you can monitor.

3. list_categories — Browse by Category

What it does: Returns all available categories (Payments, Cloud, AI/ML, etc.).

Example prompts:

  • "What categories of APIs do you track?"
  • "Show me service categories"
  • "List all monitored API types"

Response format:

{
  "categories": [
    { "name": "Payments", "slug": "payments" },
    { "name": "Cloud Infrastructure", "slug": "cloud" },
    { "name": "AI/ML", "slug": "ai" },
    // ... more
  ],
  "total": 12
}

Use case: Exploring related services (e.g., "show me all payment APIs").

4. check_category — Check All APIs in a Category

What it does: Returns status for every API in a category (e.g., all payment processors).

Example prompts:

  • "Check status of all payment APIs"
  • "Are any AI APIs experiencing issues?"
  • "Show me the health of all cloud providers"

Response format:

{
  "category": "payments",
  "apis": [
    { "slug": "stripe", "name": "Stripe", "status": "operational" },
    { "slug": "paypal", "name": "PayPal", "status": "operational" },
    { "slug": "square", "name": "Square", "status": "degraded" }
  ]
}

Use case: Pre-deployment checks—verify all critical dependencies in one go.

5. check_url — Check Any Public URL

What it does: Performs an HTTP health check on any URL you provide.

Example prompts:

Response format:

{
  "url": "https://api.stripe.com",
  "status": 200,
  "responseTime": 87,
  "reachable": true
}

Use case: Ad-hoc checks for internal APIs or services not in our monitoring list.
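Outside the assistant, you can approximate the same check in a few lines of TypeScript. This is a minimal sketch, not the tool's actual implementation:

// Sketch: a minimal HTTP health check, similar in spirit to the check_url tool.
async function checkUrl(url: string) {
  const start = Date.now();
  try {
    // GET for broad compatibility; some APIs reject HEAD requests.
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    return { url, status: res.status, responseTime: Date.now() - start, reachable: true };
  } catch {
    return { url, status: 0, responseTime: Date.now() - start, reachable: false };
  }
}

console.log(await checkUrl("https://api.stripe.com"));
// e.g. { url: "https://api.stripe.com", status: 200, responseTime: 87, reachable: true }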

Real-World Use Cases

1. "Is Stripe Down?" During Checkout Debugging

Scenario: You're debugging a failing payment flow.

Old workflow:

  1. Add debug logs (5 min)
  2. Test with different cards (5 min)
  3. Check Stripe docs (10 min)
  4. Google "is Stripe down?" (finally realize it's an outage)
  5. Total wasted time: 20+ minutes

New workflow with MCP:

You (in Claude/Cursor):

"My Stripe payment is failing with a timeout. Is Stripe down?"

Assistant:

Using tool: check_status("stripe")

Stripe is currently reporting degraded performance on payment processing. Last incident: "Elevated error rates on charges API" (started 12 minutes ago).

Recommended actions:

  1. Implement retry logic with exponential backoff
  2. Show users a "payment processing may be delayed" message
  3. Monitor Stripe's status page: https://status.stripe.com

Would you like me to add retry logic to your payment code?

Time saved: 20 minutes. Plus, you get actionable next steps.
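If you take the assistant up on the retry suggestion, a minimal TypeScript sketch of exponential backoff looks like this (createCharge is a placeholder for your own payment call):

// Sketch: retry a flaky call with exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3, baseDelayMs = 500): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries, surface the error
      const delay = baseDelayMs * 2 ** i; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error("unreachable");
}

// Usage (createCharge is a placeholder for your own payment call):
// const payment = await withRetry(() => createCharge({ amount: 2000, currency: "usd" }));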

2. Pre-Deployment Dependency Check

Scenario: You're about to deploy a feature that depends on OpenAI, Stripe, and AWS.

You:

"I'm deploying a new feature that uses OpenAI for text generation, Stripe for payments, and AWS S3 for storage. Check if all three are stable."

Assistant:

Using tools: check_status("openai"), check_status("stripe"), check_status("aws")

Dependency Health Check:

OpenAI: Operational

Stripe: Operational

⚠️ AWS: Partial outage in us-east-1 (S3 write operations degraded)

Recommendation: Delay deployment. S3 issues in us-east-1 could cause upload failures. Consider:

  • Waiting for AWS to resolve the issue (~30 min ETA per their status page)
  • OR deploying with a fallback to us-west-2
  • OR adding a retry queue for failed uploads

Would you like me to implement a multi-region S3 fallback?

Result: You avoid deploying during a partial outage, saving support tickets and user frustration.

3. Monitoring Your Entire Dependency Stack

Scenario: You maintain a SaaS app with 15+ third-party dependencies.

You:

"Check status of all payment APIs and all AI APIs. Show me any issues."

Assistant:

Using tools: check_category("payments"), check_category("ai")

Payment APIs (8 checked):

  • ✅ Stripe, PayPal, Square, Braintree, Adyen: Operational
  • ⚠️ Paddle: Degraded (checkout API slow)

AI APIs (12 checked):

  • ✅ OpenAI, Anthropic, Google Gemini, Cohere: Operational
  • ❌ AI21: Major outage (API unreachable)
  • ⚠️ Hugging Face: Degraded (inference API slow)

Action items:

  1. If you use Paddle, warn users of potential slow checkout
  2. If you use AI21, consider failing over to OpenAI
  3. Monitor Hugging Face if you're running time-sensitive inference

Would you like me to implement automatic failover logic?

Benefit: One command gives you a complete infrastructure health snapshot. No manual checking, no stale dashboards.
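If you do want that automatic failover, the core pattern is provider-agnostic. Here's a minimal TypeScript sketch where the provider objects are placeholders for your own client wrappers:

// Sketch: try providers in order, falling back when one fails.
type Provider = { name: string; generate: (prompt: string) => Promise<string> };

async function generateWithFailover(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.generate(prompt);
    } catch (err) {
      lastError = err;
      console.warn(`${provider.name} failed, trying next provider...`);
    }
  }
  throw lastError;
}

// Usage (placeholder providers wrapping your own integrations):
// const text = await generateWithFailover([ai21Provider, openaiProvider], "Summarize: ...");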

4. Post-Mortem Analysis

Scenario: Your app had an outage. Was it your fault or a dependency?

You:

"We had a spike in 500 errors between 2-3 PM. Check if any of our dependencies (Stripe, AWS, Cloudflare, SendGrid) had issues during that window."

Assistant:

Cross-referencing dependency status history...

Found: AWS reported an outage in us-east-1 from 2:05 PM to 2:47 PM (API Gateway and Lambda affected).

Timeline:

  • 2:05 PM: AWS issues started
  • 2:08 PM: Your error rate spiked (matches)
  • 2:47 PM: AWS resolved
  • 2:50 PM: Your error rate normalized

Conclusion: The outage correlates with AWS's API Gateway issues. Your app's errors were a symptom, not the cause.

Recommendation: Document this in your post-mortem. Consider:

  • Implementing retries for AWS API calls
  • Adding circuit breakers to fail fast during AWS outages
  • Setting up proactive alerts for AWS status changes

Time saved: Hours of log diving. You immediately know the root cause.
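If you act on the circuit-breaker recommendation, the pattern itself is small. Here's a bare-bones TypeScript sketch (thresholds and names are illustrative): after a few consecutive failures the breaker opens and calls fail fast until a cooldown passes.

// Sketch: a minimal circuit breaker that fails fast after repeated errors.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold && Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error("Circuit open: failing fast"); // dependency is likely still down
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the breaker
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: const breaker = new CircuitBreaker();
// await breaker.call(() => callDependencyApi());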

Why This Matters: AI-Assisted Development is the Future

Traditional development workflow:

  1. Human writes code
  2. Human debugs code
  3. Human deploys code
  4. Human monitors production

AI-assisted workflow (with MCP):

  1. Human describes intent
  2. AI writes code and checks dependencies
  3. AI warns of infrastructure issues before deployment
  4. Human reviews and deploys
  5. AI monitors production context during debugging

The shift: AI isn't just autocompleting your code—it's checking the environment your code runs in.

What this unlocks:

  • Faster debugging — "Is this my bug or an external outage?" answered instantly
  • Safer deployments — "Are all dependencies healthy?" checked automatically
  • Proactive monitoring — Your assistant notices infrastructure issues before you ask

The API Status Check MCP server is a small example of a big trend: AI assistants that understand not just code, but the world the code runs in—infrastructure health, API status, system dependencies, deployment readiness.

As more MCP tools ship (databases, cloud platforms, monitoring services), AI-assisted development will feel less like "autocomplete" and more like having a senior engineer who already checked everything you were about to Google.

Get Started: Install the MCP Server

Claude Desktop (Easiest)

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "apistatuscheck": {
      "command": "npx",
      "args": ["-y", "apistatuscheck-mcp-server"]
    }
  }
}

Restart Claude Desktop. Done.

Cursor or Windsurf

Install globally:

npm install -g apistatuscheck-mcp-server

Add to your IDE's MCP config (see setup sections above). Restart your IDE.

Try It

Ask your assistant:

  • "Is Stripe down?"
  • "Check status of all payment APIs"
  • "What APIs do you monitor?"

Your assistant now has real-time API status superpowers.

Beyond the MCP Server: More API Status Check Tools

MCP is just one way to integrate API status monitoring into your workflow. We also offer:

  • RSS feeds — Get alerts in Slack, Discord, email
  • Webhooks — Trigger custom logic when APIs go down (see the sketch after this list)
  • Public API — Build your own integrations
  • Status embeds — Show dependency health on your own status page
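As an example of the webhook option, a receiver can be a few lines of TypeScript. The payload fields shown here (api, status) are assumptions for illustration; check the webhook documentation for the actual schema.

// Sketch: a webhook receiver that reacts when a monitored API goes down.
// The payload fields (api, status) are assumptions for illustration only.
import express from "express";

const app = express();

app.post("/webhooks/api-status", express.json(), (req, res) => {
  const { api, status } = req.body ?? {};
  if (status && status !== "operational") {
    console.warn(`Dependency alert: ${api} is ${status}`);
    // e.g. page the on-call engineer, flip a feature flag, enable a fallback provider
  }
  res.sendStatus(200);
});

app.listen(3000, () => console.log("Listening for status webhooks on :3000"));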

For teams: Our paid plans include advanced alerting, uptime SLA tracking, and priority support.

For developers: The MCP server is open source. Contribute or fork it on GitHub.

Start Checking API Status from Your AI Assistant

The API Status Check MCP server brings real-time infrastructure awareness to AI-assisted development.

What you get:

  • ✅ Check API status without leaving your IDE
  • ✅ Pre-deployment dependency health checks
  • ✅ Faster debugging (know if it's an outage before you waste time)
  • ✅ Works with Claude Desktop, Cursor, Windsurf, and any MCP-compatible assistant

Install in 2 minutes:

  1. Add apistatuscheck-mcp-server to your MCP config
  2. Restart your assistant
  3. Ask "Is [API] down?"

Get started:

  • Add apistatuscheck-mcp-server to your MCP config (see the setup guides above)
  • Browse the full list of monitored APIs at apistatuscheck.com

API Status Check monitors 100+ APIs and delivers real-time status via MCP, RSS, webhooks, and API. Start free at apistatuscheck.com.
