Is ChatGPT down right now?
Authenticated API inference - 2 models monitored · How we classify outages
ChatGPT is currently operational - 110ms HTTP response. 90-day uptime: 99.8%. ChatGPT API: both monitored models responding - fastest TTFT 737ms.
HTTP uptime (90d)
99.8%
20 incidents (90d)
HTTP response now
110ms
HTTP p50 (7d)
129ms
median ping response
HTTP p95 (7d)
329ms
tail ping response
API Inference Monitoring
Live · every 5 min
Best TTFT (p50)
737ms
time to first token
Best throughput
99 tok/s
output tokens/sec (24h avg)
Min success rate
100%
worst model (24h)
P50 = typical speed. P95 = worst case 95% of the time. Measured by Tickerr's independent inference checks. Requires ≥10 checks to display.
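The p50/p95 figures above are simple order statistics over recent latency samples. A minimal Python sketch using the nearest-rank method; the `summarize` helper and its ≥10-check gate are illustrative, not Tickerr's actual code:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    idx = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[idx]

def summarize(latencies_ms, min_checks=10):
    """Return (p50, p95), or None when fewer than min_checks samples
    exist - mirroring the 'requires >=10 checks to display' rule."""
    if len(latencies_ms) < min_checks:
        return None
    return percentile(latencies_ms, 50), percentile(latencies_ms, 95)
```

With ten evenly spread samples, p50 falls at the middle value and p95 at the slowest one, which is why p95 tracks the tail of the distribution.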
TTFT over 24 hours
ⓘ Authenticated streaming API calls via native fetch. TTFT = milliseconds from request start to first streamed token chunk. Throughput = output tokens ÷ generation time. Checks run from Vercel us-east-1. Independent of the provider's official status page.
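The TTFT and throughput definitions above reduce to two timestamps and a token count. The real checks stream authenticated API responses via native fetch; this Python sketch simulates the same arithmetic over any iterator of text chunks, with a whitespace split standing in for real tokenization:

```python
import time

def measure_stream(chunks):
    """Consume a token-chunk iterator and return (ttft_ms, tokens_per_sec).

    TTFT = milliseconds from request start to the first chunk.
    Throughput = output tokens / generation time (first chunk -> last chunk).
    """
    start = time.monotonic()
    first = None
    tokens = 0
    for chunk in chunks:
        now = time.monotonic()
        if first is None:
            first = now  # first streamed chunk arrives: TTFT endpoint
        tokens += len(chunk.split())  # crude stand-in for a tokenizer
    if first is None:
        return None, 0.0  # stream produced nothing
    gen_time = time.monotonic() - first
    ttft_ms = (first - start) * 1000
    tps = tokens / gen_time if gen_time > 0 else float("inf")
    return ttft_ms, tps
```

Because generation time excludes the wait for the first chunk, throughput reflects decoding speed rather than queueing delay.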
Agent monitoring active · 36 agents reporting · Powered by Tickerr MCP
HTTP endpoint response time (7 days)
p50 129ms · p95 329ms
ⓘ HTTP response times to ChatGPT's status endpoint - these measure infrastructure availability, not API inference speed. For TTFT and model-level API status, see the ChatGPT API Status section above.
90-day uptime comparison
Independent monitoring catches issues faster - official pages sometimes lag by hours before acknowledging problems.
Service components
via status.openai.com ↗
Incident history
Investigating - Issues accessing ChatGPT for logged-out users
Investigating - Codex 5.5 engines are experiencing high error rate
Investigating - Realtime API - SIP/WebRTC flows are down
Investigating - Elevated error rates with GPT 5.5
Investigating - Users may experience elevated errors in ChatGPT uploading files and Codex Cloud creating tasks
gpt-4.1-mini API Latency Degraded
Independent monitoring detected elevated API latency for gpt-4.1-mini. Current TTFT is 2× above the rolling p50 baseline (1495ms vs p50 733ms). The service is responding but slower than normal. Ticker…
Monitoring - Elevated errors for Responses API
Investigating - Degraded Performance with Codex Cloud Tasks
Investigating - Increased Error Rate for gpt-5.5 model in the API
Identified - Elevated transcription failures affecting ChatGPT & Codex
Monitoring - Increased error rate with image generation in the API
Investigating - Issue affecting some pages on the ChatGPT website
Investigating - Elevated error rate for Responses API
Identified - Elevated error rates for image generation
Investigating - Elevated error rates affecting ChatGPT for some users in Europe
Identified - Partial Disruption of ChatGPT Workspace Connector Write Actions
Identified - Elevated errors for ChatGPT Go (5.3 Thinking)
Investigating - ChatGPT users may encounter issues in conversation
Identified - Users may experience elevated error rate for gpt-4o-mini in the API
Identified - Codex stream is disconnecting intermittently
Related pages
ChatGPT API not working? Common error codes
If ChatGPT's API is returning errors, the table below explains what each code means and how to fix it. If errors are widespread, check the live status above - a service incident will appear there within minutes.
| Error | What it means & what to do |
|---|---|
| HTTP 429 | Rate limit or quota exceeded - check tier limits and retry with backoff |
| HTTP 500 | Internal server error - retry once; if persistent, check status.openai.com |
| HTTP 503 | Service unavailable - check for active OpenAI incidents |
| "context_length_exceeded" | Input too long for model - truncate or summarize context |
Note: Tickerr monitors ChatGPT's status endpoint, not individual API calls. An HTTP 429 or 500 in your app may be specific to your account tier - check the rate limits page for plan-specific thresholds.
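The "retry with backoff" advice in the table can be sketched as follows. `request_fn` is a hypothetical stand-in for your actual API call; the retry counts and delays are illustrative, not prescribed values:

```python
import random
import time

RETRYABLE = {429, 500, 503}  # the transient codes from the table above

def call_with_backoff(request_fn, max_retries=4, base_delay=0.5):
    """Retry a request on 429/500/503 with exponential backoff plus jitter.

    request_fn is any callable returning (status_code, body); real code
    would wrap an OpenAI API call here.
    """
    for attempt in range(max_retries + 1):
        status, body = request_fn()
        if status not in RETRYABLE:
            return status, body
        if attempt == max_retries:
            break
        # 0.5s, 1s, 2s, 4s ... with up to 25% random jitter
        delay = base_delay * (2 ** attempt) * (1 + random.random() * 0.25)
        time.sleep(delay)
    return status, body
```

Jitter spreads out retries from many clients so they don't hammer a recovering service in lockstep; non-retryable errors (like a 400 from an oversized context) are returned immediately.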
About ChatGPT status
ChatGPT is an AI chatbot and API built by OpenAI, powering millions of apps and users globally. This page tracks two independent signals: HTTP availability (is chatgpt.com reachable?) and live API inference (is the OpenAI API actually working for developers?). Tickerr makes authenticated API calls to GPT-4o-mini, GPT-4o, and o4-mini every 5–15 minutes, measuring TTFT (time-to-first-token) and output throughput. This catches model-layer failures and overloads that don't show up as HTTP outages. Common ChatGPT API errors: HTTP 429 (rate limit or quota exceeded - check your tier and add backoff), HTTP 500 (internal server error - retry once), HTTP 503 (service unavailable - check for active incidents). OpenAI's official status page at status.openai.com is typically updated within 15–30 minutes of a confirmed incident; Tickerr's independent inference monitoring surfaces issues faster.
You can also check the official ChatGPT status page.