Cloudflare Outage Map
The map below shows the cities worldwide where Cloudflare users have most recently reported problems and outages. If you are having an issue with Cloudflare, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports cluster geographically. Report density is indicated by the color scale shown below.
Cloudflare is a company that provides DDoS mitigation, content delivery network (CDN), security, and distributed DNS services. Its services sit between the visitor and the Cloudflare user's hosting provider, acting as a reverse proxy for websites.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Istanbul, Istanbul | 1 |
| Greater Noida, UP | 2 |
| Paris, Île-de-France | 1 |
| Crisfield, MD | 1 |
| Noida, UP | 2 |
| Augsburg, Bavaria | 1 |
| Bengaluru, KA | 1 |
| Montataire, Hauts-de-France | 1 |
| London, England | 1 |
| Attleborough, England | 1 |
| Colima, COL | 1 |
| Leuven, Flanders | 1 |
| New Delhi, NCT | 2 |
| Mâcon, Bourgogne-Franche-Comté | 1 |
| Amsterdam, NH | 1 |
| Ashburn, VA | 1 |
| Rosario, SF | 1 |
| Merlo, BA | 1 |
| Frankfurt am Main, Hesse | 1 |
| Birmingham, AL | 1 |
| Dayton, OH | 1 |
| Miami, FL | 1 |
| Osnabrück, Lower Saxony | 1 |
| Bulandshahr, UP | 1 |
| A Coruña, Galicia | 1 |
| Easton, PA | 1 |
| Guayaquil, Guayas | 1 |
| El Port de Sagunt, Valencia | 1 |
| Medellín, Antioquia | 2 |
| Padova, Veneto | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Cloudflare Issue Reports
The latest outage, problem, and issue reports from social media:
- LukeYoungblood.eth 🛡️ (@LukeYoungblood) reported: I've had an urgent support case open with @Cloudflare since Monday and I'm shocked at how little they care. We hit a 50GB storage limit for durable objects and it's breaking multiple applications for us, yet they still haven't increased our limit. This is totally unacceptable.
- Fluidual (@fluidual) reported: Every tool for extracting YouTube transcripts fails from a cloud VPS. yt-dlp → blocked. Transcript API → blocked. Third-party sites → Cloudflare. Even proxies time out. @_HermesAgent @Teknium is there a clean path here, or is residential proxy support the only real option?
- Ash Nallawalla (@ashnallawalla) reported: @gaganghotra_ Great reminder. Most geo-blocking plugins and CDN rules need a specific exemption for known crawler IP ranges. Cloudflare and Wordfence both support this, but it's rarely enabled by default, so it's worth checking even on sites that seem to be working fine.
- DataDan|AI Data Engineering (@ba_niu80557) reported: A function receives a webhook, validates it, queries a database (150ms network round-trip), and returns a response. Total wall-clock time: 170ms. Actual CPU time: 5ms. AWS Lambda bills you for 170ms. Cloudflare Workers bills you for 5ms. Same function. Same result. A 34x billing difference, because one platform charges for the time your code spent waiting and the other charges only for the time your code spent computing. (Source: Morph, "Cloudflare Workers vs AWS Lambda 2026", April 2026)

That billing model difference is the most underappreciated shift in backend infrastructure in 2026, and it's quietly reshaping how production systems are built, not just for edge use cases but for everything. Here's the thesis I keep arriving at after watching teams migrate over the past 18 months: the "deploy to a region, scale with containers" model that dominated backend engineering from 2015-2024 is being replaced by a "deploy everywhere, scale with isolates" model. Most backend engineers haven't noticed, because the migration is happening one function at a time.

Baselime reported 80% lower cloud costs after migrating from AWS to Cloudflare. Not 8%. Eighty percent. (Source: Morph, April 2026) The numbers are that dramatic because three structural differences compound.

Difference 1: cold starts don't exist anymore. Lambda cold starts run 100ms to 3,000ms depending on runtime, package size, and VPC config; a Java Lambda in a VPC might wait 3 full seconds before your code runs a single line. Cloudflare Workers cold starts are under 5 milliseconds, effectively zero, because Workers don't spin up containers. They run V8 isolates, the same lightweight sandboxing technology that runs your Chrome browser tabs. For a web API serving human users, a 100ms cold start is noticeable but tolerable. For an AI agent making 200 API calls per session, cold starts compound catastrophically: 200 calls × 500ms average cold start = 100 seconds of dead time per session. The agent is waiting for infrastructure, not computing. (Source: Morph, April 2026) This is why every serious AI agent infrastructure team I know is evaluating edge-first deployment: not because edge is trendy, but because their agents are burning money and latency on cold starts that V8 isolates eliminate entirely.

Difference 2: global distribution is the default, not the exception. You deploy a Lambda function and it runs in us-east-1. A user in Tokyo hits your API; their request travels 11,000 km to Virginia, your function processes it, and the response travels 11,000 km back. Round trip: 200-400ms of pure network latency before your code does anything. You deploy a Cloudflare Worker and it runs in 330+ cities worldwide. A user in Tokyo hits your API, the request reaches a Cloudflare edge node in Tokyo, your code runs there, and the response returns from Tokyo. Round-trip network latency: effectively zero. This isn't edge computing as a niche optimization; it's "every function is global by default" as the deployment model. You don't choose a region. There is no region. Your code runs wherever the user is. For a traditional CRUD API, this reduces TTFB by 60-80%. (Source: Digital Applied, "Edge Computing: Cloudflare Workers Dev Guide 2026", January 2026) For AI agent endpoints that serve users across time zones (a customer support agent used by a global company, a coding assistant used by distributed teams), the latency reduction is the difference between "feels instant" and "feels slow."
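To make the post's opening billing example concrete, here is a minimal sketch of that I/O-heavy webhook handler written as a Cloudflare Worker in module syntax. The `DB_API_URL` and `WEBHOOK_SECRET` bindings and the shared-secret check are illustrative assumptions, not details from the post; real webhook validation would typically verify an HMAC signature.

```ts
// Hypothetical sketch: an I/O-bound webhook handler as a Cloudflare Worker.
// Binding names (DB_API_URL, WEBHOOK_SECRET) are illustrative assumptions.
export default {
  async fetch(
    request: Request,
    env: { DB_API_URL: string; WEBHOOK_SECRET: string }
  ): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("method not allowed", { status: 405 });
    }

    // A sliver of CPU time: a shared-secret check standing in for real
    // HMAC signature validation.
    if (request.headers.get("x-signature") !== env.WEBHOOK_SECRET) {
      return new Response("unauthorized", { status: 401 });
    }

    // The ~150ms database round-trip from the post's example happens here.
    // The CPU is idle during these awaits; per the post, Workers bills
    // only the few milliseconds of compute, not the wall-clock wait.
    const payload = await request.json();
    const dbResponse = await fetch(env.DB_API_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(payload),
    });

    return new Response(await dbResponse.text(), {
      status: dbResponse.status,
      headers: { "content-type": "application/json" },
    });
  },
};
```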
Difference 3: the ecosystem became a full stack. The reason edge computing stayed niche from 2018-2023 was simple: you could run code at the edge, but your data was still in a region. Every edge function that needed a database round-tripped to us-east-1 anyway, killing the latency advantage. In 2026, Cloudflare solved this by building an entire data layer at the edge:

- D1: SQLite at the edge. Global read replication. Your queries run where your users are.
- KV: key-value storage with edge caching. Sub-millisecond reads globally.
- R2: object storage. S3-compatible. Zero egress fees. (This alone saves thousands per month for media-heavy applications.)
- Durable Objects: stateful computing at the edge. Strongly consistent, globally coordinated state, the thing that was impossible at the edge until 2024.
- Queues: message queuing with guaranteed delivery.
- AI inference: run ML models on serverless GPUs at the edge.
- Vectorize: vector database for semantic search at the edge.

(Sources: Cloudflare Workers docs; Calmops, "Edge Computing with Cloudflare Workers", March 2026)

This changes the calculus completely. In 2022, edge was "run your CDN logic there." In 2026, edge is "run your entire application there": database, storage, queues, AI inference, state management. Full stack.

The framework ecosystem caught up too. Hono (under 14KB, zero dependencies, Express-like routing) became the standard routing framework for Workers in 2026. You write code that looks almost identical to Express/Fastify, but it runs globally with zero cold starts.

What this means for how you should think about your next backend project: the decision tree has changed.

- Is your workload I/O-heavy (API calls, database queries, webhook processing)? Workers bills CPU time only. You pay for 5ms of compute, not 170ms of waiting. The cost difference is 10-34x. This is most web backends.
- Does your application serve users globally? Workers runs in 330+ cities automatically. No multi-region deployment to manage, no cross-region replication to configure. Global is the default.
- Does your application need zero cold starts? Workers uses V8 isolates: sub-5ms startup. Lambda uses containers: 100ms-3s startup. If you're serving real-time AI agents, chatbots, or latency-sensitive APIs, cold starts are unacceptable.
- Does your workload need heavy compute (video transcoding, ML training, data processing)? Lambda. Workers caps at 128MB memory and has CPU time limits. For compute-heavy tasks, Lambda's 10GB memory and 15-minute execution are necessary.
- Are you deeply integrated with the AWS ecosystem (DynamoDB, SQS, S3 triggers, Step Functions)? Lambda. Workers can't trigger on S3 events or consume DynamoDB streams. Migrating away from Lambda means migrating away from the AWS event-driven ecosystem.

The honest assessment: 80% of web-facing backend functions are I/O-heavy, globally distributed workloads where Workers is structurally cheaper and faster. 20% are compute-heavy or AWS-locked workloads where Lambda is the right choice. Most teams are running 100% on Lambda because that's what they learned in 2018.

The AI angle that ties this back to my usual topics: every AI agent infrastructure pattern I've written about (MCP servers, tool endpoints, RAG retrieval APIs, model routing gateways, cost tracking middleware) is an I/O-heavy workload that serves global users and needs zero cold starts. These are exactly the workloads where edge-first architecture delivers the largest improvement over traditional serverless.
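As a sketch of the Hono-plus-edge-data pattern the post describes, here is a minimal Worker (not from the original post). The `DB` binding name, the `users` table, and the route are assumptions; the `D1Database` type comes from Cloudflare's published workers types.

```ts
// Illustrative sketch: Express-like routing with Hono, backed by a D1 binding.
// Assumes a D1 database bound as "DB" with a `users` table; the D1Database
// type is provided by @cloudflare/workers-types.
import { Hono } from "hono";

type Bindings = { DB: D1Database };

const app = new Hono<{ Bindings: Bindings }>();

// The handler runs in whichever edge location received the request; per the
// post, D1's global read replication keeps reads close to the user.
app.get("/users/:id", async (c) => {
  const row = await c.env.DB
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(c.req.param("id"))
    .first();
  return row ? c.json(row) : c.notFound();
});

export default app;
```

Exporting the Hono app directly works because it exposes the same fetch-handler interface Workers expects, which is part of why the post calls it the standard routing layer for this model.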
An MCP server on Lambda: cold start + regional latency + wall-clock billing = slow and expensive. An MCP server on Workers: zero cold start + global distribution + CPU-only billing = fast and cheap. The infrastructure layer beneath AI agents matters as much as the orchestration layer above them. Most agent architecture discussions focus on LangGraph vs CrewAI and ignore the fact that the function layer underneath is adding 100+ seconds of cold-start dead time per session.

Three uncomfortable questions for any backend team in 2026:

1. What percentage of your Lambda invocation time is your code actually computing vs waiting for I/O? If you're not measuring this, you're paying for wait time. For most web APIs, CPU time is 3-10% of wall-clock time; the other 90-97% is network round-trips that Lambda bills you for and Workers doesn't.
2. Where are your users, and where is your code? If users are global and code is in us-east-1, you're adding 100-300ms of pure network latency to every request. Workers eliminates this by running your code where your users are, automatically.
3. When was the last time you evaluated whether your serverless architecture is still the right one? If the answer is "when we set it up in 2020", the infrastructure landscape has fundamentally changed: edge-first wasn't viable in 2020, and it is in 2026. A 2-day migration experiment on a single non-critical endpoint will tell you whether the cost and latency improvements are real for your workload.

The thesis:

- 2016-2020: "serverless means Lambda"
- 2021-2024: "edge is interesting for CDN logic but not real backends"
- 2026: "edge-first is the default for I/O-heavy global workloads, and traditional serverless is the fallback for compute-heavy regional workloads"

The inversion already happened. Most backend engineers are still deploying to Lambda because that's what the tutorials taught them in 2018. The teams that re-evaluated are running the same functions at 60-80% lower latency and 80% lower cost. Same code. Different infrastructure. Dramatically different bill. The boring infrastructure migration wins. It always does. Especially when the exciting AI agent is waiting 100 seconds for cold starts nobody measured.
- Johnmark Obiefuna (@jayhemz) reported: Proxying through Cloudflare solves almost all WordPress security issues.
- Bolt.new Help (@boltdotnewhelp) reported: @thierrybezier @boltdotnew We recently experienced a service disruption caused by a Cloudflare issue, which has now been resolved. To ensure you're on the latest version, please perform a hard refresh of your project and log out, then back in. You should be good to go!
- TheGraySeed (@TheGraySeed) reported: @ChibiReviews Cloudflare should've been financially downsized when they went down alongside the entire internet last year.
- Khan Goatman (@Khan_Goatman) reported: It's legit not even real, even though it says it's licensed under glitch's name. It's clearly fake. My reasoning is that this is identical to the FAN caine and abel site, and glitch never uses Cloudflare. No offense, but why do y'all believe in things instantly? 😭 #murderdrones
- Philip Wallage (@Wallage) reported: 3 weeks ago Cloudflare published a beautiful blog post about redesigning that little widget you click to verify you're not a robot. WCAG 2.2 AAA. Rigorous user research. Eight participants from eight countries, blinded testing. They wrote that "when visual consistency conflicted with readability, readability won. Every time."

Today they launched their new marketing homepage. No blog post about it. No press release. No tweet from Matthew Prince. For a company that announces a quarterly Forrester report and individual API changelogs, the silence on a full marketing site relaunch is loud. The reaction on X has been brutal. Some of what's being flagged:

- Login button goes to the sign-up page
- "View docs" link on the careers page points to R2 storage
- Multiple users with no colourblindness saying the contrast hurts their eyes
- Broken scrolling on Safari
- Doesn't render properly on mobile
- An em-dash in the hero headline, days after a whole blog post about removing em-dashes for readability

A Cloudflare engineer replied to the thread: "expect fixes in the coming days." I'm not piling on Cloudflare. Shipping at their scale is hard and they'll fix it. The contrast between the two artefacts is the lesson. The blog post about the human-verification widget is what design teams want to be true about themselves: process, research, accessibility as a value. The marketing homepage is what actually ships under deadline pressure when nobody owns the QA pass.

If you look at most e-commerce sites I audit, the same gap exists. The brand book says "accessible, considered, customer-first." The product detail page has 11px grey-on-grey microcopy, a CTA that disappears on hover, and a sticky add-to-cart that covers the price on mobile. The blog post you want to write about your design system matters less than the page where you take money from people. Audit what you actually shipped, not what you meant to ship.
- Shreyash (@WebDevCaptain) reported: @mauerbac @Cloudflare Cloudflare's offerings are pretty good, but I guess their devrel is bad; how come we don't know about so many cool products with such good SLAs? @CloudflareDev
- Diogo Moreira (@diogomrcom) reported: @genmon @threepointone @inanimate_tech Cloudflare's lack of support for a spending cap is what's currently stopping me from going deep on the platform. How do you handle that?
- Palash Bansal (@repalash) reported: @CherryJimbo Doesn't work on mobile either. Cloudflare is embracing AI so much that it's going into slop territory. For those saying humans also make mistakes: yes, but most don't approve broken things for **** by lying about it and calling it done and stable.
- Focus $EDGE (@focus_edge) reported: @dotkrueger @Cloudflare can't get my login mail
- Ashish Rawat (@eashish93) reported: I was trying to solve a Cloudflare sandbox issue related to HMR (websocket) with opus 4.7 for the past three days and filed multiple issues on their repos (sandbox, workers-sdk, etc.). I almost gave up because it's an issue in their SDK; then came gpt 5.5, which fixed the bug in just one pass.
- Saurav (@snipextt) reported: @ayushagarwal @dodopayments @Cloudflare Love this, Ayush. Genuinely curious though, got anything in the pipeline for the "no one knows my product exists" problem? Asking for a friend 🫠