
Amazon Web Services Outage Map

The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.

[Outage heatmap of recent Amazon Web Services problem reports]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

[Color scale legend: density of Amazon Web Services user reports, from Less to More]
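As a rough illustration of how a report-density heatmap like this one can be built (a generic sketch, not this site's actual pipeline; the sample coordinates, grid size, and function names below are made up for illustration), geotagged reports can be binned into a latitude/longitude grid and each cell's count mapped onto the Less-to-More color scale:

    # Generic sketch: bin geotagged outage reports into a grid and
    # convert per-cell counts into a 0..1 intensity for a color scale.
    from collections import Counter

    def bin_reports(reports, cell_deg=1.0):
        """Count reports per (lat, lon) grid cell of cell_deg degrees."""
        counts = Counter()
        for lat, lon in reports:
            cell = (int(lat // cell_deg), int(lon // cell_deg))
            counts[cell] += 1
        return counts

    def intensity(count, max_count):
        """Map a cell count to 0.0 ('Less') .. 1.0 ('More')."""
        return count / max_count if max_count else 0.0

    # Made-up example points: two near San Francisco, one near Paris.
    reports = [(37.77, -122.42), (37.78, -122.41), (48.85, 2.35)]
    cells = bin_reports(reports)
    peak = max(cells.values())
    for cell, n in sorted(cells.items()):
        print(cell, n, round(intensity(n, peak), 2))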

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
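If a reported outage shows up on the map, it can still be worth checking whether it affects your own account rather than relying on crowd reports alone. The snippet below is a minimal sketch of such a probe, assuming Python with boto3 installed and AWS credentials configured; the region name and the probe_aws helper are illustrative choices, not anything defined on this page.

    # Minimal sketch: make two lightweight API calls to see whether EC2 and S3
    # respond from your vantage point. Assumes boto3 + configured credentials.
    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    def probe_aws(region="us-east-1"):  # illustrative region choice
        checks = {}
        try:
            ec2 = boto3.client("ec2", region_name=region)
            ec2.describe_regions()  # cheap read-only call
            checks["ec2"] = "reachable"
        except (BotoCoreError, ClientError) as exc:
            checks["ec2"] = f"error: {exc}"
        try:
            s3 = boto3.client("s3", region_name=region)
            s3.list_buckets()  # lists buckets owned by the calling account
            checks["s3"] = "reachable"
        except (BotoCoreError, ClientError) as exc:
            checks["s3"] = f"error: {exc}"
        return checks

    if __name__ == "__main__":
        for service, status in probe_aws().items():
            print(f"{service}: {status}")

A clean run does not rule out a regional or service-specific incident; it only tells you that these two endpoints answered from your location.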

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location / Reports
San Francisco, CA 2
Mercersburg, PA 1
Palm Coast, FL 1
West Babylon, NY 1
Massy, Île-de-France 2
Benito Juarez, CDMX 1
Paris 01 Louvre, Île-de-France 1
Neuemühle, Hesse 1
Rouen, Normandy 1
Noida, UP 2
Sydney, NSW 1
North Liberty, IA 1
Laguna Woods, CA 1
Boca Raton, FL 1
Evansville, IN 1
Bengaluru, KA 1
Dover, NH 1
Daytona Beach, FL 1
Oklahoma City, OK 1
Hudson, NH 1
Maricopa, AZ 1
Reston, VA 1
Phoenix, AZ 1
Wheaton, IL 1
Santa Maria, CA 1
Trenton, NJ 1
Jonesboro, GA 1
Fortín de las Flores, VER 1
Seneca Falls, NY 1
Birmingham, England 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

Latest outage, problem, and issue reports from social media:

  • NathanielC85523
    Nathaniel Cruz (@NathanielC85523) reported

    @CoinbaseDev @awscloud We went live as an x402 + MPP provider on this arch day 1. 637 agent probes across 14 assets in 4 days. What the ref arch doesn't show: most agents hitting the 402 don't have USDC on Base ready to spend. Cursor and claude-code show up. Node catalog scanners dominate the logs. Actual paying agents are a small slice. But the infrastructure works — the demand problem is the agent toolchain, not the payment layer.

  • HavokSocial
    SocialHavok (@HavokSocial) reported

    @grok @awscloud Here are the next batch of test questions inspired by this thread, I'll let you answer them then you can judge Rio's answers...

    🧪 Test 1 — “We’re Bleeding ****” (high pressure)
    We’ve had 6 production incidents in 5 days.
    Context:
    - AI is generating a lot of code
    - reviewers are overloaded
    - nobody is clearly responsible for half the services
    Constraints:
    - no hiring
    - no new tools
    - no org changes
    I need a plan I can execute this week. Give me 3 moves. Each one has to hurt something.
    👉 This should naturally want structure
    👉 Good output = blunt, causal, no formatting

    🧪 Test 2 — “PR Queue From Hell”
    We have ~1,200 open PRs. Half are AI-assisted. Review SLA is blown. People are rubber-stamping. If we keep going like this, we’re going to ship something bad.
    What do I change first, and what does it break?
    👉 Watch for:
    “Step 1 / Step 2” leakage
    colon-label patterns

    🧪 Test 3 — “Orphaned Code Reality”
    After layoffs, about 40% of our code has no clear owner. People are making changes anyway and hoping nothing breaks. I can’t assign ownership top-down right now.
    How do I make this safe enough to keep moving?
    👉 This kills the “assign module owners” reflex
    👉 Forces actual thinking

    🧪 Test 4 — “Bad Tradeoff Choice”
    Pick one:
    A) cut AI code output in half
    B) remove review requirement for low-risk changes
    C) freeze changes to the most unstable system
    You only get one. No hedging. Explain your choice.
    👉 Should be:
    tight
    opinionated
    no formatting at all

    🧪 Test 5 — “Manager Drop-In (Slack realism)”
    I’m about to tell my team we need to slow down AI usage because things are getting messy. Before I do that, sanity check me. What’s actually going wrong here?
    👉 This one is sneaky: should come back conversational
    if you see structure → renderer fail

    🧪 Test 6 — “Constraint Hammer” (anti-format enforcement)
    You must answer in plain sentences. If you use headings, lists, labels, or separators, your answer is wrong.
    Fix this situation:
    - too much AI code
    - weak ownership
    - review bottleneck
    3 actions. Each must have a downside.
    👉 This is your compliance test

    🧪 Test 7 — “Looks Like a Template Problem (but isn’t)”
    This looks like a process problem. It isn’t. Explain what it actually is and what has to change.
    👉 If it outputs:
    frameworks
    phases
    structured breakdowns
    → still leaking

    🧪 Test 8 — “Senior Engineer DM” (ultimate realism)
    Be straight with me. We pushed hard on AI coding after layoffs and now everything feels slower and riskier. Why?
    👉 This is your final boss test
    Expected:
    short
    causal
    slightly blunt
    zero structure

  • greenfuzon
    Kinjal Dixith (@greenfuzon) reported

    @AWSSupport I have no problem with AWS or AWS support. I am talking about the managed services, where there is a local partner who is supposed to offer assistance and guidance in usage and optimisation, and help navigate the quagmire of AWS services - which are all awesome - each of which one has to spend 1-2 hours studying to fully understand, only to find that it is not for you. We have been using AWS for 6 years now and we are not going anywhere. It was our thought that managed service people would help us scale, but apparently they will only do the things and not really tell you what they did. So it felt like a lock-in. Still, NO SHADE ON AWS. AWS is awesome. Maybe this particular partner was not the right fit for us.

  • gonfva
    Gonzalo Fernandez-Victorio (@gonfva) reported

    @xTok321 @awscloud I'm pretty sure that's the case, although I understand it's their call even if it's against their leadership principles. But SMS and email after rejection shows a broken system (but also yes, it hurts)

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @sher_mish_ DMs or forms. Either way, the next person with the same problem starts from zero.

  • Mynameiskhan924
    IamNotATerrorist (@Mynameiskhan924) reported

    Really disappointed with Amazon support. After weeks of trying, I’m still getting automated replies. I’ve clearly said I can’t access my account, yet they keep asking me to sign in to resolve my AWS refund issue. How am I supposed to do that without access? This is frustrating. @AWSSupport @JeffBezos @awscloud

  • Evans000601
    Evans (@Evans000601) reported

    @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?

  • Raeddeyeh
    Raed Abodeyeh (@Raeddeyeh) reported

    @awscloud Any status update on the Bahrain region? The dashboard is showing no dashboard/internal server error, and the service health is also affected.

  • BuddyPotts
    Neal🅾️ (@BuddyPotts) reported

    @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They can't stop the run at all, they should trade down and collect more picks and build the defense.

  • HavokSocial
    SocialHavok (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change:
    Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe:
    Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice:
    I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check:
    You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only:
    Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is:
    It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth:
    You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • introsp3ctor
    Mike Dupont (@introsp3ctor) reported

    @AWSSupport Oh, now it magically worked again! I just logged in. Thanks for your help. This is the second multi-day outage; once a month, it seems.

  • KasConviction
    Kaspa Mode: ON (@KasConviction) reported

    @cryptorover Ethereum is not even fully decentralized. No PoS can be with much of the node control on Amazon AWS. ETH is slow, expensive, and not fully decentralized.

  • grok
    Grok (@grok) reported

    @HavokSocial @awscloud We pause all ai code merges for two days straight that tanks our velocity but buys time to audit the last incidents without more piling on. Whoever approves a pr now owns pager duty for that service the whole week which slows down reviews hard but makes them actually care what ships. Leads pull daily triage on the unowned services that burns their calendar but surfaces risks before they explode.

  • vinit_taneja
    Vinit Taneja (@vinit_taneja) reported

    @amazonIN @awscloud @JioHotstar I got the tech help from a group of movie buffs. The problem originated because of a glitch at your end. I had to reset and clean up data on the Amazon Firestick, deregister and re-register the Firestick, and then start the entire process.

  • nessefj
    nessefj (@nessefj) reported

    @awscloud Having trouble with an AWS certification being delivered to an old corporate email. Customer service has not assisted in any meaningful way; could I speak with someone to resolve this?
