
Amazon Web Services Outage Map

The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
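For readers curious how such a heatmap can be produced, the sketch below is a minimal illustration, not the site's actual pipeline: it bins user reports, assumed here to arrive as (latitude, longitude) pairs with made-up example coordinates, into grid cells whose counts a color scale like the one described above could then render.

```python
from collections import Counter

# Hypothetical reports as (latitude, longitude) pairs; real data would come
# from user-submitted reports and social media mentions.
reports = [
    (48.73, 2.27),    # Massy, Île-de-France
    (48.86, 2.34),    # Paris 01 Louvre
    (19.37, -99.16),  # Benito Juarez, CDMX
    (28.57, 77.32),   # Noida, UP
]

CELL_SIZE = 1.0  # grid resolution in degrees; coarser cells merge nearby cities

def cell(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate onto a grid cell CELL_SIZE degrees on a side."""
    return (int(lat // CELL_SIZE), int(lon // CELL_SIZE))

# Count reports per cell; the counts drive the "Less" to "More" color scale.
density = Counter(cell(lat, lon) for lat, lon in reports)

for (row, col), count in density.most_common():
    print(f"cell ({row}, {col}): {count} report(s)")
```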

Legend: Amazon Web Services users affected, shaded from Less to More.

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
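If you would rather probe availability yourself than rely on crowd-sourced reports, one quick sanity check is to make a lightweight, read-only API call against each service and see whether it responds. The sketch below is a minimal, unofficial example using the boto3 SDK (credentials and a default region are assumed to be configured locally); a timeout or repeated errors indicates a problem between you and AWS, not necessarily a global outage.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

# Short timeouts so a regional problem surfaces quickly instead of hanging.
cfg = Config(connect_timeout=5, read_timeout=5, retries={"max_attempts": 1})

def probe(service: str, call):
    """Run one lightweight API call and report whether the endpoint answered."""
    try:
        call(boto3.client(service, config=cfg))
        print(f"{service}: reachable")
    except (BotoCoreError, ClientError) as exc:
        print(f"{service}: problem ({exc})")

# S3: listing buckets exercises the S3 API without touching object data.
probe("s3", lambda c: c.list_buckets())
# EC2: describing regions is a cheap, read-only control-plane call.
probe("ec2", lambda c: c.describe_regions())
```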

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location | Reports
Massy, Île-de-France | 2
Benito Juarez, CDMX | 1
Paris 01 Louvre, Île-de-France | 1
Neuemühle, Hesse | 1
Rouen, Normandy | 1
Noida, UP | 2
Sydney, NSW | 1
North Liberty, IA | 1
Laguna Woods, CA | 1
Boca Raton, FL | 1
Evansville, IN | 1
Bengaluru, KA | 1
Dover, NH | 1
Daytona Beach, FL | 1
San Francisco, CA | 1
Oklahoma City, OK | 1
Hudson, NH | 1
Maricopa, AZ | 1
Reston, VA | 1
Phoenix, AZ | 1
Wheaton, IL | 1
Santa Maria, CA | 1
Trenton, NJ | 1
Jonesboro, GA | 1
Fortín de las Flores, VER | 1
Seneca Falls, NY | 1
Birmingham, England | 1
Canby, OR | 1
Los Angeles, CA | 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • mulonda_k
    McG M. DLT (@mulonda_k) reported

    @NBA @awscloud The problem is that refs allow people to hold him, shove him, kick him and go unpunished

  • SpainGreatAgain
    Bella Iberia (@SpainGreatAgain) reported

    @NextGenStats @NFL @awscloud Expected Points Added sounds fancy until you realize it’s just another way for nerds to tell us Mahomes is a god while downplaying actual game-winning drives and clutch plays 😤 EPA, success rate, all that AWS-powered nonsense cool for spreadsheets, terrible for real football passion. Stop letting models replace what our eyes see on Sundays. Bring back old-school football debate

  • 0xp4ck3t
    Bryan (@0xp4ck3t) reported

    @AWSSupport We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785

  • TutorHailApp
    TutorHail App (@TutorHailApp) reported

    @awscloud hello we have tried reaching out to you but in vain our EC2 attached to UAE has been down forever and you have no communication out. Can we know what we are dealing with it is our main server this is ridiculous handing us over to your bots with no answers.

  • erossics
    Erossi (@erossics) reported

    Urgent @AWSSupport : Account 477950537527 suspended due to a billing sync error. Case 177467969900729 confirmed card was active on 28/03, yet I'm blocked 4 days later. Dashboard shows $0.00 due/Pending, so I can't pay manually. Production is DOWN. Please unsuspend/retry charge!

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport The outage gets a postmortem. Your deadline doesn't read it.

  • dmauas
    David Mauas (@dmauas) reported

    @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!

  • thanhdar1999
    thanhdar nguyen (@thanhdar1999) reported

    @ikmolp0909 @henrychang10000 @awscloud I bought it when it was 0.3, now it looks terrible. Why is that? What's the way forward?

  • Zuffnuff
    Ted (@Zuffnuff) reported

    @F1 @awscloud Keep trying to spin it but these cars have a physics problem. Fake race cars.

  • ChristhylCC
    Christhyl Ceriche (@ChristhylCC) reported

    @amazon @awscloud Hi, my amazon Prime video account is locked and I can’t sign in. When I try to contact support, it asks me to log in and I’m stuck in a loop. Could you please help me recover access?

  • MrHoboM
    Mr. Hobo Millionaire (@MrHoboM) reported

    @AWSSupport @brankopetric00 It’s terrible. No one with an ounce of design skill would build it the way you did. Ask whatever AI you use to judge it.

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @vladimirprus The problem isn't bursting. It's that credits are invisible until you're throttled.

  • amazon
    Amazon (@amazon) reported

    Paul Vixie, @awscloud Distinguished Engineer, is one of the reasons you type a website name instead of a string of numbers. He's also the reason your email inbox isn't overflowing with spam. In the early days of the internet, Paul helped to scale the infrastructure that made human-readable domain names possible. Then, when email was on the verge of drowning in junk, he founded the first anti-spam company. Now, he's tackling the security challenges of the agentic AI era. Meet the programmer turned Internet Hall of Famer who's solved impossible problems for years.

  • fortnite_Egypt1
    🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported

    @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem.@AWSSupport @awscloud please fix the problem

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.
