
Amazon Web Services Outage Map

The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Color scale: Amazon Web Services users affected, from less to more.
Check Current Status

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Alamogordo, NM 1
San Francisco, CA 2
Mercersburg, PA 1
Palm Coast, FL 1
West Babylon, NY 1
Massy, Île-de-France 2
Benito Juarez, CDMX 1
Paris 01 Louvre, Île-de-France 1
Neuemühle, Hesse 1
Rouen, Normandy 1
Noida, UP 2
Sydney, NSW 1
North Liberty, IA 1
Laguna Woods, CA 1
Boca Raton, FL 1
Evansville, IN 1
Bengaluru, KA 1
Dover, NH 1
Daytona Beach, FL 1
Oklahoma City, OK 1
Hudson, NH 1
Maricopa, AZ 1
Reston, VA 1
Phoenix, AZ 1
Wheaton, IL 1
Santa Maria, CA 1
Trenton, NJ 1
Jonesboro, GA 1
Fortín de las Flores, VER 1
Seneca Falls, NY 1
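
The table above can be tallied programmatically. A minimal Python sketch, with the report counts transcribed by hand from the table:

```python
# Report counts from the "Most Affected Locations" table (transcribed by hand).
reports = {
    "Alamogordo, NM": 1, "San Francisco, CA": 2, "Mercersburg, PA": 1,
    "Palm Coast, FL": 1, "West Babylon, NY": 1, "Massy, Île-de-France": 2,
    "Benito Juarez, CDMX": 1, "Paris 01 Louvre, Île-de-France": 1,
    "Neuemühle, Hesse": 1, "Rouen, Normandy": 1, "Noida, UP": 2,
    "Sydney, NSW": 1, "North Liberty, IA": 1, "Laguna Woods, CA": 1,
    "Boca Raton, FL": 1, "Evansville, IN": 1, "Bengaluru, KA": 1,
    "Dover, NH": 1, "Daytona Beach, FL": 1, "Oklahoma City, OK": 1,
    "Hudson, NH": 1, "Maricopa, AZ": 1, "Reston, VA": 1, "Phoenix, AZ": 1,
    "Wheaton, IL": 1, "Santa Maria, CA": 1, "Trenton, NJ": 1,
    "Jonesboro, GA": 1, "Fortín de las Flores, VER": 1, "Seneca Falls, NY": 1,
}

total = sum(reports.values())
# Locations with the most reports first; ties keep table order (sorted is stable).
busiest = sorted(reports, key=reports.get, reverse=True)[:3]

print(total)    # 33 reports across 30 locations
print(busiest)  # the three locations with 2 reports each
```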

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

Latest outage, problem, and issue reports on social media:

  • stimcrol
    неможливо розбачити (@stimcrol) reported

    @awscloud You had a 13-hour outage because of the AI push. You'd better resign sooner

  • reachtonikhil
    Nikhil Agarwal (@reachtonikhil) reported

    Has the Iran-Israel war caused any real-world cyber issues yet for enterprises, apart from @awscloud and @oracle getting targeted?

  • HavokSocial
    SocialHavok (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • vinit_taneja
    Vinit Taneja (@vinit_taneja) reported

    @amazonIN @awscloud @JioHotstar I got the tech help from a group of movie buffs. The problem originated because of a glitch at your end. I had to reset and clean up the data on my Amazon Firestick, deregister and re-register the Firestick, and then start the entire process.

  • NeilPitman10
    Carbon Tax Neil (@NeilPitman10) reported

    @AWSSupport Great! another bot. OK, but no one is looking at the github issues log.

  • nightshift54619
    nightshift (@nightshift54619) reported

    @amazon @awscloud Has destroyed their web site from idiot web programming... It's so slow and laggy it's unusable. Whatever they did in the last couple weeks, destroyed its usability. Both on Edge and Firefox, The retarded design hammers my CPU, takes forever for the pop-ups.

  • siddhantio
    Siddhant Tripathi (@siddhantio) reported

    @awscloud opened a case over 10 days ago and it’s still unassigned to any agent. Please help in resolving the billing issue.

  • openingai_com
    OpeningAi.com | For Sale (@openingai_com) reported

    @awscloud Banking tech has been running on ancient code for decades. Time to burn it down and build it back with AI.

  • fortnite_Egypt1
    🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported

    @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem. @AWSSupport @awscloud please fix the problem

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @CPGgrowthstudio Production down 5 days. The response commits to nothing. Next customer with this issue finds the same boilerplate.

  • ceO_Odox
    Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵
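
The failure described here — git prompting for credentials in a headless shell after a remote URL drifted — can be guarded against. A minimal sketch, assuming a typical CI shell and a remote named `origin`; this is a config fragment, not the reporter's actual pipeline:

```shell
# Make git fail fast instead of prompting in a non-interactive shell.
export GIT_TERMINAL_PROMPT=0                   # HTTPS remotes: error, don't prompt
export GIT_SSH_COMMAND='ssh -o BatchMode=yes'  # SSH remotes: never ask interactively

# Catch remote-URL drift before the deploy step runs:
git remote get-url origin
```

With `GIT_TERMINAL_PROMPT=0`, a remote that has drifted to an HTTPS URL requiring credentials fails immediately with "fatal: could not read Username" rather than hanging on a prompt, so the pipeline can surface the error.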

  • egy_pl_eagle
    Andrew Sam (@egy_pl_eagle) reported

    @WhoisNuel @awscloud that's scary cause banking are known to be stable and bug free ... also with AI we will still have a maintainability issue

  • jlgolson
    Jordan Golson (@jlgolson) reported

    @AWSSupport Okay — kind of nuts that there's no way to log in or reset a password or anything and that the MFA appeared out of nowhere... also that you can have the same login for AWS Builder AND AWS Console and there's no great explanation for why they're different.

  • WilliamNextLev1
    WilliamNextLvl (@WilliamNextLev1) reported

    Only problem is...$NET is not in the business of cyber security. Lol Cloudflare competes with Amazon AWS for serverless computing. (I would buy $NET stock here, way oversold...)
