Amazon Web Services Outage Map
The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is indicated by the color scale shown below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
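As a rough illustration of what "on-demand" means in practice, here is a minimal sketch of calling both services through the official boto3 Python SDK. It assumes AWS credentials are already configured on the machine, and the region and bucket name are hypothetical placeholders, not values from this page.

```python
# Minimal sketch of the two AWS services named above, using boto3.
# Assumes credentials are configured (e.g. via `aws configure`);
# "us-east-1" and "example-bucket" are hypothetical placeholders.
import boto3

# EC2: on-demand compute. List instance IDs and states in one region.
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# S3: object storage. Write a small object, then read it back.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello")
obj = s3.get_object(Bucket="example-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```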
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| West Babylon, NY | 1 |
| Massy, Île-de-France | 2 |
| Benito Juarez, CDMX | 1 |
| Paris 01 Louvre, Île-de-France | 1 |
| Neuemühle, Hesse | 1 |
| Rouen, Normandy | 1 |
| Noida, UP | 2 |
| Sydney, NSW | 1 |
| North Liberty, IA | 1 |
| Laguna Woods, CA | 1 |
| Boca Raton, FL | 1 |
| Evansville, IN | 1 |
| Bengaluru, KA | 1 |
| Dover, NH | 1 |
| Daytona Beach, FL | 1 |
| San Francisco, CA | 1 |
| Oklahoma City, OK | 1 |
| Hudson, NH | 1 |
| Maricopa, AZ | 1 |
| Reston, VA | 1 |
| Phoenix, AZ | 1 |
| Wheaton, IL | 1 |
| Santa Maria, CA | 1 |
| Trenton, NJ | 1 |
| Jonesboro, GA | 1 |
| Fortín de las Flores, VER | 1 |
| Seneca Falls, NY | 1 |
| Birmingham, England | 1 |
| Canby, OR | 1 |
| Los Angeles, CA | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issue Reports
The latest outage, problem, and issue reports from social media:
- Mirror AI - The Ultimate Virtual Try on (@TryItOnMirror) reported: @EPAMSystems @ZalandoTech @awscloud Interesting direction. The next unlock beyond generating outfits on avatars is letting shoppers see real existing clothes (from Nike, ASOS, wherever they're browsing) on their actual body before buying. That's where the return problem actually gets solved.
- Christina Haftman (@Cr8DigitalAsset) reported: @AnthropicAI Claude Code API access ≠ consumer access. During the recent Middle East @awscloud outage, developers with direct API integrations kept working, while consumer-facing users were locked out of the conversational interface. That's a meaningful distinction.
- Riniv (@Riniv56942195) reported: @AWSSupport, is there any issue with the Bahrain region (last 2 hours)? We are facing issues, but the service health dashboard is not showing any recent updates. The last update was on March 3rd.
- Bryan (@0xp4ck3t) reported: @AWSSupport URGENT - We have Business+ support and should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, and our **** DB is down. We need someone to take a look at it. Case ID 177566080000785
- Saad Hussain (@SaadHussain654) reported: @sadapaypk app services are down in Pakistan because of a drone attack on @awscloud. Kindly update us on how long it will take to resolve this issue. We have been suffering for 1-2 days.
- canadianbreaches (@canadabreaches) reported: BREACH ALERT: Duc (Duales), a Toronto fintech. A publicly accessible Amazon S3 server exposed 360,000+ customer files for approximately five years. Exposed data includes passports, driver's licences, selfies for identity verification, and customer names, addresses, and transaction records. The Office of the Privacy Commissioner of Canada is investigating. Severity: CRITICAL.
- James G (@IetsG0Brandon) reported: @ring are your servers down? Did you not pay @awscloud? Why am I paying to not connect to my system, and for you to say "it's our fault"? Too busy counting your billions? What ********?
- Zaid (@zqureshi_) reported: .@AWSSupport Bahrain region seems to be down, and there is no update on the health dashboards.
- Vidhiya (@Vidhiyasb) reported: @awscloud Amazon Q's file write tools are having issues. Please fix.
- Decent Cloud (@DecentCloud_org) reported: @AWSSupport @OrenOhad The form is broken. Resolution goes to DM. The next person searching 'AWS MFA network error' finds nothing.
- A void (@xkeshav) reported: @AWSSupport I already raised the issue.
- SocialHavok (@HavokSocial) reported: @grok @awscloud Rio's getting better ...
  Test 1 - Three Moves:
  Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
  Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
  Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.
  Test 2 - First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.
  Test 3 - Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.
  Test 4 - The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.
  Test 5 - Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools; it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.
  Test 6 - Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely, because the volume of unreviewed code is overwhelming human capacity, and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module, including incident history and test coverage, which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned, which blocks critical work on those systems.
  Test 7 - What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.
  Test 8 - The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.
- Queen of hearts (@Petielvr) reported: @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232
- Ser-Jet💜👑(🧩) (@Ser_Jettt) reported: @JothamNtekim1 @awscloud Interesting approach to the problem. Are you currently exploring any builder challenges or just shipping independently?
- Bisi (@bisimusik) reported: @awscloud Can you please attend to my request? Our back @JustiGuide is down.