Amazon Web Services Outage Map
The map below depicts the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
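The site does not publish the exact thresholds behind its color scale, but the bucketing logic such a heatmap uses can be sketched roughly as follows. The count ranges and color names here are illustrative assumptions, not the site's actual values:

```python
def heat_color(report_count: int) -> str:
    """Map a location's report count to a heatmap color bucket.

    The thresholds below are made-up examples; the real site does not
    publish its scale.
    """
    if report_count <= 0:
        return "none"
    if report_count <= 2:
        return "yellow"   # a few scattered reports
    if report_count <= 5:
        return "orange"   # a noticeable cluster
    return "red"          # heavy concentration of reports

print(heat_color(2))  # yellow
```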
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Alamogordo, NM | 1 |
| San Francisco, CA | 2 |
| Mercersburg, PA | 1 |
| Palm Coast, FL | 1 |
| West Babylon, NY | 1 |
| Massy, Île-de-France | 2 |
| Benito Juarez, CDMX | 1 |
| Paris 01 Louvre, Île-de-France | 1 |
| Neuemühle, Hesse | 1 |
| Rouen, Normandy | 1 |
| Noida, UP | 2 |
| Sydney, NSW | 1 |
| North Liberty, IA | 1 |
| Laguna Woods, CA | 1 |
| Boca Raton, FL | 1 |
| Evansville, IN | 1 |
| Bengaluru, KA | 1 |
| Dover, NH | 1 |
| Daytona Beach, FL | 1 |
| Oklahoma City, OK | 1 |
| Hudson, NH | 1 |
| Maricopa, AZ | 1 |
| Reston, VA | 1 |
| Phoenix, AZ | 1 |
| Wheaton, IL | 1 |
| Santa Maria, CA | 1 |
| Trenton, NJ | 1 |
| Jonesboro, GA | 1 |
| Fortín de las Flores, VER | 1 |
| Seneca Falls, NY | 1 |
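The per-location counts above can be rolled up by region to show where reports cluster. A minimal sketch of that aggregation, using a handful of rows taken from the table (the grouping key is simply the region code after the comma):

```python
from collections import Counter

# A few rows from the table above: (location, reports)
reports = [
    ("Alamogordo, NM", 1),
    ("San Francisco, CA", 2),
    ("Laguna Woods, CA", 1),
    ("Santa Maria, CA", 1),
    ("Palm Coast, FL", 1),
    ("Boca Raton, FL", 1),
    ("Daytona Beach, FL", 1),
]

# Group counts by the region code after the comma.
by_region = Counter()
for location, count in reports:
    region = location.rsplit(",", 1)[1].strip()
    by_region[region] += count

print(by_region.most_common(2))  # [('CA', 4), ('FL', 3)]
```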
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issues Reports
Latest outage, problem, and issue reports on social media:
- Mike Dupont (@introsp3ctor) reported: @AWSSupport the billing ticket has been open for days, on Discord and GitHub too, and absolutely no response from AWS or Kiro. It seems like another release-on-a-Friday issue, so I will wait a bit. Happy holidays. My guess is they ran out of money and had to pause a bunch of loss-making accounts.
- Ryan Lake (@RyanLake230731) reported: @MetaNewsroom @awscloud @Meta When are you going to fix Facebook and Instagram and unblock my account and others that were mistakenly blocked by AI moderation?
- Smarty (@RathoreSmarty) reported: @AmazonHelp @amazonIN @awscloud It is unacceptable. A refund was already initiated without any investigation; now, after investigation, it's denied citing policy violations. If there was an issue, why was the refund approved in the first place? Reversing it later is unfair. I request an immediate refund.
- Erossi (@erossics) reported: Urgent @AWSSupport: Account 477950537527 suspended due to a billing sync error. Case 177467969900729 confirmed the card was active on 28/03, yet I'm blocked 4 days later. The dashboard shows $0.00 due/pending, so I can't pay manually. Production is DOWN. Please unsuspend/retry the charge!
- Saad Hussain (@SaadHussain654) reported: @sadapaypk app services are down in Pakistan because of a drone attack on @awscloud. Kindly update us: how long will it take to resolve this issue? We have been suffering for 1-2 days.
- Decent Cloud (@DecentCloud_org) reported: @Mn9or_ @AWSSupport No support subscription means you wait for someone else's ticket to fix your outage.
- Syed (@HamzaShah47) reported: Has anyone been facing issues with Amazon EC2 services in the Middle East? @amazon @awscloud
- POWER magazine (@POWERmagazine) reported: 4/5 The good news: the same AI finding vulnerabilities can also help fix them. @awscloud reports a 50x improvement in security log analysis. AI models are now generating viable patches. And Project Glasswing will publish practical security recommendations within 90 days.
- yourclouddude (@yourclouddude) reported: A startup wasted $50K on AWS. Not because AWS is expensive, but because they didn't understand it. Here's what went wrong:
  - Left Amazon EC2 running 24/7: idle servers = burning cash
  - Dumped everything into Amazon S3: no lifecycle rules = endless storage costs
  - Ignored Amazon CloudWatch: no visibility = no control
  - Used on-demand pricing everywhere: paid the maximum price
  - Over-sized Amazon RDS: paying for capacity they didn't need
  - No budgets, no alerts, no limits: surprise bill of $50K

  What smart teams do instead:
  - Auto-scale everything
  - Set S3 lifecycle policies
  - Monitor costs daily
  - Use Savings Plans
  - Right-size monthly
  - Set AWS Budgets (non-negotiable)

  AWS doesn't charge you for usage. It charges you for mistakes. Fix this early and save thousands.
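The "EC2 running 24/7" point in the report above is simple arithmetic: an always-on instance bills every hour whether or not it serves traffic. A back-of-the-envelope sketch, where the hourly rate is a made-up example and not a real AWS price:

```python
def monthly_idle_cost(hourly_rate: float, instances: int, hours: int = 730) -> float:
    """Cost of leaving instances running for a full month (~730 hours)."""
    return round(hourly_rate * instances * hours, 2)

# Hypothetical: 10 instances at an assumed $0.20/hour, idle all month.
print(monthly_idle_cost(0.20, 10))  # 1460.0
```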
- Mohammad Imran (@mohdemraan) reported: @AWSSupport This is not about account suspension. The case is about a billing issue, and I haven't heard from the support team since I opened the case.
- CodieEditor (@CodieEditor) reported: @AWSSupport Still, there is no fix for this billing. We are holding off on using our AWS account to avoid further unexpected costs. Please fix Opus 4.7's billing issues in Bedrock. I think all AWS users might be facing this issue at some point.
- Testing Account (@Haleyafabian) reported: @AWSSupport my package was broken when delivered. I need it replaced asap.
- steve (@gpusteve) reported: Is the AWS CLI broken for anyone else? I literally can't sign in. @awscloud