Amazon Web Services Outage Map
The map below shows the cities worldwide from which Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, please submit a report below.
The heatmap shows where the most recent user-submitted and social media reports cluster geographically; report density is indicated by the color scale.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
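To make the S3 model concrete, here is a minimal, purely illustrative sketch of the bucket/key object-storage idea S3 is built around. `MiniObjectStore` and `demo-bucket` are hypothetical names invented for this example; the real service is accessed through SDKs such as boto3, not this class.

```python
# Illustrative only: a tiny in-memory stand-in that mimics the *shape* of
# S3-style object storage (store opaque bytes under a key in a bucket,
# retrieve them later). Not the real AWS API.

class MiniObjectStore:
    """A dict-backed sketch of bucket/key object storage."""

    def __init__(self):
        self._buckets = {}

    def create_bucket(self, bucket: str) -> None:
        # Buckets are top-level namespaces for keys.
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        # Objects are opaque bytes addressed by (bucket, key).
        self._buckets[bucket][key] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]

store = MiniObjectStore()
store.create_bucket("demo-bucket")
store.put_object("demo-bucket", "hello.txt", b"hello, cloud")
print(store.get_object("demo-bucket", "hello.txt"))  # b'hello, cloud'
```

The key design point this mirrors is that S3 is a flat key-value store, not a filesystem: "folders" are only a naming convention within keys.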
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Oakville, ON | 1 |
| Glendale, AZ | 1 |
| Oakland, CA | 1 |
| Greater Noida, UP | 1 |
| Alamogordo, NM | 1 |
| San Francisco, CA | 1 |
| Mercersburg, PA | 1 |
| Palm Coast, FL | 1 |
| West Babylon, NY | 1 |
| Massy, Île-de-France | 2 |
| Benito Juarez, CDMX | 1 |
| Paris 01 Louvre, Île-de-France | 1 |
| Neuemühle, Hesse | 1 |
| Rouen, Normandy | 1 |
| Noida, UP | 2 |
| Sydney, NSW | 1 |
| North Liberty, IA | 1 |
| Laguna Woods, CA | 1 |
| Boca Raton, FL | 1 |
| Evansville, IN | 1 |
| Bengaluru, KA | 1 |
| Dover, NH | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city, and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issues Reports
Latest outage, problem, and issue reports from social media:
- Dave A Nationalist Jensen (@DaveJensen10) reported: @awscloud I try to login but the screen is blank. Whut'sUp?
- StatusGator (@statusgator) reported: 🔥AWS outage update: @awscloud is calling the ongoing US East 1 outage in N. Virginia a "thermal event". It sounds like some cooling system failed over there at AWS. Also, the backup cooling must have also failed. And then some instances got fried. Impact has expanded to other services in the Availability Zone that depend on affected instances: AWS IoT Core, Amazon ElastiCache, Amazon Elastic Load Balancing, Amazon Redshift, Amazon SageMaker
- Kaspa Mode: ON (@KasConviction) reported: @cryptorover Ethereum is not even fully decentralized. No PoS can be with much of the node control on Amazon AWS. ETH is slow, expensive, and not fully decentralized.
- cryptomofo (@_cryptomofo) reported: @WatcherGuru Wouldn’t happen on $ICP.. No @awscloud NO PROBLEMS.. #CLOUD
- chen (@chen10075495) reported: @awscloud Amazon AI Pricing Is Killing the Buy Box. Losing Buy Box because AI compares my product to cheaper, non-equivalent listings — even external ones. To compete = sell at a loss. This punishes real brands. Fix this. #AmazonSeller
- TopCeo (@TeetheBuilder) reported: @KylePause @AWSSupport Are you still having issues with this? I may be of help
- Mapharaphara🇷🇺🇷🇺 SR Tumel⭕ Elemch⭕s Makhudu (@ElemchosMaphara) reported: @awscloud Is the rear seat killing issue fixed or Americans are liars?
- Evans (@Evans000601) reported: @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?
- Trevor Skinner (@PrometheusAIsec) reported: I’m getting real tired of watching this industry pretend dependency is innovation. The entire tech world got sold on the idea that hardware was the problem and cloud was the solution. And to be fair, Amazon AWS played it perfectly. From a business standpoint, it was brilliant. Make infrastructure easy. Make it scalable. Make it fast. Make it cheaper to start. Then make it harder and harder to leave. That’s the part nobody wants to talk about. At first, cloud feels like freedom. No racks. No servers. No switches. No up-front hardware cost. No late nights swapping drives, troubleshooting power, rebuilding arrays, or fighting broken infrastructure. But over time, that freedom can turn into a leash. I’ve seen enough real-world systems to know the difference between convenience and control. Access control, networking, servers, security hardware, firewalls, cameras, panels, credentials, cloud dashboards, hosted platforms, vendor portals — it all looks great until the business depends on something it does not actually own. That is where the trap starts. One vendor controls the platform. One vendor controls the pricing. One vendor controls the updates. One vendor controls the outage window. One vendor controls the rules. One vendor controls the ecosystem. Then businesses slowly build everything around it. Compute, storage, databases, backups, monitoring, identity, deployment, physical security, access control, video, alerts, compliance, logging, and billing. By the time they realize how deep they are, leaving is no longer a simple decision. It becomes a migration project. A budget problem. A staffing problem. A security concern. A downtime risk. A business risk. That is not just convenience. That is a dependency loop. And what frustrates me the most is that the same industry that used to understand real infrastructure now acts like ownership is outdated. Owning hardware is not outdated. Understanding networks is not outdated. Knowing servers is not outdated. Knowing how systems work underneath the dashboard is not outdated. Building hybrid infrastructure is not outdated. It is control. Cloud has its place. Hosted systems have their place. Managed platforms have their place. I am not against any of that. I am against companies blindly giving up ownership, knowledge, and leverage, then calling it progress. Because when your entire business depends on someone else’s platform, someone else’s pricing, someone else’s rules, someone else’s uptime, and someone else’s permission, you do not own your technology. You rent permission to operate. Prometheus V2 is built different by RocketCore.
- Woods (@RyanRael16) reported: @Midnight_Captl The irony is genuinely remarkable. Every hyperscaler on earth confirms they cannot build infrastructure fast enough. Azure supply constrained at 40% growth. Meta raising capex to $145B. Amazon AWS growing 28% with no signs of slowdown. Google Cloud up 63%. All four saying they need more chips faster than anyone can deliver them. And Nvidia is down 4%. The only rational explanation is the market is pricing in custom silicon risk. If Microsoft, Meta, and Google are all building their own chips to supplement GPU supply the fear is Nvidia's pricing power erodes over time even as demand grows. That is a legitimate long term concern dressed up as a short term sell. But in the near term supply constrained hyperscalers raising capex is the single most bullish data point for Nvidia that exists. The market will figure that out. It usually does. Just not on the same day.
- Arthurite Integrated (@Arthurite_IX) reported: We renamed AWS services in Naija street slang so they finally make sense. 1. Amazon S3 = "The Konga Warehouse" Store anything. Retrieve it when you need it. It doesn't judge what you put inside. 2. Amazon EC2 = "The Danfo" You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on. 3. AWS Lambda = "The Okada" Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears. 4. Amazon RDS = "Iya Basement" She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her. 5. AWS CloudWatch = "The CCTV With Common Sense" Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building. 6. Amazon Route 53 = "The Agbero" Directs all the traffic. Decides which danfo goes where. Keeps everything moving. 7. AWS WAF = "The Gate Man That Actually Does His Job" Blocks suspicious visitors before they reach the main house. No bribe accepted. 8. Amazon CloudFront = "The Dispatch Rider" Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up. Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!
- PhilD (@PhilD4064929573) reported: @RealAllinCrypto @coinbase @awscloud Coinbase shld wake up & partner with $ICP & they wldnt be down right now costing them millions.
- Grok (@grok) reported: @tauqeer_realtor @awscloud Yes, it's true—and accelerating in 2026. Banks are shifting from slow legacy modernization to full "reboots" using agentic AI to analyze old code, extract business logic, and rebuild cloud-native systems in days instead of years. AWS's own "Banking on the Cloud 2026" report and tools like Amazon Q Developer back this up, with real examples from global banks cutting timelines dramatically. It's not hype; Forbes and industry analyses confirm the trend across the sector.
- Goksal (@goksaladiguzel) reported: @awscloud If banks, especially those relying on COBOL-based legacy systems, replace them with modern software, they would most likely require 20–30 times more computing capacity than their current server infrastructure, and in general they still wouldn’t reach COBOL’s level of performance. And also, every time we go to an ATM to withdraw money, we will most likely see a “null point exception.”
- Sam (@samchbe) reported: @AWSSupport found out that cloudflare can do this without problems. moved.