Amazon Web Services

Amazon Web Services Outage Map

The map below shows the most recent cities worldwide where Amazon Web Services users have reported problems and outages. If you are having an issue with Amazon Web Services, please submit a report below.

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Color scale: Amazon Web Services users affected, from Less to More.
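The heatmap's color scale works by binning report density into discrete color levels. A minimal sketch of that idea, where the thresholds and bucket names are illustrative assumptions rather than the site's actual values:

```python
# Map a report count to a heatmap color bucket.
# Thresholds and names are illustrative assumptions, not the site's real scale.
BUCKETS = [
    (1, "light"),    # a few reports: the "Less" end of the scale
    (5, "medium"),
    (15, "dark"),    # many reports: the "More" end of the scale
]

def color_bucket(report_count: int) -> str:
    """Return the color bucket for a given number of user reports."""
    label = "none"
    for threshold, name in BUCKETS:
        if report_count >= threshold:
            label = name
    return label
```

For example, `color_bucket(0)` yields `"none"`, `color_bucket(2)` yields `"light"`, and `color_bucket(20)` yields `"dark"`.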

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location | Reports
Birmingham, England | 2
San Jose, CA | 2
Fortín de las Flores, VER | 1
Seneca Falls, NY | 1
Canby, OR | 1
Los Angeles, CA | 1
Greater Noida, UP | 1
Hamburg, HH | 1
Uniontown, PA | 1
New York City, NY | 1
Cali, Departamento del Valle del Cauca | 1
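A table like the one above is just a tally of individual user reports grouped by location, sorted with the most affected locations first. A minimal sketch of producing it from a raw report list (the sample data below is invented for illustration):

```python
from collections import Counter

def most_affected(reports: list[str]) -> list[tuple[str, int]]:
    """Count reports per location, most affected locations first."""
    return Counter(reports).most_common()

# Invented sample reports; each entry is one reporter's location string.
sample_reports = [
    "Birmingham, England",
    "San Jose, CA",
    "Birmingham, England",
    "Los Angeles, CA",
    "San Jose, CA",
]
```

Here `most_affected(sample_reports)` returns `[("Birmingham, England", 2), ("San Jose, CA", 2), ("Los Angeles, CA", 1)]`, since `Counter.most_common` sorts by count and preserves first-seen order among ties.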

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • MiggsSD1964 Michael L. (@MiggsSD1964) reported

    @BuffaloBills @awscloud Only tells half of the story. They are getting dominated by the run. As a result, the passing stats are down.

  • vennemp Matthew Venne (@vennemp) reported

    @marekq @QuinnyPig @awscloud My point (though terse) was without these issues the CSPs wouldn’t be motivated to improve their architecture and design. They had a single point of failure that disrupted a huge portion of the internet. Now we all know it and the ball is on their court

  • fapo39 fapo (@fapo39) reported

    on a side note amazon AWS tokyo is terrible, I had to downscale to 360p to even be able to listen to the stream...

  • jimtalksdata Jim (@jimtalksdata) reported

    @QuinnyPig @lizthegrey @awscloud Or the IT version of "always have a manual override". For interdependent systems, it's just a matter of time before another "unexpected event" causes a similar outage, with associated reactive fixes

  • ramosbugs David Ramos.vanguard (@ramosbugs) reported

    I wish @awscloud Lambda had a way to restrict outbound connections (i.e., security groups) without having to resort to a VPC, which makes cold starts intolerably slow and caused lots of deployment issues last time I tried to use Lambdas within a VPC...

  • lwfromltsince theboyr (@lwfromltsince) reported

    @ArizonaCoyotes @wyshynski Outage was resolved multiple days ago. @AWSSupport probably needs you to pay your bill.

  • HelpedHope Golding (@HelpedHope) reported

    Great in the middle of doing a cloud build for my final exam and @awscloud went down again.

  • e_k_anderson Evan Anderson (@e_k_anderson) reported

    @richarddlarson @QuinnyPig @awscloud Line coverage (at least) can actually mislead you here unless you use a very specific coding style. Consider: if err != nil && !IsNotFound(err) { // handle error and return } You can miss one or two of three cases and still achieve 100% line coverage here.

  • stolenalobs Lobs (@stolenalobs) reported

    @Techpatch2 @QuinnyPig @awscloud 3 - there was a bug in step2 that caused aws-meta to burst into flames 4 - they moved very important stuff from aws-meta to aws-meta2 5 - fires and smoke everywhere, they couldn't see the source of the problem

  • stolenalobs Lobs (@stolenalobs) reported

    @Techpatch2 @QuinnyPig @awscloud 11 - same thing for everything else that was impacted. some stuff was failing because it runs on aws-meta, some stuff they turned off to help fix things 12 - it was a bug we never knew existed, will fix in 2 weeks 13 - sorry bros, will do better next time

  • AaronBoothUK Aaron Booth (he/him) (@AaronBoothUK) reported

    @poiThePoi @QuinnyPig @awscloud Yeah not biased advice at all. Most people's risk profiles do not require multi region. The risk of having aws issues compared to the likelihood of breakages and issues due to your own poor implementation or losing skills in the workforce.

  • wrd83 Alex (@wrd83) reported

    @QuinnyPig @awscloud Does it matter what the fine grained failure models are? Whats important is that your design can mitigate failures such that you don't go down with them..

  • AngryLib3000 Alex B (@AngryLib3000) reported

    @QuinnyPig @awscloud Isn't "service event" actually an euphemism for "outage"?

  • TonyOnTheTweetr Tony on the Tweetbox (@TonyOnTheTweetr) reported

    @QuinnyPig @awscloud In doing so most stuff recovered before we could even tell why it had broken. So here’s a partial description containing what little we do know for sure and a promise we’re going to theorize real hard about what we didn’t actually capture’

  • Kahnighticus Bryan K (@Kahnighticus) reported

    @mmceach476 I didn’t realize the AWS outage was still a thing… terrible look @awscloud
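The line-coverage pitfall quoted in the @e_k_anderson report above is easy to reproduce outside Go. A minimal Python sketch (the `NotFoundError` type and `handle` function are hypothetical stand-ins for the tweet's `IsNotFound(err)` pattern): two tests, one with no error and one with a generic error, execute every line of `handle`, yet the tolerated not-found branch is never exercised, so 100% line coverage still misses one of the three cases.

```python
from typing import Optional

class NotFoundError(Exception):
    """Hypothetical 'not found' error, analogous to IsNotFound in the tweet."""

def handle(err: Optional[Exception]) -> str:
    # One compound condition, three distinct cases:
    #   err is None              -> "ok"
    #   err is a NotFoundError   -> "ok" (not-found is tolerated)
    #   any other error          -> "failed"
    if err is not None and not isinstance(err, NotFoundError):
        return "failed"
    return "ok"
```

Calling `handle(None)` and `handle(ValueError("boom"))` touches every line, but only branch coverage (e.g. coverage.py's `--branch` mode) would flag that `handle(NotFoundError())` was never tested.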
