Amazon Web Services

Amazon Web Services Outage Map

The map below shows the cities worldwide from which Amazon Web Services users have most recently reported problems and outages. If you are experiencing an issue with Amazon Web Services, please submit a report below.

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Color scale: Amazon Web Services users affected, from less to more.

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
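
As a brief illustration of how these two services are typically used, the sketch below calls each one through the boto3 SDK for Python. This is a minimal example, not part of the outage data: it assumes AWS credentials and a default region are already configured, and the bucket and key names are hypothetical placeholders.

    import boto3

    # EC2: list the instances visible in the configured region and their current states.
    ec2 = boto3.client("ec2")
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])

    # S3: store a small object in an existing bucket ("example-bucket" is a placeholder).
    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="status/hello.txt", Body=b"hello")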

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Düsseldorf, NRW 2
Paris, Île-de-France 2
San Gil, Departamento de Santander 2
Bogotá, Distrito Capital de Bogotá 2
New Orleans, LA 1
Pensacola, FL 1
New Delhi, NCT 1
Tampico, TAM 1
Uberlândia, MG 1
New Kensington, PA 1
El Pueblito, QUE 1
Müllheim, Baden-Württemberg Region 1
Culiacán, SIN 1
Toronto, ON 1
Tyler, TX 1
Winona, MN 1
Chicago, IL 1
Newport News, VA 1
Des Moines, WA 1
Lahore, Punjab Province 1
Seattle, WA 1
Rennes, Bretagne 1
Copenhagen, Region Hovedstaden 1
Nicosia, Eparchía Lefkosías 1
Strasbourg, ACAL 1
Budapest, Budapest 1
Torrejón de Ardoz, Comunidad de Madrid 1
San Jose, CA 1
Vic, Catalunya 1
Belton, MO 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • stolenalobs Lobs (@stolenalobs) reported

    @Techpatch2 @QuinnyPig @awscloud 9 - aws-meta is where api /aws/v2/ec2/create lives, so no new ec2 instances until the problem was fixed 10 - api gateway uses /aws/v2/ec2/create under the hood, to recreate itself, if you were unlucky your api gateway was also impacted until meta-aws was restored.

  • Si_ShaunRyan shaun ryan (@Si_ShaunRyan) reported

    @ItaiYaffe @awscloud lol... at least it wasn't a person this time that accidentally shut down a name server with terminal cmd. It was a computer that accidentally started everything up.

  • AaronBoothUK Aaron Booth (he/him) (@AaronBoothUK) reported

    @poiThePoi @QuinnyPig @awscloud Yeah not biased advice at all. Most people's risk profiles do not require multi region. The risk of having aws issues compared to the likelihood of breakages and issues due to your own poor implementation or losing skills in the workforce.

  • fapo39 fapo (@fapo39) reported

    on a side note amazon AWS tokyo is terrible, I had to downscale to 360p to even be able to listen to the stream...

  • ryan_sb Ryan "Non-Fundable Token" Scott Brown (@ryan_sb) reported

    @g_bonfiglio @QuinnyPig @awscloud Because for example in the 2017 S3, 2021 Kinesis, and this latest availability event outage reports, there is a section mentioning dependency on internal AWS networks/services. If that's all about to change that is great news, but hard to see as a customer to know 2/2

  • Kahnighticus Bryan K (@Kahnighticus) reported

    @mmceach476 I didn’t realize the AWS outage was still a thing… terrible look @awscloud

  • _T11_ Tim (@_T11_) reported

    @AWSSupport is there an API gateway outage at Sydney region atm? My serverless lambda randomly get an error saying connection refused

  • jimtalksdata Jim (@jimtalksdata) reported

    @QuinnyPig @lizthegrey @awscloud Or the IT version of "always have a manual override". For interdependent systems, it's just a matter of time before another "unexpected event" causes a similar outage, with associated reactive fixes

  • stolenalobs Lobs (@stolenalobs) reported

    @Techpatch2 @QuinnyPig @awscloud 3 - there was a bug in step2 that caused aws-meta to burst into flames 4 - they moved very important stuff from aws-meta to aws-meta2 5 - fires and smoke everywhere, they couldn't see the source of the problem

  • e_k_anderson Evan Anderson (@e_k_anderson) reported

    @richarddlarson @QuinnyPig @awscloud Line coverage (at least) can actually mislead you here unless you use a very specific coding style. Consider: if err != nil && !IsNotFound(err) { // handle error and return } You can miss one or two of three cases and still achieve 100% line coverage here.

  • craigfis craigfis (@craigfis) reported

    @QuinnyPig @awscloud No mention of Cloudfront. I was seeing Cloudfront configuration errors.

  • ashishlogmaster Ashish “Logmaster” (@ashishlogmaster) reported

    @QuinnyPig @awscloud Will be interesting how they will “improve” AWS SSO. We were SOL as we could not login at all! It’s a single region service and does not support “dual” instances (2 diff regions) by AWS org.

  • OLayer8 OSILayer8 (@OLayer8) reported

    @QuinnyPig @awscloud Move the cloud they say, it's resilient. It doesn't go down. Then it goes down. And they don't know how to fix it.

  • wrd83 Alex (@wrd83) reported

    @QuinnyPig @awscloud Does it matter what the fine grained failure models are? Whats important is that your design can mitigate failures such that you don't go down with them..
