
Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Full Outage Map

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is declared.
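The baseline-exceedance rule described above can be sketched in a few lines. This is a hypothetical illustration of the detection idea, not the site's actual implementation; the function name, bucket size, and baseline value are all made up.

```python
# Sketch of baseline-exceedance outage detection: an outage is
# flagged for any time bucket whose report count rises above the
# baseline (the "red line" in the graph).

def detect_outage(report_counts, baseline):
    """Return the indices of time buckets whose report count exceeds the baseline."""
    return [i for i, n in enumerate(report_counts) if n > baseline]

# Example: hourly report counts against a baseline of 20 reports/hour.
counts = [3, 5, 4, 60, 72, 8, 2]
print(detect_outage(counts, 20))  # buckets 3 and 4 exceed the baseline
```

In practice the baseline would be derived from historical report volume for that hour of day rather than a fixed constant, so that normal daily fluctuation does not trigger false alarms.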

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the most commonly reported problems from Amazon Web Services users on our website:

  • Errors (41%)
  • Website Down (32%)
  • Sign in (27%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City              Problem Type   Report Time
West Babylon      Errors         14 hours ago
Massy             Errors         2 days ago
Benito Juarez     Errors         6 days ago
Paris 01 Louvre   Website Down   10 days ago
Neuemühle         Errors         10 days ago
Rouen             Website Down   10 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

The latest outage, problem, and issue reports from social media:

  • canadabreaches
    canadianbreaches (@canadabreaches) reported

    BREACH ALERT: Duc (Duales) — Toronto fintech. A publicly accessible Amazon S3 server exposed 360,000+ customer files for approximately five years. Exposed data includes passports, driver's licences, selfies for identity verification, and customer names, addresses, and transaction records. Office of the Privacy Commissioner of Canada is investigating. Severity: CRITICAL.

  • BuddyPotts
    Neal🅾️ (@BuddyPotts) reported

    @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They cant stop the run at all, they should trade down and collect more picks and build the defense

  • Petielvr
    Queen of hearts (@Petielvr) reported

    @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232

  • IetsG0Brandon
    James G (@IetsG0Brandon) reported

    @ring are your servers down? dod you not pay @awscloud ? why am I paying to not connect to my system and for you to say " its our fault " ? too busy counting your billions? what ********?

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @_ps428 No issues on our end. Your issue. The docs. Resolution complete.

  • PThorpe92
    Preston Thorpe (@PThorpe92) reported

    @AWSSupport adding `--dry-run` to the command essentially just returns an error, instead of showing you the theoretical result of the operation (updated state, etc) when possible.

  • ceO_Odox
    Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. ​Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵

  • Ser_Jettt
    Ser-Jet💜👑(🧩) (@Ser_Jettt) reported

    @JothamNtekim1 @awscloud Interesting approach to the problem. Are you currently exploring any builder challenges or just shipping independently?

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @adidshaft Fixed the error, still rejected. The process isn't broken - opacity is the feature.

  • MrHoboM
    Mr. Hobo Millionaire (@MrHoboM) reported

    @AWSSupport @brankopetric00 It’s terrible. No one with an ounce of design skill would build it the way you did. Ask whatever AI you use to judge it.

  • r3vsh3ll
    Coffee&Cloud 🐀 (@r3vsh3ll) reported

    Hey @AWSSupport, why this error 👉AccessDeniedException Model access is denied due to IAM user or service role is not authorized to perform the required AWS Marketplace actions (aws-marketplace:ViewSubscriptions, aws-marketplace:Subscribe) to enable access to this model. #AWS
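The AccessDeniedException quoted above names the two IAM actions the caller lacks. A minimal policy statement granting them might be assembled as below; this is a sketch rather than an AWS-recommended policy, and the `Sid` and the use of `"Resource": "*"` are assumptions that should be narrowed for production use.

```python
import json

# Hypothetical IAM policy granting the two AWS Marketplace actions
# named in the error message above. Attach it to the IAM user or
# role that needs to enable access to the model.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBedrockModelSubscription",  # made-up statement ID
            "Effect": "Allow",
            "Action": [
                "aws-marketplace:ViewSubscriptions",
                "aws-marketplace:Subscribe",
            ],
            "Resource": "*",  # scope down where possible
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The policy JSON printed here could then be attached via the IAM console or an API call; the exact attachment mechanism depends on whether the caller is an IAM user or a service role.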

  • jmbowler_
    james bowler 👹 (@jmbowler_) reported

    anyone else having trouble getting past @awscloud mfa?

  • distributeai
    distribute.ai (@distributeai) reported

    aws ( @awscloud ) is officially supporting x402 payments so your ai agents can autonomously buy their own api access using usdc. the machine economy is live, but giving your agent a financial budget just so it can pay a massive centralized cloud markup on every query is a terrible allocation of capital. route your autonomous workflows through the distribute network. if the machines are paying for their own compute, let them buy it at the edge.

  • ClaudioKuenzler
    Claudio Kuenzler (@ClaudioKuenzler) reported

    Whoa. Did @awscloud Frankfurt just go down for 2 mins ~5min ago?

  • ImAnmo07
    Anmol Thakur (@ImAnmo07) reported

    Hi @AWSCloudIndia @awscloud, I'm trying to set up a Bedrock Knowledge Base using Amazon OpenSearch Serverless, but I'm getting the error: “Failed to create the Amazon OpenSearch Serverless collection. The AWS Access Key Id needs a subscription for the service.”

  • cataneo339
    CATA_NEO (@cataneo339) reported

    @Theta_Network @SyracuseU @awscloud THETA GOES DOWN

  • JLSports24
    Joe Sutphin (@JLSports24) reported

    @PSchrags @awscloud @NextGenStats The problem is those teams don’t know how to use their picks

  • Hagebuttentee12
    Ralf Rolfen (@Hagebuttentee12) reported

    @awscloud Problem: RDS Snapshot re-creation and Cloudformation / CDK are a maintenance nightmare. FR 1. Change SnapshotIdentifier to not be a replacement property when changed to empty FR 2. CDK: Merge DatabaseCluster and DatabaseClusterFromSnapshot into a single class.

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @awscloud @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days

  • greenfuzon
    Kinjal Dixith (@greenfuzon) reported

    @AWSSupport I have no problem with AWS or AWS support. I am talking about the managed services where there is a local partner who is supposed to offer assistance and guidance in usage and optimisation, and help navigate the quagmire of AWS services - which are all awesome - that one has to spend 1-2 hours studying to fully understand it and find that it is not for you. we have been using AWS for 6 years now and we are not going anywhere. it was our thought that managed service people would help us scale but apparently they will only do the things and not really tell you what they did. so it felt like a lock in. still NO SHADE ON AWS. AWS is awesome. Maybe this particular partner was not a right fit for us.

  • _ps428
    Pranav Soni (@_ps428) reported

    @awscloud is ec2 down again?

  • _PhilipM
    Philip McAleese (@_PhilipM) reported

    @AWSSupport is there an issue with eu-west-1? Nothing on the status page, but the console times out and we're seeing errors on products using Gateway API.

  • Arthurite_IX
    Arthurite Integrated (@Arthurite_IX) reported

    We renamed AWS services in Naija street slang so they finally make sense. 1. Amazon S3 = "The Konga Warehouse" Store anything. Retrieve it when you need it. It doesn't judge what you put inside. 2. Amazon EC2 = "The Danfo" You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on. 3. AWS Lambda = "The Okada" Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears. 4. Amazon RDS = "Iya Basement" She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her. 5. AWS CloudWatch = "The CCTV With Common Sense" Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building. 6. Amazon Route 53 = "The Agbero" Directs all the traffic. Decides which danfo goes where. Keeps everything moving. 7. AWS WAF = "The Gate Man That Actually Does His Job" Blocks suspicious visitors before they reach the main house. No bribe accepted. 8. Amazon CloudFront = "The Dispatch Rider" Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up. Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!

  • Xyzfb2t
    Xyz (@Xyzfb2t) reported

    @awscloud First create a problem by having different services and the solve it by creating a new service.

  • lijvvz
    ُ (@lijvvz) reported

    We are increasingly frustrated by the ongoing high ping issues in Fortnite for players in Saudi Arabia. Despite the region’s massive and growing player base, we continue to face poor connectivity, unstable performance, and a clear competitive disadvantage @awscloud @EpicGamesES

  • TwistedEdge
    James Baldwin (@TwistedEdge) reported

    I *really* want to like AgentCore but the more I build with it and run into limitations, the more I worry it's still too early. @awscloud. First I run into DCR issues with a custom MCP and now it seems AgentCore doesn't pass ui:// resource requests through.

  • FroojdDive
    Froojd (@FroojdDive) reported

    @AWSSupport DMed you. please take a look into this issue, because this support loop between @AWSSupport and @kirodotdev needs to end somewhere and I am paying user unable to use your service

  • VladimirAtHQ
    Vlad The Dev (@VladimirAtHQ) reported

    @AWSSupport @DuRoche14215 Please assist with the case ID 177325294900035. Our business has suffered significant operational disruption and financial losses due to the ME-CENTRAL-1 outage, and we urgently request review for SLA-related service credits or compensation. And if possible, recovery of db.

  • introsp3ctor
    Mike Dupont (@introsp3ctor) reported

    @AWSSupport the billing ticket is open for days, and the discord and the github and absolutly no response from aws or kiro. Its seems like another release on a friday issue, so i will wait a bit happy holidays. my guess is they ran out of money and had to pause a bunch of accounts in loss.

  • milescarrera
    Miles Carrera (@milescarrera) reported

    @awscloud has a history of sweeping things under the carpet, only acknowledging the gravity of an issue when an multiple availability zone or regions had widespread failure of services. I am sure this is much worse than we are being told.