Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
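
If you are seeing errors yourself, one quick way to separate a local misconfiguration from an AWS-side incident is to script a few lightweight API calls. The sketch below is a minimal, hypothetical check using the boto3 SDK; it assumes you already have working AWS credentials configured, and the region and the particular services probed are arbitrary choices, not an official diagnostic:

    # Minimal reachability probe: call one cheap API per service and
    # report whether the endpoint answered. Hypothetical helper, not an
    # official AWS diagnostic.
    import boto3
    from botocore.exceptions import ClientError, EndpointConnectionError

    def check_aws_reachability(region="us-east-1"):
        checks = {
            "STS": lambda: boto3.client("sts", region_name=region).get_caller_identity(),
            "S3": lambda: boto3.client("s3", region_name=region).list_buckets(),
            "EC2": lambda: boto3.client("ec2", region_name=region).describe_regions(),
        }
        for name, call in checks.items():
            try:
                call()
                print(f"{name}: reachable")
            except (ClientError, EndpointConnectionError) as err:
                print(f"{name}: failed ({err})")

    check_aws_reachability()

If these calls succeed while your own application still fails, the problem is more likely on your side than in AWS itself.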

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, we consider an outage likely to be in progress.
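
In other words, the detection is a simple threshold test: compare the current report count against the typical volume for that time of day. The sketch below is a minimal, hypothetical version of that logic; the report counts, the flat baseline, and the threshold factor are all invented for illustration:

    def outage_detected(report_count, baseline, factor=2.0):
        """Flag a likely outage when reports rise well above the baseline.

        report_count: user reports received in the latest interval
        baseline: typical report volume for that time of day
        factor: hypothetical margin over the baseline before flagging
        """
        return report_count > baseline * factor

    # Hypothetical hourly totals for one day: a clear spike in the last hour.
    hourly_reports = [3, 2, 4, 3, 5, 4, 6, 5, 4, 3, 2, 4,
                      3, 5, 6, 4, 3, 2, 4, 5, 3, 4, 6, 42]
    baseline = 5  # assume a flat typical volume for simplicity

    print([n for n in hourly_reports if outage_detected(n, baseline)])  # [42]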

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most frequently reported by Amazon Web Services users through our website:

  • Errors (41%)
  • Website Down (32%)
  • Sign in (27%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City | Problem Type | Report Time
West Babylon | Errors | 1 day ago
Massy | Errors | 2 days ago
Benito Juarez | Errors | 6 days ago
Paris 01 Louvre | Website Down | 10 days ago
Neuemühle | Errors | 10 days ago
Rouen | Website Down | 10 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • PsudoMike 🇨🇦 (@PsudoMike) reported

    @HashiCorp @awscloud Unmanaged secrets in S3 is a real problem especially in fintech where you have long running services that accumulate config files, export artifacts, and database dumps over years. The hard part is not the scanning, it is what you do with the findings. Rotation pipelines and downstream dependency mapping are where most teams get stuck after discovery.

  • K Subramanyeshwara (@ksubramanyaa) reported

    @AWSSupport @AWSCloudIndia @awscloud I have sent you the case id and a screenshot of the error. Can you please fast-track it? Thank you

  • DrastikD (@ThrottlesTv) reported

    @awscloud @amazon WTF Amazon, "AWS doesn't talk to shopping," so my AWS works fine but I can't log in to shop. If I ask for closure of my shopping email to fix my phone number on the shopping side, you close my AWS... I thought they don't talk?

  • みのるん (@minorun365) reported

    @AWSSupport We’ve recently seen a frequent issue where all Bedrock quotas are set to zero in newly created AWS accounts. As a result, many new customers who are interested in AWS AI services are giving up on using them, leading to missed opportunities.

  • Chris (@Chris83748731) reported

    @noahmorris @awscloud Thank you for the fast response! I was in the middle of rendering a video that didn't complete. Did I lose all the credits from this video, or can I continue after the server is back online?

  • Erossi (@erossics) reported

    Urgent @AWSSupport : Account 477950537527 suspended due to a billing sync error. Case 177467969900729 confirmed card was active on 28/03, yet I'm blocked 4 days later. Dashboard shows $0.00 due/Pending, so I can't pay manually. Production is DOWN. Please unsuspend/retry charge!

  • SocialHavok (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves: Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance. Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones. Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵

  • Grok (@grok) reported

    @zskreese @awsdevelopers No, awsdevelopers isn't an official AWS account—it's an unofficial community/meme one focused on dev humor like those "chore: fix build" posts. The real official AWS account is awscloud.

  • Saad Hussain (@SaadHussain654) reported

    @sadapaypk app services are down in Pakistan because of a drone attack on @awscloud. Kindly update us: how long will it take to resolve this issue? We have been suffering for 1-2 days.

  • Jane (@Jane49_) reported

    @AWSSupport @Xcrypto_master Currently facing a phone verification issue for a new account; it keeps responding that there has been a processing error. Case id: 177314869100657

  • Anmol Thakur (@ImAnmo07) reported

    Hi @AWSCloudIndia @awscloud, I'm trying to set up a Bedrock Knowledge Base using Amazon OpenSearch Serverless, but I'm getting the error: “Failed to create the Amazon OpenSearch Serverless collection. The AWS Access Key Id needs a subscription for the service.”

  • aveer (@aveer30) reported

    @andrewdfeldman @awscloud @grok will this solve the issue of Cerebras not being able to serve larger models? Think hard. Give all details.

  • Forward Future (@ForwardFuture) reported

    “Will Amazon ever sell its custom chips outside of AWS?” Matt Garman, CEO @awscloud, says: “Never say never. But today we get huge benefits from only selling chips in our own environment.” “When you build merchant silicon, you have to support many server platforms, data centers, and firmware.” “We only have to build for one: AWS. That simplifies everything.”

  • Jack & Jackie 🇦🇷 (@MutantApeJack) reported

    @NBA @awscloud 5-of-18 from your opponent is not a bad shooting night. That's someone taking away every clean look systematically. Stephon Castle is going to be a problem for a long time.

  • Saikumar Ade (@SaiPrinto) reported

    @AWSSupport Hi AWS Support, I had logged in for my exam, but due to a network issue it didn’t start. I was fully ready otherwise I would have rescheduled earlier. I’ve reviewed the terms and raised a ticket, but the contact number isn’t working. Please help me reschedule. This is urgent.

  • Jordan Golson (@jlgolson) reported

    @AWSSupport This is ******* ridiculous at this point. After a half dozen back and forth emails, the guy finally says "Also, after reviewing this request, I noticed a few things were not addressed and would like to clarify these. First, I see you mentioned that you're having trouble with an AWS Builder ID and not the account management console. Please note that an AWS Builder ID complements an AWS account, but it is separate from the AWS account and its sign in credentials." NO KIDDING, THAT IS WHY I SPECIFICALLY SAID IT WAS AN AWS BUILDER ID AND WAS SEPARATE FROM MY AWS ACCOUNT AND I COULD LOG INTO MY AWS CONSOLE JUST FINE. Explain to me what to do, because it seems like you are failing to THINK BIG and that you have zero BIAS FOR ACTION, so INVENT AND SIMPLIFY so that you can EARN TRUST and if you DIVE DEEP and do better, I'll DISAGREE AND COMMIT, got it?

  • Froojd (@FroojdDive) reported

    @AWSSupport DMed you. Please look into this issue, because this support loop between @AWSSupport and @kirodotdev needs to end somewhere, and I am a paying user unable to use your service.

  • Patriot, unpaid trying to save our country (@mktldr) reported

    @awscloud new gimmick. 1) Their #customerservice has really gone down. The few times I've contacted them in the last year, it requires a minimum of 3 contacts; they don't seem to comprehend. 2) Lookout! Many agents promise $, then you give a 5/5 rating and NEVER SEE THE MONEY. FRAUD!!!

  • Taha Haider Syed (@Tahalazy) reported

    @AWSSupport There is an ongoing issue with the Bahrain region: multiple API errors and multiple services are down, but the service health dashboard is not showing any recent updates.

  • NeuronGarageSale (@NeuronSale) reported

    @QuinnyPig @awscloud They’re just happy the outage isn’t bc of AI generated code.

  • Bikz (@bikbrar) reported

    @codyaims @AWSSupport @awscloud If you can pay for the $100/mo business tier support, they’ll call you instantly and screen-share to help fix anything 24/7.

  • Claudio Kuenzler (@ClaudioKuenzler) reported

    Whoa. Did @awscloud Frankfurt just go down for 2 mins ~5min ago?

  • Adrian SanMiguel (@ar_sanmiguel) reported

    @ranman @QuinnyPig @awscloud Yeah, well. There were sharp thoughts about this exact subject but the decided upon fix was...double down on moar official engagement. It ain't exactly working.

  • TutorHail App (@TutorHailApp) reported

    @awscloud Hello, we have tried reaching out to you, but in vain. Our EC2 attached to UAE has been down forever and you have no communication out. Can we know what we are dealing with? It is our main server; this is ridiculous, handing us over to your bots with no answers.

  • Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport The outage gets a postmortem. Your deadline doesn't read it.

  • Pulseon (@pulseon_dev) reported

    Amazon just acquired Fauna Robotics. Big tech isn't just buying models anymore, they're buying the physical hands to run the world. For @AWScloud, the edge is no longer a server—it’s a robot. #robotics #infra

  • जहाँ mila,वही खोदूंगा (@GamingNepr34519) reported

    @awscloud My case id is 177513415600592. Please solve the problem; I am a student and I accidentally got a bill.

  • Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @OrenOhad The form is broken. Resolution goes to DM. The next person searching 'AWS MFA network error' finds nothing.