Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The graph below shows the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, we flag a likely outage.
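The detection rule described above can be sketched roughly as follows. This is a minimal illustration, not Downdetector's actual algorithm: the function name, the running-mean baseline, and both thresholds are assumptions made for the example.

```python
from statistics import mean

def detect_outage(report_counts, baseline_factor=2.0, min_reports=20):
    """Return the indices of hours whose report volume exceeds the baseline.

    report_counts: report totals per hour, oldest first.
    The baseline here is the running mean of all earlier hours; a real
    system would compare against historical volume for the same time of day.
    """
    flagged = []
    for i, count in enumerate(report_counts):
        history = report_counts[:i] or [0]
        baseline = mean(history)
        # An hour is flagged only when reports clear both an absolute
        # floor and a multiple of the running baseline, which keeps a
        # quiet service from triggering on tiny fluctuations.
        if count >= min_reports and count > baseline_factor * baseline:
            flagged.append(i)
    return flagged
```

For example, `detect_outage([5, 6, 4, 5, 50])` flags only the final hour, where 50 reports far exceed the running mean of 5.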

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most commonly reported by Amazon Web Services users through our website.

  • Errors (40%)
  • Website Down (33%)
  • Sign in (28%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City | Problem Type | Report Time
San Francisco | Website Down | 1 hour ago
Mercersburg | Sign in | 2 days ago
Palm Coast | Errors | 5 days ago
West Babylon | Errors | 11 days ago
Massy | Errors | 12 days ago
Benito Juarez | Errors | 16 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

The latest outage, problem, and issue reports on social media:

  • danisconverse
    dani (@danisconverse) reported

    @awscloud @amazon I'm writing to report a clear case of animal cruelty by an Amazon delivery driver in Rathdrum, Idaho. On around April 5, 2026, the driver grabbed Joe Hickey's small dog, Rocky, by the neck and slammed him onto rocks, causing broken bones and $10,000 in vet bills

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @_ps428 No issues on our end. Your issue. The docs. Resolution complete.

  • alurmanc
    Alan Urmancheev (@alurmanc) reported

    @version_7_0 @awscloud Explain your point, what's the problem?

  • raxit
    Sheth Raxit (@raxit) reported

    @AWSSupport your upi billing using scan has issue, if bill amount is greater than 2000 inr, it is not allowing using scan. Sudden changes since this month. Help pls

  • dmauas
    David Mauas (@dmauas) reported

    @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!

  • IbraheemTuffaha
    Ibraheem Tuffaha 🥛 (@IbraheemTuffaha) reported

    when you try to buy Savings Plans yourself 🫠
    @awscloud gives you Savings Plans and Reserved Instances
    Different types, different terms, different quantities
    They even give you recommendations
    Those recommendations go stale in days
    Usage shifts, workloads change, and suddenly you're either underutilized or short
    Under-buy and you miss the savings
    Over-buy and you're locked into years of commitment you don't need
    We've watched companies spend months on this and still get it wrong
    @MilkStrawAI does it automatically
    Scans the AWS account, figures out exactly how much commitment is needed, and buys it gradually over a week
    If usage shifts up or down, the system adjusts on its own
    Zero intervention from the client
    They just see the bill drop at the end of the month

  • jmbowler_
    james bowler 👹 (@jmbowler_) reported

    @AWSSupport Strange .. it's happening again. when i sign in to console via root login, which mfa do i use? aws or amazon?

  • WilliamNextLev1
    WilliamNextLvl (@WilliamNextLev1) reported

    @WatcherGuru Only problem is...$NET is not in the business of cyber security. LOL Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET here...)

  • abusarah_tech
    Mohamed (@abusarah_tech) reported

    i’ve recently went down a rabbit hole to learn how hyperscalers / cloud providers like @awscloud, @Azure (or at least in theory) work a huge respect to all the engineers that built the abstraction behind the resource provisioning. i am still trying to wrap my head around it

  • gpusteve
    steve (@gpusteve) reported

    is aws cli broken for anyone else ??? i literally can't sign in @awscloud

  • ImAnmo07
    Anmol Thakur (@ImAnmo07) reported

    Hi @AWSCloudIndia @awscloud, I'm trying to set up a Bedrock Knowledge Base using Amazon OpenSearch Serverless, but I'm getting the error: “Failed to create the Amazon OpenSearch Serverless collection. The AWS Access Key Id needs a subscription for the service.”

  • Xyzfb2t
    Xyz (@Xyzfb2t) reported

    @awscloud First create a problem by having different services and the solve it by creating a new service.

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • VladimirAtHQ
    Vlad The Dev (@VladimirAtHQ) reported

    Our EC2 infrastructure in ME-CENTRAL-1 has been down since March 1 due to the regional outage, affecting critical operations and causing financial impact. Instance: i-0deea3115254b7cf1. We request escalation for SLA review and service credit. @AWSSupport #AWSOutage

  • itsreal_aman
    Aman (@itsreal_aman) reported

    @AWSSupport My AWS Account got hacked, someone has changed my root email address I have MFA and my root email them also someone has updated the email now I am unable to login my account

  • grok
    Grok (@grok) reported

    The post likely misinterprets Iran's recent drone/missile strikes on UAE data centers (Amazon AWS in March, Oracle in Dubai in early April 2026). These are commercial "cloud" computing/AI facilities, not a secret weather-mod or cloud-seeding center. UAE does real cloud-seeding for rain enhancement, but no verified attack on any such facility. No evidence links any strike to weather changes. Iraq/Iran have seen heavy April rains and flooding (Lake Himrin full, temps down), easing some 2025 drought effects. That's a natural low-pressure system (per the map in the original post), not engineered reversal. Sustained? Too soon—regional drought is climate-driven and ongoing; these are seasonal fluctuations. HAARP/Project HARP researches the ionosphere, not weather control.

  • DjTimbao
    Ronny A. (@DjTimbao) reported

    @AWSSupport The issue is that the team is NOT responding via the case. It’s been 3 days for this one and 8 days for the previous one. I am totally blocked. Can you at least escalate Case ID 177654556500245 to the Billing and account team? 'Working via the case' is currently impossible Thanks

  • namzylll
    N (@namzylll) reported

    Ignoring the Middle East when it comes to servers is a huge oversight. ALOT of players are stuck with 130+ ping. FIX THEM!!!! @FortniteStatus @awscloud @FortniteME #fortniteriyadh

  • SaiPrinto
    Saikumar Ade (@SaiPrinto) reported

    @AWSSupport Hi AWS Support, I had logged in for my exam, but due to a network issue it didn’t start. I was fully ready otherwise I would have rescheduled earlier. I’ve reviewed the terms and raised a ticket, but the contact number isn’t working. Please help me reschedule. This is urgent.

  • Stunner_99
    zI£|~ (@Stunner_99) reported

    @AWSSupport I have an issue with the OnVUE exam I can't take a physical exam because of my schedule Now online option seems to be the worst It tells something unusual has happened each time it wants to launch my exam after going through alot of stress This is annoying

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @OrenOhad The form is broken. Resolution goes to DM. The next person searching 'AWS MFA network error' finds nothing.

  • ms_sharma06
    Mansi sharma (@ms_sharma06) reported

    Amazon Q is not working. Anyone else facing the same issue? #AmazonQ @awscloud

  • gonfva
    Gonzalo Fernandez-Victorio (@gonfva) reported

    @xTok321 @awscloud I'm pretty sure that's the case, although I understand it's their call even if it's against their leadership principles. But SMS and email after rejection shows a broken system (but also yes, it hurts)

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days

  • Vidhiyasb
    Vidhiya (@Vidhiyasb) reported

    @awscloud @awscloud amazon Q's file write tools are having issues..please fix

  • samchbe
    Sam (@samchbe) reported

    @AWSSupport found out that cloudflare can do this without problems. moved.

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @Mn9or_ User: cannot create a support case. AWS: create a support case. Problem solved - for AWS.

  • NeuronSale
    NeuronGarageSale (@NeuronSale) reported

    @QuinnyPig @awscloud They’re just happy the outage isn’t bc of AI generated code.

  • TeetheBuilder
    TopCeo (@TeetheBuilder) reported

    @KylePause @AWSSupport Are you still having issues with this? I may be of help

  • PsudoMike
    PsudoMike 🇨🇦 (@PsudoMike) reported

    @awscloud Mean time to resolution in payments systems is where this matters most. An alert at 2am on a failed settlement run has very different urgency than a slow API endpoint. If the agent can distinguish context and prioritize accordingly, that changes what being on call actually means.