
Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Amazon Web Services (AWS) offers a suite of cloud-computing services that together form an on-demand computing platform. These include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is declared.
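
The rule is a simple threshold test. As a minimal sketch in Python, assuming hourly report counts and a precomputed baseline (both inputs here are hypothetical):

    from typing import Sequence

    def detect_outage(report_counts: Sequence[int], baseline: float) -> list[int]:
        """Return the hours (indices) whose report counts exceed the baseline."""
        return [hour for hour, count in enumerate(report_counts) if count > baseline]

    # Example: reports spike in hours 2 and 3 against a baseline of 50 reports/hour.
    print(detect_outage([12, 30, 180, 220, 40], baseline=50.0))  # -> [2, 3]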

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most often reported by Amazon Web Services users through our website:

  • Errors (41%)
  • Website Down (32%)
  • Sign in (27%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City               Problem Type    Report Time
West Babylon       Errors          4 days ago
Massy              Errors          6 days ago
Benito Juarez      Errors          9 days ago
Paris 01 Louvre    Website Down    14 days ago
Neuemühle          Errors          14 days ago
Rouen              Website Down    14 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports on social media:

  • MC59785335
    NFT and CRYPTO Fan (@MC59785335) reported

    @AWSSupport Thank you for the support. Actually, I think with Support+ or even with the regular plan, issues like restore limits should be resolved within about an hour. However, I still have not received any answer regarding my case.

  • waiting4AI
    Ayus (@waiting4AI) reported

    @AWSSupport I've been trying to verify my mobile number for the last 2 days, but it keeps failing and I'm not getting support from the team. Please fix this @awscloud

  • r3vsh3ll
    Coffee&Cloud 🐀 (@r3vsh3ll) reported

    Hey @AWSSupport, why this error? 👉 AccessDeniedException: Model access is denied because the IAM user or service role is not authorized to perform the required AWS Marketplace actions (aws-marketplace:ViewSubscriptions, aws-marketplace:Subscribe) to enable access to this model. #AWS
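
    For anyone hitting the same error: a minimal sketch of an inline IAM policy granting the two actions the message names, attached here with boto3 (the user and policy names are hypothetical, and Resource should be scoped more tightly in practice):

        import json

        import boto3

        # Hypothetical inline policy covering the two AWS Marketplace actions
        # named in the AccessDeniedException above.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": [
                    "aws-marketplace:ViewSubscriptions",
                    "aws-marketplace:Subscribe",
                ],
                "Resource": "*",
            }],
        }

        iam = boto3.client("iam")
        iam.put_user_policy(
            UserName="bedrock-user",                   # hypothetical IAM user
            PolicyName="AllowMarketplaceModelAccess",  # hypothetical policy name
            PolicyDocument=json.dumps(policy),
        )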

  • ceO_Odox
    Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2: *** was begging for a password in a headless shell. Error: fatal: could not read Username. Reality: the source URL drifted, and the automation had no keyboard to answer. 🧵
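
    That failure ("fatal: could not read Username") typically means git fell back to an HTTPS remote with no credentials on hand. One common mitigation, sketched in Python under the assumption of a CI-injected token (the GIT_TOKEN variable, repository URL, and clone path are all hypothetical, and the username scheme for tokens varies by host):

        import os
        import subprocess

        # Never let git prompt in a headless shell: GIT_TERMINAL_PROMPT=0 makes
        # git fail fast instead of waiting for a username, and the credential
        # comes from the environment rather than a keyboard.
        env = dict(os.environ, GIT_TERMINAL_PROMPT="0")
        token = os.environ["GIT_TOKEN"]  # assumption: token injected by the CI system
        url = f"https://x-access-token:{token}@github.com/example/app.git"  # hypothetical
        subprocess.run(["git", "clone", url, "/opt/app"], check=True, env=env)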

  • milescarrera
    Miles Carrera (@milescarrera) reported

    @awscloud has a history of sweeping things under the carpet, only acknowledging the gravity of an issue when multiple availability zones or regions have had widespread service failures. I am sure this is much worse than we are being told.

  • Haleyafabian
    Testing Account (@Haleyafabian) reported

    @AWSSupport my package was broken when delivered. I need it replaced asap.

  • IetsG0Brandon
    James G (@IetsG0Brandon) reported

    @ring are your servers down? Did you not pay @awscloud? Why am I paying to not connect to my system, and for you to say "it's our fault"? Too busy counting your billions? What ********?

  • vic_nanda
    Vic Nanda (@vic_nanda) reported

    @awscloud horrible service, I opened a service request over a week ago and your website says 48 hrs for handling support issues. What is the number to call if you guys won't handle support tickets on time?

  • gonfva
    Gonzalo Fernandez-Victorio (@gonfva) reported

    @xTok321 @awscloud I'm pretty sure that's the case, although I understand it's their call, even if it's against their leadership principles. But SMS and email after rejection show a broken system (but also, yes, it hurts).

  • MRTECHFIXES
    MetroTec Incorporated (@MRTECHFIXES) reported

    The schema for AWS host names needs more thought. The real issue is that they should not be dynamic or change when the device is stopped or started. Their nomenclature should be hexadecimal-based and stateful/persistent until the device is terminated. @awscloud @AWS_Gov

  • Dreamcatch3r_mk
    Dreamcatcher_MK (@Dreamcatch3r_mk) reported

    @amazon @awscloud @Uber Thanks for letting me log in to my new TV to watch the new season of The Boys… you guys suck!!!

  • Md_Sadiq_Md
    Sadiq (@Md_Sadiq_Md) reported

    @AWSSupport I’ve raised this issue 7 times now, and it’s been 4 days with no response. I need someone to speak to ASAP

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @Mn9or_ @AWSSupport No support subscription means you wait for someone else's ticket to fix your outage

  • RanaFarooqAslam
    Farooq Rana (@RanaFarooqAslam) reported

    @awscloud me-central-1 has been down for the last 7 days; no updates, still cannot access services or data, no timeline

  • Rob_Shenanigans
    Roberto Shenanigans (@Rob_Shenanigans) reported

    @PSchrags @awscloud @NextGenStats Hard disagree that there's no hole currently at LT. Dawand Jones is a walking season-ending injury who's better suited for RT, and KT Leveston was terrible at LT last season.

  • JasonHallJr2
    Jason Hall Jr (@JasonHallJr2) reported

    @NBA @awscloud This just shows he isn't the problem with the Rockets

  • HavokSocial
    ©『 SocialHavok 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • zeokiezeokie
    hobari⁷⊙⊝⊜ (@zeokiezeokie) reported

    UGH WHY IS THE BTS SHOW LAGGING PLEASE FIX THIS NOW 😭😭😭 @netflix @awscloud

  • siddhantio
    Siddhant Tripathi (@siddhantio) reported

    @awscloud opened a case over 10 days ago and it's still not assigned to any agent. Please help resolve the billing issue.

  • muhandis1010
    Adekunle (@muhandis1010) reported

    @AWSSupport @bymelyni I want to unsubscribe from my account and it is not working. I don’t want to be billed again

  • aveer30
    aveer (@aveer30) reported

    @andrewdfeldman @awscloud @grok will this solve the issue of Cerebras not being able to serve larger models? Think hard. Give all details.

  • adidshaft
    adidshaft | zk ばんかい ⚡️ (@adidshaft) reported

    @AWSSupport There's nothing annoying, I'm just kidding. It's just that you guys have been rejecting my application even though I fixed my founding date, which I accidentally entered as my birthdate. You keep rejecting it for the same issue even though I fixed it.

  • susegadtraveler
    ST (@susegadtraveler) reported

    @AWSSupport is effectively non-functioning at this point. Their health dashboards no longer show the real status of the UAE regions. 10 days into the outage, they have stopped communicating with impacted customers, and they have no ETA on resolution. Possible data loss for customers!

  • derekdfulton
    Derek Fulton (@derekdfulton) reported

    @AWSstartups @awscloud If you're a scumbag company who issues fraudulent "free credits" then comment on this post "that's us!" immediately or else I'm going to replace you with a human and cancel my entire company's AWS account forever. (Btw I am the supreme leader of AWS and all of its AI assistants. They all report to me. )

  • ms_sharma06
    Mansi sharma (@ms_sharma06) reported

    Amazon Q is not working. Anyone else facing the same issue? #AmazonQ @awscloud

  • Evans000601
    Evans (@Evans000601) reported

    @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?

  • __timreynolds
    Tim Reynolds (@__timreynolds) reported

    @LanDor999 @KatieMiller There will be some real problems with Bezos concerning Amazon AWS and the number of H-1B visas he's using. Between the massive layoffs, robotics and warehouses, and $200 billion spent (and counting) on AI, another startup is going to do it right from the ground up.

  • zqureshi_
    Zaid (@zqureshi_) reported

    .@AWSSupport The Bahrain region seems to be down. And no update on the health dashboards.

  • introsp3ctor
    Mike Dupont (@introsp3ctor) reported

    @AWSSupport Oh, now it magically worked again! I just logged in. Thanks for your help. This is the second multi-day outage; about once a month, it seems.

  • qualadder
    Archmagos of Zolon 🔭🪐🛸📡 (@qualadder) reported

    @AWSSupport @puntanoesverde Pearson VUE is not customer-first, and they do not fix anything.