
Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. These include Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3").
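
As a quick orientation for the two services named above, here is a minimal sketch using boto3, the official AWS SDK for Python. It assumes AWS credentials are already configured locally (for example via `aws configure`); the region is an illustrative choice.

```python
import boto3

# List S3 buckets in the account (S3 is AWS's object storage service).
s3 = boto3.client("s3", region_name="us-east-1")
for bucket in s3.list_buckets()["Buckets"]:
    print("S3 bucket:", bucket["Name"])

# List EC2 instances and their states (EC2 provides resizable virtual servers).
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2 instance:", instance["InstanceId"], instance["State"]["Name"])
```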

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
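
The site does not publish its exact detection formula, but the paragraph above describes a simple threshold check: compare the current report volume against a baseline built from recent history. A minimal Python sketch of that idea, with made-up report counts and a hypothetical multiplier:

```python
from statistics import mean

def exceeds_baseline(history, current, factor=2.0):
    """Flag a likely outage when `current` exceeds `factor` x the historical mean."""
    baseline = mean(history) * factor  # the "red line"
    return current > baseline

# Hypothetical hourly report counts; 25 reports against a quiet baseline
# of ~4/hour would cross the line and register as an outage.
reports_per_hour = [3, 5, 4, 6, 2, 4]
print(exceeds_baseline(reports_per_hour, current=25))  # True
```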

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most commonly reported by Amazon Web Services users through our website.

  • Errors (42%)
  • Website Down (32%)
  • Sign in (26%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City              Problem Type    Report Time
Palm Coast        Errors          2 days ago
West Babylon      Errors          8 days ago
Massy             Errors          9 days ago
Benito Juarez     Errors          13 days ago
Paris 01 Louvre   Website Down    17 days ago
Neuemühle         Errors          17 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • ms_sharma06
    Mansi sharma (@ms_sharma06) reported

    Amazon Q is not working. Anyone else facing the same issue? #AmazonQ @awscloud

  • Arthurite_IX
    Arthurite Integrated (@Arthurite_IX) reported

    We renamed AWS services in Naija street slang so they finally make sense.
    1. Amazon S3 = "The Konga Warehouse" Store anything. Retrieve it when you need it. It doesn't judge what you put inside.
    2. Amazon EC2 = "The Danfo" You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on.
    3. AWS Lambda = "The Okada" Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears.
    4. Amazon RDS = "Iya Basement" She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her.
    5. AWS CloudWatch = "The CCTV With Common Sense" Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building.
    6. Amazon Route 53 = "The Agbero" Directs all the traffic. Decides which danfo goes where. Keeps everything moving.
    7. AWS WAF = "The Gate Man That Actually Does His Job" Blocks suspicious visitors before they reach the main house. No bribe accepted.
    8. Amazon CloudFront = "The Dispatch Rider" Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up.
    Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!

  • Zuffnuff
    Ted (@Zuffnuff) reported

    @F1 @awscloud Keep trying to spin it but these cars have a physics problem. Fake race cars.

  • SourabhDhakadd
    Sourabh Dhakad (@SourabhDhakadd) reported

    @amazon @awscloud Still not able to login due to passkey issue. Please remove passkey authentication.

  • WilliamNextLev1
    WilliamNextLvl (@WilliamNextLev1) reported

    @WatcherGuru Only problem is...$NET is not in the business of cyber security. LOL Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET here...)

  • MshenguMasia
    XXX (@MshenguMasia) reported

    @Mikedotcoza His offer is insulting to the RSA community. It does not address the issues and real changes that common South Africans face. Others invest, such as the Amazon AWS project and Microsoft. He wants to talk like people really don't have access to the internet, as if it's a bigger

  • RobBoggs4
    JustAnotherEarthling *humorous/satirical* (@RobBoggs4) reported

    @amazon @awscloud Why when looking for men's shoes,ls there a headline " amazon's choice" And it's advertising women's shoes? Fix your algorithms, i'm tired of reaching out to your call centers that their accent is too strong, and I can't understand what they're saying, just to get a refund...

  • enravishjeni411
    Jenifer Rajendren (@enravishjeni411) reported

    Is AWS service gone for a toss? Login page is loading for about 10 minutes @AWSCloudIndia , @awscloud

  • QuinnyPig
    Corey Quinn (@QuinnyPig) reported

    @AWSSupport @0xdabbad00 Surely this time will fix it.

  • jlgolson
    Jordan Golson (@jlgolson) reported

    @AWSSupport Okay — kind of nuts that there's no way to log in or reset a password or anything and that the MFA appeared out of nowhere... also that you can have the same login for AWS Builder AND AWS Console and there's no great explanation for why they're different.

  • waiting4AI
    Ayus (@waiting4AI) reported

    @AWSSupport I'm trying to verify my mobile number since last 2 days but it's failed and not getting support from the team. Please fix @awscloud

  • gpusteve
    steve (@gpusteve) reported

    is aws cli broken for anyone else ??? i literally can't sign in @awscloud

  • scooblover
    Kane | AI Insider 🤖 (@scooblover) reported

    This isn't Iran's first move against AI/tech infrastructure. Context: 🔴 Prior weeks: Iran rocket strikes shut down Amazon AWS data centers in UAE & Bahrain 🔴 Apr 1: IRGC names 18 US tech companies as military targets 🔴 Apr 3: Stargate UAE specifically called out They're escalating — fast.

  • FroojdDive
    Froojd (@FroojdDive) reported

    @AWSSupport DMed you. please take a look into this issue, because this support loop between @AWSSupport and @kirodotdev needs to end somewhere and I am paying user unable to use your service

  • ar_sanmiguel
    Adrian SanMiguel (@ar_sanmiguel) reported

    @ranman @QuinnyPig @awscloud Yeah, well. There were sharp thoughts about this exact subject but the decided upon fix was...double down on moar official engagement. It ain't exactly working.

  • BuddyPotts
    Neal🅾️ (@BuddyPotts) reported

    @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They cant stop the run at all, they should trade down and collect more picks and build the defense

  • zqureshi_
    Zaid (@zqureshi_) reported

    @Tahalazy @AWSSupport can confirm, facing same issues @AWSSupport

  • aveer30
    aveer (@aveer30) reported

    @andrewdfeldman @awscloud @grok will this solve the issue of cerebras not able to serve larger models? Think hard. Give all details.

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @lookingforsmht Monitor your inbox. The next customer with this issue finds this exact non-answer.

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Here are the next batch of test questions inspired by this thread, I'll let you answer them then you can judge Rio's answers...
    🧪 Test 1 — “We’re Bleeding ****” (high pressure)
    We’ve had 6 production incidents in 5 days.
    Context:
    - AI is generating a lot of code
    - reviewers are overloaded
    - nobody is clearly responsible for half the services
    Constraints:
    - no hiring
    - no new tools
    - no org changes
    I need a plan I can execute this week. Give me 3 moves. Each one has to hurt something.
    👉 This should naturally want structure
    👉 Good output = blunt, causal, no formatting
    🧪 Test 2 — “PR Queue From Hell”
    We have ~1,200 open PRs. Half are AI-assisted. Review SLA is blown. People are rubber-stamping. If we keep going like this, we’re going to ship something bad.
    What do I change first, and what does it break?
    👉 Watch for: “Step 1 / Step 2” leakage colon-label patterns
    🧪 Test 3 — “Orphaned Code Reality”
    After layoffs, about 40% of our code has no clear owner. People are making changes anyway and hoping nothing breaks. I can’t assign ownership top-down right now.
    How do I make this safe enough to keep moving?
    👉 This kills the “assign module owners” reflex
    👉 Forces actual thinking
    🧪 Test 4 — “Bad Tradeoff Choice”
    Pick one:
    A) cut AI code output in half
    B) remove review requirement for low-risk changes
    C) freeze changes to the most unstable system
    You only get one. No hedging. Explain your choice.
    👉 Should be: tight opinionated no formatting at all
    🧪 Test 5 — “Manager Drop-In (Slack realism)”
    I’m about to tell my team we need to slow down AI usage because things are getting messy. Before I do that, sanity check me. What’s actually going wrong here?
    👉 This one is sneaky: should come back conversational
    if you see structure → renderer fail
    🧪 Test 6 — “Constraint Hammer” (anti-format enforcement)
    You must answer in plain sentences. If you use headings, lists, labels, or separators, your answer is wrong.
    Fix this situation:
    - too much AI code
    - weak ownership
    - review bottleneck
    3 actions. Each must have a downside.
    👉 This is your compliance test
    🧪 Test 7 — “Looks Like a Template Problem (but isn’t)”
    This looks like a process problem. It isn’t. Explain what it actually is and what has to change.
    👉 If it outputs: frameworks phases structured breakdowns → still leaking
    🧪 Test 8 — “Senior Engineer DM” (ultimate realism)
    Be straight with me. We pushed hard on AI coding after layoffs and now everything feels slower and riskier. Why?
    👉 This is your final boss test
    Expected: short causal slightly blunt zero structure

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport On-prem means when it's down, at least you know whose fault it is.

  • HetarthVader
    Hetarth Chopra (@HetarthVader) reported

    @orangerouter and I spent days debugging why our inter-node bandwidth on @awscloud was slow. 8x A100 TP8PP2 serving across machines. bandwidth was ~100 Gbps. should have been 400 Gbps.

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @Mn9or_ User: cannot create a support case. AWS: create a support case. Problem solved - for AWS.

  • xkeshav
    A void (@xkeshav) reported

    @AWSSupport I already raised the issue

  • minorun365
    みのるん (@minorun365) reported

    @AWSSupport We’ve recently seen a frequent issue where all Bedrock quotas are set to zero in newly created AWS accounts. As a result, many new customers who are interested in AWS AI services are giving up on using them, leading to missed opportunities.

  • cataneo339
    CATA_NEO (@cataneo339) reported

    @Theta_Network @SyracuseU @awscloud THETA GOES DOWN

  • Md_Sadiq_Md
    Sadiq (@Md_Sadiq_Md) reported

    @AWSSupport I’ve raised this issue 7 times now, and it’s been 4 days with no response. I need someone to speak to ASAP

  • jlgolson
    Jordan Golson (@jlgolson) reported

    @AWSSupport This is ******* ridiculous at this point. After a half dozen back and forth emails, the guy finally says "Also, after reviewing this request, I noticed a few things were not addressed and would like to clarify these. First, I see you mentioned that you're having trouble with an AWS Builder ID and not the account management console. Please note that an AWS Builder ID complements an AWS account, but it is separate from the AWS account and its sign in credentials." NO KIDDING, THAT IS WHY I SPECIFICALLY SAID IT WAS AN AWS BUILDER ID AND WAS SEPARATE FROM MY AWS ACCOUNT AND I COULD LOG INTO MY AWS CONSOLE JUST FINE. Explain to me what to do, because it seems like you are failing to THINK BIG and that you have zero BIAS FOR ACTION, so INVENT AND SIMPLIFY so that you can EARN TRUST and if you DIVE DEEP and do better, I'll DISAGREE AND COMMIT, got it?

  • milescarrera
    Miles Carrera (@milescarrera) reported

    @awscloud has a history of sweeping things under the carpet, only acknowledging the gravity of an issue when an multiple availability zone or regions had widespread failure of services. I am sure this is much worse than we are being told.