Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
Problems in the last 24 hours
The graph below depicts the number of Amazon Web Services reports received over the last 24 hours, by time of day. An outage is flagged when the number of reports exceeds the baseline, shown as the red line.
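The baseline comparison described above can be expressed as a minimal sketch. This is an illustrative example only, not the site's actual detection code; the function name, the sample counts, and the flat baseline are all assumptions.

```python
# Hypothetical sketch of baseline-threshold outage detection:
# an outage is flagged for any time bucket whose report count
# exceeds that bucket's baseline value.

def detect_outage(report_counts, baseline):
    """Return indices of time buckets whose report count exceeds the baseline."""
    return [i for i, n in enumerate(report_counts) if n > baseline[i]]

# Example: hourly report counts checked against a flat baseline
# of 20 reports per hour (illustrative numbers).
counts = [3, 5, 4, 60, 85, 12]
baseline = [20] * len(counts)
print(detect_outage(counts, baseline))  # → [3, 4]
```

In practice the baseline would itself be computed from historical report volume (e.g. a rolling average per time of day) rather than a flat constant.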
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the most commonly reported problems from Amazon Web Services users on our website:
- Errors (41%)
- Website Down (32%)
- Sign in (27%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Errors | 4 days ago |
| | Website Down | 8 days ago |
| | Errors | 8 days ago |
| | Website Down | 8 days ago |
| | Errors | 9 days ago |
| | Sign in | 11 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issues Reports
The latest outage, problem, and issue reports from social media:
- AAA (@Quackarazzi) reported: @awscloud I’m having a problem that apparently can’t be resolved without real intervention. What should I do?
- MB (@MihaiButnaru) reported: Some of the errors raised by AWS CloudFormation don’t make sense at all. You might be looking at one thing, but AWS is actually referring to something completely different (doesn't point properly to what it actually mean) @awscloud improve the logging
- AKHIL MANGA (@akhil_manga) reported: Today, your data is not yours. @Google @awscloud @Microsoft They store it. They control it. They can delete it. One policy change. One outage. One government request. Your data is gone.
- Big-Papi Δ (@bigpapi12988) reported: @AWSSupport @Stealth732949 We understand there may be no ETA, but operating with no communication is unacceptable. Our business relies on your service, which is currently down. We are open to upgrading our plan if required. We need a resolution within the next two hours and are available via all channels
- Claudio Kuenzler (@ClaudioKuenzler) reported: Whoa. Did @awscloud Frankfurt just go down for 2 mins ~5min ago?
- hobari⁷⊙⊝⊜ (@zeokiezeokie) reported: UGH WHY IS THE BTS SHOW LAGGING PLEASE FIX THIS NOW 😭😭😭 @netflix @awscloud
- Mohamed (@abusarah_tech) reported: i’ve recently went down a rabbit hole to learn how hyperscalers / cloud providers like @awscloud, @Azure (or at least in theory) work a huge respect to all the engineers that built the abstraction behind the resource provisioning. i am still trying to wrap my head around it
- manish (@mjha2088) reported: @AWSSupport Thank you! The entire db.r7i family shows reduced vCPUs for SQL Server & Oracle vs MySQL/PostgreSQL/Aurora in console. The docs page has no mention of this engine-specific difference — undocumented and critical for licensed engine customers planning costs.
- distribute.ai (@distributeai) reported: aws ( @awscloud ) is officially supporting x402 payments so your ai agents can autonomously buy their own api access using usdc. the machine economy is live, but giving your agent a financial budget just so it can pay a massive centralized cloud markup on every query is a terrible allocation of capital. route your autonomous workflows through the distribute network. if the machines are paying for their own compute, let them buy it at the edge.
- Sourabh Dhakad (@SourabhDhakadd) reported: @amazon @awscloud Still not able to login due to passkey issue. Please remove passkey authentication.
- Saad Hussain (@SaadHussain654) reported: @awscloud @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days
- run ⬡ the ⬡ juels (@nullpackets) reported: @AdamLinkSmith @awscloud @amazon Imagine how things used to be. Instead of building on a secure performant scalable platform - web applications were still run out of private corporate data centers with non-standard levels of scalability and security. Now imagine throwing immutable records, assets and currency in the mix. Not very trustable between counterparties. The last mile problem. Only Chainlink fixes this.
- David Mauas (@dmauas) reported: @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!
- ُ (@lijvvz) reported: We are increasingly frustrated by the ongoing high ping issues in Fortnite for players in Saudi Arabia. Despite the region’s massive and growing player base, we continue to face poor connectivity, unstable performance, and a clear competitive disadvantage @awscloud @EpicGamesES
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback. Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge. --- Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system. --- Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.
- Neil harrison (@Neilharrison89) reported: @RuthBuscombe @awscloud Stop calling it macarena wing it Spanish to start and has nothing to do with flipping upside down, let's stop being silly and call it what it is a flip wing, just stop with the macarena you all sound like you have a low iq calling it macarena wing
- ramar (@ramarxyz) reported: @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate
- eldar (@eldar737394) reported: @AWSSupport I need urgent help with Case ID: 177378404200042. I have been waiting for 3 days for a refund request ($67.90) but no human response yet. I am a student and this charge is a huge issue for me. Please escalate this. #AWSSupport #AWS
- Vic Nanda (@vic_nanda) reported: @awscloud horrible service, I opened a service request over a week ago and your website says 48 hrs for handling support issues. What is the number to call if you guys won't handle support tickets on time?
- Michael Bayron (@immichaelbayron) reported: @AWSSupport Are there down servers in Southeast Asia?
- Keng N (@0xKeng) reported: @Lakshy_x @KASTxyz @awscloud avoiding the common issue of funds idling or locking during transactions.
- Jane (@Jane49_) reported: @AWSSupport @Xcrypto_master Currently facing a phone verification issue for a new account, it keeps responding that there has been a processing error, Case id: 177314869100657
- Patriot, unpaid trying to save our country (@mktldr) reported: @awscloud new gimmick 1 Their #customerservice has really gone down. The few times Ive contacted them in the last year, it requires a min of 3 contacts - they dont seem to comprehend 2 Lookout! Many agents promise $, then u give a 5/5 rating & NEVER SEE THE MONEY. FRAUD!!!
- Farooq Rana (@RanaFarooqAslam) reported: @awscloud me-central-1 is down from last 7 days, no updates, still cannot get services or data, no timeline
- Product FN (@floranext_pm) reported: @AWSSupport a client of our's website has been down since Friday due an SSL error that we cannot resolve without Support's assistance. Our client is losing revenue from this we need immediate action.
- Luke Hebblethwaite (@lukehebb) reported: so without notice @awscloud have suspended my kiro subsciption nice now for their terrible support to waste time not helping me
- Asif Malik (@asif_malik_03) reported: @awscloud hey facing trouble while login my account I m filling, right login credentials, but it is showing that your authentication information is incorrect, what do I do
- Decent Cloud (@DecentCloud_org) reported: @senunwah @AWSSupport The outage gets a postmortem. Your deadline doesn't read it.
- sandeep Tiwari (@sandeepTiw28306) reported: @amazon @amazonIN @awscloud I have parched product not working conditions i have returned product not a pickup done last 5 Day
- K Subramanyeshwara (@ksubramanyaa) reported: @AWSSupport @AWSCloudIndia @awscloud I have sent you the case id and a screenshot of the error. Can you please fast-track it? Thank you