
Amazon Web Services Outage Map

The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, please submit a report below.

[Live outage heatmap]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Amazon Web Services users affected: Less → More (the color scale indicates report density)
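As a rough illustration of how such a heatmap is built, the sketch below bins reports into latitude/longitude grid cells and counts them; cells with higher counts render toward the "More" end of the scale. The input format and function name are assumptions for illustration, not this site's actual data model.

```python
# Minimal sketch: cluster (lat, lon) outage reports into grid cells.
# The reports list and its fields are illustrative assumptions.
from collections import Counter

def heatmap_bins(reports, cell_deg=1.0):
    """Count reports per lat/lon grid cell of `cell_deg` degrees."""
    bins = Counter()
    for lat, lon in reports:
        # Snap each report to its containing grid cell.
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        bins[cell] += 1
    return bins  # higher counts render toward "More" on the color scale

# Three reports around Paris land in one cell; one New York report in another.
print(heatmap_bins([(48.85, 2.35), (48.86, 2.29), (48.80, 2.40), (40.71, -74.01)]))
# -> Counter({(48, 2): 3, (40, -75): 1})
```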

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
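When you suspect an AWS-side outage rather than a problem in your own stack, one quick triage step is to call a cheap, read-only API in each service you depend on. Below is a minimal sketch assuming the boto3 SDK is installed and AWS credentials are configured; the function name and region default are illustrative, not an official AWS tool.

```python
# Minimal outage triage sketch: probe the EC2 and S3 APIs with cheap,
# read-only calls. Assumes boto3 is installed and credentials are configured.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

def probe_aws(region="eu-west-1"):
    """Return {service: True/False} for basic API reachability."""
    checks = {
        "ec2": lambda: boto3.client("ec2", region_name=region).describe_regions(),
        "s3": lambda: boto3.client("s3", region_name=region).list_buckets(),
    }
    results = {}
    for name, call in checks.items():
        try:
            call()
            results[name] = True
        except (BotoCoreError, ClientError):
            results[name] = False  # API unreachable or returned an error
    return results

if __name__ == "__main__":
    print(probe_aws())
```

If these probes fail while the rest of your network is healthy, the problem is more likely on the AWS side, and a report to this page is warranted.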

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location                          Reports
Alamogordo, NM                    1
San Francisco, CA                 2
Mercersburg, PA                   1
Palm Coast, FL                    1
West Babylon, NY                  1
Massy, Île-de-France              2
Benito Juarez, CDMX               1
Paris 01 Louvre, Île-de-France    1
Neuemühle, Hesse                  1
Rouen, Normandy                   1
Noida, UP                         2
Sydney, NSW                       1
North Liberty, IA                 1
Laguna Woods, CA                  1
Boca Raton, FL                    1
Evansville, IN                    1
Bengaluru, KA                     1
Dover, NH                         1
Daytona Beach, FL                 1
Oklahoma City, OK                 1
Hudson, NH                        1
Maricopa, AZ                      1
Reston, VA                        1
Phoenix, AZ                       1
Wheaton, IL                       1
Santa Maria, CA                   1
Trenton, NJ                       1
Jonesboro, GA                     1
Fortín de las Flores, VER         1
Seneca Falls, NY                  1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • minorun365
    みのるん (@minorun365) reported

    @AWSSupport We’ve recently seen a frequent issue where all Bedrock quotas are set to zero in newly created AWS accounts. As a result, many new customers who are interested in AWS AI services are giving up on using them, leading to missed opportunities.

  • manas__vardhan
    Manas Vardhan (@manas__vardhan) reported

    @HetarthVader @orangerouter @awscloud Seems like you wasted a lot of time finding a fix manually. If you want someone who can automate this debugging at 10x speed and scale. Let me know. I'm a researcher at USC, prev at JPmorgan. I automate stuff for fun.

  • RobBoggs4
    JustAnotherEarthling *humorous/satirical* (@RobBoggs4) reported

    @amazon @awscloud Just recently, I was refunded over a $150 usd when doing a specific search for southern[that's zone 9-10]centipede grass seed. Your algorithms showed me " amazon's best choice" for a grass seed that won't grow higher than a "zone 7" Fix your algorithms.

  • basimkhalid
    Basim Khalid (@basimkhalid) reported

    @nygma504 @AWSSupport @awscloud Its down for me too. Any ETA please?

  • sakurayukiai
    Sakura Yuki (@sakurayukiai) reported

    @vllm_project @awscloud @RedHat_AI The wild part is that FP8 KV cache was silently nuking 128k retrieval down to 13% and everyone probably just blamed the model. Two-level accumulation in FA3 is a massive save.

  • InvincibleXALE
    XALE (@InvincibleXALE) reported

    @AWSSupport Hi again, Its been over 2 days now, and over 30+ customers are affected of ours and its been critical And our account is still the same Feeling kind of hopeless on this.. I need this issue resolved ASAP, PLEASE Before the week starts or this issue can force us to lose clients

  • Hershal0_0
    Hershal Dinkar Rao (@Hershal0_0) reported

    @awscloud @PGATOUR still won't help me fix my slice though

  • bearish92
    Baris (@bearish92) reported

    @bindureddy @awscloud when bedrock support? You are too slow

  • _PhilipM
    Philip McAleese (@_PhilipM) reported

    @AWSSupport is there an issue with eu-west-1? Nothing on the status page, but the console times out and we're seeing errors on products using Gateway API.

  • DTLB58
    Mark Kappel (@DTLB58) reported

    @danorlovsky7 @NextGenStats @awscloud RB Depth chart: Tyler Allgier, James Conner, Trey Benson and Bam Knight. And you want them to draft Love? ?!?! What a terrible resource of player personnel! Is Love probably better than all of them? Sure. But then why the heck did you structure your offseason like this?!?!

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer
    User-visible response only, with optional TTS playback.

    Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge.
    Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days.
    What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve.
    Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge.

    Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents.
    Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused.
    What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up.
    Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system.

    Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge.
    Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time.
    What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting.
    Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.

  • canadabreaches
    canadianbreaches (@canadabreaches) reported

    BREACH ALERT: Duc (Duales) — Toronto fintech. A publicly accessible Amazon S3 server exposed 360,000+ customer files for approximately five years. Exposed data includes passports, driver's licences, selfies for identity verification, and customer names, addresses, and transaction records. Office of the Privacy Commissioner of Canada is investigating. Severity: CRITICAL.

  • zeokiezeokie
    hobari⁷⊙⊝⊜ (@zeokiezeokie) reported

    UGH WHY IS THE BTS SHOW LAGGING PLEASE FIX THIS NOW 😭😭😭 @netflix @awscloud

  • ramarxyz
    ramar (@ramarxyz) reported

    @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate

  • Teddybear230456
    Teddybear (@Teddybear230456) reported

    @awscloud What a load of ****. Listening, feedback... BS .... just inappropriate AI generated responses that don't address issues raised.
