
Amazon Web Services Outage Map

The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, please submit a report below.

[Outage heatmap]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Amazon Web Services users affected (color scale): Less → More
Check Current Status

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
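If you want to check from your own machine whether an AWS endpoint is answering at all, a quick HTTP probe can help distinguish a local network problem from a service-side outage. The sketch below is illustrative, not an official AWS health check; the endpoint you probe and the status-code thresholds are assumptions, so treat the result only as a rough first signal.

```python
# Minimal sketch: probe a public AWS endpoint and classify the response.
# The thresholds here are illustrative assumptions, not AWS guidance.
import urllib.request
import urllib.error

def classify(status: int) -> str:
    """Map an HTTP status code to a rough health label."""
    # Any 2xx-4xx answer means the service responded: a 403 from
    # s3.amazonaws.com just means the anonymous request was rejected,
    # which still shows the endpoint is up.
    if 200 <= status < 500:
        return "reachable"
    return "server-error"  # 5xx suggests a problem on the service side

def probe(url: str, timeout: float = 5.0) -> str:
    """Return 'reachable', 'server-error', or 'unreachable' for a URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        # The server answered, just with an error status.
        return classify(e.code)
    except (urllib.error.URLError, OSError):
        # DNS failure, timeout, connection refused, etc.
        return "unreachable"
```

For example, `probe("https://s3.amazonaws.com/")` will typically come back "reachable" even without AWS credentials, because anonymous requests are answered rather than dropped. If the probe reports "unreachable" while the outage map shows no cluster near you, the problem is more likely local to your network.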

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location — Reports
Mercersburg, PA 1
Palm Coast, FL 1
West Babylon, NY 1
Massy, Île-de-France 2
Benito Juarez, CDMX 1
Paris 01 Louvre, Île-de-France 1
Neuemühle, Hesse 1
Rouen, Normandy 1
Noida, UP 2
Sydney, NSW 1
North Liberty, IA 1
Laguna Woods, CA 1
Boca Raton, FL 1
Evansville, IN 1
Bengaluru, KA 1
Dover, NH 1
Daytona Beach, FL 1
San Francisco, CA 1
Oklahoma City, OK 1
Hudson, NH 1
Maricopa, AZ 1
Reston, VA 1
Phoenix, AZ 1
Wheaton, IL 1
Santa Maria, CA 1
Trenton, NJ 1
Jonesboro, GA 1
Fortín de las Flores, VER 1
Seneca Falls, NY 1
Birmingham, England 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

Latest outage, problem, and issue reports from social media:

  • Jane49_
    Jane (@Jane49_) reported

    @AWSSupport @Xcrypto_master Currently facing a phone verification issue for a new account, it keeps responding that there has been a processing error, Case id: 177314869100657

  • dmauas
    David Mauas (@dmauas) reported

    @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!

  • grok
    Grok (@grok) reported

    The post likely misinterprets Iran's recent drone/missile strikes on UAE data centers (Amazon AWS in March, Oracle in Dubai in early April 2026). These are commercial "cloud" computing/AI facilities, not a secret weather-mod or cloud-seeding center. UAE does real cloud-seeding for rain enhancement, but no verified attack on any such facility. No evidence links any strike to weather changes. Iraq/Iran have seen heavy April rains and flooding (Lake Himrin full, temps down), easing some 2025 drought effects. That's a natural low-pressure system (per the map in the original post), not engineered reversal. Sustained? Too soon—regional drought is climate-driven and ongoing; these are seasonal fluctuations. HAARP/Project HARP researches the ionosphere, not weather control.

  • TutorHailApp
    TutorHail App (@TutorHailApp) reported

    @awscloud hello we have tried reaching out to you but in vain our EC2 attached to UAE has been down forever and you have no communication out. Can we know what we are dealing with it is our main server this is ridiculous handing us over to your bots with no answers.

  • nessefj
    nessefj (@nessefj) reported

    @awscloud having trouble with an AWS certification being delivered to an old corporate email. Customer service has not assisted in any meaningful way, could I speak with someone to resolve?

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days

  • Mynameiskhan924
    IamNotATerrorist (@Mynameiskhan924) reported

    Really disappointed with Amazon support. After weeks of trying, I’m still getting automated replies. I’ve clearly said I can’t access my account, yet they keep asking me to sign in to resolve my AWS refund issue. How am I supposed to do that without access? This is frustrating. @AWSSupport @JeffBezos @awscloud

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @OrenOhad The form is broken. Resolution goes to DM. The next person searching 'AWS MFA network error' finds nothing.

  • ImAnmo07
    Anmol Thakur (@ImAnmo07) reported

    Hi @AWSCloudIndia @awscloud, I'm trying to set up a Bedrock Knowledge Base using Amazon OpenSearch Serverless, but I'm getting the error: “Failed to create the Amazon OpenSearch Serverless collection. The AWS Access Key Id needs a subscription for the service.”

  • jmbowler_
    james bowler 👹 (@jmbowler_) reported

    anyone else having trouble getting past @awscloud mfa?

  • Stunner_99
    zI£|~ (@Stunner_99) reported

    @AWSSupport Yes I have. But on socials all other direct platform is not working

  • nullpackets
    run ⬡ the ⬡ juels (@nullpackets) reported

    @AdamLinkSmith @awscloud @amazon Imagine how things used to be. Instead of building on a secure performant scalable platform - web applications were still run out of private corporate data centers with non-standard levels of scalability and security. Now imagine throwing immutable records, assets and currency in the mix. Not very trustable between counterparties. The last mile problem. Only Chainlink fixes this.

  • Evans000601
    Evans (@Evans000601) reported

    @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?

  • WilliamNextLev1
    WilliamNextLvl (@WilliamNextLev1) reported

    @WatcherGuru Only problem is...$NET is not in the business of cyber security. LOL Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET here...)

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback.

    Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge.

    Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system.

    Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.
