
Amazon Web Services Outage Map

The map below shows the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, be sure to submit a report below.

[Interactive outage heatmap]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Amazon Web Services users affected (color scale): Less → More
Check Current Status

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
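For readers who prefer to check service health programmatically rather than via the map above: AWS publishes machine-readable status feeds for its services on its public Health Dashboard. The sketch below parses an RSS-style feed of that general shape; the sample XML and the exact feed layout are illustrative assumptions, not a verbatim copy of AWS's format.

```python
# Minimal sketch: extracting recent incident entries from a service-health
# RSS feed. The sample document below is illustrative; a real feed would be
# fetched over HTTP from the AWS Health Dashboard.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Amazon EC2 (N. Virginia) Service Status</title>
    <item>
      <title>Service is operating normally: [RESOLVED] Increased API error rates</title>
      <pubDate>Tue, 21 Jan 2025 14:05:00 PST</pubDate>
    </item>
  </channel>
</rss>"""

def recent_events(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, pubDate) pairs for every <item> in the feed."""
    root = ET.fromstring(feed_xml)
    return [
        (item.findtext("title", ""), item.findtext("pubDate", ""))
        for item in root.iter("item")
    ]

events = recent_events(SAMPLE_FEED)
for title, when in events:
    print(f"{when}: {title}")
```

An empty result from such a feed means no recent incidents were published for that service, which is consistent with the "operating normally" state shown on the dashboard.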

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location: Reports
San Francisco, CA: 2
Mercersburg, PA: 1
Palm Coast, FL: 1
West Babylon, NY: 1
Massy, Île-de-France: 2
Benito Juarez, CDMX: 1
Paris 01 Louvre, Île-de-France: 1
Neuemühle, Hesse: 1
Rouen, Normandy: 1
Noida, UP: 2
Sydney, NSW: 1
North Liberty, IA: 1
Laguna Woods, CA: 1
Boca Raton, FL: 1
Evansville, IN: 1
Bengaluru, KA: 1
Dover, NH: 1
Daytona Beach, FL: 1
Oklahoma City, OK: 1
Hudson, NH: 1
Maricopa, AZ: 1
Reston, VA: 1
Phoenix, AZ: 1
Wheaton, IL: 1
Santa Maria, CA: 1
Trenton, NJ: 1
Jonesboro, GA: 1
Fortín de las Flores, VER: 1
Seneca Falls, NY: 1
Birmingham, England: 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that may be posted below. Make sure to report and downvote those comments, and avoid posting your personal information.

Amazon Web Services Issue Reports

Latest outage, problem, and issue reports from social media:

  • Abomination81
    Abomination (@Abomination81) reported

    @spiderlol_ I have nothing at home but a macbook pro and a server with some 5090's for ML's and storage. I use amazon aws ec2

  • xkeshav
    A void (@xkeshav) reported

    @AWSSupport I already raised the issue

  • SpainGreatAgain
    Bella Iberia (@SpainGreatAgain) reported

    @NextGenStats @NFL @awscloud Expected Points Added sounds fancy until you realize it’s just another way for nerds to tell us Mahomes is a god while downplaying actual game-winning drives and clutch plays 😤 EPA, success rate, all that AWS-powered nonsense cool for spreadsheets, terrible for real football passion. Stop letting models replace what our eyes see on Sundays. Bring back old-school football debate

  • Petielvr
    Queen of hearts (@Petielvr) reported

    @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232

  • Md_Sadiq_Md
    Sadiq (@Md_Sadiq_Md) reported

    @AWSSupport Wow, which issues are those which are not been resolved from past 3 days

  • mjha2088
    manish (@mjha2088) reported

    @AWSSupport Thank you! The entire db.r7i family shows reduced vCPUs for SQL Server & Oracle vs MySQL/PostgreSQL/Aurora in console. The docs page has no mention of this engine-specific difference — undocumented and critical for licensed engine customers planning costs.

  • dmauas
    David Mauas (@dmauas) reported

    @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback.

    Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge.

    Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system.

    Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.

  • MRTECHFIXES
    MetroTec Incorporated (@MRTECHFIXES) reported

    The schema for AWS host names needs greater device. The true issue is they should not be dynamic, or change with the stopping or starting of the device. Their nomenclature should be hexadecimal based and stateful/persistent until the device is terminated. @awscloud @AWS_Gov

  • KasConviction
    Kaspa Mode: ON (@KasConviction) reported

    @cryptorover Ethereum is not even fully decentralized. No PoS can be with much of the node control on Amazon AWS. ETH is slow, expensive, and not fully decentralized.

  • namzylll
    N (@namzylll) reported

    Ignoring the Middle East when it comes to servers is a huge oversight. ALOT of players are stuck with 130+ ping. FIX THEM!!!! @FortniteStatus @awscloud @FortniteME #fortniteriyadh

  • ZackD0x
    ZacD (@ZackD0x) reported

    @awscloud feels like banks finally saw the glitch and decided to hit ctrl+alt+del on themselves

  • qualadder
    Archmagos of Zolon 🔭🪐🛸📡 (@qualadder) reported

    @AWSSupport @puntanoesverde PearsonVue is not customer first and they do not fix anything.

  • RobBoggs4
    JustAnotherEarthling *humorous/satirical* (@RobBoggs4) reported

    @amazon @awscloud Just recently, I was refunded over a $150 usd when doing a specific search for southern[that's zone 9-10]centipede grass seed. Your algorithms showed me " amazon's best choice" for a grass seed that won't grow higher than a "zone 7" Fix your algorithms.

  • gpusteve
    steve (@gpusteve) reported

    is aws cli broken for anyone else ??? i literally can't sign in @awscloud
