
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are experiencing an issue with GitHub, please submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
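As a rough illustration of how such a heatmap might work (this is an assumed sketch, not the site's actual implementation: the grid size, function names, and color levels are all hypothetical), reports can be snapped to coarse grid cells, counted per cell, and the count mapped to a color bucket on the "Less ... More" scale:

```python
from collections import Counter

GRID = 1.0  # cell size in degrees (an assumption, not the site's real value)

def cell(lat, lon, grid=GRID):
    """Snap a coordinate to its grid cell (floor division on each axis)."""
    return (int(lat // grid), int(lon // grid))

def density(reports, grid=GRID):
    """Count reports per grid cell; denser cells render darker on the map."""
    return Counter(cell(lat, lon, grid) for lat, lon in reports)

def color_level(count, levels=4):
    """Map a raw report count to a color bucket from 0 (Less) to levels-1 (More)."""
    return min(count - 1, levels - 1) if count > 0 else 0

# Two reports near Paris fall in the same cell; one near Madrid does not.
reports = [(48.85, 2.35), (48.86, 2.34), (40.42, -3.70)]
d = density(reports)
```

Real outage maps typically use finer grids and time-decayed weights, but the binning-and-bucketing idea is the same.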

GitHub users affected: Less → More

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage and issue reports over the past 15 days originated from:

Location Reports
Tlalpan, CDMX 1
Quilmes, BA 1
Bengaluru, KA 1
Yokohama, Kanagawa 1
Gustavo Adolfo Madero, CDMX 1
Nice, Provence-Alpes-Côte d'Azur 1
Brasília, DF 1
Montataire, Hauts-de-France 3
Colima, COL 1
Poblete, Castilla-La Mancha 1
Ronda, Andalusia 1
Hernani, Basque Country 1
Tortosa, Catalonia 1
Culiacán, SIN 1
Haarlem, NH 1
Villemomble, Île-de-France 1
Bordeaux, Nouvelle-Aquitaine 1
Ingolstadt, Bavaria 1
Paris, Île-de-France 1
Berlin, Berlin 2
Dortmund, NRW 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1
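The table above is a per-location tally. A minimal sketch of that aggregation (assumed, not the site's actual pipeline; the `most_affected` helper and its input format are hypothetical) counts reports by their "City, Region" label and sorts descending:

```python
from collections import Counter

def most_affected(reports):
    """Return (location, count) pairs, most-reported location first."""
    return Counter(reports).most_common()

# Sample labels mirroring the table above.
reports = [
    "Montataire, Hauts-de-France",
    "Montataire, Hauts-de-France",
    "Montataire, Hauts-de-France",
    "Berlin, Berlin",
    "Berlin, Berlin",
    "Paris, Île-de-France",
]
ranking = most_affected(reports)
```

A production version would also filter each report's timestamp to the 15-day window before counting.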

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issue Reports

Latest outage, problem, and issue reports from social media:

  • Validate_QA
    validate.qa (@Validate_QA) reported

    cursor can now auto-fix ci failures agents that watch github, hunt down the issues, and push prs with real fixes. no more endless debugging loops this changes how fast teams can ship without breaking stuff

  • neetintel
    NEET INTEL (@neetintel) reported

    A post "decoding" X's new algorithm has gone viral. It tells you what's dead, what wins, and to screenshot it. X open-sourced the entire algorithm on GitHub, so I downloaded it and checked the claims against the real code. Most of it doesn't hold up. What the post got WRONG: → "Small accounts get a 3x boost from out-of-network reach." It's the opposite. One part of the code (a file called oon_scorer) exists purely to turn DOWN posts from people you don't follow. Its own comment says "prioritize in-network." The thread printed the algorithm backwards. → "Media gets 2x the weight." There's no 2x. The code just records whether a post has an image. It's a plain yes/no without any multiplier attached. → "Posting 4+ times a day triggers a penalty." There's a real rule that stops one person flooding your feed. But here's the deal: it only spaces out how often you show up in a single scroll. There's no daily count, and no number 4. That was invented. → "Closers like 'what do you think?' get you flagged." There is no engagement-bait detector anywhere in the code. → "Long 4,000-character posts get boosted." I searched the whole codebase for "4000." Nothing. What it got RIGHT (one thing): → Replies really are judged by WHO replies, not just how many. The code has a setting for whether a large account joined your thread. Credit where due. The irony? The repo ships a file that scores post quality. One thing it measures is literally called a "slop score" — X built a tool to detect low-effort filler. A recycled "what's dead / what wins" thread is exactly that. The takeaway? X's algorithm is public. Anyone can open it, but almost nobody does. Instead, they reshare a thread that summarized a blog that paraphrased a tweet. When a post hits you with confident numbers, ask the one question that matters: did they actually open the file?

  • peter_szilagyi
    Péter Szilágyi (@peter_szilagyi) reported

    @josefprusa @Mojee3d @Prusa3D I have a Prusa, across the parts and kits spent probably over 2K EUR on it. The multi-material printer fails incredibly often, software issues / hangs, random overvoltage errors, ignored github issues, etc. It’s not only about hw price, support is also very lacking, unfortunately.

  • Fatima7223
    Fatima (@Fatima7223) reported

    @cb_doge 𝕏 open sourced its recommendation algorithm on GitHub. Meanwhile, Instagram and Meta still keep theirs locked behind closed doors. No public code. No independent audits. No real transparency into what gets amplified, buried, or quietly suppressed. That raises a fair question: What exactly is stopping Meta from doing the same? Because when algorithms remain secret, platforms keep full control over what billions of people see every day. • Users can’t verify claims about bias or shadowbanning. • Researchers can’t properly audit ranking systems. • Harmful amplification patterns stay hidden behind “trust us.” • Public narratives can be shaped without visible accountability. Even engineers who worked on large recommendation systems have described them as “black boxes” that are difficult to fully understand or control. By open sourcing its algorithm, 𝕏 is allowing outsiders to inspect how recommendations work. Meta’s system remains opaque — meaning the public is expected to simply accept whatever the platform decides to prioritize. Transparency doesn’t solve every problem. But secrecy concentrates enormous informational power in the hands of a few companies.

  • isaac_yeang
    isaac (@isaac_yeang) reported

    jk just lazy error message handling another bajillion dollars to github

  • alpinoWolf
    Kea (@alpinoWolf) reported

    @Bambardini @Polymarket @DegenApe99 What is the solution sir ? I tried to everything, but can't find a solution. AI says default wallets are proxy contracts, you are forced to use the POLY_1271 signature flow, which is currently bugged in Python see GitHub under Issues #55, #56, and #57.

  • BenittoJD
    Benitto J D (@BenittoJD) reported

    Github actions are down again

  • 38twelveDaily
    38twelveDaily (@38twelveDaily) reported

    Problem: Claude Code's popularity undermined GitHub Copilot CLI, Microsoft's own command-line coding tool. So the Experiences + Devices team (Windows, 365, Outlook, Teams, Surface) is sunsetting it by June 30.

  • aki_ranin
    Aki Ranin (@aki_ranin) reported

    New Claude Code master prompt: "/goal assign next GitHub issue and start PR, iterate until no critical or high issues found with PR review skill"

  • atulcode
    Atul (@atulcode) reported

    @GithubProjects Github is down

  • joshua__b
    𝙹𝚘𝚜𝚑𝚞𝚊 𝙱𝚒𝚍𝚍𝚕𝚎 (@joshua__b) reported

    @MagneticNorse You're right, they are hedging. But look at the board: open-sourcing the algorithm to GitHub is a brilliant tactical move because it creates the illusion of total transparency. The problem is that the code itself isn't where the suppression happens. The suppression happens in the training data, the safety filters, and the jurisdictional legal compliance that Musk himself admitted the algorithm is subject to. Hedging against criticism by showing us the code is like showing us the engine of a car while the administrative state still holds the steering wheel. It's an improvement, but it doesn't change our destination.

  • iproductAI
    Priyanshu (@iproductAI) reported

    For example: "Sign up for early GitHub Copilot Desktop access." Please fix this issue.

  • AtxaTrades
    ATXA (@AtxaTrades) reported

    This is the ONE problem the X Algorithm has: It contradicts itself. Here is why: X has shared their algorithm update in Github today. Everyone is going crazy about it. So i decided to go take a look at it. I asked Grok to analyze it and explain it to me. Once it did, i took my last post and shared it with grok. I asked him to analyze the post (based on the Algorithm shared in Github) and rank it based on the metrics and steps the algorithm takes. This is the crazy part. It gave it a score of 72-82/100!! Not so bad right? I am a small account, i am not expecting a 100 score. But wait, there is more. It said it would likely rank in the top 20-40% of candidates in the mixed batch for the right users, and strong enough to appear HIGH in the "For You" tab. Reality Result: 22 views. So my question is: If Grok is a big part of the algorithm dictating what´s good and what is not, and technically Grok just told me my post was suppose to do good in the "For You" tab... Why only 22 views?

  • LeeLeepenkman
    Lee Penkman (@LeeLeepenkman) reported

    @gxjo_dev stupidity... no... frupidity basically. like the exec cfo team is like well what if we reduce headcount wouldnt profitability go up? Like yes but you just wont be a good product company without a good product... like you are already struggling to compete with GitHub lmao... how u gna compere with codex n claude when they do repos? Also theres just fear that these devs cant learn AI which is kind of wrong because devs seem to be best placed to leverage AI of all? idk. im just guessing. lots of saas companies just doing layoffs had hired too many people having thought they would keep growing then they didnt their stock went way down and becomes harder to raise money for them because of bearish outlook for them competing with claude so investors scared off so harder for them to afford lots of developers so kind of start sinking. the devs would do better elsewhere anyway better to be on a new ship instead of sinking one.

  • ST_Automation
    ST-Automation (@ST_Automation) reported

    @cnakazawa @amadeus @fat Local diff viewers are the sleeper category. We do code review on five repos a week and the GitHub UI is just slow. If Codiff handles 10k line diffs without choking it replaces the GitHub tab entirely.
