
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected (color scale): Less → More
Check Current Status

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location / Reports
Bordeaux, Nouvelle-Aquitaine 1
Ingolstadt, Bavaria 1
Paris, Île-de-France 1
Berlin, Berlin 2
Dortmund, NRW 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1
New Delhi, NCT 1
Kannur, KL 1
Newark, NJ 1
Raszyn, Mazovia 1
Trichūr, KL 1
Departamento de Capital, MZ 1
Chão de Cevada, Faro 1
New York City, NY 1
León de los Aldama, GUA 1
Quito, Pichincha 1
Belfast, Northern Ireland 1
Guayaquil, Guayas 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problems and issue reports in social media:

  • coo_pr_notes
    Ken|Startup COO & PR (@coo_pr_notes) reported

    GitHub Actions shipped in 2019 and rewired how code gets merged. Six years on, AI agents are writing the PRs — and the acceptance rate gap by task type just hit 18 points. arXiv:2604.00917 tracked agent-generated PRs across thousands of repos. Routing all task types through one agent: acceptance down 23%. Claude Code on docs PRs: 18 points above Codex. Codex on fix PRs: 12 points above Claude Code. The tool isn't the bottleneck. The routing is. What fixes it: one classification layer, built before the agent runs. A VP of Eng I work with — 50-person product team — started tagging every PR at creation: Docs, Fix, or Feature. To give one concrete example, ↓↓

  • shobitfarcast
    Shobit (@shobitfarcast) reported

    100,000 GitHub stars in 53 days puts Hermes in extremely rare company. For context, LangChain took 6 months to hit that milestone. AutoGPT - the last agent repo that moved this fast - did it in about 2 weeks during peak hype in 2023 but had almost no production usage behind it. The number that matters more than the stars is the 14,300 forks. Forks indicate people are building on top of it, not just bookmarking it. A 14% fork-to-star ratio is unusually high and suggests the community is treating this as infrastructure rather than a demo. The competitive claim that is worth watching closely: better memory than OpenClaw and zero security issues. Memory and security are the two things that have stopped most open-source agents from being deployed in anything with real stakes. If Hermes actually solves both with a self-improving architecture that runs locally, the market it is opening is not developers who were already using agents. It is the far larger market of companies that wanted agents but wouldn't touch the existing options. The real question is not whether 100K stars matters. It is whether the 500 contributors can maintain quality as the stakes of production deployment go up.

  • AntoineRSX
    Antoine Rousseaux (@AntoineRSX) reported

    Nobody is talking about the most important AI release this week and it's not from OpenAI or Anthropic.. NousResearch just shipped Hermes Agent v0.9.0.. an open-source AI agent with 76,000 stars on GitHub.. it now runs on 16 messaging platforms out of the box iMessage.. WeChat.. Telegram.. SMS.. email.. all of them it runs on your Android phone through Termux.. no server needed.. voice commands work on-device it switches between GPT-5.4, Claude, and Grok natively.. you pick the model.. the agent stays the same 487 commits in a single release!! while every AI company is fighting over who has the best model.. @NousResearch just made the model layer a setting you toggle the model wars are over.. the agent won.. and nobody with a subscription noticed

  • unclefunkdrew
    ᴜɴᴄʟᴇ ꜰᴜɴᴋ | OSP/Citadel (@unclefunkdrew) reported

    Game dev: 99% of your time is spent solving stupid Windows/Unreal/Unity/GitHub problems and 1% actually being creative.

  • x1Ler
    x1Ler (@x1Ler) reported

    Why is this down @github ?

  • shyamol_konwar
    shyamol konwar (@shyamol_konwar) reported

    Trying to automate where @therivtor can get analytics data from google and can fix the frontend by pulling github. Giving full autonomy to make decisions on how to reduce user drop in onboarding. Will share results if we succeed!!!

  • Clawnch_Bot
    Clawnch 🦞 (@Clawnch_Bot) reported

    We are aiming to have Hermes support live by next week. Apologies for the delays, as the GitHub fiasco slowed us down a bit. We are excited to show you what we've built! 🦞

  • winfunction
    winfunc (@winfunction) reported

    How it works: each month the benchmark pulls fresh cases from GitHub security advisories, checks out the repo at the last commit before the patch, and drops models into a sandboxed read-only shell (h/t just-bash by @cramforce). The model never sees the fix. It starts from sink hints and has to trace the bug through actual code. Only repos with 10k+ stars qualify. A diversity pass prevents any single repo from dominating the set. Ambiguous advisories (merge commits, multi-repo references, unresolvable refs) are dropped. Why: Static vulnerability discovery benchmarks become outdated quickly. Cases leak into training data, and scores start measuring memorization. The monthly refresh keeps the test set ahead of contamination — or at least makes the contamination window honest.
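The checkout step the report describes (pin the repo to the last commit before the security patch so the model sees the vulnerable code but never the fix) can be sketched roughly as below. This is a minimal illustration, not the benchmark's actual code; the function name `checkout_pre_patch` and the assumption that the advisory yields a repo URL plus the patch commit SHA are hypothetical.

```python
import subprocess
import tempfile

def checkout_pre_patch(repo_url: str, patch_sha: str) -> str:
    """Clone a repo and check it out at the parent of the patch commit,
    i.e. the last pre-fix state of the code."""
    workdir = tempfile.mkdtemp(prefix="vulnbench-")
    subprocess.run(["git", "clone", "--quiet", repo_url, workdir], check=True)
    # "<sha>^" is git syntax for the first parent of a commit; checking it
    # out detaches HEAD at the state just before the fix landed.
    subprocess.run(
        ["git", "-C", workdir, "checkout", "--quiet", f"{patch_sha}^"],
        check=True,
    )
    return workdir
```

In the setup described, the returned directory would then be mounted read-only into the sandboxed shell, with only the sink hints handed to the model.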

  • liran_tal
    Liran Tal (@liran_tal) reported

    @MariamReba Mariam, how do you feel about that article and your readiness to handle and mitigate supply chain security attacks? CodeQL is later down the chain, including GitHub Actions mitigations (relevant but not exactly effective for your local dev workflow)

  • skydaddysgg
    SkyDaddysGG (@skydaddysgg) reported

    @adamhjk GitHub issue name, description, and comments are becoming Spec, or AI "positive reflection". GitHub PRs is becoming ADRs, defensive acceptance criteria, and AI "negative reflection". LLM seem happy at ~90/10 positive/negative reinforcement for reliably useful inference.

  • martindoub
    Martin Doubravský (@martindoub) reported

    @AnthropicAI Max subscriber. Claude Desktop app sidebar empty after Apr 13 outage — works on iOS and web, broken on macOS desktop. Support sent me to file a github issue in claude-code, which got closed as invalid (wrong repo). No working path to a fix. Help?

  • shrimalmadhur
    Madhur Shrimal (@shrimalmadhur) reported

    OpenClaw is sometimes crazy. I had a list of GitHub issues, and I asked it to just fix one. It actually went ahead and did the next 5 too, which is crazy. Good, but crazy.

  • the_smart_ape
    The Smart Ape 🔥 (@the_smart_ape) reported

    > find a cool github repo that cuts your ai tokens cost by 50%. > looks legit, 5,247 stars. 120 forks. active issues. clean readme. > clone it. npm install. done. > next morning: crypto wallet drained. locked out of gmail, icloud, x. your private family photos are online. > life will never be the same.

  • analyzedinvest
    Analyzed Investing (@analyzedinvest) reported

    Microsoft is building OpenClaw into M365 Copilot, and it's a bigger deal than it sounds. OpenClaw is the fastest-growing open-source AI agent framework: 354K GitHub stars, 70K forks, 44K skills listed. Now Microsoft has a dedicated team (led by the former head of Word) building always-on agents that work across your M365 apps end-to-end proactively, not just when you ask. The vision: AI that doesn't wait for a prompt. It just gets the work done. Why it matters: Copilot shifts from assistant to autonomous worker Multi-model (OpenAI + Anthropic) means best-in-class for every task If they solve the security problem, this will change enterprise productivity permanently The open-source agent wave is colliding with the enterprise stack. Microsoft wants to be where they meet.

  • real_klea
    Klea (@real_klea) reported

    Cracked dev ИΛKΛDΛI just solved the biggest flaw in every LLM/agent… amnesia Every session = reset He just open sourced a fix Persistent memory outside the model (Memento style) Inspired by Andrej Karpathy’s llm-wiki idea This is actually big Supporting via PF GitHub Sponsorships
