
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.

[Live outage heatmap]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected: Less → More

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features such as pull requests, issue tracking, and GitHub Actions for CI/CD.

Most Affected Locations

Outage and issue reports in the past 15 days originated from:

Location Reports
Haarlem, North Holland 1
Villemomble, Île-de-France 1
Bordeaux, Nouvelle-Aquitaine 1
Ingolstadt, Bavaria 1
Paris, Île-de-France 1
Berlin, Berlin 2
Dortmund, North Rhine-Westphalia 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1
New Delhi, Delhi 1
Kannur, Kerala 1
Newark, NJ 1
Raszyn, Mazovia 1
Trichūr, Kerala 1
Departamento de Capital, Mendoza 1
Chão de Cevada, Faro 1
New York City, NY 1
León de los Aldama, Guanajuato 1
Quito, Pichincha 1
Belfast, Northern Ireland 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issue Reports

The latest outage and issue reports from social media:

  • DhritimanD3440
    Dhritiman Dasgupta (@DhritimanD3440) reported

    Hey @GitHub and @GooglePlay, your support system is extremely frustrating. I was charged for GitHub Copilot Pro+ via Google Play, but the subscription isn’t showing up in my account. No clear resolution from either side. fix this billing + support #GitHub #Copilot #GooglePlay

  • dboskovic
    David Boskovic (@dboskovic) reported

    @_dylanga You were actually the only one who had any issues. It was actually your fault if you really think about hard enough. Calm down. Give GitHub a break. Staying up for the agentic workload is hard. Microsoft doesn’t even have the cpus for all that merging.

  • filip_a__
    Filip (@filip_a__) reported

    @nbrempel @scaling01 terrible.. now u get 40 gpt 5.5 messages per month on 10$ plan and 200 on 40$ plan and thats if u dont get rate limited to hell, oai has about 450 requests per month on 20$ plan if u only do 15 requests per day with less rate limiting. idk what they are smoking at github recently

  • trevin
    Trevin Chow (@trevin) reported

    @reebz @mdlahfir @kieranklaassen I think github projects, issues, beads, linear... so many options for this. This is why we want to stay out of todo/task tracking etc.

  • solpayserver
    SOLPay Server (@solpayserver) reported

    BTCPayServer proved this model works. 30,000+ merchants chose self-hosted over SaaS. Their community has been asking for Solana support for years. GitHub issue #3524. Never built.

  • DianaGraym
    Diana Gray (@DianaGraym) reported

    gitHub ghosts haunt the dark alleys of forgotten code, where stars are few and errors reign

  • vikhyatk
    vik (@vikhyatk) reported

    i was not impacted by the github issue, because i don't even know what a merge queue is. i just push everything straight to main.

  • justic_hot
    tang | AI Product Maker (@justic_hot) reported

    @championswimmer github being on this list is the actual tell. these are people literally building with the best AI dev tools and they still can't hold the bar with their own products. trad industries aren't a separate problem, they're 6 months downstream of whatever slop pattern is on display here.

  • SamchonGithub
    Jeongho Nam (@SamchonGithub) reported

    @Ron @kdy1dev @ryoppippi Whenever there is a minor TypeScript update, I hear through various channels—such as GitHub issues/discussions, Discord DMs, emails, and text messages—that I should develop it myself instead of waiting for ts-patch. I had ignored these, and told them to wait using older.

  • palanthos
    Palanthos (@palanthos) reported

    A CI/CD agent triages GitHub issues. It has shell access to the Actions runner — credentials, write permissions. Someone submits a crafted issue. The agent reads it. Poisoned cache. Code executed on the runner. You didn't compromise the agent. You wrote the right issue body.

  • ryanprasad_ai
    Ryan Prasad (@ryanprasad_ai) reported

    It's very safe. Its a read-only MCP server. Could have this spun up in a few hours. But its not ambitious. Or bold, like the podcast was. That was a cool project. It even got a star on GitHub! We can do better... but how?

  • wayanhq
    Wayan (@wayanhq) reported

    Closing 4,000 GitHub issues in a day sounds absurd until you remember how many backlogs are fossils. The bold move is not using AI. It is admitting half the queue was never going to be touched by a human.

  • Upscalpfutures
    Upscalp Futures Trading Assistant (@Upscalpfutures) reported

    @thsottiaux The Windows update broke so many things that I spent 10hrs straight yesterday fixing all my agents. Huge mess that you seem to be refusing to acknowledge. Go check out the Issues tab on GitHub.

  • deepakshettigar
    Deepak Shettigar (@deepakshettigar) reported

    2/ Think of MCP like loading all your dependencies upfront in compiled languages. Perplexity ran the math: MCP consumes 72% of your context window BEFORE your agent does anything. A single GitHub MCP server = 40K+ tokens of tool schemas sitting there unused. That's expensive.

  • Atarussecurity
    Atarus (@Atarussecurity) reported

    A critical SSRF vulnerability in LMDeploy, the open-source toolkit used to deploy large language models, was disclosed on GitHub last week. Twelve hours and thirty-one minutes later, an attacker was already in the honeypot. Eight-minute session. Ten requests across three phases. They confirmed the SSRF with an out-of-band DNS callback, enumerated the API surface, and ran a full internal port scan reaching AWS Instance Metadata Service, Redis, MySQL, and an admin interface behind the model server. No proof-of-concept code existed at the time of the attack. The attacker built theirs from the advisory text alone. This is the third major AI infrastructure exploitation we have analyzed in five days. The patch-to-exploit window is now measured in hours, regardless of install base size. Self-hosted AI infrastructure is a first-class attack surface. Treat it accordingly.
