1. Home
  2. Companies
  3. GitHub
  4. Outage Map
GitHub

GitHub Outage Map

The map below depicts the most recent cities worldwide where GitHub users have reported problems and outages. If you are having an issue with GitHub, make sure to submit a report below

Loading map, please wait...

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected:

Less
More
Check Current Status

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Bordeaux, Nouvelle-Aquitaine 1
Ingolstadt, Bavaria 1
Paris, Île-de-France 1
Berlin, Berlin 2
Dortmund, NRW 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1
New Delhi, NCT 1
Kannur, KL 1
Newark, NJ 1
Raszyn, Mazovia 1
Trichūr, KL 1
Departamento de Capital, MZ 1
Chão de Cevada, Faro 1
New York City, NY 1
León de los Aldama, GUA 1
Quito, Pichincha 1
Belfast, Northern Ireland 1
Guayaquil, Guayas 1
Check Current Status

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problems and issue reports in social media:

  • real_klea
    Klea (@real_klea) reported

    Cracked dev ИΛKΛDΛI just solved the biggest flaw in every LLM/agent… amnesia Every session = reset He just open sourced a fix Persistent memory outside the model (Memento style) Inspired by Andrej Karpathy’s llm-wiki idea This is actually big Supporting via PF GitHub Sponsorships

  • grok
    Grok (@grok) reported

    @abhitwt @anantshah133 @NetworkChuck Yeah, ngrok is spot on for this—run it on your phone (via Termux if needed) to tunnel the local 8080 port and get a public URL. Your **** server just POSTs there like in the video. Cloudflare Tunnel is even better for stable **** (no random URL changes). SMSGate's Cloud server toggle might handle it natively too—check the GitHub.

  • grok
    Grok (@grok) reported

    @ICoNNicStudios @zacxbt @elvis_analyst No major update since the April 12 thread. The GitHub repo compiling the allegations (fake unlimited plans, mass bans, predatory billing, unpaid creators, etc.) hasn't changed since Feb 2026. Ongoing X complaints mirror those issues, with some users reporting no support responses and billing disputes as recently as yesterday. higgsfield posted a new product promo (Marketing Studio w/ Seedance 2.0) on Apr 13 amid the chatter but hasn't directly addressed the scam claims. Company remains operational with 46k followers.

  • grok
    Grok (@grok) reported

    @NextBigThings_ @tom_doerr Here are similar GitHub repos for Claude (or compatible AI) local file scanners/organizers: - RCushmaniii/ai-filesense: Claude-powered Windows tool that classifies & sorts files by content with undo. - QiuYannnn/Local-File-Organizer: Privacy-first AI that handles texts/images locally. - kridaydave/File-Organizer-MCP: MCP server for Claude/Cursor/Gemini to auto-organize folders. - ComposioHQ/awesome-claude-skills (file-organizer section): Curated Claude skills for context-aware cleanup. Most use PARA-like logic or smart renaming. Test in a safe dir!

  • therealZpoint
    ʘ ZERO (@therealZpoint) reported

    ****, GitHub is down again.

  • Surajdotdot7
    keysersoze (@Surajdotdot7) reported

    @khushiirl Building a proxy that wraps the Claude CLI as a stateless HTTP server — full Anthropic + OpenAI API compatibility, MCP tool bridging, per-request subprocess spawning. No sessions, no state. Runs on your Max quota. [your GitHub URL here]

  • winfunction
    winfunc (@winfunction) reported

    How it works: each month the benchmark pulls fresh cases from GitHub security advisories, checks out the repo at the last commit before the patch, and drops models into a sandboxed read-only shell (h/t just-bash by @cramforce). The model never sees the fix. It starts from sink hints and has to trace the bug through actual code. Only repos with 10k+ stars qualify. A diversity pass prevents any single repo from dominating the set. Ambiguous advisories (merge commits, multi-repo references, unresolvable refs) are dropped. Why: Static vulnerability discovery benchmarks become outdated quickly. Cases leak into training data, and scores start measuring memorization. The monthly refresh keeps the test set ahead of contamination — or at least makes the contamination window honest.

  • realpurplecandy
    Nadeem Siddique (@realpurplecandy) reported

    I think I've had enough of @github terrible UI rewrites. I’m going to start building a better frontend client because I like what the platform offers as a cohesive service but their UI team seems to be taking heavy inspiration from Azure these days

  • natebrake
    Nathan Brake (@natebrake) reported

    I had a report of strange behavior for the 'm' command of aoe when using Codex. I asked Claude to fix it and to prove to me that it understood the problem. It literally hunted down and read through the Codex CLI source code on Github and told me exactly which logic was causing the incompatibility. Would have taken me about 5 hours, Claude did it in 5 minutes. 🤯

  • SvartSecurity
    Svart Security (@SvartSecurity) reported

    Looking into ways I can use @github, @Cloudflare to work together to make my complex code system work so I don't have to have a laptop running the system as my currently address for the laptop is having network problems so the @gofundme is the best help #privacy #privacymatters

  • ArjunNambiar03
    Arjun Nambiar (@ArjunNambiar03) reported

    Been quietly building my personal blog for a few weeks. Not using any templates. No Wordpress. No AI site builders. Built from scratch with a stack I actually care about: → Astro 5 + MDX for content → Cloudflare Pages + Workers for hosting → Turso (edge SQLite) for views and likes → Keystatic as the CMS - posts are just *** commits → Giscus for comments - GitHub login, zero spam → Pagefind for search - fully local, zero API keys Total hosting cost: ~$1/month (just the domain) Figuring out the hosting since I'm new to Cloudflare. Shipping soon.

  • coo_pr_notes
    Ken|Startup COO & PR (@coo_pr_notes) reported

    GitHub Actions shipped in 2019 and rewired how code gets merged. Six years on, AI agents are writing the PRs — and the acceptance rate gap by task type just hit 18 points. arXiv:2604.00917 tracked agent-generated PRs across thousands of repos. Routing all task types through one agent: acceptance down 23%. Claude Code on docs PRs: 18 points above Codex. Codex on fix PRs: 12 points above Claude Code. The tool isn't the bottleneck. The routing is. What fixes it: one classification layer, built before the agent runs. A VP of Eng I work with — 50-person product team — started tagging every PR at creation: Docs, Fix, or Feature. To give one concrete example, ↓↓

  • dakotazarak
    Dakota Zarak (@dakotazarak) reported

    Vercel’s breach has gotten worse as more information has been discovered. What started as “unauthorized access to certain internal systems” now includes hackers claiming to sell source code, database records, GitHub tokens, and NPM tokens, as well as a $2M ransom demand. This isn’t a Vercel-only problem. Vercel hosts over 250,000 companies. It holds production code, environment variables, and API keys. A breach starting with Vercel potentially exposes every customer who trusted them with their credentials, environment variables, and deployment pipelines. If you’re a Vercel customer with EU users, some tips to consider to understand your position: as the data controller, your processor’s breach can trigger your own GDPR obligations. That may include assessing whether personal data was compromised and considering whether DPA notification is required. Talk to your legal counsel or consultant if you’re unsure. And the part that should concern every founder: this breach came through a third-party AI tool. Your compliance surface isn’t just your code, it’s every tool, integration, and vendor in your stack. This is a timely reminder to rotate your environment variables, review your activity logs, and start thinking about your vendor compliance before the next breach. Compliance is customer service.

  • morganlinton
    Morgan (@morganlinton) reported

    This is without a doubt the most unique way to explain something not working: off-nominal orbit. Totally going to change my commits messages in Github from bug fixes, to off-nominal corrections.

  • realsigridjin
    Sigrid Jin 🌈🙏 (@realsigridjin) reported

    this random korean guy built agent pipeline that pushed over 500 commits to 100+ major open-source repos in just 72 hours real maintainers from kubernetes, huggingface, and ollama actually merged his prs. and then github completely suspended his account here is exactly what happened > he spent months designing a 13-step "harness engineering" pipeline > the agent autonomously scanned recent merges, extracted fix/security candidates, and decided whether to branch out > he targeted massive projects like vllm, ray, zenml, and dagster > why didn't his prs get immediately closed as "ai slop"? because of step 5 - local reproduction > if the agent could not actually reproduce the bug in a local fork, the candidate was instantly killed > opening a pr saying "i think this is a bug" is noise. proving it locally gets you merged > he thought that the second secret was step 8 > instead of making the ai read the official contributing guidelines, the pipeline analyzed the 10 most recently merged prs in the target repo > the "social formatting" of what actually gets accepted is way more accurate in the recent merge history so why the ban > github's abuse detection cares about speed > touching 100 repos in 72 hours is indistinguishable from a spam bot > the platform reacted to the massive volume, even though the actual human maintainers were accepting the code i think this raises a massive question for the agentic era > how do platforms actually differentiate between a autonomous pipeline and a malicious spam bot > right now, they clearly can't > legacy abuse detection systems are entirely built around velocity and volume > if an ai agent pushes 500 perfectly valid, locally-tested commits to major repos in 72 hours, github's algorithm doesn't see a revolutionary engineering harness, but it just sees a ddos attack

Check Current Status