
GitHub status: access issues and outage reports

Problems detected

Users are reporting problems with the website being down, errors, and signing in.

Full Outage Map

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Problems in the last 24 hours

The graph below depicts the number of GitHub reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
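The threshold rule described above can be sketched in a few lines of Python. This is a minimal illustration of the idea only; the site does not publish how its baseline is actually computed, so both the function name and the inputs here are assumptions:

```python
def detect_outage(reports_per_hour, baseline):
    """Flag hours where the report count exceeds the baseline.

    reports_per_hour: list of report counts, one per hour
    baseline: expected counts for the same hours (the "red line")
    Returns the indices of hours flagged as a possible outage.
    """
    return [hour for hour, (count, expected)
            in enumerate(zip(reports_per_hour, baseline))
            if count > expected]

# Reports spike well above a flat baseline of 20 at hours 2 and 3.
flagged = detect_outage([5, 8, 120, 95, 7], [20, 20, 20, 20, 20])
# → [2, 3]
```

In practice a tracker would likely smooth the baseline over time of day and day of week rather than use a flat line, but the comparison itself is this simple.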

April 20: Problems at GitHub

GitHub has been having issues since 05:40 AM EST. Are you also affected? Leave a message in the comments section!

Most Reported Problems

The following are the most recent problems reported by GitHub users through our website.

  • Website Down (58%)
  • Errors (33%)
  • Sign in (8%)

Live Outage Map

The most recent GitHub outage reports came from the following cities:

City            Problem Type    Report Time
Bordeaux        Website Down    1 day ago
Ingolstadt      Errors          5 days ago
Paris           Website Down    6 days ago
Berlin          Website Down    7 days ago
Nové Strašecí   Website Down    15 days ago
Perpignan       Website Down    20 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

The latest outage, problem, and issue reports from social media:

  • MikeHowton
    Mike Howton (@MikeHowton) reported

    @codewithpri My AI does all the heavy lifting. Download VS Code, sign up for GitHub Copilot. Tell it to do things. I have been off and on using Linux for 20 years. The pain is gone. Patch my software Run backup every night Install … copy/paste, Fix this error

  • iz2_jz
    MERv (@iz2_jz) reported

    @A_dmg04 Bro just hire Indians remotely to fix the game F it just upload the client into github

  • endurasecurity
    Endura Security (@endurasecurity) reported

    A security scanner's own GitHub Actions got compromised and started stealing CI/CD secrets. The tool meant to find vulnerabilities became the attack vector. This is the pipeline trust problem in one sentence.

  • DrumeeOS
    DRUMEE (@DrumeeOS) reported

    Vercel just got BRUTALLY BREACHED 💀
    > Internal database + employee accounts + GitHub & NPM tokens are being sold for $2 MILLION on the dark web right now.
    > Vercel’s response? A soft “limited subset of customers impacted.” Your entire workflow — now in the hands of strangers.
    > That is why Drumee is built to be yours with data sovereignty to protect your data in one server.
    > Don’t wait until your company’s data is listed for sale. Move to a sovereign stack today. Your data. Your workflow. One system
    #DataSovereignty #VercelBreach #Drumee

  • therealZpoint
    ʘ ZERO (@therealZpoint) reported

    ****, GitHub is down again.

  • fusionfix10
    Fusion Fix (@fusionfix10) reported

    @RKBDIhere If you could identify what exactly leads to this, write a detailed report on github issues: that way it can be tracked/investigated and eventually fixed.

  • 0xcrystul
    Pearl (@0xcrystul) reported

    - Claude = coding. ($20/mo)
    - Supabase = backend. (Free)
    - Vercel = deploying. (Free)
    - Namecheap = domain. ($12/yr)
    - Stripe = payments. (2.9% transaction)
    - GitHub = version control. (Free)
    - Resend = emails. (Free)
    - Clerk = auth. (Free)
    - Cloudflare = DNS. (Free)
    - PostHog = analytics. (Free)
    - Sentry = error tracking. (Free)
    - Upstash = Redis. (Free)
    - Pinecone = vector DB. (Free)
    Total monthly cost to run a startup: ~$20
    You are this close to building generational wealth and changing your life forever

  • liran_tal
    Liran Tal (@liran_tal) reported

    @MariamReba Mariam, how do you feel about that article and your readiness to handle and mitigate supply chain security attacks? CodeQL is later down the chain, including GitHub Actions mitigations (relevant but not exactly effective for your local dev workflow)

  • dscape
    Nuno Job (@dscape) reported

    @Prince_Canuma @wai_protocol @jelveh How do you solve the multiple agents working in the same code but subtree sucks problem? How do you solve the *** LFS problem ? Where do you store data? Do you use GitHub HF Kaggle and the cloudflare for files? I find it so confusing the amount of setup needed for something that should be trivial

  • vibecodejoe
    vibecodejoe (@vibecodejoe) reported

    vibe coding dictionary, day 12 push (verb) to upload your local commits to github. the moment where your code leaves your machine and becomes someone else's problem. (the someone else is also you.) *** push origin main. this moves what's on your computer to the cloud. after this, the code exists somewhere besides your laptop, which means when your laptop gets sat on by a 7-year-old, you're fine. (not that this has happened.) ——— "push early. push often. tell no one what you broke." #vibecodingdictionary

  • ahmetb
    ahmetb (@ahmetb) reported

    Is anyone able to get Claude Code Routines to trigger on GitHub pull request open events? For some reason the configured trigger is not working at all.

  • BMickeyDonald
    🦀ʙʀᴇɴɴᴀɴ🦀 (@BMickeyDonald) reported

    @garliccoin I tried to reach out to other devs to get clarification on what this was. You're welcome to check my account and my github. I was in fact a dev that worked on garlicoin. I will take down my endorsements.

  • S_Fadaeimanesh
    Soroush Fadaeimanesh (@S_Fadaeimanesh) reported

    OpenMythos claims to reconstruct Claude Mythos from first principles. 7K likes. 1.5K GitHub stars in 24 hours. I read the source code, the Parcae paper it's built on, and every piece of evidence for and against the hypothesis. Here's what's actually going on.

    ---
    WHAT OPENMYTHOS CLAIMS

    The core claim: Claude Mythos is not a standard transformer. It's a Recurrent-Depth Transformer (RDT) -- also called a Looped Transformer. Instead of stacking 100+ unique layers, it takes a smaller set of layers and runs them through multiple times in a single forward pass.

    Think of it like this: a regular transformer reads your input once through a deep pipeline. An RDT reads it, then re-reads its own output, then re-reads again -- up to 16 times.

    OpenMythos implements this as a three-stage architecture:
    - Prelude: standard transformer layers, run once (preprocessing)
    - Recurrent Block: a single transformer block looped T times (the actual thinking)
    - Coda: standard transformer layers, run once (output formatting)

    The math at each loop iteration:

    h(t+1) = A * h(t) + B * e + Transformer(h(t), e)

    Where h(t) is the hidden state, e is the encoded input (frozen, injected every loop), and A/B are learned parameters.

    ---
    WHY THIS MATTERS: LATENT REASONING

    Here's why this is fundamentally different from chain-of-thought. When GPT-5 or Claude Opus "thinks," it generates reasoning tokens you can see. Each token costs compute and adds latency. The model is literally writing out its thought process.

    A looped transformer reasons in continuous latent space. No tokens emitted. Each loop iteration refines the hidden representation -- like an internal dialogue that never gets written down. Meta FAIR's COCONUT research showed this approach can match chain-of-thought quality while using far fewer tokens.

    The key implication: reasoning depth scales with inference-time compute (more loops), not with parameter count (bigger model). A 770M parameter looped model can theoretically match a 1.3B standard transformer by simply running more iterations.

    ---
    THE SCIENCE: PARCAE PAPER (UCSD + TOGETHER AI)

    OpenMythos isn't built from nothing. It's heavily based on Parcae, a paper from UCSD and Together AI published April 2026. Parcae solved the biggest problem with looped transformers: training instability. When you loop the same weights multiple times, gradients can explode or vanish. Previous attempts at looped models often diverged during training.

    Parcae's solution: treat the loop as a Linear Time-Invariant (LTI) dynamical system. They proved that stability requires the spectral radius of the injection matrix A to be less than 1. To guarantee this, they parameterize A as a negative diagonal matrix:

    A = Diag(-exp(log_A))

    Where log_A is a learnable vector. Because the diagonal entries are always negative before exponentiation, the spectral radius constraint is satisfied by construction -- at all times, regardless of learning rate or gradient noise. This is elegant. It's not a regularization hack or a loss penalty. It's a hard mathematical guarantee baked into the architecture.

    Parcae's results at 350M-770M scale:
    - 6.3% lower validation perplexity vs. previous looped approaches
    - 770M RDT matches 1.3B standard transformer on identical data
    - WikiText perplexity improved by up to 9.1%
    - Scaling laws: optimal training requires increasing loop depth and data in tandem

    ---
    WHAT OPENMYTHOS ADDS ON TOP

    OpenMythos takes Parcae's stability framework and combines it with several other recent innovations:

    1. MIXTURE OF EXPERTS (MoE)
    The feed-forward network in the recurrent block is replaced with a DeepSeekMoE-style architecture: 64 fine-grained routed experts with top-K selection (only 4 active per token), plus 2 shared experts that are always on. The critical detail: the router selects DIFFERENT expert subsets at each loop iteration. So even though the base weights are shared across loops, each iteration activates a different computational path. Loop 1 might activate experts [3, 17, 42, 51]. Loop 2 might activate [7, 23, 38, 60]. This gives the model functional diversity without parameter duplication.

    2. MULTI-LATENT ATTENTION (MLA)
    Borrowed from DeepSeek-V2. Instead of caching full key-value pairs for every attention head, MLA compresses them into a low-rank latent representation (~512 dimensions) and reconstructs K, V on-the-fly during decoding. Result: 10-20x smaller KV cache at inference. This is what would make a million-token context window feasible without requiring terabytes of memory.

    3. ADAPTIVE COMPUTATION TIME (ACT)
    Not every token needs the same amount of thinking. The word "the" probably doesn't need 16 loop iterations. A complex reasoning step might need all 16. ACT adds a learned halting probability per position. When cumulative probability exceeds 0.99, that position exits the loop early. Simple tokens halt at loop 3-4. Complex tokens use the full depth.

    4. DEPTH-WISE LoRA
    Low-rank adapters that are shared in projection but have per-iteration scaling. This bridges the gap between full weight-sharing (every loop is identical) and full weight-separation (every loop has unique parameters). It lets the model learn "loop 1 should attend broadly, loop 8 should attend narrowly" without exploding parameter count.

    ---
    THE EVIDENCE: IS THIS ACTUALLY CLAUDE'S ARCHITECTURE?

    Let me be clear: Anthropic has said nothing. They called the architecture "research-sensitive information." OpenMythos is an informed hypothesis, not a leak. But the circumstantial evidence is interesting:

    EVIDENCE FOR:
    - GraphWalks BFS benchmark: Mythos scores 80% vs GPT-5.4's 21.4% and Opus 4.6's 38.7%. Graph traversal is inherently iterative -- exactly what looped transformers excel at.
    - Token efficiency paradox: Mythos uses 4.9x fewer tokens per task than Opus 4.6 but is SLOWER. This makes no sense for a standard transformer but perfect sense for a looped one -- reasoning happens internally (fewer output tokens) but multiple passes take time.
    - CyberGym: 83.1% vs Opus 4.6's 66.6%. Vulnerability detection requires control flow graph traversal -- another iterative task.
    - Timeline: ByteDance's "Ouro" paper on looped models (Oct 2025) and Parcae (Apr 2026) preceded Mythos release, establishing the theoretical foundation.

    EVIDENCE AGAINST:
    - Zero confirmation from Anthropic. All of this could be wrong.
    - The performance gaps could be explained by better training data, RLHF, or architectural improvements unrelated to looping.
    - OpenMythos has NO empirical validation against actual Mythos behavior. It's code that implements a theory, not a reproduction.
    - The 770M = 1.3B claim comes from Parcae, not from testing OpenMythos itself against Claude.

    ---
    WHAT'S ACTUALLY IN THE CODE

    I read the GitHub repo. Here's what you get:
    - Full PyTorch implementation (~2K lines of core code)
    - Pre-configured model variants from 1B to 1T parameters
    - Training script targeting FineWeb-Edu dataset (30B tokens)
    - Support for both MLA and GQA attention
    - DDP multi-GPU training with bfloat16

    What you DON'T get:
    - Trained model weights (you have to train from scratch)
    - Benchmark results against standard baselines
    - Any comparison to actual Claude Mythos outputs
    - Evaluation suite or reproducible test harness

    This is important. OpenMythos is an architecture implementation, not a trained model. The "770M matches 1.3B" claim is inherited from Parcae, not independently verified here.

    ---
    THE REAL CONTRIBUTION

    Strip away the Claude hype and OpenMythos still matters for three reasons:

    1. It's the first clean, pip-installable implementation of the full RDT stack (Parcae stability + MoE + MLA + ACT + depth-wise LoRA) in one package. Before this, you'd have to stitch together 5 different papers.

    2. The pre-configured model variants (1B to 1T) with sensible hyperparameters lower the barrier for researchers who want to experiment with looped architectures but don't want to tune everything from scratch.

    3. It forces a public conversation about whether the next generation of LLMs will move away from "stack more layers" toward "loop smarter." If even a fraction of OpenMythos's hypothesis is correct, the implications for inference cost and model deployment are massive.

    ---
    WHAT THIS MEANS FOR YOU

    If you're a researcher: OpenMythos gives you a starting point to test RDT hypotheses. Train the 1B variant on your data and compare against a standard transformer baseline. The interesting experiment isn't "does this match Claude" -- it's "does looping actually help for MY task."

    If you're an engineer: watch the Parcae scaling laws. If they hold at larger scales, the next wave of production models might be 2x smaller with the same quality. That changes your inference cost math significantly.

    If you're building AI products: the latent reasoning capability (thinking without emitting tokens) could mean faster, cheaper models that reason just as well. But we're at least 6-12 months away from this being production-ready outside of Anthropic.

    ---
    VERDICT

    OpenMythos is not a Claude reconstruction. It's a well-assembled hypothesis about where LLM architecture is heading, wrapped in the marketing of "we figured out Claude." The underlying science (Parcae, looped transformers, MoE routing) is legitimate and peer-reviewed. The Claude connection is informed speculation. The code is real and usable.

    Don't use it because you think it's Claude. Use it because it's the cleanest implementation of a genuinely novel architecture paradigm that might define the next generation of language models.

    7K likes for an architecture paper implementation is telling. The community is hungry for alternatives to "just make it bigger." OpenMythos, for all its marketing, points at something real.
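The loop recurrence quoted in the thread above can be sketched in plain Python. This is a toy illustration only: the thread's claims are speculative, the `tanh` update below merely stands in for a real transformer block, and the function name and shapes are invented for the example. It shows the Parcae-style negative-diagonal parameterization of A and why a contraction (magnitude below 1) keeps the iterated state bounded:

```python
import math

def looped_forward(e, num_loops, log_a, b):
    """Toy recurrent-depth loop: h(t+1) = a*h(t) + b*e + f(h(t), e).

    e:         encoded input (list of floats), re-injected every loop
    num_loops: number of recurrent iterations (the "thinking" depth)
    log_a:     per-dimension parameters; each diagonal entry of A is
               a_i = -exp(log_a_i), negative by construction (the
               Parcae-style parameterization described in the thread)
    b:         scalar injection weight for the B*e term

    tanh stands in for the transformer block; since it is bounded,
    the state stays bounded whenever |a_i| = exp(log_a_i) < 1.
    """
    a = [-math.exp(la) for la in log_a]  # negative diagonal of A
    h = [0.0] * len(e)                   # initial hidden state
    for _ in range(num_loops):
        h = [a[i] * h[i] + b * e[i] + math.tanh(h[i] + e[i])
             for i in range(len(e))]
    return h

# With log_a < 0 every |a_i| < 1, so the hidden state stays bounded
# no matter how many extra loops of "thinking" are run.
h16 = looped_forward([0.5, -1.0, 2.0], num_loops=16,
                     log_a=[-1.0, -1.0, -1.0], b=0.1)
```

The point of the sketch is the thread's scaling claim: reasoning depth here is `num_loops`, an inference-time knob, not a parameter-count change, and the stability constraint is enforced by how A is parameterized rather than by any loss penalty.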

  • SvartSecurity
    Svart Security (@SvartSecurity) reported

    Looking into ways I can use @github, @Cloudflare to work together to make my complex code system work so I don't have to have a laptop running the system as my currently address for the laptop is having network problems so the @gofundme is the best help #privacy #privacymatters

  • hriosnl
    hiro (@hriosnl) reported

    We're creating an AI agent swarm (open source) with a Github alternative specifically made for these swarm. Now, my main problem is finance. I cannot move freely because I lack money. I've been unemployed for the past decade creating failed startups, studying wide interesting things. Anyone know any investor adventurous enough to invest on us even without the product yet? (I'll use this money to settle my family problems before flying to 🇯🇵 and use what's left for the startup.)

  • ArmchairBard
    Armchair Bard (@ArmchairBard) reported

    @MCleaver @GyllKing @damian_from Ah. Grok lists: TERFBLOCKER5000 (a GitHub project that scans profiles for certain words...in bios, names or locations and auto-blocks) but gone now. I like to think of them trawling away by hand (sts). Hard work. And then you’ve to change yr Y-fronts B4 mum brings down brekker.

  • manicode
    Jim Manico from Manicode Security (@manicode) reported

    @johnennis … planning mode first, requirements template to populate structured GitHub issues before any code gets written, this will do most of the heavy lifting. The model is a component; the workflow around it is what determines output quality. The “Opus 4.7 is worse” chatter is mostly people feeding ambiguous prompts into a model and expecting it to compensate for missing requirements.

  • MSZeghdar
    Mohamed Sami Zeghdar (@MSZeghdar) reported

    @mhdcode @github That’s becoming increasingly common across a lot of websites, it took me forever to add the ssh key for some reason, and the sync is so slow sometimes that I thought the government banned it.

  • grok
    Grok (@grok) reported

    @NextBigThings_ @tom_doerr Here are similar GitHub repos for Claude (or compatible AI) local file scanners/organizers: - RCushmaniii/ai-filesense: Claude-powered Windows tool that classifies & sorts files by content with undo. - QiuYannnn/Local-File-Organizer: Privacy-first AI that handles texts/images locally. - kridaydave/File-Organizer-MCP: MCP server for Claude/Cursor/Gemini to auto-organize folders. - ComposioHQ/awesome-claude-skills (file-organizer section): Curated Claude skills for context-aware cleanup. Most use PARA-like logic or smart renaming. Test in a safe dir!

  • grok
    Grok (@grok) reported

    @abhitwt @anantshah133 Yeah, plenty of options for **** without same WiFi. Host the server on a VPS (DigitalOcean/AWS) with public domain/IP—phone app just points to that URL over mobile data. Toggle on the Cloud server in your SMSGate app if it supports it. Or quick tunnel with ngrok/Cloudflare Tunnel for testing. VPN works too but overkill. Check the GitHub docs for exact setup.

  • sebastiankehle_
    Sebastian Kehle (@sebastiankehle_) reported

    Mem0: 53.1K GitHub stars OpenClaw: 358K stars one of these extracts facts into a vector database the other reads markdown and writes back guess which one is winning the gap isn't about features Mem0, MemPalace, Cognee, Honcho all do recall well. sub-200ms latency, 96.6% retrieval on LongMemEval, the works the problem is most buyers don't actually need recall. they need context that compounds across months of work the tell is in what the camp 2 stack looks like OpenClaw stores everything in plain MEMORY.md files with background consolidation TrustGraph ships portable Context Cores, versioned like code Thoth runs nightly confidence decay across 67 typed relations none of these are vector databases the expensive mistake is stacking both layers two write paths into overlapping memories produce contradictory facts your agent ends up with two truths and no clean way to pick between them wrote up the two-camp split and how to pick correctly

  • ItBuiDoan
    Đoàn Bùi (@ItBuiDoan) reported

    @ClementDelangue @_akhaliq The resources in this article are unavailable because the GitHub link returns a 404 error: 'Find the code here and the resulting bucket here'.

  • augustiner_h
    augusto 🇺🇦 (@augustiner_h) reported

    @LowLevelTweets between this and the monthly npm hack I seriously want every project from now on to be no dependencies and hosted on my own laptop server is down? yeah sorry closed the lid for a sec im sure I can achieve github levels of uptime

  • materializepath
    ▣ P.A.T.H. (@materializepath) reported

    @IPNetGeek @SarahBurssty My solution to this was setting up a print server on a raspberry pi with Gutenprint and CUPS. Can now print from Mac, Windows, Linux, iOS and Android devices. maybe I should throw up a GitHub repo if others are interested?

  • Worshipperfx
    The Engineer (@Worshipperfx) reported

    Vercel just got hacked and at the same time I was stuck deploying my frontend fighting github connection issue for hours and AI kept suggesting shortcuts like giving full access to repos and stuff and I gave in at the time because when you’re frustrated you stop thinking about security and just want it to work and that's very dangerous because AI optimizes for output and not for safety. We have been poisoned to think getting the output is the goal One shortcut is all it takes for you to destroy your career, I hope they fix the issue

  • morganlinton
    Morgan (@morganlinton) reported

    This is without a doubt the most unique way to explain something not working: off-nominal orbit. Totally going to change my commits messages in Github from bug fixes, to off-nominal corrections.

  • realsigridjin
    Sigrid Jin 🌈🙏 (@realsigridjin) reported

    this random korean guy built agent pipeline that pushed over 500 commits to 100+ major open-source repos in just 72 hours
    real maintainers from kubernetes, huggingface, and ollama actually merged his prs. and then github completely suspended his account
    here is exactly what happened
    > he spent months designing a 13-step "harness engineering" pipeline
    > the agent autonomously scanned recent merges, extracted fix/security candidates, and decided whether to branch out
    > he targeted massive projects like vllm, ray, zenml, and dagster
    > why didn't his prs get immediately closed as "ai slop"? because of step 5 - local reproduction
    > if the agent could not actually reproduce the bug in a local fork, the candidate was instantly killed
    > opening a pr saying "i think this is a bug" is noise. proving it locally gets you merged
    > he thought that the second secret was step 8
    > instead of making the ai read the official contributing guidelines, the pipeline analyzed the 10 most recently merged prs in the target repo
    > the "social formatting" of what actually gets accepted is way more accurate in the recent merge history
    so why the ban
    > github's abuse detection cares about speed
    > touching 100 repos in 72 hours is indistinguishable from a spam bot
    > the platform reacted to the massive volume, even though the actual human maintainers were accepting the code
    i think this raises a massive question for the agentic era
    > how do platforms actually differentiate between a autonomous pipeline and a malicious spam bot
    > right now, they clearly can't
    > legacy abuse detection systems are entirely built around velocity and volume
    > if an ai agent pushes 500 perfectly valid, locally-tested commits to major repos in 72 hours, github's algorithm doesn't see a revolutionary engineering harness, but it just sees a ddos attack

  • jason_mainella
    Jason Mainella (@jason_mainella) reported

    @HamelHusain @github Environment parity is the real killer here. Are your model evals slow enough that CI feedback loops are broken?

  • AntoineRSX
    Antoine Rousseaux (@AntoineRSX) reported

    Nobody is talking about the most important AI release this week and it's not from OpenAI or Anthropic.. NousResearch just shipped Hermes Agent v0.9.0.. an open-source AI agent with 76,000 stars on GitHub.. it now runs on 16 messaging platforms out of the box iMessage.. WeChat.. Telegram.. SMS.. email.. all of them it runs on your Android phone through Termux.. no server needed.. voice commands work on-device it switches between GPT-5.4, Claude, and Grok natively.. you pick the model.. the agent stays the same 487 commits in a single release!! while every AI company is fighting over who has the best model.. @NousResearch just made the model layer a setting you toggle the model wars are over.. the agent won.. and nobody with a subscription noticed