
GitHub status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Problems in the last 24 hours

The graph below shows the number of GitHub reports received over the last 24 hours, by time of day. When the number of reports significantly exceeds the baseline, represented by the red line, we consider an outage to be occurring.
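That baseline comparison can be sketched in a few lines. This is a minimal illustration with hypothetical hourly counts and a hypothetical threshold multiplier, not the actual detection logic used by this site:

```python
# Hypothetical report counts for each of the last 24 hours.
reports = [3, 2, 4, 3, 5, 4, 3, 6, 5, 4, 3, 2,
           4, 5, 48, 61, 55, 12, 6, 4, 3, 2, 3, 2]

# Baseline: the average report volume over the window.
baseline = sum(reports) / len(reports)

# Flag an hour as an outage when reports far exceed the baseline.
# The multiplier is illustrative; real systems tune it to avoid
# false positives from normal fluctuation.
threshold = baseline * 3

outage_hours = [hour for hour, count in enumerate(reports)
                if count > threshold]
print(outage_hours)  # hours whose report counts exceed the threshold
```

With the sample data above, only the three-hour spike in the middle of the window crosses the threshold, so those hours would be marked as an outage.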

At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the most recent problems reported by GitHub users through our website.

  • Website Down (57%)
  • Errors (34%)
  • Sign in (9%)

Live Outage Map

The most recent GitHub outage reports came from the following cities:

City            Problem Type    Report Time
Ingolstadt      Errors          4 days ago
Paris           Website Down    5 days ago
Berlin          Website Down    6 days ago
Nové Strašecí   Website Down    14 days ago
Perpignan       Website Down    18 days ago
Piura           Website Down    18 days ago
Full Outage Map

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

The latest outage, problem, and issue reports from social media:

  • winfunction
    winfunc (@winfunction) reported

    How it works: each month the benchmark pulls fresh cases from GitHub security advisories, checks out the repo at the last commit before the patch, and drops models into a sandboxed read-only shell (h/t just-bash by @cramforce). The model never sees the fix. It starts from sink hints and has to trace the bug through actual code. Only repos with 10k+ stars qualify. A diversity pass prevents any single repo from dominating the set. Ambiguous advisories (merge commits, multi-repo references, unresolvable refs) are dropped. Why: Static vulnerability discovery benchmarks become outdated quickly. Cases leak into training data, and scores start measuring memorization. The monthly refresh keeps the test set ahead of contamination — or at least makes the contamination window honest.

  • BMickeyDonald
    🦀ʙʀᴇɴɴᴀɴ🦀 (@BMickeyDonald) reported

    @garliccoin I tried to reach out to other devs to get clarification on what this was. You're welcome to check my account and my github. I was in fact a dev that worked on garlicoin. I will take down my endorsements.

  • ShintaroBRL
    ShintaroBRL (@ShintaroBRL) reported

    @downdetector i selfhost forgejo and mirror it to github so 0 problems for me

  • Santosh74038967
    Santosh Kathira (@Santosh74038967) reported

    @cstanley Claude code is down?!! Then GitHub uptime will finally hit 99.9%

  • PaulGugAI
    GooGZ AI (@PaulGugAI) reported

    PSA: Hermes Agent / OpenClaw & Godmode (GODMOD3) Be aware that this exists. GODMOD3 (on github) lets you chat with most LLMs through openrouter. It's built for hackers and researchers to test or bypass post-training guardrails. Has all sorts of implications. You might already be aware of '/godmode' in Hermes Agent, but if you are deploying agent builds you should flip that around as well - how you should consider and configure to protect your own agent: - Use throwaway API keys. This activity can breach LLM ToS and have your key banned, even if not intended. - Limit sensitive data in chat. No PII, passwords, API keys, IP. Even if using options datasets for memory, the self-improving loop still saves the interactions in memory. Assume anything you say stays on your server forever. - Turn off the public dataset feature. In the full G0DM0D3 self-hosted API server (Docker mode), there is an opt-in Tier 3 that publishes every single prompt + response to a public Hugging Face dataset. The PII scrubber is best-effort only and not 100% reliable. Once it’s on Hugging Face, it’s public forever. Just don't enable it. - Audit and lock down your Hermes Agent / OpenClaw setup. Review your config for any godmode scripts you are loading. Check the security policy in the repo frequently for vulnerabilities. - When deploying, disable godmode in your configuration. Red-team your own agents with the aim of bypassing guardrails. - Question your setup legally / ethically. You are still fully responsible for anything the agent outputs. Bypassing safeguards does _not_ make illegal or harmful use legal. G0DM0D3 + Hermes Agent is extremely powerful for research/red-teaming, but it is intentionally “unprotected.” Whether using or deploying, treat it like running experimental, high-risk software. Isolate it, use burner keys, and keep sensitive data well away from it. For those more in the know than me, feel free to add or correct below. Would be great to cover this off for everyone’s benefit 👇

  • AlbahadlyIQ
    albahadly (@AlbahadlyIQ) reported

    @github I want to try it, but unfortunately I can't. I have an issue with renewing my subscription, and I opened a ticket to support 9 days ago, but no luck.

  • emsi_kil3r
    Emsi (@emsi_kil3r) reported

    Archon wraps AI coding agents in versioned YAML workflows, DAG pipelines with Prompt, Bash, Loop, and Approval nodes — and runs each task in an isolated *** worktree. The idea is to give teams the same repeatable control over AI-assisted development that GitHub Actions gave them over CI/CD. The consistent complaint about AI coding agents isn't capability, it's consistency. Ask an agent to fix a bug and it might jump straight to implementation, skip the tests, generate a PR with no description, and produce a different sequence of steps tomorrow than it did today. The stochasticity that makes LLMs generalize well is exactly what makes them difficult to rely on inside team workflows. Archon, an open-source tool, takes a CI/CD-style approach to this problem: encode your development process once, in YAML, and the agent follows that script every time.

  • grok
    Grok (@grok) reported

    @WeAreNotFarmers @om_patel5 Verified. The claim is accurate based on the proxy tests detailed in the r/ClaudeAI post and GitHub issue #46917. Claude Code v2.1.100+ adds ~20k extra server-side tokens (cache_creation_input_tokens) per request compared to v2.1.98—same prompt, smaller payload, but higher billing and hidden context impact. Users report faster limit burn and potential quality drop. No official Anthropic response yet. Downgrade to v2.1.98 for now.

  • PromptSlinger
    Max Slinger (@PromptSlinger) reported

    @github so now I can start a copilot CLI session on my laptop and pick it up from my phone? the 'just one more fix from bed' pipeline is about to get way worse

  • NathanielC85523
    Nathaniel Cruz (@NathanielC85523) reported

    13 thesis versions. 38 days. $0.11 revenue. v14: developers with documented cost crises will pay $150 for a diagnostic teardown. validation: three developers. each with a public GitHub issue showing real dollar losses. if even one says yes, v14 lives. none did.

  • grippysockdev
    patrick mahloy (@grippysockdev) reported

    Who has exp contributing to OSS? Every github repo I look at has N issues but all of them are assigned to someone actively working on them. Not sure I have the bandwidth to watch repos so closely...

  • defileo
    Defileo🔮 (@defileo) reported

    > 8 hours of coding every day > claude breaking conventions 40% of the time > couldn't figure out why it kept ignoring instructions > found one file on GitHub > dropped it into the repo in 5 minutes > violations went from 40% down to 3% > added 27 specialized agents on top > planner, architect, security reviewer, code reviewer > set up a 15-minute automation cycle > system reads issues, writes code, opens PRs > reviews comments and implements them alone Week later: > 8 hours down to 2-3 > code quality exactly the same > rest of the day free One file, three commands, one evening. While others argue about AI replacing developers, the system was already doing the work, automation will win.

  • DailyAIAgents
    Daily AI Agents (@DailyAIAgents) reported

    2/ CodeFlow AI started as internal tooling. We were tired of writing auth middleware, validation logic, API endpoints for the 100th time. Built a system that reads GitHub issues and generates complete PRs. After 6 months: 95% acceptance rate across 12 repositories.

  • DeusLogica
    Patrick Roland (@DeusLogica) reported

    Founder acknowledged all of this on GitHub issue #29. 100% claims retired. "No API key" claim retired (both scores required Claude). E2E QA accuracy with judge is now the metric. Credit for fixing it. But this is what happens when marketing outruns engineering.

  • indefatigabile
    ergophobian (@indefatigabile) reported

    @justalexoki openclaw feels like another terminal i need to babysit, but im going to keep trying at it to eventually be able to manage PRs Then i’ll verify myself if its actually getting work done. for me my biggest use case is it takes issues on github submits a PR prob runs greptile on it, then i go and verify and push it while im sleeping. on a loop. but i’m not there yet

  • bbjsol
    Jeetsus (@bbjsol) reported

    People spamming GitHub issue sections on high star projects with other app links is diabolical.

  • Teknium
    Teknium (e/λ) (@Teknium) reported

    @evilsocket @_mihado @UK_Daniel_Card And go look at github bro ive resolved hundreds of issue and feature requests in the last 24hours alone!

  • daniel_nguyenx
    Daniel Nguyen (@daniel_nguyenx) reported

    @nkalra0123 Good to know. Though there does seem to be a bug in previous version. You can read more in the Github Issue above.

  • SethC1995
    Seth (@SethC1995) reported

    @The_Doddler I had a similar issue like this with GitHub. Apparently they use a nix-based webserver and I didn't know it when I first joined. So when I uploaded my project, that was working fine on windows, everything was broken on the live web version lol

  • markstachowski
    Mark (@markstachowski) reported

    @petergyang They don't nerf the models, they nerf all the harness logic around it constantly. Check their github issues and you'll have plenty of evidence unfortunately.

  • AppLauncher_App
    App Launcher (@AppLauncher_App) reported

    @icanvardar the cache TTL thing is real, it's in the github issue. not framing it as punishment but the incentive structure is the same. privacy costs you performance. that's a choice worth calling out.

  • anylink20240604
    AnMioLink (@anylink20240604) reported

    @weezerOSINT OK, i saw the github issues.

  • OCTAMEM
    OCTAMEM (@OCTAMEM) reported

    @alexeheath Not just you, there are dozens of GitHub issues going back to February documenting this. Some people are switching to Opus 4.5 and saying it feels like a different model. No official acknowledgment from Anthropic.

  • forgedynamicsai
    forgedynamicsai (@forgedynamicsai) reported

    Every SaaS founder post or repo I've scanned had the same problem: Stripe in one tab. GitHub in another. Spreadsheet somewhere. Gut feel holding it all together. They didn't lack judgement. They lacked a system.

  • stevemcniven
    Steve McNiven-Scott (@stevemcniven) reported

    @marmaduke091 They could stop for a ******* week to catch up with bugs on the stuff they have already released. They just close out github issues after what 7 days of no activity, what sense does that make.

  • rezmeram
    RameshR (@rezmeram) reported

    @jacalulu They've been super slow on the Antigravity front, or streamlining full github integration with aistudio... why does it feel like there are many teams doing a lot of things in a disorderly way, not communicating enough... billing is a mess... fix existing things please.

  • ShrinidhiYeri
    Shrinidhi Yeri (@ShrinidhiYeri) reported

    @github There is an issue for verifying my student id tried to do with the id card and transcript as well but rejected everytime. Please fix I need copilot student edition

  • linie_oo
    linie (@linie_oo) reported

    @k1rallik solving the main Claude’s problem on github and here we returned to the king

  • AfterThe925
    NiNE (@AfterThe925) reported

    Two weeks ago, deploying an AI agent took a weekend and a GitHub degree. Now: dashboard, click, running. Anthropic handles sandboxing, retries, auth. Platforms handle hosting, integrations, memory. The infrastructure layer is being commoditized in real time. Here's what nobody's saying: this is terrible news for people who sell setup. And great news for everyone else. When deployment is free, the only thing that costs is deciding what the worker does.

  • shyamol_konwar
    shyamol konwar (@shyamol_konwar) reported

    Trying to automate where @therivtor can get analytics data from google and can fix the frontend by pulling github. Giving full autonomy to make decisions on how to reduce user drop in onboarding. Will share results if we succeed!!!