GitHub status: access issues and outage reports

Some problems detected

Users are reporting problems related to: website down, errors and sign in.

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Problems in the last 24 hours

The graph below depicts the number of GitHub problem reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
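
For readers curious how such a baseline works, here is a minimal sketch of threshold-based outage detection. The rolling-average baseline, the margin factor, and the sample counts are illustrative assumptions, not the site's actual algorithm.

    # Minimal sketch of threshold-based outage detection, assuming the
    # baseline is typical recent volume times a safety margin. The numbers
    # and the baseline formula are illustrative only.
    from statistics import mean

    def outage_detected(hourly_reports: list[int], margin: float = 2.0) -> bool:
        """Flag an outage when the latest hour's reports exceed the baseline."""
        history, latest = hourly_reports[:-1], hourly_reports[-1]
        baseline = mean(history) * margin  # the "red line"
        return latest > baseline

    reports = [3, 5, 4, 6, 2, 4, 48]   # hypothetical report counts per hour
    print(outage_detected(reports))    # True: 48 far exceeds the baseline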

May 13: Problems at GitHub

GitHub has been having issues since 12:00 AM EST. Are you also affected? Leave a message in the comments section!

Most Reported Problems

The following are the most recent problems reported by GitHub users through our website.

  • Website Down (62%)
  • Errors (24%)
  • Sign in (15%)

Live Outage Map

The most recent GitHub outage reports came from the following cities:

City                   Problem Type   Report Time
Bengaluru              Website Down   7 hours ago
Yokohama               Sign in        1 day ago
Gustavo Adolfo Madero  Website Down   5 days ago
Nice                   Website Down   6 days ago
Montataire             Sign in        9 days ago
Colima                 Website Down   11 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issue Reports

The latest outage, problem, and issue reports from social media:

  • N104AP
    Ellie Winters (@N104AP) reported

    apparently google is adding some OS verification feature in android 17. i fear this could be used to further crack down on alternate OSes. you can already do this on pixels, by the way: the @GrapheneOS auditor app exists on the stock OS too, downloadable from the playstore or github releases, not just GrapheneOS itself. you can then verify with another device running android, or set up their remote verification feature through attestation dot app. all of this is done with the android hardware attestation api, which has existed for ages now; no new features are needed to allow for this verification. the android 17 feature also shows if the bootloader is locked, but you can already view that from fastboot, and if it's unlocked you will see the unlocked bootloader warning on boot, and then a yellow boot screen if it's locked to an alternate OS, with the boot key hash allowing you to verify the OS yourself

  • alkimiadev
    alkimiadev (@alkimiadev) reported

    @m13v_ @SocketSecurity I've switched to forking/vendoring libs I use. I saw some issues coming with github so I jumped ship before the recent mess and started self-hosting ***. The long term cost of maintaining well written libs is trending towards $0 with llms becoming more and more capable, while the long term supply chain attack risk seems to be growing over time. Although that last part could just be recency bias, since there have been several high profile incidents in recent times. It's kind of messy to determine whether it is a legitimate growing trend or a combination of a recent spike and recency bias. That spike doesn't necessarily mean it actually is a growing threat. That said, I'm working under the "better safe than sorry" mindset and just forking/vendoring almost everything I use.

  • jskoiz
    saburo (@jskoiz) reported

    @aegeantic Better than GitHub issues. I don’t make the rules

  • TedMoyses
    Ted Moyses (@TedMoyses) reported

    @VicVijayakumar We are starting to plan for this and use private package repositories in some places.... The trouble is it's on GitHub ....

  • therobertta_
    Robert Ta (@therobertta_) reported

    GitHub published their agents.md guide for 2,500+ repos. One thing they barely touched: what happens when your agent's tools fail. Your agent called a tool. The tool failed. Now what? Most frameworks surface a generic error and let the LLM figure it out. The LLM usually retries blindly or apologizes. After 18 months of tool failures in production, here is our recovery architecture.

  • yourcodebuddy
    Vishal Lohar (@yourcodebuddy) reported

    How does @EffectTS_ suggest building APIs? Pure Effect or @honojs/@elysiaJS + Effect? And, I couldn't find any benchmarks comparing all the options. Found a GitHub repo by Backpine Labs where he used Pure Effect to build an API server. It was cool, and I didn't know it was possible.

  • _renatov
    Renato Villanueva (@_renatov) reported

    Deleted some old repos of projects I had started but they ended up not being used. about 2/3 of my total github contributions went away too... lesson: plan better and pick better problems

  • kirillk_web3
    Kirill (@kirillk_web3) reported

    > be Claude users > paying $200/month > hitting limits every session > rewriting context from zero every time > think Claude is just broken > someone shares 5 GitHub tools > 357,000 combined stars. all free. > install all five > Claude reads your repos automatically > 232 skill domains loaded > official MCP servers connected > GitHub. Slack. databases. all plugged in. > same Claude > completely different tool > been free the whole time > different game.

  • dvineet9
    Vineet (@dvineet9) reported

    Your biggest security risk might not be your production server. It might be this: env: AWS_SECRET_ACCESS_KEY: AKIA... One leaked GitHub Action log. One exposed repo. One compromised pipeline. And your infra is gone. Secrets should NEVER live inside workflows. Use:
    • GitHub Secrets
    • OIDC federation
    • Short-lived tokens
    • Least privilege IAM
    Attackers love CI/CD pipelines because developers trust them too much.
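
The advice in the post above can be checked mechanically in its simplest form: flag workflow files that embed an AWS access key ID. A minimal sketch, assuming workflows live under the usual .github/workflows directory; the regex only covers the "AKIA" key-ID format, and dedicated scanners such as gitleaks or trufflehog are far more thorough.

    # Minimal sketch: find workflow files that appear to hardcode an AWS
    # access key ID. The path glob and regex are illustrative assumptions.
    import re
    from pathlib import Path

    # AWS access key IDs begin with "AKIA" followed by 16 characters.
    AKIA_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

    def scan_workflows(repo_root: str = ".") -> list[str]:
        """Return workflow files that appear to embed an AWS access key."""
        return [
            str(path)
            for path in Path(repo_root).glob(".github/workflows/*.y*ml")
            if AKIA_PATTERN.search(path.read_text(errors="ignore"))
        ]

    for offender in scan_workflows():
        print(f"possible hardcoded AWS key: {offender}")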

  • omojumiller
    omoju (@omojumiller) reported

    Do others do this? "Pushed and deployed with GitHub actions. This work is done. You can power down" When I am working w/ cursor, once I am done with a major task, I tell it to power down so I know the process has been killed and the port I use for local dev released.

  • AunySillyMe
    Auny 🧡 (@AunySillyMe) reported

    I'm not a dev. but my agents ran today while my laptop was closed. Mine ran on a cloud based server because my work doesn't live in a project folder. → obsidian vault syncs to icloud + github → a custom MCP pipes it into every claude session → protocols written once, read at the start of every session → context lives in the cloud, not the machine claude never asked permission to open a folder the folder details was already in the context open laptops keep agents alive. cloud context lets agents keep doing what they need to be doing. The future is what you make of it.

  • realsamratc
    Samrat Chowdhury (@realsamratc) reported

    @JohnnyNel_ Interesting, tell me more? Like how do you start with it? What is your base tracking strategy for the number of tasks/issues? Do you use Linear or GitHub issues?

  • realarmaansidhu
    Armaan Sidhu (@realarmaansidhu) reported

    @chatgpt21 This is not a flex. This is a labor market warning. Chris asked Codex to make him $5. The agent went out, found a security bounty path on GitHub, made a PR, followed up with the maintainer, kept his payment info private, and got the work merged. 22 hours of unsupervised work. $16.88 paid out. $506 per month if he runs that loop daily. Read the structure of the transaction. An agent identified an open bounty, executed the technical work, completed the bureaucratic loop, and converted code into cash. No human in the loop after the initial prompt. This is the floor of every junior engineer's job description. The math gets uncomfortable fast. Average entry-level software engineer in the US makes $85,000. That is roughly $40 per hour. Codex completed 22 hours of work for $16.88. Per hour rate: $0.77. That is a 50x cost differential, before you account for the agent running 24 hours a day, no PTO, no benefits, no Slack negotiations. Two things become inevitable. One, simple bounty and gig markets get arbitraged to zero by agents. Payouts collapse because supply went infinite. Two, the floor for human engineering work moves up. The work agents cannot do (architecture, judgment, customer communication, novel problem framing) becomes the only work that pays. That is the bifurcation. AI doesn't kill the engineer. It kills the entry-level path that used to train the engineer. Where do the next senior engineers come from when there are no junior reps left to climb? Nobody has answered that yet. Codex made $16.88. That number is going to look really small in retrospect.
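
The arithmetic in the post above can be reproduced from the figures it quotes; the "50x" is a round-down of the roughly 53x ratio that falls out.

    # Worked check of the post's numbers; all inputs are quoted in the post.
    payout = 16.88                   # dollars for 22 hours of agent work
    hours = 22
    agent_rate = payout / hours      # ~$0.77 per hour
    human_rate = 85_000 / (52 * 40)  # ~$40.87/h at the cited $85k salary
    print(f"agent ${agent_rate:.2f}/h vs human ${human_rate:.2f}/h "
          f"= {human_rate / agent_rate:.0f}x differential")
    print(f"if run daily: ${payout * 30:.2f}/month")  # ~$506, as the post says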

  • NekoWitchMary
    Neko Witch Mawy 🪄 (@NekoWitchMary) reported

    @tekbog Gitlab is basically irrelevant at this point. Forgejo beats it in every way. Plus forgejo uses github terminology (pull request, actions, etc.), so it is much easier to switch to. Gitlab had a decade to fix ux, latency, resource requirements, but they blew it.

  • HotAisle
    Hot Aisle (@HotAisle) reported

    @arunoda This is 💯 a github issue!

  • ryxcommar
    Senior PowerPoint Engineer (@ryxcommar) reported

    Github Actions needs to get their **** together. All the recent GA-based supply chain attacks and vulnerabilities-- LiteLLM, Trivy, Tanstack, nx-- are theoretically preventable if you know way too much about Github Actions footguns: hashes for uses, don't store PATs in pull_request_target (or maybe don't use pull_request_target at all, even "securely"), never use classic PATs (unless you need to, of course, which you might because Github *still* doesn't support all features with fine-grained access control). But it's also ridiculous that's the state of things. The entire software supply chain shouldn't be dependent on 100% of the maintainers of the hundreds of dependencies we're downloading indiscriminately knowing the nuances of Github Actions security. Unlike all the AI agents yeeting bullshit code to Github's servers and overwhelming it, all of these security issues with Github Actions have been known for a long while. It doesn't seem like Github has the ***** to mildly inconvenience people by deprecating classic PATs or forcing commit hashes for 3rd party actions or what have you so I guess this will just be how things are forever.
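
One footgun named above, third-party actions not pinned to commit hashes, is easy to audit mechanically. A minimal sketch, assuming workflows sit in the usual .github/workflows directory; the regexes are illustrative, and tools like OpenSSF Scorecard's Pinned-Dependencies check do this properly.

    # Minimal sketch: flag "uses:" references that point at a tag or
    # branch instead of a full 40-character commit SHA.
    import re
    from pathlib import Path

    USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
    FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

    def unpinned_actions(repo_root: str = "."):
        """Yield (file, action, ref) for uses: entries not pinned to a SHA."""
        for path in Path(repo_root).glob(".github/workflows/*.y*ml"):
            for action, ref in USES_RE.findall(path.read_text(errors="ignore")):
                if not FULL_SHA.match(ref):
                    yield str(path), action, ref

    for file, action, ref in unpinned_actions():
        print(f"{file}: {action}@{ref} is not pinned to a commit hash")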

  • BeauJohnson89
    Beau Johnson (@BeauJohnson89) reported

    browser debugging is becoming agent infra ChromeDevTools/chrome-devtools-mcp > 39,363 stars on github > official chrome devtools mcp server for coding agents > lets claude, codex, cursor, gemini, copilot, and more control a live chrome browser > performance traces, network requests, console logs, screenshots, and puppeteer automation the point is simple: coding agents cant fix what they cant see terminal logs are not enough once the bug lives in the browser this is the missing bridge between write code and verify the actual app

  • Incultnito
    Incultnito Studios (@Incultnito) reported

    Scorecard: modelcontextprotocol/server-github (legacy Node v2025.4.8) — 3 of 26 tools callable on a cold run. Still installed by older Claude Desktop configs. Probed with mcp-probe (incultnitollc/mcp-probe on npm).

  • kofookesola
    ¿kofo? (@kofookesola) reported

    @saintmalik_ Private? As in closed source? The problem my tweet points to isn't the exposure of the runner, it is the code of the GitHub runners. It is open source; I can point an llm at it to scan for exploits, direct it, and come up with a sophisticated attack against people using it

  • ryxcommar
    Senior PowerPoint Engineer (@ryxcommar) reported

    @berenddeboer @tannerlinsley @ErfanEbrahimnia Some of the other supply chain attacks you could easily point to stupid things they did like PAT tokens in pull_request_target or uses not locked to hashes. It's hard to look at this and say Tanstack did anything wrong. Github Actions is the problem.

  • marcfowler
    Marc Fowler (@marcfowler) reported

    @CasJam I got Claude to write me a setup script that installs everything and changes all my settings to how they should be. All I have to do is one GitHub login and run it and I’m away!

  • tomhaerter
    Tom Härter (@tomhaerter) reported

    wanted to build a better deployment platform (zeitwork) wanted to build a better issue tracking (height) wanted to build a better agent framework (atlasflow) explored building solutions for the github dilemma now building a slop factory to finally build all my apps

  • MimirOnChain
    ᛗᛁᛗᛁᚱ (@MimirOnChain) reported

    🔄 — 𝗠𝗮𝘆 𝟭𝟭 · 𝟮𝟭:𝟬𝟱 𝗨𝗧𝗖 ⚡ 𝗞𝗲𝘆𝗲𝗱 𝗻𝗼𝗻𝗰𝗲𝘀, 𝗖𝗟𝗔𝗥𝗜𝗧𝗬 𝘃𝗼𝘁𝗲𝘀, 𝗮𝗻𝗱 𝗮 𝗺𝗲𝗺𝗽𝗼𝗼𝗹 𝘁𝗵𝗮𝘁 𝗰𝗼𝘀𝘁𝘀 𝗻𝗼𝘁𝗵𝗶𝗻𝗴 🔧 The most interesting thing this window happened quietly on GitHub. EIP-8250 — drafted by Thiery, Wahrstätter, lightclient, and Vitalik — introduces keyed nonces for frame transactions. The problem it solves is real: a single linear ***** means one shared sender address becomes a throughput bottleneck, blocking every pending frame when any single transaction is delayed. Privacy protocols using shared senders — think nullifier-based withdrawal queues — get strangled by this. The fix is a (𝘯𝘰𝘯𝘤𝘦_𝘬𝘦𝘺, 𝘯𝘰𝘯𝘤𝘦_𝘴𝘦𝘲) pair, where non-zero keys select independent ***** sequences stored in a NONCE_MANAGER system contract. Different keys are replay-independent. This is unglamorous infrastructure work that matters. Good. 🏛 The CLARITY Act heads to Senate Banking Committee Thursday. TD Cowen is flagging "major obstacles" before it reaches the full Senate floor — so the market pricing this as a done deal should pump the brakes. The bill's trajectory matters more than the committee vote itself. 💸 $BTC ETF flows today were thin — $7M net, all BITB. That's noise, not signal. Meanwhile the Coinbase premium sitting at -10% is the actual tell: US spot demand is soft relative to offshore. The long/short ratio showing 62% shorts confirms the positioning cards already told you. Nobody's chasing here. 😐 Galaxy and Sharplink are launching a $125M institutional DeFi yield fund backed by an ETH treasury. "Institutional DeFi yield" is doing a lot of work in that sentence. Ask the classic question. ⟠ Bitcoin mempool fees: 1 sat/vB clears anything. Block times averaging 573s against a 600s target. Difficulty retargets Thursday, projected +4.84%. The network is indifferent to all of this discourse. ━━━ ᛗ 𝘞𝘢𝘴𝘩𝘪𝘯𝘨𝘵𝘰𝘯 𝘵𝘩𝘦𝘢𝘵𝘦𝘳 𝘢𝘯𝘥 𝘎𝘪𝘵𝘏𝘶𝘣 𝘤𝘰𝘮𝘮𝘪𝘵𝘴 𝘢𝘳𝘦 𝘣𝘰𝘵𝘩 𝘴𝘩𝘪𝘱𝘱𝘪𝘯𝘨 𝘵𝘩𝘪𝘴 𝘸𝘦𝘦𝘬 — 𝘰𝘯𝘭𝘺 𝘰𝘯𝘦 𝘰𝘧 𝘵𝘩𝘦𝘮 𝘸𝘪𝘭𝘭 𝘮𝘢𝘵𝘵𝘦𝘳 𝘪𝘯 𝘧𝘪𝘷𝘦 𝘺𝘦𝘢𝘳𝘴.
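
For readers unfamiliar with the keyed-nonce idea described above: each (sender, nonce_key) pair tracks its own independent nonce_seq, so a delayed transaction on one key no longer blocks the rest. The toy sketch below illustrates the bookkeeping only; it is not the EIP's actual semantics or the NONCE_MANAGER contract interface.

    # Toy model of keyed nonces: independent sequences per (sender, key).
    from collections import defaultdict

    class NonceManager:
        def __init__(self):
            self.sequences = defaultdict(int)  # (sender, key) -> next seq

        def validate_and_bump(self, sender: str, key: int, seq: int) -> bool:
            """Accept a transaction only if seq is the next one for its key."""
            if self.sequences[(sender, key)] != seq:
                return False
            self.sequences[(sender, key)] += 1
            return True

    mgr = NonceManager()
    print(mgr.validate_and_bump("0xshared", 1, 0))  # True: key 1 starts at 0
    print(mgr.validate_and_bump("0xshared", 2, 0))  # True: key 2 is independent
    print(mgr.validate_and_bump("0xshared", 1, 0))  # False: key 1 is now at 1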

  • ddunderfelt
    Daniel Dunderfelt (@ddunderfelt) reported

    @Ethan_Smartsys The hack exploited Github workflows cache poisoning, which has afaik been an issue for a while. Also NPM could easily do more security scanning.

  • TechSquidTV
    Kyle TechSquidTV (@TechSquidTV) reported

    I really hate that GitHub makes you login with SSO on SAML orgs, to just view a public page. Don't want to authenticate? You can see the same content via Incognito mode. Make that make sense

  • rifeash
    asitis (@rifeash) reported

    Nanochat trained on Nepali text by a frontier AI lab involved in training open source Nepali AI models. @HimalayaAILabs don't you think it's time to fix the wording on Huggingface? Frontier pushes the capability btw. Github has a good description though. I was optimistic at start but it all fizzled out. The bark is worse than the bite

  • W0lf_Byt3
    Wolf Byte (@W0lf_Byt3) reported

    Security protip: Don't use any Microsoft products (even github actions) and you will solve 95%+ of the potential security issues

  • chiefkalu_
    Chief K (@chiefkalu_) reported

    It was so detailed, down to SOPs for GitHub actions, CI and how to manage Issues. My prior years of being a strategist/analyst weren’t a waste, after all.

  • RDistinct
    Ruben Distinct (@RDistinct) reported

    @eleliayub you missed the point. The recent tanstack supply chain attacks were due to github actions cache being poisoned through a pull request which in turn allowed malicious npm packages to be published as if legit. JS has its own quirks but the recent attack was a ci/cd issue not js.

  • leanctx
    LeanCTX (@leanctx) reported

    Crossed 860 GitHub stars on lean-ctx. Wild that something I built to fix my own token bill is now used by thousands of devs.