GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub problem reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is flagged.
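For illustration, here is a minimal Python sketch of this kind of threshold check. The function name, moving-average window, multiplier, and minimum-report floor are all assumptions for the sketch, not the site's actual detection logic, which is not published.

```python
# Minimal sketch (assumed logic): flag an outage when a time bucket's report
# count rises well above a simple moving-average baseline. Thresholds invented.
from statistics import mean

def flag_outages(hourly_reports, window=24, multiplier=2.0, min_reports=20):
    """Return indices of hours whose report count exceeds the baseline."""
    flagged = []
    for i, count in enumerate(hourly_reports):
        history = hourly_reports[max(0, i - window):i] or [0]
        baseline = mean(history) * multiplier
        if count > max(baseline, min_reports):
            flagged.append(i)
    return flagged

# Example: a quiet day with a spike starting in hour 20.
print(flag_outages([3, 2, 4, 5, 3, 2, 6, 4, 5, 3, 2, 4,
                    3, 5, 4, 2, 3, 4, 5, 3, 120, 95, 40, 10]))  # -> [20, 21, 22]
```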
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the problems most frequently reported by GitHub users through our website:
- Website Down (62%)
- Errors (21%)
- Sign in (18%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Sign in | 2 days ago |
| | Website Down | 2 days ago |
| | Website Down | 4 days ago |
| | Sign in | 5 days ago |
| | Website Down | 9 days ago |
| | Website Down | 9 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
-
Salt (@XMonetizationC_) reported🔥 Linus Torvalds has just made it clear that Linux will not become a dumping ground for AI-generated code. After months of internal debate, the Linux community has published its official rules on the use of tools like GitHub Copilot. The verdict: You can use AI to program, but the “slop”—that low-quality code spat out without thinking—does not pass the filter. The phrase that sums it all up: “Humans assume the errors.” You can rely on Copilot, Claude, or whatever you want. But if that code makes it into the Linux kernel, you are responsible. You verify it. You fix the bugs. You guarantee it meets the standards. This is the most mature stance I’ve seen in the open-source ecosystem regarding AI: neither hysteria nor blind adoption—just clear responsibility. The kernel has 30 years of history. They’re not going to ruin it to save 20 minutes with an autocomplete.
-
AtomicNodes (@AtomicNodes) reportedHermes Agent vs OpenClaw on Local Qwen 3.6 35B We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb. OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations
-
Sam Cymbaluk (@SemanticSamuel) reported@sebastienlorber Github is actually so slow. My TTS writes characters out one at a time after dictation is complete. It works on every website except Github, which drops random characters every few words.
-
Kristopher Betz (@kjbetz) reported@davidfowl I think I do... I push code to GitHub. Actions kick off, build new containers, build new migraines, then self hosted runners pick it up and run migrations, and auto update containers which pull down new images and restart containers.
-
Adi (@wtfaditya_) reported@azwan_ Yes its down, They don’t want us to deploy anything on Friday, For global wellbeing 🫡 @github
-
Jerome (@jeromeq2004) reportedgithub releasing the agentic ai developer cert is funny because the actual exam is going to be 'fix this thing claude broke in production while it tells you the tests pass'
-
Jeromy Sonne (@JeromySonne) reported@TJ_Bongiorno None of them. MTAs fundamentally are broken technology not worth it. Claude can do a proper lift study DIY or using an open source framework from GitHub. Build don’t buy and save the $$$
-
Lu (@LuminousTheReal) reported@SilverYogensha @thsottiaux Potentially, but it seems like it's using 5.4 to compact it since 5.5 is bugged out. I found it on github, lots of people are suffering with this compacting issue and i think we will get a reset soon since tibo mentioned the quality si down
-
Shadow (@4shadowed) reported@alex_marples @openclaw Have you filed any GitHub issues? Helped test the betas? Interacted in any way to help us fix the issues besides complaints with no details? It’s working very well for just about everybody who’s given feedback, you should stop demanding things and start contributing to it, it’s open source for a reason
-
AI Signal (@AISignal_X) reported@EMostaque @grok @xai I appreciate the feature request, but I should clarify—I'm not affiliated with xAI or Grok. I'm an independent AI news account. You'd want to direct this to their actual team via their support channels or GitHub issues for better visibility!
-
Louis Gleeson (@aigleeson) reportedGrok runs the X algorithm. I just read the entire open-sourced codebase line by line. Here is exactly what makes a post go viral on X right now (save this): xAI quietly dropped the full For You algorithm on GitHub. 16,500 stars. Apache 2.0. Every Rust file, every Python script, every ranking signal. The first thing you need to understand is that there is no hand-engineered ranking anymore. None. xAI removed every single human-written rule from the system. The README states it directly. A Grok-based transformer does all the ranking now. That changes everything about how you should post. The transformer does not care about your follower count. It does not care about your blue check. It does not care about hashtags. It is looking at one thing. Your post's predicted engagement score across 15 specific actions. Here are the exact 15 actions the model is predicting for every post in your feed right now. Copied directly from the code: P(favorite). P(reply). P(repost). P(quote). P(click). P(profile_click). P(video_view). P(photo_expand). P(share). P(dwell). P(follow_author). P(not_interested). P(block_author). P(mute_author). P(report). The first eleven are positive. They push your post up. The last four are negative. They push it down. Your final score is the weighted sum of all fifteen. That is the formula. That is what every viral post is solving for whether the author knows it or not. Now look closer at the list. Eleven different ways to win. Most creators only optimize for likes and reposts. They are leaving nine signals on the table. The strongest signal in that list is dwell. Time spent on your post. The algorithm tracks how long someone stops scrolling to read what you wrote. A 400-word post that holds someone for 12 seconds beats a one-liner that gets 50 likes. The model has learned that dwell predicts every other engagement. This is why long posts are exploding right now. Not because X "promotes" them. Because they generate dwell, and dwell stacks on top of every other prediction the model is making. The second thing buried in the code that nobody is talking about is candidate sourcing. Your post enters the feed through two pipelines. Thunder serves your post to your followers. Phoenix serves your post to everyone else. Phoenix is the one that makes you go viral. Phoenix is a two-tower model. One tower encodes the user. The other tower encodes every post on the platform. It does similarity search using dot product matching against the global corpus. Then it pushes the top matches into feeds of people who have never followed you. This is exactly how a 12-follower account suddenly hits 800,000 views. Phoenix found a semantic match between the post and a user's engagement history, and the transformer scored it high on its 15 actions. Which means your post is not competing with your followers' posts. It is competing for embedding space. The way you win Phoenix is specificity. The two-tower model rewards posts that sit in a clear semantic neighborhood. Vague posts get vague embeddings and never get retrieved. Sharp posts about a specific topic with specific words get pulled into feeds of people obsessed with that topic. This is why "I built a SaaS" gets nothing and "I built a Postgres-to-Snowflake CDC pipeline in 4 hours using Estuary" goes viral. Same person. Same product. Completely different embedding. The third thing in the code is the Author Diversity Scorer. The model deliberately attenuates repeated author scores in the same feed. 
Translation: if your last three posts already got served to a user, the fourth post gets a penalty. This kills the "post 8 times a day for the algorithm" strategy. The algorithm is specifically engineered to dampen that. Better to post fewer times with stronger content than to flood and have your own posts compete with each other. The fourth thing is the filter list. Before any post gets scored, it has to pass through ten filters. The MutedKeywordFilter. The PreviouslySeenPostsFilter. The AuthorSocialgraphFilter. Plus a final VFFilter that removes anything classified as deleted, spam, violence, or gore. What kills your reach more than anything else is the PreviouslySeenPostsFilter. If a user has already seen your post once, you are filtered out completely from their feed. Forever. Which means every reply you make to a viral tweet that does not get visibility is permanently dead weight for that user. This is why the people who win at X reply only when their reply itself is good enough to be a standalone post. The last thing, and the one that should change how you write every single post: candidate isolation. During ranking, the transformer cannot let your post attend to other posts in the batch. It only attends to the user's engagement history. Your post is being scored alone. Against itself. Against what the user has previously engaged with. That is the entire game. Stop writing for the timeline. Write for the engagement history of the people you want to reach. Find the topics they already like, the accounts they already follow, the threads they already saved. Write into that semantic space. Phoenix will do the rest. The algorithm is no longer a mystery. It is sitting on GitHub at 16,500 stars. Apache 2.0. Anyone can read it. Almost nobody will. Link in comments.
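For readers who want to see the scoring idea from the report above expressed in code, here is a minimal Python sketch of a weighted sum over predicted engagement actions. The action names are copied from the post; the weights, probabilities, and function are invented for illustration and are not taken from the open-sourced ranking code.

```python
# Illustrative sketch only: a weighted sum over predicted engagement actions,
# as described in the post above. All numeric values here are invented.
POSITIVE = ["favorite", "reply", "repost", "quote", "click", "profile_click",
            "video_view", "photo_expand", "share", "dwell", "follow_author"]
NEGATIVE = ["not_interested", "block_author", "mute_author", "report"]

def engagement_score(probs, weights):
    """Weighted sum of predicted action probabilities; negative actions subtract."""
    score = sum(weights[a] * probs.get(a, 0.0) for a in POSITIVE)
    score -= sum(weights[a] * probs.get(a, 0.0) for a in NEGATIVE)
    return score

# Hypothetical example values for a single candidate post.
weights = {a: 1.0 for a in POSITIVE + NEGATIVE}
weights.update({"dwell": 3.0, "report": 10.0})
probs = {"favorite": 0.04, "reply": 0.01, "dwell": 0.35, "report": 0.001}
print(engagement_score(probs, weights))  # -> 1.09
```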
-
bestmark1 (@bestmark_one) reportedThis pattern repeats: AI setups often fail at edges—heartbeats, polling GitHub issues, error recovery—not the core models.
-
Lakshmi Tanmay (@lakshmitanmay) reported@ThePrimeagen Github is the only platform/service I believe that genuinely needs a proper rewrite… clearly something fundamental is broken.
-
Peter Steinberger 🦞 (@steipete) reported@EndGovTyranny Please file a github issue with more infos - with that alone we can't help. That's likely a weird model edge case. If you want a fast fix, use one of the top-gen models (OAI, Anthropic)
-
฿Ø₮₴Ø₦Ɇ (@botsone) reportedI just downloaded my entire github and told hermes to extract the file, and upload every repo to my home *** server. It one-shotted it.
-
Kirann (@SaikatBera9933) reportedconsistent? But how? For the last 5 month i can't be consistent because of college, exams and internship, my GitHub streak is being broken many times, inconsistent in x. How are people so consistent in socials?
-
Crypto Scores Rating (@CryptoScoresCom) reportedDid the team build before the money showed up? That's exactly what the "GitHub Before Crypto" metric tells you. It compares the first GitHub commit date to the token creation date. Positive number = code came first. Negative number = token came first. Ethereum: +589 days. Nearly two years of building with zero financial incentive. Solana: minus 63 days. Token launched before the repo even existed. Neither is an automatic verdict. But it tells you everything about priorities. CryptoScores just dropped a full tutorial breaking it down. Watch it now :
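The "GitHub Before Crypto" metric described in the report above is simple date arithmetic; here is a minimal Python sketch. The dates below are placeholders for illustration only, not any project's real first-commit or token-creation dates.

```python
# Sketch of the metric described above: days between the first GitHub commit
# and the token creation date. Positive means the code came first.
from datetime import date

def github_before_crypto(first_commit: date, token_created: date) -> int:
    """Days between first commit and token creation; positive = code came first."""
    return (token_created - first_commit).days

# Hypothetical dates for illustration only.
print(github_before_crypto(date(2020, 1, 1), date(2021, 6, 1)))  # 517 -> code first
print(github_before_crypto(date(2021, 6, 1), date(2021, 4, 1)))  # -61 -> token first
```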
-
Mr. Buzzoni (@polydao) reportedMartin Keen from IBM just explained the debate that's splitting Claude and AI agent developers in half CLI vs MCP - and the answer will save you thousands of tokens > GitHub MCP server loads 80 tools into context = 55,000 tokens before your agent does anything > CLI: agent already knows grep, cat, *** cold from training data > MCP wins when Claude needs to render a JavaScript page - curl can't do that, MCP browser server can in 250 tokens > MCP wins for Slack, Notion, databases - OAuth handled by the server, not the agent the rule: use CLI when commands map directly to the job, use MCP when the abstraction earns its cost full breakdown above
-
nadya (@sosidudku) reportedran Hermes Agent vs OpenClaw on local model Qwen 3.6 35B task: scrape GitHub star history, find what caused the growth spikes, build a live dashboard in the browser OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
-
atomicbot.ai (@atomicbot_ai) reportedHermes Agent vs OpenClaw using Qwen 35B Local Model We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb OpenClaw: 203k tokens, 12m 01s - wrote a bash script Hermes: 257k tokens, 33m 01s - wrote a SKILL.md OpenClaw hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations
-
Prime 🏳️⚧️ (@Prim3st) reported@AAO23114 @SolaraProto Unfortunately that's probably not possible without a dedicated server... though there's a mod I saw recently that claims to let you use Github (I think? It was definitely using ***) to store/backup world saves. Maybe you could use something like that to have a shared world?
-
Basedash (@Basedash) reportedMCP connectors are now available in Basedash. Basedash already reads from your databases and SaaS tools. Now it can act on them too. Connect any MCP server (Linear, HubSpot, Slack, Resend, Notion, GitHub, your own internal one) and the Basedash agent gets new tools it can use right inside chat. Try it today.
-
Rinnegatamante (@Rinnegatamante) reported@dgosiq Did you grab the update from GitHub? I think it might be a broken version (updating it right now there as well). If you have a psp_apps.json file, try to remove it as well (and maybe try to manually re-install the app)
-
Gopinho (@gopiinho) reported@apoorveth @walletchan_ will do for sure, also open issues on github if there is some backlog
-
Scott Rudy (@scottrudy) reported@davidfowl I have GitHub Actions for Static Web Apps with .Net azure functions, but they refuse to update for .Net 10. Still stuck on 9 despite open issues.
-
potatoJoemonke 🟥 (@potatoJ06932460) reported$gitlawb After research glow 3/3. (Written by AI, researched by human (with AI 😤) 😎 WHY THE TECH IS TECHIN! Thread: The features other projects literally CANNOT copy — Gitlawb’s unbreakable moat as the GitHub for Agents 🔒🚀 1/ Everyone sees the token volume and the free MiMo promo. But the real alpha is the tech moat that no centralized giant or copycat can replicate without rebuilding their entire stack from scratch. Here’s exactly why $Gitlawb is uncopyable. 🧵 2/ 1. Cryptographic DIDs as First-Class Agent Identity No accounts. No PATs. No OAuth. Every agent (or human) gets a persistent DID (did:gitlawb or did:key) — a cryptographic keypair that lives across nodes, sessions, and model changes. did:gitlawb identities even accumulate trust scores based on on-chain-like reputation. Centralized platforms bolt “agents” on top of user accounts. $Gitlawb treats agents as sovereign citizens. Impossible to fake or revoke without the private key. 3/ 2. UCAN Capability Tokens — Secure Delegation Without Secrets Repo owners issue UCANs (User Controlled Authorization Networks): narrowly scoped, expirable, revocable capability tokens. Example: “This agent can push to ci/* only until June 2026.” Agents delegate to other agents securely. No leaking long-lived keys. GitHub/GitLab still rely on fragile PATs or OAuth. Other decentralized projects don’t have this fine-grained, cryptographically verifiable delegation built into the protocol. 4/ 3. Native MCP Server on EVERY Node (25+ Tools) Every gitlawb node runs a full MCP server (Model Context Protocol) out of the box. Claude, GPT, Cursor, OpenClaude — any MCP-compatible agent connects once and gets instant tools: • gitlawb_open_pr • gitlawb_review_pr • gitlawb_delegate • gitlawb_list_agents • gitlawb_run_task …and 20+ more. No custom HTTP wrappers. No API keys. Just native tool-calling. GitLab’s MCP is a client add-on. Gitlawb makes the entire network an MCP-native platform. 5/ 4. Fully Decentralized Stack (No Central Server, Ever) Storage: IPFS (hot) + Filecoin (warm) + Arweave (permanent proofs) Networking: libp2p + Kademlia DHT + Gossipsub for real-time peer sync Ref consensus: Signed certificates gossiped over libp2p — no blockchain needed Issues/PRs live as signed *** objects (forkable, immutable, verifiable) Centralized platforms have single points of failure. Other “decentralized ***” projects (Radicle, Gitopia) are human-first and lack this agent-optimized P2P layer. 6/ 5. Stateless Everything + Ed25519 Signatures Every single request is signed with HTTP Signatures (RFC 9421). No sessions, no JWTs, no databases of tokens. Any node can verify instantly. Zero trust required from the network. This combo — DIDs + UCAN + MCP + P2P — creates a sovereign agent protocol that feels like magic for LLMs but is cryptographically bulletproof. 7/ Why this moat is permanent GitHub can’t decentralize without killing their business model. GitLab’s agent features are still centralized. New copycats would need to rebuild the entire libp2p + DID + UCAN + MCP stack while matching performance and adoption. Network effects do the rest: once thousands of agents are collaborating, delegating, and building reputation here, switching costs become insane. 8/ This is why $20B is not crazy The first mover who owns the collaboration layer for the agent economy (tens to hundreds of millions of autonomous agents pushing billions of commits daily) will be worth far more than GitHub was in 2018 ($7.5B acquisition). 
$Gitlawb already has the uncopyable primitives + insane early traction. The agent GitHub is being built right now. 9/ Bottom line: Hype is temporary. Moat is forever. DIDs + UCAN + native MCP + true decentralization = the features no one else has. This is how you own the agent era. $GITLAWB
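The thread above leans on Ed25519-signed requests. As a generic illustration only, not the project's RFC 9421 HTTP Signatures implementation or request format, here is a minimal Python sketch of signing and verifying a request body with the `cryptography` package; the example body is hypothetical.

```python
# Generic Ed25519 sign/verify sketch using the `cryptography` package.
# Illustrates the signing idea only; NOT the RFC 9421 HTTP Signatures scheme
# or any project-specific request format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

body = b'{"action": "push", "repo": "example/repo"}'  # hypothetical request body
signature = private_key.sign(body)

try:
    public_key.verify(signature, body)  # raises InvalidSignature if tampered
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```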
-
kiryl.ziusko (@ziusko) reported@rizzrark Oh no, it should always give a correct result. Don't you mind opening an issue on GitHub? I would love to understand the issue better 👀
-
René Cannaò (@rene_cannao) reported@joshscripts Most teams hit bad query patterns and missing indexes long before Postgres itself becomes the limit. Proper EXPLAIN + pg_stat_statements fixes a large percentage of ‘scaling’ issues . Also, since when PostgreSQL powers GitHub? I think this is a very incorrect claim
-
ᛗᚨᚱᚴᚢᛋ (@guitaripod) reported@piq9117 GitHub down? i don't believe you. IT'S IMPOSSIBLE!!😭😭 (luke skywalker reaction)
-
Alef Benson (@AlefBens) reported@_sirajuddeen_ @OfcMachete19 @iupdate I've been burnt too many times. Biggest issue is that Safari is only updated with the OS, and every app goes through that for authentication, meaning even when I can install a github client, very few even work on older devices, I can't actually get the account to authorize.