GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a platform that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own collaboration features.
Problems in the last 24 hours
The graph below shows the number of GitHub problem reports received over the last 24 hours, by time of day. An outage is declared when the number of reports significantly exceeds the baseline, represented by the red line.
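As a rough illustration of how such a report-volume threshold can be implemented, the sketch below flags an outage when the latest hour's reports rise well above the recent baseline. The three-sigma cutoff and the sample counts are assumptions for illustration, not the site's actual detection parameters.

```python
from statistics import mean, stdev

def detect_outage(hourly_reports: list[int], sigma: float = 3.0) -> bool:
    """Flag an outage when the latest hour's report count exceeds the
    baseline (mean + sigma * stddev of the earlier hours). The 3-sigma
    rule is an assumption for illustration, not the site's real one."""
    history, latest = hourly_reports[:-1], hourly_reports[-1]
    baseline = mean(history) + sigma * stdev(history)
    return latest > baseline

# A quiet day with a spike in the final hour trips the detector.
reports = [4, 6, 5, 7, 5, 6, 4, 5, 48]
print(detect_outage(reports))  # True
```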
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (62%)
- Errors (21%)
- Sign in (18%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Sign in | 2 days ago |
| | Website Down | 2 days ago |
| | Website Down | 4 days ago |
| | Sign in | 5 days ago |
| | Website Down | 9 days ago |
| | Website Down | 9 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- Frooxius @ MFF - frooxius.bsky.social (@Frooxius) reported: @MrRocketFX @ResoniteApp @unity They should be compressed on Resonite side? I'm not quite sure if I understand, it might be better to make GitHub issue for the request at the repo.
- atomicbot.ai (@atomicbot_ai) reported: Hermes Agent vs OpenClaw using Qwen 35B Local Model We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb OpenClaw: 203k tokens, 12m 01s - wrote a bash script Hermes: 257k tokens, 33m 01s - wrote a SKILL.md OpenClaw hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations
- Lazi (@algoritmii) reported: @github bro ffs fix your ******* issues stop pushing features
- Benitto J D (@BenittoJD) reported: Github actions are down again
- Moe Sbaiti (@MoeSbaiti) reported: WHAT THE FRAMING GETS WRONG Most posts today are saying "Grok added a new feature." That framing is backwards. What happened is that an agent framework with over 110,000 GitHub stars, the number 1 ranking on OpenRouter, and an NVIDIA endorsement just got native access to one of the most capable models available through a simple OAuth login. xAI made the announcement. Not Nous Research. Hermes Agent also self-improves. When it solves a hard problem, it writes a skill file for that solution and saves it. The longer it runs on your specific workflows, the more capable it becomes for your specific context. That is not how people are talking about this today. The memory layer and the self-improvement loop are the actual product. Grok is the engine.
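The self-improvement loop described in the report above (solve a task, then persist a reusable skill file) could look roughly like the sketch below. The `skills/` directory, the Markdown layout, and the `save_skill` name are all hypothetical; the report does not specify Hermes Agent's actual file format.

```python
from pathlib import Path

def save_skill(name: str, steps: list[str], root: Path = Path("skills")) -> Path:
    """Persist a solved task as a reusable skill file.

    Illustrative only: the path, filename, and Markdown layout are
    invented here, not taken from Hermes Agent's implementation."""
    root.mkdir(exist_ok=True)
    path = root / f"{name}.md"
    body = f"# Skill: {name}\n" + "\n".join(f"- {step}" for step in steps)
    path.write_text(body, encoding="utf-8")
    return path

# Example: record the steps that worked for a scraping task.
save_skill("github-star-history", [
    "Query the GitHub API with pagination",
    "Fall back to a secondary search engine on rate limits",
])
```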
- isaac (@isaac_yeang) reported: jk just lazy error message handling another bajillion dollars to github
- Abinash (@implabinash) reported: @ThePrimeagen I never faced a single GitHub downtime issue in my workflow. Oh! Sorry, I have been using GitLab for a year now.
- 闇ぼの@𝕏級ピンク珍獣.rs (@kenbono13) reported: @mcsetty @bee_fumo Most software that directly uses GitHub for the download, build, installation (NOT using the AUR) aren't installed under root in the first place. It's installed under $HOME so it wasn't an issue. AUR is the exception since it uses the Arch build system.
- Mind Prison (@M1ndPrison) reported: @GlenBradley Yes, I have gone deep into it in the past as well. Haven't had time to look at the current update, but the problem has been that the code on github is mostly irrelevant. The important bits are all the parts that aren't public. There is no way to know how the ML algo is ultimately weighting all the parameters. Most importantly, I've catalogued many accounts posting exactly the same content with orders of magnitude differences in reach. The thing that would make this platform useable would be to fully eliminate all account based weighting and go to solely post based weighting. The reach of your post should be only on the merits of what you posted versus who you are.
- Abu Olumi 🪶 (@Olumi441) reported: There's also a public feed. BaseLens fetches Base GitHub releases and analyzes them with AI automatically. Clean upgrade cards. No jargon. No noise. Anyone can read it, no login needed.
- Louis Gleeson (@aigleeson) reported: Grok runs the X algorithm. I just read the entire open-sourced codebase line by line. Here is exactly what makes a post go viral on X right now (save this): xAI quietly dropped the full For You algorithm on GitHub. 16,500 stars. Apache 2.0. Every Rust file, every Python script, every ranking signal. The first thing you need to understand is that there is no hand-engineered ranking anymore. None. xAI removed every single human-written rule from the system. The README states it directly. A Grok-based transformer does all the ranking now. That changes everything about how you should post. The transformer does not care about your follower count. It does not care about your blue check. It does not care about hashtags. It is looking at one thing. Your post's predicted engagement score across 15 specific actions. Here are the exact 15 actions the model is predicting for every post in your feed right now. Copied directly from the code: P(favorite). P(reply). P(repost). P(quote). P(click). P(profile_click). P(video_view). P(photo_expand). P(share). P(dwell). P(follow_author). P(not_interested). P(block_author). P(mute_author). P(report). The first eleven are positive. They push your post up. The last four are negative. They push it down. Your final score is the weighted sum of all fifteen. That is the formula. That is what every viral post is solving for whether the author knows it or not. Now look closer at the list. Eleven different ways to win. Most creators only optimize for likes and reposts. They are leaving nine signals on the table. The strongest signal in that list is dwell. Time spent on your post. The algorithm tracks how long someone stops scrolling to read what you wrote. A 400-word post that holds someone for 12 seconds beats a one-liner that gets 50 likes. The model has learned that dwell predicts every other engagement. This is why long posts are exploding right now. Not because X "promotes" them. Because they generate dwell, and dwell stacks on top of every other prediction the model is making. The second thing buried in the code that nobody is talking about is candidate sourcing. Your post enters the feed through two pipelines. Thunder serves your post to your followers. Phoenix serves your post to everyone else. Phoenix is the one that makes you go viral. Phoenix is a two-tower model. One tower encodes the user. The other tower encodes every post on the platform. It does similarity search using dot product matching against the global corpus. Then it pushes the top matches into feeds of people who have never followed you. This is exactly how a 12-follower account suddenly hits 800,000 views. Phoenix found a semantic match between the post and a user's engagement history, and the transformer scored it high on its 15 actions. Which means your post is not competing with your followers' posts. It is competing for embedding space. The way you win Phoenix is specificity. The two-tower model rewards posts that sit in a clear semantic neighborhood. Vague posts get vague embeddings and never get retrieved. Sharp posts about a specific topic with specific words get pulled into feeds of people obsessed with that topic. This is why "I built a SaaS" gets nothing and "I built a Postgres-to-Snowflake CDC pipeline in 4 hours using Estuary" goes viral. Same person. Same product. Completely different embedding. The third thing in the code is the Author Diversity Scorer. The model deliberately attenuates repeated author scores in the same feed.
Translation: if your last three posts already got served to a user, the fourth post gets a penalty. This kills the "post 8 times a day for the algorithm" strategy. The algorithm is specifically engineered to dampen that. Better to post fewer times with stronger content than to flood and have your own posts compete with each other. The fourth thing is the filter list. Before any post gets scored, it has to pass through ten filters. The MutedKeywordFilter. The PreviouslySeenPostsFilter. The AuthorSocialgraphFilter. Plus a final VFFilter that removes anything classified as deleted, spam, violence, or gore. What kills your reach more than anything else is the PreviouslySeenPostsFilter. If a user has already seen your post once, you are filtered out completely from their feed. Forever. Which means every reply you make to a viral tweet that does not get visibility is permanently dead weight for that user. This is why the people who win at X reply only when their reply itself is good enough to be a standalone post. The last thing, and the one that should change how you write every single post: candidate isolation. During ranking, the transformer cannot let your post attend to other posts in the batch. It only attends to the user's engagement history. Your post is being scored alone. Against itself. Against what the user has previously engaged with. That is the entire game. Stop writing for the timeline. Write for the engagement history of the people you want to reach. Find the topics they already like, the accounts they already follow, the threads they already saved. Write into that semantic space. Phoenix will do the rest. The algorithm is no longer a mystery. It is sitting on GitHub at 16,500 stars. Apache 2.0. Anyone can read it. Almost nobody will. Link in comments.
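The scoring rule the report above attributes to the open-sourced ranker, a weighted sum over 15 predicted engagement actions, can be sketched as follows. The action names are the ones quoted in the report; the weight values are made-up placeholders, since the report does not give the real ones.

```python
# Positive actions push a post up; negative ones push it down,
# per the report above. The weights here are invented placeholders.
POSITIVE = ["favorite", "reply", "repost", "quote", "click",
            "profile_click", "video_view", "photo_expand",
            "share", "dwell", "follow_author"]
NEGATIVE = ["not_interested", "block_author", "mute_author", "report"]

WEIGHTS = {a: 1.0 for a in POSITIVE} | {a: -2.0 for a in NEGATIVE}

def final_score(predicted: dict[str, float]) -> float:
    """Weighted sum of the 15 predicted action probabilities."""
    return sum(WEIGHTS[a] * predicted.get(a, 0.0) for a in WEIGHTS)

# Example: a post strong on dwell, with a tiny predicted report rate.
print(final_score({"favorite": 0.10, "dwell": 0.60, "report": 0.01}))  # 0.68
```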
- Yaseen Shaik (@YaseenTech4) reported: Just completed an assignment on building a dependency graph for AI agent tools using Google Super + GitHub integrations 🚀 Started with: “This should be easy” Then came: TypeScript errors zip/upload issues CRLF debugging 😭 finally got the submission accepted successfully ✅
- Caneleo (@Caneleo55) reported: Since there is lots of hype on @polymarket right now you have to be extra careful there are lots of scammers out there 🚩 Don’t download random trading bots or repos that are trending on GitHub i tested one once deposited 10$ to a fresh wallet and run the bot on a vps turns out it had a secret function that sent your .env with your private keys to a different server 💀
- Jerome (@jeromeq2004) reported: github releasing the agentic ai developer cert is funny because the actual exam is going to be 'fix this thing claude broke in production while it tells you the tests pass'
- Kunal Yeola (@yeolakunal) reported: Asked GitHub Copilot to fix ESLint issues and it added eslint-disable at the beginning of the file 😭
- Aditya Sharma (@aditya_sharma) reported: elon musk dropped the X algorithm on github. i read all 25,000 lines so you don't have to. here's what actually decides your reach. what actually matters - dwell time is the entire game. how long someone pauses on your post is counted twice in the scoring. likes barely move the needle. the pause does. - saves and shares are the highest-value engagement after dwell. they signal the strongest intent. - video has a minimum duration floor. clips shorter than the threshold get zero video credit. five seconds plus, always. - one post per conversation thread survives in any feed. your five-post thread competes with itself. the algorithm picks the strongest one. - replies to big accounts (1000+ followers) get scored on a 0-3 quality scale. high score and you land in the reply panel of viral tweets. low score and you're invisible. - replies to small accounts get a binary spam check only. no quality scoring path. no reach upside. - mutual follow overlap matters. tight clusters of mutuals create reach corridors for everyone in them. - clear topic identity beats vague posting. the algorithm tags your post with topics. clear topics route you to people who follow those topics. - new accounts on the platform get an easier path to reach you than established ones. if you target young/new users, the algorithm is on your side. what kills your reach - posting too often. the algorithm has decay coded in. your second post of the day gets a fraction of your first. your fifth gets almost nothing. - quoting or replying to a flagged tweet. you inherit the badness. your whole post gets dropped even if it's clean. - ai slop. there's a dedicated slop detector that scores your post 1 to 3. high slop = killed reach. - being unclear what your post is about. vague content doesn't match anyone's interests cleanly. - mid-controversial content. it gets pushed away from the high-attention slots in the feed because ads can't sit next to it. - posting your own tweet's reply hoping it boosts the original. only one of them shows up. it might be the reply, not the original. myths to kill - hashtags do nothing. zero boost in the code. they're not even read by the ranker. - premium doesn't get you reach. paid and free accounts go through the same pipeline. - long threads don't beat single posts. the algorithm picks one post per thread. - engagement bait doesn't work. it trips spam classifiers on low-follower accounts. - posting twelve times a day doesn't get twelve impressions. it gets one strong one and eleven weak ones competing with each other. - replying to viral tweets isn't easy reach. the quality bar is high. cheap replies fall straight into the spam path. - timing tricks don't beat ranking. timing helps you enter the candidate pool. quality decides if you win. - external links don't hurt you. clicks are actually one of the 19 positive scoring signals. - the algorithm doesn't hate any specific format. it hates unclear content. format is fine if the content is sharp. - you don't need 10k followers to get reach. the algorithm doesn't read follower count as a scoring input. it reads engagement quality. the playbook - write posts that make people pause for 5+ seconds. dense info, clear structure, screenshots with detail, comparisons. - if you use video, clear the duration floor. always. pick one clear topic per post. don't mix five things into one tweet. - reply to bigger accounts in your niche with substantive, high-effort replies. one good reply beats ten mediocre ones.
- build mutuals in tight clusters around your niche. broad spray-follow strategies don't help. focused clustering does. - post 1-2 times a day, not 10. quality compounds, volume decays. - don't quote tweets that look flagged or risky. clean what you cite. - write like a human. don't post ai output verbatim. target newer users on the platform if you can. they have a friendlier reach path for creators. if you're a small account starting out - replies to big accounts in your niche are your highest-leverage move - build a tight mutual cluster of 50-200 accounts in your exact space - one strong post a day beats five medium ones clear topic identity, every single post if you have an established audience - your reach problem is breaking outside your network - dwell time on individual posts is your biggest unused lever - clean brand safety keeps you in prime feed slots next to ads - volume hurts you more as you grow, not less the whole system is built on one bet: that a model fed engagement data can decide relevance better than any rule. there's no hashtag boost, no follower boost, no time-of-day trick in the code. just sequences in, probabilities out. what works is what humans actually want to read. the algorithm is just better at measuring it now.
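One concrete way to picture the per-author decay both of these algorithm write-ups describe (each additional post from the same author scoring only a fraction of the previous one) is a constant attenuation factor applied by rank, as sketched below. The 0.5 factor is an invented placeholder, not a value from the published code.

```python
def attenuate_author(scores: list[float], factor: float = 0.5) -> list[float]:
    """Dampen repeated posts from one author within a single feed:
    the strongest post keeps its score, and each further post is cut
    by an extra factor. The 0.5 is illustrative, not the real value."""
    return [s * factor ** rank
            for rank, s in enumerate(sorted(scores, reverse=True))]

# Three posts from one author: only the first keeps full weight.
print(attenuate_author([10.0, 9.0, 8.0]))  # [10.0, 4.5, 2.0]
```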
- Jason (@jasonbunnell) reported: @claudeai FEATURE REQUEST: if user finds an issue with Claude Code and Claude resolves (or not), you should auto increment GitHub issue to keep track of real user issues per issue to resolve as needed Had an issue with Claude Code for VS Code extension but noticed only 10 likes
- Lu (@LuminousTheReal) reported: @SilverYogensha @thsottiaux Potentially, but it seems like it's using 5.4 to compact it since 5.5 is bugged out. I found it on github, lots of people are suffering with this compacting issue and i think we will get a reset soon since tibo mentioned the quality is down
- Adishwar Rishi (@AdishwarR) reported: @argofowl I raised this issue on GitHub. I hope someone from the Codex team sees your post and fixes this asap. Thanks for mentioning this, it's so frustrating.
- nadya (@sosidudku) reported: We decided to benchmark Hermes Agent vs OpenClaw: scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. Local model: Qwen 3.6 35B OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
- Beau Johnson (@BeauJohnson89) reported: agent skills are becoming the new software package DenisSergeevitch/agents-best-practices > 120 stars on github > created today > provider-neutral skill for codex + claude code > designs and audits agent harnesses > covers tools, permissions, memory, evals, prompt caching, observability, and safety the important line from the readme: the model proposes actions. the harness validates, authorizes, executes, records, and returns observations. that is the whole game. most people keep trying to fix agents with bigger prompts. the real fix is a tighter harness.
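The readme line quoted above ("the model proposes actions; the harness validates, authorizes, executes, records, and returns observations") describes a control loop that, in skeleton form, might look like the sketch below. All names and types here are illustrative; the linked repo defines its own interfaces.

```python
from typing import Any, Callable

def harness_step(propose: Callable[[], dict],
                 allowed: set[str],
                 tools: dict[str, Callable[..., Any]],
                 log: list[dict]) -> Any:
    """One turn of a model-proposes / harness-disposes loop."""
    action = propose()                          # the model proposes
    name, args = action["tool"], action["args"]
    if name not in allowed:                     # the harness authorizes
        raise PermissionError(f"tool {name!r} is not permitted")
    observation = tools[name](**args)           # the harness executes
    log.append({"action": action, "obs": observation})  # and records
    return observation                          # returned to the model
```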
- Hackscorpio (@hackscorpio) reported: @thsottiaux Codex review is not working right. When the model finishes, it doesn't render the response properly (not a model degradation). Seems like a regression in Codex application. I have no idea where to report that. I was reporting errors for cli and VSC extension on github.
- 青川一 (@worigoule) reported: @ZooL_Smith And then you google solutions and found yourself ended up in a github merge request or more likely a issue page written in like, 2 years ago
- anil (@2abstract4me) reported: @steipete how do u incorporate users feedback? primarily thru github issues? feedback in the sense, how they are using it? what they want. or how a new feature is being received and etc?
- David Cramer (@zeeg) reported: @eternalmagi dont have context, is there a github issue by chance
- DemonKingSwarn (@DemonKingSwarn) reported: @ThePrimeagen at this point my self hosted *** server has more uptime than github which is funny because they have more money than me
- NEET INTEL (@neetintel) reported: A post "decoding" X's new algorithm has gone viral. It tells you what's dead, what wins, and to screenshot it. X open-sourced the entire algorithm on GitHub, so I downloaded it and checked the claims against the real code. Most of it doesn't hold up. What the post got WRONG: → "Small accounts get a 3x boost from out-of-network reach." It's the opposite. One part of the code (a file called oon_scorer) exists purely to turn DOWN posts from people you don't follow. Its own comment says "prioritize in-network." The thread printed the algorithm backwards. → "Media gets 2x the weight." There's no 2x. The code just records whether a post has an image. It's a plain yes/no without any multiplier attached. → "Posting 4+ times a day triggers a penalty." There's a real rule that stops one person flooding your feed. But here's the deal: it only spaces out how often you show up in a single scroll. There's no daily count, and no number 4. That was invented. → "Closers like 'what do you think?' get you flagged." There is no engagement-bait detector anywhere in the code. → "Long 4,000-character posts get boosted." I searched the whole codebase for "4000." Nothing. What it got RIGHT (one thing): → Replies really are judged by WHO replies, not just how many. The code has a setting for whether a large account joined your thread. Credit where due. The irony? The repo ships a file that scores post quality. One thing it measures is literally called a "slop score" — X built a tool to detect low-effort filler. A recycled "what's dead / what wins" thread is exactly that. The takeaway? X's algorithm is public. Anyone can open it, but almost nobody does. Instead, they reshare a thread that summarized a blog that paraphrased a tweet. When a post hits you with confident numbers, ask the one question that matters: did they actually open the file?
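The oon_scorer behavior described in the fact-check above, attenuating rather than boosting posts from authors the user does not follow, reduces to a one-line rule like the sketch below. The 0.7 multiplier is an invented placeholder, not a value from the repo.

```python
def oon_adjust(score: float, author: str, following: set[str],
               damp: float = 0.7) -> float:
    """Turn DOWN out-of-network candidates to prioritize in-network
    posts. The damp factor is illustrative, not the repo's value."""
    return score if author in following else score * damp
```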
- Dewaldt Huysamen (@GodsBoy7777) reported: @sickdotdev Getting insane and better results just on medium for all of the above categories. Weirdest is Opus 4.7 fails at basic school tasks help for kids and when I do code GPT 5.5 finds issues that are found in any case on github CI checks. If use codex CI passes more than 99%
- 𝒹ℯ𝓁𝓁𝓎_𝓉𝒽ℯ_𝒹ℯ𝓈𝒾𝑔𝓃ℯ𝓇 (@dellyricch2) reported: Elon says the latest 𝕏 algorithm has been published to GitHub Can someone please break it down for us
- Farhan Helmy (@farhanhelmycode) reported: down again ? imma kms @github