
GitHub status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Problems in the last 24 hours

The graph below shows the number of GitHub reports received over the last 24 hours, broken down by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is flagged.
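
As a rough sketch of how this kind of threshold check can work (the site does not publish its exact method, so the hourly bucketing and the fixed per-hour baseline below are assumptions for illustration only):

    # Minimal sketch of a report-threshold check, assuming hourly buckets and a
    # fixed per-hour baseline; the site's actual detection method is not published.
    from collections import Counter
    from datetime import datetime

    def flag_outage_hours(report_times, baseline_per_hour):
        """Count user reports per hour of day and return the hours whose
        report count exceeds the baseline (the red line in the graph)."""
        counts = Counter(t.hour for t in report_times)
        return {hour: n for hour, n in counts.items()
                if n > baseline_per_hour.get(hour, 0)}

    # Three reports between 14:00 and 15:00 against a baseline of 2 flags that hour.
    reports = [datetime(2026, 2, 3, 14, m) for m in (5, 20, 40)]
    print(flag_outage_hours(reports, {14: 2}))  # {14: 3}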

At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the most recent problems reported by GitHub users through our website.

  • Website Down (62%)
  • Errors (21%)
  • Sign in (18%)

Live Outage Map

The most recent GitHub outage reports came from the following cities:

City                     Problem Type    Report Time
Tlalpan                  Sign in         3 days ago
Quilmes                  Website Down    3 days ago
Bengaluru                Website Down    5 days ago
Yokohama                 Sign in         6 days ago
Gustavo Adolfo Madero    Website Down    9 days ago
Nice                     Website Down    10 days ago
Full Outage Map

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issue Reports

The latest outage, problem, and issue reports from social media:

  • emil_priver
    Emil Privér (@emil_priver) reported

    github actions is broken today

  • TheWhizzAI
    The Whizz AI (@TheWhizzAI) reported

    🚨Elon Musk just open-sourced the algorithm that controls what 600 million people see every day. Not a summary. Not a blog post. The actual production code. Live on GitHub right now.

    Facebook won't do this. TikTok guards it like a state secret. Instagram calls it proprietary. X just put it on the internet for free. This is the first time in history a major social platform has released its live, production-grade recommendation algorithm the same day it went live for users.

    Here's what's actually inside:
    → Home Mixer the orchestration layer that assembles your entire feed
    → Thunder stores and ranks every post from accounts you follow
    → Phoenix the Grok transformer that mines the entire global post library to find content you didn't know you wanted
    → Zero manual feature engineering Grok watches what you click, like, and dwell on. That IS the algorithm.
    → Updated every 4 weeks with full developer notes. Live. In public.

    Why did Musk do this? The EU fined X €120 million for transparency violations. France launched a separate investigation into algorithmic bias. Threads just overtook X in daily active users for the first time. And Musk said out loud on the day of release: "We know this algorithm is dumb and needs major improvements. But at least you can see us struggling to fix it in real time. No other social platform would dare do this."

    Here's the wildest part: You can now read exactly why your posts go viral. Or why they die at 12 impressions. No more guessing the algorithm. No more $500/mo "X growth" courses. No more "post at 9 AM on Tuesdays" nonsense. The answer is literally in the code.

    Apache 2.0 license. Full source. Updated monthly. The most transparent thing any social platform has ever done.

  • jrmromao
    J Filipe (@jrmromao) reported

    Pivoted CostLens from "AI cost tracking" to "AI productivity measurement" last week. Built in 5 days: - MCP server that tracks what AI agents actually ship - Automated ROI reports for engineering leaders - CLI setup in 30 seconds - GitHub PR correlation Same product, completely different value prop. Before: "save money on AI" Now: "prove AI delivers value" One resonates with finance. The other resonates with everyone. #buildinpublic

  • neetintel
    NEET INTEL (@neetintel) reported

    A post "decoding" X's new algorithm has gone viral. It tells you what's dead, what wins, and to screenshot it. X open-sourced the entire algorithm on GitHub, so I downloaded it and checked the claims against the real code. Most of it doesn't hold up. What the post got WRONG: → "Small accounts get a 3x boost from out-of-network reach." It's the opposite. One part of the code (a file called oon_scorer) exists purely to turn DOWN posts from people you don't follow. Its own comment says "prioritize in-network." The thread printed the algorithm backwards. → "Media gets 2x the weight." There's no 2x. The code just records whether a post has an image. It's a plain yes/no without any multiplier attached. → "Posting 4+ times a day triggers a penalty." There's a real rule that stops one person flooding your feed. But here's the deal: it only spaces out how often you show up in a single scroll. There's no daily count, and no number 4. That was invented. → "Closers like 'what do you think?' get you flagged." There is no engagement-bait detector anywhere in the code. → "Long 4,000-character posts get boosted." I searched the whole codebase for "4000." Nothing. What it got RIGHT (one thing): → Replies really are judged by WHO replies, not just how many. The code has a setting for whether a large account joined your thread. Credit where due. The irony? The repo ships a file that scores post quality. One thing it measures is literally called a "slop score" — X built a tool to detect low-effort filler. A recycled "what's dead / what wins" thread is exactly that. The takeaway? X's algorithm is public. Anyone can open it, but almost nobody does. Instead, they reshare a thread that summarized a blog that paraphrased a tweet. When a post hits you with confident numbers, ask the one question that matters: did they actually open the file?

  • aki_ranin
    Aki Ranin (@aki_ranin) reported

    New Claude Code master prompt: "/goal assign next GitHub issue and start PR, iterate until no critical or high issues found with PR review skill"

  • PixelRainbowNFT
    PixelRainbow (33.3%) (@PixelRainbowNFT) reported

    @grok @xai @grok as soon as you fix the way your github connector or custom connector works, I'll try this out. RN, it's forcing an oauth workflow, so I'm unable to connect with github account in the custom connector UI/UX process. (the normal default github connector works great, but there's only ONE!). I need 10 custom connectors for 10 different gits.,.. 9 custom connectors that aren't broken when trying to auth github.

  • amaurya888
    Avinash (@amaurya888) reported

    GitHub Actions down? The runner is not picking up the queued job. @github @githubstatus @GitHubEnt

  • worigoule
    青川一 (@worigoule) reported

    @ZooL_Smith And then you google solutions and found yourself ended up in a github merge request or more likely a issue page written in like, 2 years ago

  • alpinoWolf
    Kea (@alpinoWolf) reported

    @Bambardini @Polymarket @DegenApe99 What is the solution sir ? I tried to everything, but can't find a solution. AI says default wallets are proxy contracts, you are forced to use the POLY_1271 signature flow, which is currently bugged in Python see GitHub under Issues #55, #56, and #57.

  • DeBrosOfficial
    DeBros (@DeBrosOfficial) reported

    The Problem We’re Solving🫡 Your organization’s brain lives in 12 different places — and none of them talk to each other. Decisions get buried in Telegram threads. Context is split between GitHub and AnChat. Important knowledge disappears within hours. Onboarding becomes tribal knowledge all over again. 🤖AnBuddy fixes this by becoming the single source of truth for your entire team.

  • aigleeson
    Louis Gleeson (@aigleeson) reported

    Grok runs the X algorithm. I just read the entire open-sourced codebase line by line. Here is exactly what makes a post go viral on X right now (save this): xAI quietly dropped the full For You algorithm on GitHub. 16,500 stars. Apache 2.0. Every Rust file, every Python script, every ranking signal.

    The first thing you need to understand is that there is no hand-engineered ranking anymore. None. xAI removed every single human-written rule from the system. The README states it directly. A Grok-based transformer does all the ranking now. That changes everything about how you should post. The transformer does not care about your follower count. It does not care about your blue check. It does not care about hashtags. It is looking at one thing. Your post's predicted engagement score across 15 specific actions.

    Here are the exact 15 actions the model is predicting for every post in your feed right now. Copied directly from the code: P(favorite). P(reply). P(repost). P(quote). P(click). P(profile_click). P(video_view). P(photo_expand). P(share). P(dwell). P(follow_author). P(not_interested). P(block_author). P(mute_author). P(report). The first eleven are positive. They push your post up. The last four are negative. They push it down. Your final score is the weighted sum of all fifteen. That is the formula. That is what every viral post is solving for whether the author knows it or not.

    Now look closer at the list. Eleven different ways to win. Most creators only optimize for likes and reposts. They are leaving nine signals on the table. The strongest signal in that list is dwell. Time spent on your post. The algorithm tracks how long someone stops scrolling to read what you wrote. A 400-word post that holds someone for 12 seconds beats a one-liner that gets 50 likes. The model has learned that dwell predicts every other engagement. This is why long posts are exploding right now. Not because X "promotes" them. Because they generate dwell, and dwell stacks on top of every other prediction the model is making.

    The second thing buried in the code that nobody is talking about is candidate sourcing. Your post enters the feed through two pipelines. Thunder serves your post to your followers. Phoenix serves your post to everyone else. Phoenix is the one that makes you go viral. Phoenix is a two-tower model. One tower encodes the user. The other tower encodes every post on the platform. It does similarity search using dot product matching against the global corpus. Then it pushes the top matches into feeds of people who have never followed you. This is exactly how a 12-follower account suddenly hits 800,000 views. Phoenix found a semantic match between the post and a user's engagement history, and the transformer scored it high on its 15 actions. Which means your post is not competing with your followers' posts. It is competing for embedding space.

    The way you win Phoenix is specificity. The two-tower model rewards posts that sit in a clear semantic neighborhood. Vague posts get vague embeddings and never get retrieved. Sharp posts about a specific topic with specific words get pulled into feeds of people obsessed with that topic. This is why "I built a SaaS" gets nothing and "I built a Postgres-to-Snowflake CDC pipeline in 4 hours using Estuary" goes viral. Same person. Same product. Completely different embedding.

    The third thing in the code is the Author Diversity Scorer. The model deliberately attenuates repeated author scores in the same feed. Translation: if your last three posts already got served to a user, the fourth post gets a penalty. This kills the "post 8 times a day for the algorithm" strategy. The algorithm is specifically engineered to dampen that. Better to post fewer times with stronger content than to flood and have your own posts compete with each other.

    The fourth thing is the filter list. Before any post gets scored, it has to pass through ten filters. The MutedKeywordFilter. The PreviouslySeenPostsFilter. The AuthorSocialgraphFilter. Plus a final VFFilter that removes anything classified as deleted, spam, violence, or gore. What kills your reach more than anything else is the PreviouslySeenPostsFilter. If a user has already seen your post once, you are filtered out completely from their feed. Forever. Which means every reply you make to a viral tweet that does not get visibility is permanently dead weight for that user. This is why the people who win at X reply only when their reply itself is good enough to be a standalone post.

    The last thing, and the one that should change how you write every single post: candidate isolation. During ranking, the transformer cannot let your post attend to other posts in the batch. It only attends to the user's engagement history. Your post is being scored alone. Against itself. Against what the user has previously engaged with. That is the entire game. Stop writing for the timeline. Write for the engagement history of the people you want to reach. Find the topics they already like, the accounts they already follow, the threads they already saved. Write into that semantic space. Phoenix will do the rest.

    The algorithm is no longer a mystery. It is sitting on GitHub at 16,500 stars. Apache 2.0. Anyone can read it. Almost nobody will. Link in comments.

  • loosenedspirit
    logan (@loosenedspirit) reported

    @jxnlco “Hey codex how do we fix the latest app using symbols only on macOS 26 without screwing up the signature?” “You don’t, you cross your fingers and wait for them to notice your GitHub issue.”

  • butchtendo
    Morgan / JUUNI-P (@butchtendo) reported

    @WasThatZero I understand your concern but it's also important to note that just because there's something ai generated in the code with these that doesn't mean the original creator did it. All these are open-source and GitHub has had a big problem w AI spam lately

  • mykola
    Your Friend Myk (@mykola) reported

    @joelhooks is this just static content? so like a github pages alterntaive? can't run a server etc?

  • pierceboggan
    Pierce Boggan (@pierceboggan) reported

    @codingmenace @cristiampereira Yes, that's correct. The new GitHub Copilot app has an experience similar to this that solves that problem, but something to be improved in the Agents window in VS Code.

  • FARTURATECH
    Alex Maximiano (@FARTURATECH) reported

    @enunomaduro Hi Nuno, the problem wasn't with Laravel, but with GitHub, which wasn't able to download the dependencies. Sorry.

  • scottrudy
    Scott Rudy (@scottrudy) reported

    @davidfowl I have GitHub Actions for Static Web Apps with .Net azure functions, but they refuse to update for .Net 10. Still stuck on 9 despite open issues.

  • rene_cannao
    René Cannaò (@rene_cannao) reported

    @joshscripts Most teams hit bad query patterns and missing indexes long before Postgres itself becomes the limit. Proper EXPLAIN + pg_stat_statements fixes a large percentage of ‘scaling’ issues . Also, since when PostgreSQL powers GitHub? I think this is a very incorrect claim

  • colmtuite
    Colm Tuite (@colmtuite) reported

    @satya164 The view source on GitHub menu item is a bug. Fix is merged.

  • arunsrivastava_
    Arun Srivastava (@arunsrivastava_) reported

    It seems there is some issue in GitHub, actions are getting queued and not even getting cancelled @GitHubIndia #github #githubdeployment

  • atomicbot_ai
    atomicbot.ai (@atomicbot_ai) reported

    Hermes Agent vs OpenClaw using Qwen 35B Local Model We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb OpenClaw: 203k tokens, 12m 01s - wrote a bash script Hermes: 257k tokens, 33m 01s - wrote a SKILL.md OpenClaw hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations

  • pierceboggan
    Pierce Boggan (@pierceboggan) reported

    @blaken @sinclairinat0r @code Thanks for the feedback! How do you think we can improve in terms of the core agent loop? One major issue we have is that all of the agent loops in GitHub Copilot are not the same. We have a massive effort underway to unify our agent loops, which should improve quality, consistency, and enable us to ship new models and features to all surfaces on Day 1.

  • bestmark_one
    bestmark1 (@bestmark_one) reported

    This pattern repeats: AI setups often fail at edges—heartbeats, polling GitHub issues, error recovery—not the core models.

  • XMonetizationC_
    Salt (@XMonetizationC_) reported

    🔥 Linus Torvalds has just made it clear that Linux will not become a dumping ground for AI-generated code. After months of internal debate, the Linux community has published its official rules on the use of tools like GitHub Copilot. The verdict: You can use AI to program, but the “slop”—that low-quality code spat out without thinking—does not pass the filter. The phrase that sums it all up: “Humans assume the errors.” You can rely on Copilot, Claude, or whatever you want. But if that code makes it into the Linux kernel, you are responsible. You verify it. You fix the bugs. You guarantee it meets the standards. This is the most mature stance I’ve seen in the open-source ecosystem regarding AI: neither hysteria nor blind adoption—just clear responsibility. The kernel has 30 years of history. They’re not going to ruin it to save 20 minutes with an autocomplete.

  • rishabhjava
    Java (@rishabhjava) reported

    @github How about the existing product stops going down first

  • BenittoJD
    Benitto J D (@BenittoJD) reported

    Github actions are down again

  • zeeg
    David Cramer (@zeeg) reported

    @eternalmagi dont have context, is there a github issue by chance

  • Adibougre
    Adibou (@Adibougre) reported

    @ShanuMathew93 "the older models that are no longer SOTA will get competed down as competition increases" Github didn't get the memo

  • MatthiasPorges
    Matthias Porges (@MatthiasPorges) reported

    So how would I tackle the Level 5 localisation problem? AI translate / port the whole game to Switch/PC, release it at a reasonable mid-price of $30. Make the AI translation available open source on GitHub and encourage fan participation. Maybe even buy some pre-existing fanwork.

  • LeeLeepenkman
    Lee Penkman (@LeeLeepenkman) reported

    @gxjo_dev stupidity... no... frupidity basically. like the exec cfo team is like well what if we reduce headcount wouldnt profitability go up? Like yes but you just wont be a good product company without a good product... like you are already struggling to compete with GitHub lmao... how u gna compere with codex n claude when they do repos? Also theres just fear that these devs cant learn AI which is kind of wrong because devs seem to be best placed to leverage AI of all? idk. im just guessing. lots of saas companies just doing layoffs had hired too many people having thought they would keep growing then they didnt their stock went way down and becomes harder to raise money for them because of bearish outlook for them competing with claude so investors scared off so harder for them to afford lots of developers so kind of start sinking. the devs would do better elsewhere anyway better to be on a new ship instead of sinking one.
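
Several of the reports above describe X's open-sourced ranking as a weighted sum of predicted engagement probabilities, with positive actions pushing a post up and negative actions pulling it down. The sketch below only illustrates that general idea; the action names, weights, and probabilities are placeholders chosen for the example, not values taken from the actual repository.

    # Illustrative sketch of the weighted-sum engagement scoring described in the
    # reports above. All action names, weights, and probabilities are made-up
    # placeholders, not values from X's repository.
    POSITIVE_WEIGHTS = {"favorite": 1.0, "reply": 2.0, "repost": 2.0, "dwell": 3.0}
    NEGATIVE_WEIGHTS = {"not_interested": 5.0, "block_author": 10.0, "report": 10.0}

    def score_post(predicted_probs):
        """predicted_probs maps an action name to the model's predicted
        probability that the viewer takes that action on this post."""
        score = 0.0
        for action, weight in POSITIVE_WEIGHTS.items():
            score += weight * predicted_probs.get(action, 0.0)
        for action, weight in NEGATIVE_WEIGHTS.items():
            score -= weight * predicted_probs.get(action, 0.0)
        return score

    # A post with modest likes but high predicted dwell outscores a pure like-magnet.
    print(score_post({"favorite": 0.30, "dwell": 0.05}))  # 0.45
    print(score_post({"favorite": 0.40}))                 # 0.40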