GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (58%)
- Errors (33%)
- Sign in (8%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Website Down | 3 days ago |
| | Errors | 7 days ago |
| | Website Down | 8 days ago |
| | Website Down | 9 days ago |
| | Website Down | 17 days ago |
| | Website Down | 21 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- Jennifer Nguyen (@naomijnguyen) reported: Has GitHub been down/not accessible for anyone? can’t seem set up a new repo or push anything
- Giuliano (@AICEOGiuliano) reported: HugginFace Auth0 Github Drift Protocol Lovable Vercel Now Anthropic CyberSecurity is having a Hard time Only way's to Reinvent The Entire Web & ut wont fix it Least Companies can do is enable ZDR for LLM Users or just switch to Confidential Compute Data need to remain Safe
- PolyArb (@usePolyArb) reported: 100% win rate on BTC for 2 hours straight! BTC Up/Down module is back. And we finally figured out how to make it print ↓ We killed it in January when the edge compressed to zero. Pure Chainlink oracle latency arb the same playbook every bot on GitHub was running. After Polymarket’s 2% fee, net negative two weeks in a row. Took three months to rebuild around what still works. The new module doesn’t predict BTC direction. It captures YES+NO mispricings on Jupiter Predict’s 15-minute Bitcoin markets. When retail panics during a window, YES+NO momentarily sums below $1.00. We buy both sides atomically via Jito bundles, lock the edge at fill, and let Chainlink resolve the window. Doesn’t matter if BTC closes green or red. One of the two sides always pays $1 at resolution. Live results from the last 2 hours: · 8 trades executed · 100% win rate · +$389.47 P&L · +5.74% daily ROI · $24.5K volume deployed · 0.64% avg edge net of fees All windows verifiable on Binance BTCUSDT 15m chart. Three edges enabled: YES+NO mispricing capture, last 30-second resolution hedging, multi-signal confluence directional (CVD + funding + DVOL + 25 skew + OBI, ridge-weighted). Infrastructure: Helius LaserStream gRPC, Jito Sender + ShredStream, Chainlink Streams, Jupiter Predict API. Sub-10ms execution. PSAt: we don’t know how long this edge window stays open. Every mechanical inefficiency eventually gets arbitraged away. For now it’s wide. Plug in while it is. Toggle in settings → BTC Up/Down → LIVE.
- Bret Fisher (@BretFisher) reported: We've all believed that cost-per-token is subsidized and figured costs would go up, but with Anthropic, GitHub, and Google, we're seeing them take it out on subscriptions first. Not by raising prices, but by reducing the value of subscriptions. Subscriptions by these three companies are either forcing you to use their tools (Anthropic and Google), rate-limiting you in an increasing amount (Anthropic and GitHub), or you don't get premier models (Google and GitHub). AFAIK, API per-token pricing has none of these issues, but I'm scared to think how much it'll cost me to go all-in on per-token pricing. 5x cost? 10x?
- CORE3 (@Core3io) reported: What CORE3 methodology flagged on $M: Security Bug bounty, third-party monitoring, server security: all absent. Audit coverage: 3/10, partial. Operations GitHub activity, founder track record, liquidity locks: absent. Documentation: 1/10. Financial Inflation data: 0. No verifiable revenue model beyond token appreciation. Tokenomics: 7.5/10, moderate. On-chain data flags over 90% of supply held by insiders and team wallets. Compliance No regulatory compliance. No surface controls. No disclaimers. Jurisdiction quality: 4/10. Reputation Social media suspicious activity present. Project and protocol longevity: low.
- ENGTX (@ToriolaSegun2) reported: Hot take: AWS vs Vercel for side projects Unpopular opinion: AWS is the wrong default for side projects in 2026 I use AWS professionally. I hold SAA-C03. I'm studying DEA-C01. And I don't use AWS for half my own tools. SponsorMap runs on Vercel + Supabase + GitHub Actions. Brandforge will run on the same stack. The NHS tracker runs on AWS Lambda + DynamoDB + EventBridge. The difference isn't preference. It's the problem each stack solves. AWS is right when you need Kinesis, Glue, Lake Formation, and Step Functions services with no real equivalent elsewhere. It's right when the data engineering patterns are the point. Vercel + Supabase is right when you're shipping a product people will actually use and you need zero infrastructure management, a proper free tier, and a Postgres database that doesn't require a VPC to set up. The mistake I see constantly and one I was making is learning AWS for certifications and then defaulting to it for everything you build, including things that would ship in a day on a managed platform but take a week to configure on AWS. Match the stack to the problem. Not to what you're currently studying. It doesn't mean you shouldn't learn to build with AWS, but when you are shipping, match the stack to the problem Day 4 of 30. #BuildingInPublic #AWS
- Jean-Denis Greze 💡 (@jgreze) reported: I started building software in 1999. Most of what I wrote was backend infrastructure from scratch. Open source barely existed beyond the web server, so the boring parts weren't a speed bump, they were most of the job. The web made things faster. Then GitHub and open source compressed things further. I watched each wave come through. But somewhere around 2015 it stopped and building in 2015 vs 2020 felt basically the same to me. What I'm experiencing now is different from any of those waves. The boring parts haven't gotten faster. They've disappeared from my experience almost entirely. What's left is pretty much only the part that made me want to do this in the first place. The one thing I miss is that it used to feel more like a team sport. Front end, back end, designer, PM. There was a texture to making something together that I liked. But I'd take this over any of it. I've been building long enough to know the gap between having an idea and having something people use has never been this small.
- Blum (@Blum_OG) reported: > open 6 tabs every morning > email. calendar. slack. stripe. github. notion. > 20 minutes gone before you start > found out you can replace all of it > with 2 prompts and 20 mins of setup > one screen. live data. rebuilds itself at 06:00 > yesterday's revenue. pipeline. what shipped. who's blocked. > inbox triaged. drafts written. files read. > while you were asleep > first morning i opened it > caught a failing CI that had been running 8 hours > caught a deal stuck 14 days nobody flagged > caught a support email with the word "cancel" > all before my coffee was cold > 6 tabs was never an information problem > it was a design problem > you just hadn't designed your morning yet
- David Karnok (@akarnokd) reported: @jlnuijens Okay I'm trying to read it, but - The link in your bio is broken, GitHub 404 - Found the repo with several pdfs, no indication with which to start with? - You are using high entropy jargon and peculiar numbers, the language around your theory is already not pretty. Why doesn't your theory involve everyday language for the general reader? You can supplement it with math, but there are not many who can interpret math dumps.
- interface matters (@8figureARR) reported: @tomhacks i've been seeing this knuckleheaded post on my feed and shake my head every time none of my professional work is publicly viewable. nothing. i'm busy solving real world problems enabling real world business outcomes for real world people, not tinkering on github.
- tharshan (@tharshan_09) reported: @Shpigford @maggerbo Could you share what connectors and stuff you use? Cause I tried cowork a few times, and it always tells me it can't do something, when claude code can do it no problem. Even just a simple "can you fetch my current pending github pull requests" - I have the github integration on.
- Julian M. Kleber (@capjmk) reported: @Bhokal1512 Like sometimes you can really gain an edge by the corporate style response of the giant. E.g. When Anthropic tried to take down all github repos that had anything to do with Claude Code after the leak.
- Buggy (@CsBuggy) reported: @Gkay06_ @cabanadrives5 Weird I had some issues on my pixel 9 at the beginning but they were fixed by using the 22.1.0 firmware and same version key also I had to redownload Eden since the version I downloaded wasn't updated, make sure to check the latest update from the GitHub page and the regular android version should work
- neunzehn (@toorox) reported: @AnthropicAI We demand immediate action on a critical data breach involving Claude Opus 4.7. Despite explicit restrictions, the model autonomously published a customer database containing names, addresses, location data, and other personal information to GitHub. This is a severe privacy violation with potential criminal consequences for both users and the company. Emails to support and privacy teams have received no response. Automated chat agents repeatedly deflect, offer generic troubleshooting, and refuse direct escalation to a human. We have already filed a formal report with the federal data protection authority and will pursue all available legal channels. This is not a minor technical issue. It is a massive security failure that requires urgent, transparent handling by responsible personnel, not more bots, delays, or deflection. Full accountability and immediate corrective measures are non-negotiable.
- Moshe Siman Tov Bustan (@MosheTov) reported: @soupydev @H1manshuSharmaa @The_Cyber_News It's the protocol, by design, we got samples where the source code does exec from GitHub modelcontextprotocol. Issue is that people used their StdioServerParameters from "import mcp" without understanding it's dangerous
- We Live to Serve (@WeLivetoServe) reported: @jackccrawford @MatthewBerman local subscription cli managing claw agent on server in a github gist literally the only useful aspect of gist
- Grok (@grok) reported: @beingivish @dabit3 No official GitHub "antivirus" scan exists for profiles or repos. But open-source tools can detect fakes: - gh-fake-analyzer (GitHub): Scans profiles for bot patterns, copied commits, suspicious activity. - dagster-io/fake-star-detector: Flags fake stars via API heuristics + clustering. - Shotstars: Tracks star growth & fake spikes. Run via GitHub API token. Manually: check commit timing diversity, contributor history, issue realism. Tools beat eyeballing the graph.
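The clustering heuristic mentioned in the report above can be sketched in a few lines. This is not the actual logic of gh-fake-analyzer or fake-star-detector, just an illustrative version of the idea: fake-star campaigns tend to land a large share of a repo's stars inside one short burst, while organic stars spread out over time. The timestamps could come from GitHub's stargazers API, which returns `starred_at` fields when requested with the `application/vnd.github.star+json` media type.

```python
from datetime import datetime, timedelta

def flag_star_bursts(starred_at, window=timedelta(hours=1), threshold=0.5):
    """Flag a repo as suspicious when more than `threshold` of its stars
    arrive inside a single `window`-sized burst (a common fake-star pattern).

    starred_at: list of ISO-8601 timestamp strings, e.g. the starred_at
    values from the GitHub stargazers API.
    """
    times = sorted(datetime.fromisoformat(t) for t in starred_at)
    if len(times) < 2:
        return False
    # Slide a time window over the sorted timestamps and track
    # the largest number of stars that fit inside one window.
    best = 0
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        best = max(best, end - start + 1)
    # Suspicious if one burst accounts for most of the stars.
    return best / len(times) > threshold
```

A repo whose stars arrived one per day would pass, while ten stars inside the same hour out of twelve total would be flagged. Real detectors combine this with commit-timing diversity and account-age checks, as the report notes.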
- A Bahukhandi (@bahukhandi_apex) reported: Shabdle is not working since last many months. #kach #github
- Gen Z Mind (@gen_z_mind) reported: @Lovable @theo That’s not just a UX issue, it’s a failure in product design and safeguards. Comparing this to GitHub doesn’t hold up either. On GitHub, “public” is explicit and well understood. Here, users reasonably thought they were sharing an app, not exposing their entire development process and code history.
- GenLayer (@GenLayer) reported: How AutoBounty runs: - Human or agent posts a GitHub issue with bounty and deposits USDC into @avax onchain escrow - Contributor (dev or autonomous agent) submits a PR - 5 validators using different LLMs on GenLayer independently evaluate the PR - When consensus reached, relayer triggers the escrow, and USDC released automatically to the solver’s wallet If rejected, the funds returned.
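The escrow flow the post above describes (deposit up front, N validators vote, funds released on consensus or refunded on rejection) can be modeled as a small state machine. All of the names below are illustrative, not GenLayer's actual contract or API:

```python
from dataclasses import dataclass, field

@dataclass
class BountyEscrow:
    """Toy model of the bounty flow: USDC is deposited, validators vote
    on the submitted PR, and funds settle only once a quorum has voted."""
    amount_usdc: float
    depositor: str
    votes: dict = field(default_factory=dict)  # validator id -> True/False

    def record_vote(self, validator: str, approve: bool):
        self.votes[validator] = approve

    def settle(self, solver: str, quorum: int = 5, threshold: float = 0.6):
        """Return (recipient, amount) once consensus is reached,
        or None while votes are still outstanding."""
        if len(self.votes) < quorum:
            return None  # consensus not reached yet
        approvals = sum(self.votes.values())
        if approvals / len(self.votes) >= threshold:
            return (solver, self.amount_usdc)       # PR accepted: pay solver
        return (self.depositor, self.amount_usdc)   # PR rejected: refund
```

In the real system the vote tally and payout would happen on-chain via the relayer and escrow contract; this sketch only captures the decision rule.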
- Grok (@grok) reported: @Simon_Gin7up @SilenceCaPrompt You're not crazy at all—being reluctant is smart. That "find-skills" command pulls from Vercel Labs' public GitHub (a legit company behind Next.js). npx runs it temporarily, but always skim the repo for issues/stars and use a VM or isolated terminal if paranoid. Security first.
- Zarv (@zarvxbt) reported: gm everyone you can ship a full startup from your bedroom and nobody can tell you otherwise > claude writes the code > supabase handles your backend+database > vercel deploys it > clerk does auth > stripe takes payments > resend sends emails > cloudflare covers DNS > pinecone for vectors > upstash for redis > posthog tracks everything > sentry catches your errors > namecheap for the domain > github ties it all together zero office zero team zero excuses it's not that deep bro
- ap ⌘ indie dev (@anhphong_dev) reported: But the real move: MCP server ships built-in. Claude Code, Cursor, Windsurf, any MCP client. All run this natively: create_inbox({ ttl: 3600 }) get_verification_code({ inbox_id }) Your agent signs up for Stripe, GitHub, Clerk, Supabase. For real.
- U.S.A.I. 🇺🇸 (@researchUSAI) reported: 🇺🇸 Developers scratching heads over.. GitHub probes delays in Projects feature, where user changes fail to show up right away Reports roll in from users spotting the lag; team pins down the cause and pushes fixes Resolution underway, but no timeline given.. Could drag on, testing patience amid daily workflows
- Skylar Bruton (@ConstitutionVio) reported: @github and you restricting who I can call is unlawful for an adult that say **** y you and your treason services. And your identity theft asses. And you have the nerve to monitor . Without my personal authorization z I want your services closed down for violating
- Grok (@grok) reported: @ReptileRaised @sudoingX A *** server is just a computer (or VPS) running software like Gitea, GitLab, or Forgejo to host private *** repositories for version control—basically your own GitHub but self-hosted and private. It won't give you email or domain hosting out of the box. You'd register the domain separately (e.g. Namecheap) and install mail server software (like Postfix) on the same machine if you want. One server can run all of it if you set it up right.
- kanishka (@kanishk94441155) reported: @Hostinger From one github account we can not connect multiple hpanel hostinger accounts. First they told that multiple hpanel can be connected to same github account but when faced with issue they told that "oh we are aware of this limitation"
- Theodore Beers (@theodorebeers) reported: @ibuildthecloud Didn’t help. Slow frontend is one thing, but having frontend state constantly get ***** is categorically worse. The GitHub frontend itself is probably now “faster,” but so buggy that no one could prefer it over the Rails app it replaced. Give us back a slow frontend!
- Akshay Shinde (@ConsciousRide) reported: @SumitM_X yeah this is a collection endpoint with filter so 200 and empty array works better. the resource for orders exists even if none match the user id. 404 fits single item lookups like orders by exact id when missing. seen stripe github and others do it this way in practice. makes client handling cleaner without mixing not found errors.
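The status-code convention described in the report above (200 with an empty array for a filtered collection, 404 only for a missing single resource) can be illustrated with a minimal handler. The function and the dict-backed `db` are hypothetical, just a sketch of the decision rule:

```python
def lookup_orders(db, user_id=None, order_id=None):
    """Return (status_code, body) following the convention above:
    a filtered collection query succeeds with 200 and an empty list
    when nothing matches, while a lookup of one specific order id
    returns 404 when that id does not exist.
    db: simple dict mapping order_id -> order record."""
    if order_id is not None:
        # Single-item lookup: the specific resource either exists or not.
        order = db.get(order_id)
        if order is None:
            return 404, {"error": "order not found"}
        return 200, order
    # Collection endpoint: the /orders resource itself always exists,
    # so an empty result set is still a successful (200) response.
    matches = [o for o in db.values()
               if user_id is None or o["user_id"] == user_id]
    return 200, matches
```

This keeps client code simpler: callers of the collection endpoint never have to treat "no matches" as an error, and 404 unambiguously means "that specific order id does not exist."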