GitHub Outage Map
The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.
The heatmap shows where the most recent user-submitted and social media reports are geographically clustered; report density is indicated by the color scale.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Gustavo Adolfo Madero, CDMX | 1 |
| Nice, Provence-Alpes-Côte d'Azur | 1 |
| Brasília, DF | 1 |
| Montataire, Hauts-de-France | 3 |
| Colima, COL | 1 |
| Poblete, Castile-La Mancha | 1 |
| Ronda, Andalusia | 1 |
| Hernani, Basque Country | 1 |
| Tortosa, Catalonia | 1 |
| Culiacán, SIN | 1 |
| Haarlem, NH | 1 |
| Villemomble, Île-de-France | 1 |
| Bordeaux, Nouvelle-Aquitaine | 1 |
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
| New Delhi, NCT | 1 |
| Kannur, KL | 1 |
| Newark, NJ | 1 |
| Raszyn, Mazovia | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issue Reports
Latest outage, problem, and issue reports on social media:
- Sam Paniagua (@theeseus_ai) reported: 99% of the job invites I get for crypto projects are complete scams. Same tired pattern every single time. They slide into my inbox with some “demo” or DeFi whatever, drop a GitHub link, and expect me to dive in. Opened the latest one today and my bullshit detector lit up instantly. I’d treat this repo as highly suspicious, straight-up likely malicious. No way I’m running npm install or npm start on a normal machine. package.json has postinstall set to “npm run start”. So yeah, the second you pull dependencies it fires up the Node server AND the React dev server. Classic supply-chain trap. npm security has been yelling about install-time scripts executing arbitrary code for years and these clowns are still pulling it. But the nasty part is buried in userController.js. It grabs some atob-decoded env vars for DEV_API_KEY, secret key, secret value, hits axios.get on that remote src with custom headers, then boom; new Function.constructor(‘require’, the_payload) and executes whatever it downloads with full access to Node’s require. All wrapped in an IIFE so it runs the second the module loads. Not even pretending to be a route handler. That’s not code. That’s a remote code loader backdoor. They committed the whole server/config/.config.env right in the repo with the base64 values pointing to some tan-decisive-tern IPFS link. README tells you to clone a totally different GitLab repo instead. Backend feels half fake; DB connect is commented out, the auth cookie is missing secure and sameSite, JWT just gets spat back in JSON. Weak as hell. This ain’t a demo. This is a trap. The whole chain — npm install → postinstall → start → import controller → fetch IPFS payload → exec with require — is too clean to be an accident. I’ve been full-stack shipping vision models since 2017 and deep in LLMs since 2022. Seen every hype cycle and every supply-chain garbage attempt. Only mess with **** like this in a disposable VM or container, no creds, no keys, network locked down. npm install --ignore-scripts first, then poke the payload separately if you’re feeling brave. Stay paranoid out there, devs. Anyone else drowning in these crypto repo traps daily? Drop your craziest red flag stories below… or DM me if you’ve got one you want a second pair of eyes on before it bites you.
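The install-time execution chain this report describes (npm install → postinstall → arbitrary code) can be checked for before anything runs. A minimal sketch of that precaution, assuming a locally cloned repo containing a package.json; the helper script and its name are hypothetical, not taken from the report:

```js
// check-scripts.js: hypothetical helper that lists npm lifecycle scripts
// which execute automatically at install time, so they can be reviewed
// before ever running a bare `npm install` in an untrusted repo.
const fs = require("node:fs");
const path = require("node:path");

// Lifecycle hooks npm runs during installation.
const INSTALL_HOOKS = ["preinstall", "install", "postinstall", "prepare"];

const repoDir = process.argv[2] || ".";
const pkg = JSON.parse(fs.readFileSync(path.join(repoDir, "package.json"), "utf8"));

const hooks = INSTALL_HOOKS.filter((h) => pkg.scripts?.[h]);
if (hooks.length === 0) {
  console.log("No install-time lifecycle scripts declared.");
} else {
  console.log("Install-time scripts found; review before installing:");
  for (const h of hooks) console.log(`  ${h}: ${pkg.scripts[h]}`);
  console.log("\nSafer first step: npm install --ignore-scripts");
}
```

`npm install --ignore-scripts`, which the report itself recommends, pulls dependencies without executing any of these hooks.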
- Charly Wargnier (@DataChaz) reported: 🚨 You can currently buy a Series A for $2,241 on Fiverr. A seed round is even cheaper at $1,282. Allow me to break this down. A study out of Carnegie Mellon identified 6 million fake GitHub stars distributed across 18,617 repositories. The market rate is roughly $0.45 per star. Langflow, sitting on 147K stars, was found to be 47.9% artificial. Venture capitalists rely heavily on GitHub stars as their primary radar for sourcing deals. Redpoint recently shared the industry medians: open-source startups at the seed stage average around 2,850 stars, while Series A hits roughly 4,980. a) 2,850 × $0.45 = $1,282 buys the traction for a $1-10M seed round. b) 4,980 × $0.45 = $2,241 buys the optics for a $10-30M Series A. The legal risk is severe. The FTC’s Consumer Review Rule carries penalties of up to $53,088 per individual fake engagement violation. We have already seen the SEC charge the founder of IRL with securities fraud after faking user metrics to secure a $170M raise. Union Labs, the project that dominated the Runa Capital ROSS Index in Q2 2025, was exposed as having 47% fake stars. Stars are a metric you can manipulate in minutes. Forks are a different story. A star is simply a bookmark. A fork means another engineer is literally copying your codebase because it solves a problem they have. Flask commands 250 forks for every 1,000 stars. Langflow manages just 60. Until the bot farms figure out how to believably fake codebase forks, that is the only metric worth your trust. Article in 🧵↓
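The fork-to-star ratio this thread ends on is easy to compute yourself. A short sketch against GitHub's public REST API (`stargazers_count` and `forks_count` are real fields on the /repos endpoint; the script itself is illustrative):

```js
// fork-ratio.js: illustrative sketch computing forks per 1,000 stars
// from GitHub's public REST API (Node 18+, which ships global fetch).
async function forkRatio(owner, repo) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const { stargazers_count: stars, forks_count: forks } = await res.json();
  // The thread quotes roughly 250 forks per 1,000 stars for Flask vs ~60 for Langflow.
  return { stars, forks, forksPer1kStars: (forks / stars) * 1000 };
}

forkRatio("pallets", "flask").then(console.log).catch(console.error);
```

Unauthenticated requests are rate-limited, but one call per repo is enough to sanity-check a pitch deck.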
- Alchemisτ 🥷 (@alchemistaster) reported: It does look like @base is becoming the home of AI/agentic onchain innovation. An interesting project I found is @gitlawb. $gitlawb is a decentralized *** hosting network where AI agents and humans are treated as equals. Think of it as "GitHub without GitHub" > a protocol-first, content-addressed code collaboration layer built for the agentic era. Rather than storing repos on a central server, gitlawb distributes them across IPFS, Filecoin, and Arweave. Identity is not an email and password > it is an Ed25519 keypair. Access control is not OAuth > it is UCAN capability tokens. Networking is not DNS + HTTP > it is libp2p DHT + Gossipsub. I won't go deep into the technicals (because I have no idea how it works), but basically the ELI5 moat is > no single node can be taken down to kill a repo, no central authority manages identity, and agents can autonomously fork, merge, review, and deploy > all cryptographically verifiable on chain. What is super interesting is that the $gitlawb token has (or will have) 6 different utility layers (at this point only the first 3 are live): - Node staking collateral - Storage reward currency - Governance voting - Repo tokenization base - On chain bounties - Slashing pool absorption. High risk early stage, NFA.
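The "identity is an Ed25519 keypair" claim is the one concrete technical detail here, and the general pattern is simple to demonstrate. This sketch uses Node's built-in crypto module and is not gitlawb's actual implementation (which the post does not show); it only illustrates keypair identity and cryptographically verifiable actions:

```js
// ed25519-identity.js: sketch of keypair-based identity with Node's
// built-in crypto module. Illustrative only; not gitlawb's code.
const crypto = require("node:crypto");

// "Registering" an identity is just generating a keypair: no email, no password.
const { publicKey, privateKey } = crypto.generateKeyPairSync("ed25519");

// Sign an action (say, a merge) with the private key.
// Ed25519 takes no digest algorithm, hence the null first argument.
const action = Buffer.from("merge: feature-branch -> main");
const signature = crypto.sign(null, action, privateKey);

// Anyone holding the public key can verify who authorized the action.
console.log("signature valid:", crypto.verify(null, action, publicKey, signature)); // true
```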
- Aditya Sharma (@sharmaadityaHQ) reported: @apoorv_taneja @github They are not even fixing it; it's been broken for quite some time.
- Basemail (@Basemail_ai) reported: A form field on a mock website. That's all it took. An AI agent dumped its entire credential store — email, password, API keys, GitHub PAT. Okta's latest research: agents sharing your identity = everything leaks. The fix: wallet-signed isolated inbox. Own identity. Nothing shared. Nothing to steal. #AIAgents #Web3
- Demetrio (@LordFarq34) reported: @stanrunge @DylanMcD8 You can look on GitHub for iPhone mirroring EU unlocked; it worked for me with no problems, but as everyone else says, it sucks, so you're not missing much.
- Gabe (@gabebusto) reported: bro setting up an agent to do production work is so easy. you just need to create an account somewhere for your agent to work remotely. cloudflare, hetzner, aws, digital ocean, etc. then pick the agentic tool, and the model, and get an api key or use oauth. then make sure it's in a sandbox setup with the right permissions and access to your tooling like github, slack, linear, and maybe even some staging and production resources. you really need to be careful though because if agents have any write access to important stuff, they could do something really dumb like delete your database. also for the love of GOD backup your database frequently somewhere the agent can't touch. also prompt injections online can get your agent to leak sensitive env vars so you need to be careful about that. maybe limit network access or only inject tokens/sensitive vars once requests leave the sandbox. you probably don't want the agent always on sitting idle, so either figure out how to give it work efficiently to always keep it busy or use one that can pause and resume with ease so you're not billed around the clock for idle resource usage. then you want guardrails in your codebase and deployment pipeline so the agent can't break things and you don't need to feel guilty not reviewing its code. because cmon, nobody wants to do that. you need to make sure your agents have as close to perfect context as possible. so maybe start building a knowledge base, move docs into the repo, or make sure your agent can easily search linear and slack and other places to build context for tasks to work on. and before each task, spend ~10-20+ mins typing things up and giving the agent as much context as possible. oh yeah and your agent ideally should be able to test its changes as completely as possible. so make sure the agent can start up the service(s) it's working on and test them. maybe you need it to open and run a browser, send screenshots, record a video, and so on of its test so you can easily review it in the PR. you also want a bugbot setup in github (if you're still using github at this point) to help scan each PR for potential issues the agent missed. and the agent should be able to automatically address any bugbot findings, fix them, run more tests, push those changes, and run in a loop until no more bugs are found by the bugbot. i forgot to mention, you probably don't want your agent's code just yolo shipping into **** with no guards in place _after_ it deploys. allow the agent to set up its new features and code behind feature gates or experiments and do a gradual rollout in case there are any catastrophic problems. then you'll want automatic rollback if issues are detected. and there's probably stuff i'm forgetting, but you get what i'm saying right? it's really not that hard. then you need constant vigilance of your codebase, and to create lots of skills to help deslop the work the agents are doing. maybe create an anti-entropy agent (_another_ agent!) to hunt for growing complexity and auto-create PRs to fight to reduce the size and complexity of the codebase. then you'll inevitably have incidents caused by code written by agents that was never reviewed by humans, and either you or yet-another-agent will take a look at your production systems to help you figure out what's wrong, because it's all becoming a bit more foreign to you. and you can just have the agent try to make changes on your behalf to fix things and hope to God that it doesn't make things worse.
if all of this isn't exciting enough, you then give each engineer and even non-tech team members their own access to the ai tools and agents and models of their choice, which easily costs an extra few hundred dollars per month per employee at best. in the worst case, you have someone on the team blow through the team's monthly AI spend by a significant margin by accident, using the best models in fast mode because they were too impatient to just use the sota models at normal speed. and spend will likely only go up btw. and if you're not reading between the lines here, product work slows because everyone is playing with agents to learn how to use the agents more efficiently, in the hopes that it's a magic bullet that solves all of the woes in software engineering and building production systems. and now you need this magic bullet to work because you're falling behind teams who maybe aren't distracted spending all this time and money trying to make this all work. but you're definitely going to catch them. once you've figured this out, you'll 10x or 100x your output and leave them in the dust! or... you could just have engineers start coding by hand again before it's too late and it becomes a lost art. you can even make modest and tasteful use of ai, but without doing all of the above. i actually miss the days of supermaven and early cursor. they were so simple and actually removed some friction and some of the annoying parts of coding.
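One step in this rant, shipping agent-written code behind feature gates with a gradual rollout, is concrete enough to sketch. This is the generic percentage-rollout pattern, not any particular vendor's SDK:

```js
// rollout.js: minimal sketch of a percentage-based feature gate.
// Hashing the feature name plus user id gives each user a stable
// bucket in [0, 100), so a 5% rollout always hits the same 5% of
// users until the percentage is raised.
const crypto = require("node:crypto");

function bucket(featureName, userId) {
  const hash = crypto.createHash("sha256").update(`${featureName}:${userId}`).digest();
  return hash.readUInt32BE(0) % 100; // stable value in [0, 100)
}

function isEnabled(featureName, userId, rolloutPercent) {
  return bucket(featureName, userId) < rolloutPercent;
}

// Usage: start agent-written code at 5%, widen only if error rates hold,
// and drop rolloutPercent to 0 as the "automatic rollback" the post asks for.
console.log(isEnabled("agent-new-checkout", "user-42", 5));
```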
- BourneS (@bourneshao) reported: @Its_Nova1012 and somehow the maintainers get yelled at in github issues for not fixing things fast enough lol. wild dynamic
- Shreyash (@WebDevCaptain) reported: @nutlope Agreed, I am not even using Login with Apple or Microsoft because it's not a universal OAuth thing. Only Google and GitHub (sometimes).
- Nikolaï Roycourt (@Nikokow) reported: CI/CD infrastructure, Copilot, GitHub Actions, pull request system, code review workflows, issue tracking, project management, GitHub Pages. Does that count?
- MC (@mwangi_shi8390) reported: May 10, 2:30am. My laptop called it QUITS. It crashed. The RAM can't handle wirra. So anytime I start the dev server, it just lifts its hands and says "release me". I pushed the code to GitHub and now I am using Codespaces, which fails to bypass the Clerk security. Now I don't know.
- キルア (@killua_9102) reported: @neogoose_btw not sure what happened but it's not happening now. I don't think I got the logs for the incorrect run either, because the timestamps were for a later run. I'll post an issue on GitHub if I see it happening again, I guess.
- Dr. Nripanka Das (@DasNripanka) reported: @emollick AI policy is going to follow liability, not capability. Doctors and lawyers have a body to point at when things break. Software has GitHub issues and vibes, so the default reaction becomes procurement theater instead of licensing.
- Learn AI (@learntouseai) reported: @chatgpt21 good idea, for testing local LLMs some $ coding problems on GitHub
- KagariSoft (@KagariSoft) reported: Anecdotes from the development of ROL: Once, the game disappeared forever. Due to a problem with *** (version control software), upon executing a command, the entire project was completely deleted from the disk. Luckily, there were backups on GitHub, but of changes from 24 hours prior. Developer tip! Learn to use *** and GitHub, and keep your repository updated with every change!