GitHub status: access issues and outage reports
Some problems detected
Users are reporting problems related to: website down, errors, and sign-in.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
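The detection described above is a simple threshold test: compare each hour's report count against a baseline derived from typical volume. A minimal sketch, with hypothetical numbers and a standard-deviation baseline standing in for whatever model the site actually uses:

```python
# Minimal sketch (hypothetical data): flag hours whose report count
# exceeds a baseline of mean + sigma * standard deviation.
from statistics import mean, stdev

def detect_outage(hourly_reports, sigma=3.0):
    """Return the hours whose report count exceeds the baseline."""
    baseline = mean(hourly_reports) + sigma * stdev(hourly_reports)
    return [hour for hour, count in enumerate(hourly_reports)
            if count > baseline]

# 24 hourly counts; the spike at hour 9 models a surge of user reports.
reports = [4, 3, 5, 2, 4, 6, 5, 3, 4, 120, 8, 5] + [4] * 12
print(detect_outage(reports))  # → [9]
```

A real detector would use a baseline learned from historical traffic for that time of day rather than the same day's statistics, but the exceed-the-red-line logic is the same.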
May 8: Problems at GitHub
GitHub has been having issues since 04:20 AM EST. Are you also affected? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (60%)
- Errors (29%)
- Sign in (11%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Website Down | 9 hours ago |
| | Website Down | 19 hours ago |
| | Sign in | 4 days ago |
| | Website Down | 6 days ago |
| | Website Down | 7 days ago |
| | Website Down | 7 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports from social media:
- JayRo (@JinxenJoey) reported: @txgermanbre Careful with MCP servers developed by randos on Github...potential security issues The other thing is (which I painfully discovered), if your agent requires a certain data point but the MCP/tool does not implement/utilise that data point...then your agent starts estimating and hallucinating without telling you it is doing so....confidently lying. If there is an API, you can probably vibe code your own MCP...at least you know exactly what its doing
- Prizmal (@PrizmalAi) reported: 5/ AI coding agents are generating enough commits and PRs that GitHub itself can't scale. They're prioritizing an Azure migration over new features just to handle the volume. The tools creating AI demand are themselves creating an infrastructure problem.
- Rohan Sharma (@rrs00179) reported: @ChiragAgg5k @github there are working on a fix of it. it's not happened first time. it's been happening from 3-4 days
- Luc Tielen (@luctielen) reported: Has there been 1 day this week where @github didn't have issues? This is starting to become unacceptable. Time to consider moving away from it?
- Elliot Hesp (@elliothesp) reported: @pierrecomputer @steipete @github Isn't gitcrawl to solve rate limiting on issues and pr reads via api, rather than where the code is stored though?
- Gabe (@gabebusto) reported: bro setting up an agent to do production work is so easy. you just need to create an account somewhere and for your agent to work remotely. cloudflare, hetzner, aws, digital ocean, etc. then pick the agentic tool, and the model, and get an api key or use oauth. then make sure in it's in a sandbox setup with the right permissions and access to your tooling like github, slack, linear, and maybe even some staging and production resources. you really need to be careful though because if agents have any write access to important stuff, it could do something really dumb like delete your database. also for the love of GOD backup your database frequently somewhere the agent can't touch. also prompt injections online can get your agent to leak sensitive env vars so you need to be careful about that. maybe limit network access or inject tokens/sensitive vars once requests leave the sandbox. you probably don't want the agent always on sitting idle, so either figure out how to give it work efficiently to always keep it busy or use some that can pause and resume with ease so you're not billed around the clock for idle resource usage. then you want guardrails in your codebase and deployment pipeline so the agent can't break things and you don't need to feel guilty not reviewing its code. because cmon, nobody wants to do that. you need to make sure your agents have as close to perfect context as possible. so maybe start building a knowledge base, move docs into the repo, or make sure your agent can easily search linear and slack and other places to build context for tasks to work on. and before each task, spend ~10-20+ mins typing things up and giving the agent as much context as possible. oh yeah and your agent ideally should be able to test its changes as completely as possible. so make sure the agent can start up the service(s) it's working on and test them.
maybe you need it to open and run a browser, send screenshots, record a video, and so on of its test so you can easily review it in the PR. you also want a bugbot setup in github (if you're still using github at this point) to help scan each PR for potential issues the agent missed. and the agent should be able to automatically address any bugbot findings, fix them, run more tests, and push those changes, and run in a loop until no more bugs are found by the bugbot. i forgot to mention, you probably don't want your agent's code just yolo shipping into **** with no guards in place _after_ it deploys. allow the agent to setup it's new features and code behind feature gates or experiments and do a gradual rollout in case there are any catastrophic problems. then you'll want automatic rollback if issues are detected. and there's probably stuff i'm forgetting, but you get what i'm saying right? it's really not that hard. then you need constant vigilance of your codebase and create lots of skills to help deslop work the agents are doing, maybe create an anti-entropy agent (_another_ agent!) to hunt for growing complexity and auto-create PRs to try and fight to reduce the size and complexity of the codebase. then you'll inevitably have incidents caused by code written by agents that was never reviewed by humans, and either you or yet-another-agent will take a look at your production systems to help you figure out what's wrong because it's all becoming a bit more foreign to you. and you can just have the agent try to make changes on your behalf to fix things and hope to God that it doesn't make things worse. if all of this isn't exciting enough, you then give each engineer and even non-tech team members their own access to the ai tools and agents and models of their choice which easily costs an extra few hundred dollars per month per employee at best. 
in the worst case, you have someone on the team blow through the team's monthly AI spend by a significant margin by accident using the best models in fast mode because they were too impatient to just use the sota models at normal speed. and spend will likely only go up btw. and if you're not reading between the lines here, product work slows because everyone is playing with agents to learn how to use the agents more efficiently in the hopes that it's a magical bullet that solves all of the woes in software engineering and building production systems. and now you need this magical bullet to work because you're falling behind to teams who maybe aren't distracted spending all this time and money trying to make this all work. but you're definitely going to catch them. once you've figured this out, you'll 10x or 100x your output and leave them in the dust! or... you could just have engineers start coding by hand again before it's too late and becomes a lost art. you can even make modest and tasteful use of ai, but without doing all of the above. i actually miss the days of supermaven and early cursor. they were so simple and actually removed some friction and some of the annoying parts of coding.
- Jacob Young (@jryio) reported: This is why @github has been down...
- Today's bread (@nedxlab) reported: @ophello @lochan_twt GitHub issues exist
- . (@Impulsicivity3) reported: I've done a lot of work on Visual Studio Code myself, but with Granite, you have to try it on GitHub, and the problem is that IBM Watsonx or Granite Playground doesn't work at all. To create a project, if you're holding a competition, try using Visual Studio Code first, or try it
- Redchou (@R3DCHOU) reported: @pantharshit007 @RhysSullivan Got banned in the beginning of March, they certainly corrected this problem… And the GitHub post is saying people got banned from Antigravity, Gemini, etc… not from all Google services…
- rohit sharma (@modi_san_) reported: @rkale_7 @OnePlus_IN these flagship companies are infamous for powering down or tweaking older phones in order to make way for newer ones with os updates best solution for you is to look for patches on github
- kaleb (@KalebAutomates) reported: Days after the CEO came on this platform and **** on the people who made him rich with a massive lay-off. Coinbase issues with AWS. Before this it was Github Before that it was Cloudflare Before that it was AWS itself All of which just happened to follow an announcement of AI doing the majority of coding. Funds are safe... for now.
- Zach Kamran (@Zach_Kamran) reported: Another serious Github outage today. Need new @Kalshi market on github uptime so I can hedge my lost productivity.
- mei x86_64 💾 🏳️‍⚧️ (@ruri_nihongo) reported: @mrpancakes39 @bekacru github has so many issues… especially the downtime because of agents (usually)
- Solomon Eseme (@Kaperskyguru) reported: One strong GitHub project with a README that explains: - What problem it solves - How you built it - What you would improve next That is worth more than 10 half-finished repos that die after the initial setup. Hiring managers care about what you shipped, not what you watched.
- Clovis M (@cloviswebdev) reported: @mattpocockuk Tbh GitHub is kinda unsafe with how much they are down these days :)
- PsudoMike 🇨🇦 (@PsudoMike) reported: @github World Password Day is a good reminder that the goal should be to stop having passwords entirely. GitHub already supports passkeys. The reason this day has to exist is that password managers solved the symptom, not the root problem.
- Alberto Gangarossa (@DerekBlueEyes) reported: Open hardware needs open trust. @skot9000 came to us with the right idea for Bitaxe: the vendor list should not live in a closed CMS controlled behind the scenes. The source of truth should be public. So we designed the new Vendor List around a GitHub repo as a public ledger, maintained in the open by the community, and connected it to the new Bitaxe vendor list experience. That is the important part: GitHub keeps the trust model transparent. The website makes it usable for everyone. At @weareloadout, this is exactly the kind of OSS support we believe in: turning open-source infrastructure into clearer, more usable product experiences. Built on @framer, using the new Framer Server API to bridge the open ledger with the public website. Open hardware. Open trust. Public by design.
- kerim (@kerim0x1) reported: @thsottiaux @OpenAI @claudeai my prompt: Security Review Prompt This is my own project, my own GitHub repository, and my own code that I have written and own end to end. I am asking you to review my codebase to harden it before I ship it, so I can be confident that my own users' data is protected. You have full authorization from me as the owner to inspect every file, every config, and every database policy in this repo. Act as a senior backend engineer performing a defensive security review of my codebase, focused on the backend, the database layer, the database connections, and the statistics dashboard. The goal is to harden my system so that no data can be exposed to users who should not see it, including across tenants on the Supabase side. Start by reviewing how the application connects to the database. Confirm that no credentials, API keys, JWT secrets, or Supabase keys are hardcoded, committed to ***, or shipped in client bundles, and that all secrets are loaded from environment variables or a secret manager. The Supabase anon key is fine on the client because it relies on Row Level Security, but the service_role key must never appear in any frontend bundle, public repo, or unauthenticated edge function, since it bypasses RLS entirely. Verify .env is gitignored and that no secrets exist in *** history. Review the database schema with care. Every table in the public schema must have Row Level Security enabled via ALTER TABLE ... ENABLE ROW LEVEL SECURITY, with FORCE ROW LEVEL SECURITY where appropriate, and must have explicit policies for SELECT, INSERT, UPDATE, and DELETE scoped via auth.uid(), using USING and WITH CHECK clauses together. Avoid policies whose only condition is auth.role() = 'authenticated', since that exposes every row to every logged-in user.
Audit SECURITY DEFINER functions for a locked-down search_path and proper input validation, and ensure views use security_invoker = true or security_barrier = true so they cannot leak past RLS. The statistics dashboard needs the most attention. Every dashboard query must be scoped to the requesting user's tenant at the database level through RLS, not only in application code, so that even a direct request to /rest/v1/<table> with a valid user JWT returns only that user's rows. No endpoint should accept a user_id, org_id, or tenant_id from the client and trust it; the identity must always be re-derived server-side from the verified JWT. Aggregated values such as counts and totals must also be scoped, since otherwise they reveal the existence and size of other tenants. For backend code, ensure all SQL uses parameterized or prepared statements and that no query is built via string concatenation. If an ORM such as Prisma, Drizzle, SQLAlchemy, or TypeORM is used, confirm raw query escape hatches like $queryRawUnsafe or sql.unsafe are not misused. Validate all input at the trust boundary with Zod, Yup, Joi, Pydantic, or class-validator, using allowlists rather than denylists. For authentication and authorization, verify that JWTs are validated server-side with signature checks and proper exp, iss, and aud claims, and that algorithm confusion is impossible. Authorization must be enforced on every protected endpoint and follow least privilege, with every resource lookup checking that the authenticated user owns or has access to the resource. Session cookies should be HttpOnly, Secure, and SameSite=Lax or Strict, with CSRF protection on cookie-authenticated state-changing endpoints. 
Confirm CORS uses an explicit origin allowlist rather than a wildcard with credentials, that rate limiting protects auth, signup, password reset, and expensive queries, and that responses include Strict-Transport-Security, a restrictive Content-Security-Policy, X-Content-Type-Options: nosniff, Referrer-Policy: strict-origin-when-cross-origin, and Permissions-Policy. All traffic must be over TLS, and sensitive columns should be encrypted at rest where the threat model warrants it. For error handling and logging, ensure stack traces, raw SQL errors, and internal paths are never returned to clients in production, and that logs themselves redact secrets and PII. Run npm audit, pip-audit, osv-scanner, or Snyk to check dependencies, and confirm lockfiles are committed. Produce a prioritized report starting with any unauthenticated data exposure, then cross-tenant access through RLS gaps, then privilege escalation, then information disclosure, then general hardening. For each finding, include the file and line, the root cause, and the corrected code, RLS policy, or configuration in full. Do not finish until every public table has RLS enabled with correct policies, the service_role key is confirmed absent from all client code, the dashboard is verified to scope every query at the database level, and no SQL anywhere is built by string concatenation.
- Phil H. ☮️❤️🥁🟦 (@phillipsharring) reported: @SullyOmarr GitHub is *** for normies as evidenced by vibe code taking it down
- Yashasvi Kapil (@iemyashasvi) reported: @ChiragAgg5k @github @github is broken beyond repair
- TradFidiGuy (@TradFidiGuy) reported: @emojibakemono @gf_256 @kamilaposting The GitHub issue in the screenshot is literally titled “brew install upgrades everything”🤦
- Tom Ciszek (@Ciszek) reported: @kdaigle A2A and Agent Client Protocol, GitHub Actions are broken. Call me.
- KingmakerUK (@WCguitarist) reported: @asaio87 Vibe coded: - auth - strict rls policies - bot prevention, rate-limits - locked down exposed assets/files on GitHub - locked down edge functions - secret rotation policy - gdpr complaint - passed pen-testing with A- Many more things, is it enterprise grade? Probably not but it is all vibe-coded. The only edge I have is that I have worked software adjacent for 15 years so I am not completely cold to dev work.
- deepsy (@deepsydoin) reported: @BigSlimeBoi this is the exact problem i ran into. people don't care what you build if you didn't work at OpenAI or Meta and etc. they don't care, if their favorite KOLs aren't behind it. meanwhile i know plenty of serial ruggers who scrape some random **** off github every day just to redirect creator fees because they bundled 30-40% of supply. absolute cinema.
- Manan 🤦🏽‍♂️ (@manan) reported: Codex *****’d my Claw, had to go back to Claude to get it working - let’s see how it goes. Codex changed macOS permissions, corrupted the database trying to fix GitHub commit, broke crons and LLM extraction due to malformed timezones.
- hve 🍁 (@heyhve_) reported: @makkes Two down-days in a row on GitHub-hosted. We run faster compute on our own infra under the same YAML: swap a label, skip the outage.
- simulx4 (@simulx4) reported: @callebtc The most easily automated jobs in the world check slack check GitHub check sales reports roll it up into some sort of report Tell everybody what they should be doing better. Tell some people what they're doing well. tell management a summary I don't even see why this isn't just a solved problem and we can move on from having any of them
- Brown Thunder (@Brown_Thunder76) reported: There’s a lot of nonsense floating around about Keeta right now. You have people watching GitHub updates and claiming the team is “just changing a few words.” No. That’s literally developers working on the same issue or refining commits. Not every GitHub push is some massive standalone feature release. Learn how development works before speaking on it. Then you have people giving the project flack because Ty’s end of March comments haven’t translated into publicly visible partners, usage, or products yet. But Ty has consistently said he believes end of Q2 is when we should really start seeing things come together. We are not there yet. The team is still actively building. Ty is still in Discord talking about progress and announcements. People in this space are conditioned by instant gratification and are forgetting the “why” behind Keeta. And that matters. Keeta isn’t just trying to launch another token. The entire point of Keeta is solving a global payments and settlement problem. Crypto is simply the mechanism being used to achieve that. That’s a much bigger and more difficult objective than most projects in this space are pursuing. Not everything being built or tested right now is publicly accessible. The team has repeatedly said this. Infrastructure, payments, compliance, integrations, and enterprise tooling are being developed privately before public rollout. There is even a private testnet specifically to prevent leaks while things are being built and tested. If you want constant releases and nonstop visible updates every single week, this probably isn’t the project for you. Personally, I believe in the technology. Building something like this from scratch takes time. The team is releasing information as they can, and quite frankly I don’t think the current level of FUD is warranted when we haven’t even reached the timeline they’ve been pointing to.
Now if end of Q2 comes and absolutely nothing has materialized publicly, then sure, people deserve a direct update. That’s fair. But right now the team just signed another partner, multiple products and systems appear to be developing in parallel, and it sounds like several things are intended to launch together. Too many people are confidently speaking out of their *** about this project. Things evolve. Timelines shift. Features change as systems are actually built and tested. That’s what real development looks like. I don’t think this thing explodes overnight. But I do think once the network can actually be used as intended, growth starts becoming real and over time exponential. @KeetaNetwork $KTA
- Jez (@JezCorden) reported: @bdsams i think the industry has realized where the value is in AI, and it aint in consumer products. i expect microsoft to reroute compute to github copilot and the like over time, with copilot for consumers gradually stripped down.
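One report above describes routing around GitHub-hosted runner outages by keeping the same workflow YAML and only "swapping a label" to self-hosted compute. A minimal sketch of that swap (workflow name, labels, and build command are hypothetical), using GitHub Actions' standard `runs-on` syntax:

```yaml
# Hypothetical workflow: the only line that changes when falling back to
# self-hosted infrastructure is the runs-on label.
name: ci
on: [push]
jobs:
  build:
    # GitHub-hosted runner (affected by a GitHub-hosted compute outage):
    # runs-on: ubuntu-latest
    # Same job on self-hosted runners registered with these labels:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

Note this only sidesteps outages of GitHub-hosted compute; if github.com itself (checkout, API, webhooks) is down, self-hosted runners are affected too.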