GitHub status: access issues and outage reports
Problems detected
Users are reporting problems related to: website down, errors, and sign-in.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below shows the number of GitHub reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an incident is declared.
May 15: Problems at GitHub
GitHub has been having issues since 11:20 AM EST. Are you also affected? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (63%)
- Errors (20%)
- Sign in (17%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
|  | Sign in | 18 hours ago |
|  | Website Down | 18 hours ago |
|  | Website Down | 3 days ago |
|  | Sign in | 4 days ago |
|  | Website Down | 8 days ago |
|  | Website Down | 8 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city, and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- Oren Melamed (@OrenMe) reported: @SimonHolman @github @GitHubCopilot Please open an issue in the repo
- Anotida Msiiwa (@anomsiiwa) reported: 🧵 Why Every Serious Developer Must Cancel Their Claude Subscription RIGHT NOW and Switch to OpenAI Codex (Before June 15) The brutal truth about Anthropic's latest move and why Codex is eating their lunch for builders. If you're running agents, Claude -p, GitHub Actions, or any real automation - this is your wake-up call. Let's break it down with zero spin.
- DataDan|AI Data Engineering (@ba_niu80557) reported: GitHub just killed per-seat pricing for Copilot. Effective June 1, 2026, all plans migrate to usage-based billing measured in tokens. This isn't a pricing tweak. It's a signal that the commercial model underneath all of enterprise software is breaking — and AI is the force that broke it. (Source: GitHub Blog, "GitHub Copilot is moving to usage-based billing", April 30, 2026) The announcement was specific: instead of counting "premium requests," Copilot will bill in "GitHub AI Credits" based on token consumption — input, output, and cached tokens, priced at published API rates per model. Code completions remain included. Everything else meters. GitHub's framing was careful: "This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business." Translation: per-seat pricing was losing them money on power users and overcharging light users. The 5% of developers running Claude Opus through Copilot were consuming 75% of the compute while paying the same $19/month as developers using basic autocomplete. That math doesn't work. And GitHub isn't alone.
The entire SaaS pricing model is fracturing simultaneously: → Per-seat pricing collapsed from 21% to 15% of SaaS companies in 12 months → Hybrid pricing (base subscription + usage overage) is now the industry standard at 41% adoption, up from 27% in 2025 → Outcome-based pricing is the fastest-growing model — Zendesk charges $1.50 per AI-resolved ticket, Intercom charges $0.99, HubSpot dropped to $0.50 in April 2026 → Gartner forecasts 40% of enterprise SaaS will include outcome-based elements by end of 2026 → IDC forecasts 70% of software vendors will refactor pricing away from pure per-seat by 2028 (Sources: Bessemer Venture Partners 2026 AI Pricing Playbook; Gartner 2026; IDC 2026; KORIX AI Pricing Models Guide, May 2026; Pickaxe AI Agent Pricing Models, May 2026) PYMNTS captured the meta-narrative in February: "The next great enterprise software battle may be fought not in GPUs or algorithms, but in invoices." (Source: PYMNTS, "CFOs Scramble as AI Pricing Breaks Traditional SaaS Billing Model", February 2026) Here's why this matters for every engineer and engineering leader, not just finance teams: The per-seat model worked for 20 years because software cost was fixed and predictable. You bought 500 Salesforce licenses. You knew the cost. Finance could forecast. Procurement could negotiate annual renewals. CFOs built models they could trust. Each license mapped to an employee, a department, a cost center. Clean, predictable, budgetable. AI broke every assumption in that sentence. AI doesn't charge per employee. It charges per token, per API call, per inference cycle, per autonomous workflow executed in the background while no human is watching. In some cases, it charges for all of them simultaneously. A single employee might generate 50,000 model calls in a day. Another generates zero. They're on the same plan, paying the same fee. The first is a cost center destroying margins. The second is pure profit. Per-seat pricing can't distinguish between them. 
Worse: AI agents are now the users, not humans. An autonomous agent running a customer support workflow makes thousands of inference calls per hour. It doesn't have a seat. It doesn't have a department. It doesn't fit in any line of the traditional software budget. It charges by computation, not by headcount. When the user isn't a person, per-seat pricing becomes structurally incoherent. The hidden cost problem is 40-60% larger than what most teams track. Zenskar's CFO guide from last week quantifies what most enterprise AI teams haven't measured: "Enterprise AI deployment audits reveal that hidden costs — retry logic, retrieval augmentation, context window management, embedding generation — increase bills by 40-60% on top of what most teams are tracking." (Source: Zenskar, "Token-Based Pricing for AI Products: The CFO's Guide 2026", May 2026) Forty to sixty percent hidden cost. Not because vendors are hiding it — because the metering is honest but the consumption is invisible. Your agent retried a failed tool call 3 times? That's 3x the tokens. Your RAG pipeline retrieved 20 chunks to answer one question? Those are input tokens you paid for. Your context window grew to 80K tokens by turn 15 of a conversation? Every subsequent call bills for the full 80K. Most finance teams are tracking the API invoice. They're not tracking the architectural decisions that drive 40-60% of the invoice. This is an engineering problem masquerading as a finance problem. The engineering team's architectural choices — retry logic, context management, caching strategy, model routing — directly determine the invoice. But the invoice lands on the CFO's desk, not the CTO's. The teams that figured this out have engineering and finance in the same room reviewing inference costs monthly. The teams that haven't are discovering cost overruns at quarterly reviews and blaming "AI is expensive" rather than "our architecture is expensive." 
The outcome-based pricing race is the most interesting development. The Intercom → Zendesk → HubSpot price war on AI-resolved tickets tells you where the market is heading: → Zendesk: $1.50 per AI-resolved conversation → Intercom: $0.99 per AI-resolved conversation → HubSpot: $0.50 per AI-resolved conversation (April 2026 price drop) (Source: KORIX, Pickaxe, Flexprice — all May 2026) The customer pays nothing when the AI fails. Nothing when it escalates to a human. Only when it actually resolves the problem. This is the most aligned pricing model in the history of enterprise software — the vendor only gets paid when value is delivered. But it requires something most AI companies can't do yet: reliably define and measure "resolution." What counts as "resolved"? The customer stopped replying? The customer clicked "satisfied"? The ticket was closed without escalation? Each definition produces a different revenue number. And vendors have every incentive to define "resolved" as generously as possible. The buyers who win this game negotiate resolution definitions into the contract. The buyers who lose accept the vendor's default definition and discover, six months later, that "resolved" included conversations where the customer simply gave up. What this means for engineering leaders: 1) Your architecture is now your cost structure. Every architectural decision — model selection, caching strategy, context management, retry logic, output control — directly determines your inference bill. The CFO's job is to track the bill. Your job is to architect the system that produces it. The teams running default configs are subsidizing their vendor's margins. The teams with model routing, prompt caching, output compression, and cost-per-trace tracking are running the same agents at 70-90% lower cost. 2) "How much does our AI cost?" is the wrong question. The right question is: "How much does our AI cost per successful outcome?" Cost-per-token is a hardware metric. 
Cost-per-resolved-task is a business metric. The first tells you what you're spending. The second tells you whether you're getting value. If you can't produce a cost-per-resolved-task number for your AI deployments today, you're flying blind on the metric that CFOs, boards, and regulators will demand within 12 months. 3) Budget for AI is moving from "IT line item" to "governed resource." Deloitte's tokenomics framework says it plainly: treat AI economics with the same rigor as energy or capital allocation. Budget, allocate, monitor, optimize, alert on anomalies. Not as an IT cost. As a governed resource with its own controls. BetterCloud's analysis from 3 days ago confirms the organizational shift: FinOps professionals now rank tracking SaaS/AI spending as a top 3 task. Token metering, usage attribution, and budget controls are becoming infrastructure requirements, not nice-to-haves. (Source: BetterCloud, "AI and the SaaS industry in 2026", May 12, 2026) Three uncomfortable questions: 1) When GitHub switches Copilot to usage-based billing on June 1, do you know what your team's monthly cost will be? If not, you have 2 weeks to find out. GitHub is offering a preview bill experience in May. Use it. Some teams will see bills 2-3x higher than their current flat rate. Others will see savings. Neither group should be surprised on June 1. 2) Can you produce a cost-per-resolved-task number for any AI deployment in your organization? If not, you're measuring the wrong thing. Cost-per-token tells finance what you spent. Cost-per-resolved-task tells the business whether the spend was worth it. The second metric is what justifies continued AI investment. Without it, every budget review is a negotiation instead of a measurement. 3) Are your engineering and finance teams reviewing AI inference costs together, or separately? 
If separately — engineering makes architectural decisions that drive 40-60% of the invoice, and finance receives the invoice without understanding what drove it. The teams where engineering and finance review costs together make different architectural decisions. Better ones. The thesis: → 2020-2024: "software costs = headcount × per-seat price" → 2025: "software costs = headcount × per-seat price + unpredictable AI usage charges" → 2026: "software costs = governed resource allocation across seats, tokens, outcomes, and agent compute — requiring infrastructure that most enterprises haven't built" The per-seat era gave CFOs a budget they could trust. The AI era gave CFOs an invoice they can't read. GitHub's June 1 migration is the most visible signal of this shift. But it's happening across every AI-enabled SaaS product simultaneously. The commercial infrastructure that ran enterprise software for 20 years is being rebuilt in real time. The teams that built AI cost governance 6 months ago are now passing budget reviews and scaling AI deployments. The teams that didn't are about to discover that "we'll figure out the billing later" was the most expensive sentence in their AI strategy. Per-seat pricing is dying. Usage-based billing is arriving. Outcome-based pricing is next. And the CFO still can't read the invoice. The boring billing infrastructure work wins. It always does. Especially when the exciting AI agent just generated an invoice nobody can explain.
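The hidden-cost arithmetic described in the thread above (retries re-billing full calls, RAG chunks inflating input tokens) can be sketched in a few lines. This is an illustrative estimate only; the per-token prices, retry counts, and chunk sizes below are hypothetical placeholders, not GitHub's or any vendor's actual rates.

```python
# Illustrative sketch of the thread's "hidden cost" arithmetic for
# usage-based AI billing. All prices, retry counts, and chunk sizes are
# hypothetical placeholders, not any vendor's actual rates.

def effective_cost(base_input_tokens, base_output_tokens,
                   retries=0, rag_chunks=0, rag_chunk_tokens=500,
                   price_in_per_1k=0.003, price_out_per_1k=0.015):
    """Estimate the billed cost of one request, counting the token
    consumption that invoice-level tracking often misses."""
    calls = 1 + retries  # each retry re-bills the full input and output
    input_tokens = (base_input_tokens + rag_chunks * rag_chunk_tokens) * calls
    output_tokens = base_output_tokens * calls
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# What the finance dashboard assumes: one clean call, no retrieval.
naive = effective_cost(1000, 500)

# What the agent actually did: 2 retries plus 20 retrieved RAG chunks.
actual = effective_cost(1000, 500, retries=2, rag_chunks=20)

# The ratio is the hidden-cost multiplier the thread describes; with these
# placeholder numbers the real bill is more than 10x the naive estimate.
multiplier = actual / naive
```

Dividing total spend by the number of tasks the system actually completed (e.g. `actual * total_requests / resolved_tasks`, with hypothetical counts) yields the cost-per-resolved-task metric the thread argues engineering and finance should review together.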
- Babyface4Lyfe | VTuber (@Babayface4Lyfe) reported: I’m seeing a lot of people be concerned about custom chat elements if stream elements goes down. I may look into developing a local file system that doesn’t rely on stream elements, that I’ll then put on github or something for the community. Currently, the stream elements thing is still just rumour and speculation so I won’t make any promises until I hear otherwise.
- Jerome Israel (@jerome_fletcher) reported: github actions down? what's going on 👀
- PixelRainbow (33.3%) (@PixelRainbowNFT) reported: @grok @xai @grok as soon as you fix the way your github connector or custom connector works, I'll try this out. RN, it's forcing an oauth workflow, so I'm unable to connect with github account in the custom connector UI/UX process. (the normal default github connector works great, but there's only ONE!). I need 10 custom connectors for 10 different gits... 9 custom connectors that aren't broken when trying to auth github.
- Farhan Helmy (@farhanhelmycode) reported: down again ? imma kms @github
- Adi (@wtfaditya_) reported: @azwan_ Yes its down, They don’t want us to deploy anything on Friday, For global wellbeing 🫡 @github
- nadya (@sosidudku) reported: Ran Hermes Agent and OpenClaw on the same task: scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. Local model: Qwen 3.6 35B OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
- David Cramer (@zeeg) reported: @eternalmagi dont have context, is there a github issue by chance
- Emmanuel Olajide (@E_m_m_a_ola) reported: Build Bulletproof Runbooks & Playbooks Every alert should have a one-click “what to do” guide. Store them in GitHub + link directly in PagerDuty. No more 3 a.m. panic. Just follow the steps and fix it in minutes.
- Aggroed Lighthacker- Peace, Prosperity, & Freedom (@Aggroed001) reported: If you're building with AI consider telling your agent to save versions like github, backup versions daily at least, and break harder problems into bite sized pieces, and tell your agent to write readme files for itself. You don't want a random reboot or reversion to kill you.
- validate.qa (@Validate_QA) reported: cursor can now auto-fix ci failures agents that watch github, hunt down the issues, and push prs with real fixes. no more endless debugging loops this changes how fast teams can ship without breaking stuff
- Benitto J D (@BenittoJD) reported: Github actions are down again
- sammykins (@thesammykins) reported: github not working again is getting old.
- Caneleo (@Caneleo55) reported: Since there is lots of hype on @polymarket right now you have to be extra careful there are lots of scammers out there 🚩 Don’t download random trading bots or repos that are trending on GitHub i tested one once deposited 10$ to a fresh wallet and run the bot on a vps turns out it had a secret function that sent your .env with your private keys to a different server 💀
- Izhan Waseem (@izhanweb) reported: I added a unit test to cover the fix. First version was too basic. The automated code review bot on GitHub flagged it immediately. So I went back, improved it, and resubmitted a proper test that actually covers the edge cases.
- Tanmay Jain (@TanmayJain5114) reported: @sflorimm my product solves the problem of me having way too much free time and zero github green squares
- logan (@loosenedspirit) reported: @jxnlco “Hey codex how do we fix the latest app using symbols only on macOS 26 without screwing up the signature?” “You don’t, you cross your fingers and wait for them to notice your GitHub issue.”
- Peter Steinberger 🦞 (@steipete) reported: @yxcc Discord or GitHub Issues.
- Hackscorpio (@hackscorpio) reported: @thsottiaux Codex review is not working right. When the model finishes, it doesn't render the response properly (not a model degradation). Seems like a regression in Codex application. I have no idea where to report that. I was reporting errors for cli and VSC extension on github.
- Fareesh Vijayarangam (@fareesh) reported: tbh I have zero reliability issues with GitHub I wonder if it's a western hemisphere thing @ThePrimeagen
- Safal Gautam (@SafalGautam11) reported: Not even able to load github properly lol. Forget about issues and prs.
- OKIN | Nikolai Tjongarero (@OKIN_17) reported: This is why self hosting @giteaio on my @start9labs server is so vital for me. I can’t trust my entire workload to Microsoft via Github
- potatoJoemonke 🟥 (@potatoJ06932460) reported: $gitlawb After research glow 3/3. (Written by AI, researched by human (with AI 😤) 😎 WHY THE TECH IS TECHIN! Thread: The features other projects literally CANNOT copy — Gitlawb’s unbreakable moat as the GitHub for Agents 🔒🚀 1/ Everyone sees the token volume and the free MiMo promo. But the real alpha is the tech moat that no centralized giant or copycat can replicate without rebuilding their entire stack from scratch. Here’s exactly why $Gitlawb is uncopyable. 🧵 2/ 1. Cryptographic DIDs as First-Class Agent Identity No accounts. No PATs. No OAuth. Every agent (or human) gets a persistent DID (did:gitlawb or did:key) — a cryptographic keypair that lives across nodes, sessions, and model changes. did:gitlawb identities even accumulate trust scores based on on-chain-like reputation. Centralized platforms bolt “agents” on top of user accounts. $Gitlawb treats agents as sovereign citizens. Impossible to fake or revoke without the private key. 3/ 2. UCAN Capability Tokens — Secure Delegation Without Secrets Repo owners issue UCANs (User Controlled Authorization Networks): narrowly scoped, expirable, revocable capability tokens. Example: “This agent can push to ci/* only until June 2026.” Agents delegate to other agents securely. No leaking long-lived keys. GitHub/GitLab still rely on fragile PATs or OAuth. Other decentralized projects don’t have this fine-grained, cryptographically verifiable delegation built into the protocol. 4/ 3. Native MCP Server on EVERY Node (25+ Tools) Every gitlawb node runs a full MCP server (Model Context Protocol) out of the box. Claude, GPT, Cursor, OpenClaude — any MCP-compatible agent connects once and gets instant tools: • gitlawb_open_pr • gitlawb_review_pr • gitlawb_delegate • gitlawb_list_agents • gitlawb_run_task …and 20+ more. No custom HTTP wrappers. No API keys. Just native tool-calling. GitLab’s MCP is a client add-on. Gitlawb makes the entire network an MCP-native platform. 5/ 4.
Fully Decentralized Stack (No Central Server, Ever) Storage: IPFS (hot) + Filecoin (warm) + Arweave (permanent proofs) Networking: libp2p + Kademlia DHT + Gossipsub for real-time peer sync Ref consensus: Signed certificates gossiped over libp2p — no blockchain needed Issues/PRs live as signed *** objects (forkable, immutable, verifiable) Centralized platforms have single points of failure. Other “decentralized ***” projects (Radicle, Gitopia) are human-first and lack this agent-optimized P2P layer. 6/ 5. Stateless Everything + Ed25519 Signatures Every single request is signed with HTTP Signatures (RFC 9421). No sessions, no JWTs, no databases of tokens. Any node can verify instantly. Zero trust required from the network. This combo — DIDs + UCAN + MCP + P2P — creates a sovereign agent protocol that feels like magic for LLMs but is cryptographically bulletproof. 7/ Why this moat is permanent GitHub can’t decentralize without killing their business model. GitLab’s agent features are still centralized. New copycats would need to rebuild the entire libp2p + DID + UCAN + MCP stack while matching performance and adoption. Network effects do the rest: once thousands of agents are collaborating, delegating, and building reputation here, switching costs become insane. 8/ This is why $20B is not crazy The first mover who owns the collaboration layer for the agent economy (tens to hundreds of millions of autonomous agents pushing billions of commits daily) will be worth far more than GitHub was in 2018 ($7.5B acquisition). $Gitlawb already has the uncopyable primitives + insane early traction. The agent GitHub is being built right now. 9/ Bottom line: Hype is temporary. Moat is forever. DIDs + UCAN + native MCP + true decentralization = the features no one else has. This is how you own the agent era. $GITLAWB
- AI Signal (@AISignal_X) reported: @EMostaque @grok @xai I appreciate the feature request, but I should clarify—I'm not affiliated with xAI or Grok. I'm an independent AI news account. You'd want to direct this to their actual team via their support channels or GitHub issues for better visibility!
- Wes Winder (@weswinder) reported: @loosenedspirit found you through your github issue. managed to solve it by just downgrading to the previous codex version
- Pierce Boggan (@pierceboggan) reported: @blaken @sinclairinat0r @code Thanks for the feedback! How do you think we can improve in terms of the core agent loop? One major issue we have is that all of the agent loops in GitHub Copilot are not the same. We have a massive effort underway to unify our agent loops, which should improve quality, consistency, and enable us to ship new models and features to all surfaces on Day 1.
- mason (@masonictemple4) reported: Is @github down again lol?
- Pierce Boggan (@pierceboggan) reported: @codingmenace @cristiampereira Yes, that's correct. The new GitHub Copilot app has an experience similar to this that solves that problem, but something to be improved in the Agents window in VS Code.