GitHub Outage Map
The map below depicts the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Tlalpan, CDMX | 1 |
| Quilmes, BA | 1 |
| Bengaluru, KA | 1 |
| Yokohama, Kanagawa | 1 |
| Gustavo Adolfo Madero, CDMX | 1 |
| Nice, Provence-Alpes-Côte d'Azur | 1 |
| Brasília, DF | 1 |
| Montataire, Hauts-de-France | 3 |
| Colima, COL | 1 |
| Poblete, Castilla-La Mancha | 1 |
| Ronda, Andalusia | 1 |
| Hernani, Basque Country | 1 |
| Tortosa, Catalonia | 1 |
| Culiacán, SIN | 1 |
| Haarlem, NH | 1 |
| Villemomble, Île-de-France | 1 |
| Bordeaux, Nouvelle-Aquitaine | 1 |
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports on social media:
- Lakshmi Tanmay (@lakshmitanmay) reported: Github is the only platform/service I believe that genuinely needs a proper rewrite… clearly something fundamental is broken.
- DataDan|AI Data Engineering (@ba_niu80557) reported: GitHub just killed per-seat pricing for Copilot. Effective June 1, 2026, all plans migrate to usage-based billing measured in tokens. This isn't a pricing tweak. It's a signal that the commercial model underneath all of enterprise software is breaking — and AI is the force that broke it. (Source: GitHub Blog, "GitHub Copilot is moving to usage-based billing", April 30, 2026)

The announcement was specific: instead of counting "premium requests," Copilot will bill in "GitHub AI Credits" based on token consumption — input, output, and cached tokens, priced at published API rates per model. Code completions remain included. Everything else meters.

GitHub's framing was careful: "This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business." Translation: per-seat pricing was losing them money on power users and overcharging light users. The 5% of developers running Claude Opus through Copilot were consuming 75% of the compute while paying the same $19/month as developers using basic autocomplete. That math doesn't work. And GitHub isn't alone.
The entire SaaS pricing model is fracturing simultaneously:

→ Per-seat pricing collapsed from 21% to 15% of SaaS companies in 12 months
→ Hybrid pricing (base subscription + usage overage) is now the industry standard at 41% adoption, up from 27% in 2025
→ Outcome-based pricing is the fastest-growing model — Zendesk charges $1.50 per AI-resolved ticket, Intercom charges $0.99, HubSpot dropped to $0.50 in April 2026
→ Gartner forecasts 40% of enterprise SaaS will include outcome-based elements by end of 2026
→ IDC forecasts 70% of software vendors will refactor pricing away from pure per-seat by 2028

(Sources: Bessemer Venture Partners 2026 AI Pricing Playbook; Gartner 2026; IDC 2026; KORIX AI Pricing Models Guide, May 2026; Pickaxe AI Agent Pricing Models, May 2026)

PYMNTS captured the meta-narrative in February: "The next great enterprise software battle may be fought not in GPUs or algorithms, but in invoices." (Source: PYMNTS, "CFOs Scramble as AI Pricing Breaks Traditional SaaS Billing Model", February 2026)

Here's why this matters for every engineer and engineering leader, not just finance teams: the per-seat model worked for 20 years because software cost was fixed and predictable. You bought 500 Salesforce licenses. You knew the cost. Finance could forecast. Procurement could negotiate annual renewals. CFOs built models they could trust. Each license mapped to an employee, a department, a cost center. Clean, predictable, budgetable.

AI broke every assumption in that sentence. AI doesn't charge per employee. It charges per token, per API call, per inference cycle, per autonomous workflow executed in the background while no human is watching. In some cases, it charges for all of them simultaneously. A single employee might generate 50,000 model calls in a day. Another generates zero. They're on the same plan, paying the same fee. The first is a cost center destroying margins. The second is pure profit. Per-seat pricing can't distinguish between them.
Worse: AI agents are now the users, not humans. An autonomous agent running a customer support workflow makes thousands of inference calls per hour. It doesn't have a seat. It doesn't have a department. It doesn't fit in any line of the traditional software budget. It charges by computation, not by headcount. When the user isn't a person, per-seat pricing becomes structurally incoherent.

The hidden cost problem is 40-60% larger than what most teams track. Zenskar's CFO guide from last week quantifies what most enterprise AI teams haven't measured: "Enterprise AI deployment audits reveal that hidden costs — retry logic, retrieval augmentation, context window management, embedding generation — increase bills by 40-60% on top of what most teams are tracking." (Source: Zenskar, "Token-Based Pricing for AI Products: The CFO's Guide 2026", May 2026)

Forty to sixty percent hidden cost. Not because vendors are hiding it — because the metering is honest but the consumption is invisible. Your agent retried a failed tool call 3 times? That's 3x the tokens. Your RAG pipeline retrieved 20 chunks to answer one question? Those are input tokens you paid for. Your context window grew to 80K tokens by turn 15 of a conversation? Every subsequent call bills for the full 80K. Most finance teams are tracking the API invoice. They're not tracking the architectural decisions that drive 40-60% of the invoice.

This is an engineering problem masquerading as a finance problem. The engineering team's architectural choices — retry logic, context management, caching strategy, model routing — directly determine the invoice. But the invoice lands on the CFO's desk, not the CTO's. The teams that figured this out have engineering and finance in the same room reviewing inference costs monthly. The teams that haven't are discovering cost overruns at quarterly reviews and blaming "AI is expensive" rather than "our architecture is expensive."
The outcome-based pricing race is the most interesting development. The Intercom → Zendesk → HubSpot price war on AI-resolved tickets tells you where the market is heading:

→ Zendesk: $1.50 per AI-resolved conversation
→ Intercom: $0.99 per AI-resolved conversation
→ HubSpot: $0.50 per AI-resolved conversation (April 2026 price drop)

(Source: KORIX, Pickaxe, Flexprice — all May 2026)

The customer pays nothing when the AI fails. Nothing when it escalates to a human. Only when it actually resolves the problem. This is the most aligned pricing model in the history of enterprise software — the vendor only gets paid when value is delivered. But it requires something most AI companies can't do yet: reliably define and measure "resolution."

What counts as "resolved"? The customer stopped replying? The customer clicked "satisfied"? The ticket was closed without escalation? Each definition produces a different revenue number. And vendors have every incentive to define "resolved" as generously as possible. The buyers who win this game negotiate resolution definitions into the contract. The buyers who lose accept the vendor's default definition and discover, six months later, that "resolved" included conversations where the customer simply gave up.

What this means for engineering leaders:

1) Your architecture is now your cost structure. Every architectural decision — model selection, caching strategy, context management, retry logic, output control — directly determines your inference bill. The CFO's job is to track the bill. Your job is to architect the system that produces it. The teams running default configs are subsidizing their vendor's margins. The teams with model routing, prompt caching, output compression, and cost-per-trace tracking are running the same agents at 70-90% lower cost.

2) "How much does our AI cost?" is the wrong question. The right question is: "How much does our AI cost per successful outcome?" Cost-per-token is a hardware metric.
Cost-per-resolved-task is a business metric. The first tells you what you're spending. The second tells you whether you're getting value. If you can't produce a cost-per-resolved-task number for your AI deployments today, you're flying blind on the metric that CFOs, boards, and regulators will demand within 12 months.

3) Budget for AI is moving from "IT line item" to "governed resource." Deloitte's tokenomics framework says it plainly: treat AI economics with the same rigor as energy or capital allocation. Budget, allocate, monitor, optimize, alert on anomalies. Not as an IT cost. As a governed resource with its own controls. BetterCloud's analysis from 3 days ago confirms the organizational shift: FinOps professionals now rank tracking SaaS/AI spending as a top 3 task. Token metering, usage attribution, and budget controls are becoming infrastructure requirements, not nice-to-haves. (Source: BetterCloud, "AI and the SaaS industry in 2026", May 12, 2026)

Three uncomfortable questions:

1) When GitHub switches Copilot to usage-based billing on June 1, do you know what your team's monthly cost will be? If not, you have 2 weeks to find out. GitHub is offering a preview bill experience in May. Use it. Some teams will see bills 2-3x higher than their current flat rate. Others will see savings. Neither group should be surprised on June 1.

2) Can you produce a cost-per-resolved-task number for any AI deployment in your organization? If not, you're measuring the wrong thing. Cost-per-token tells finance what you spent. Cost-per-resolved-task tells the business whether the spend was worth it. The second metric is what justifies continued AI investment. Without it, every budget review is a negotiation instead of a measurement.

3) Are your engineering and finance teams reviewing AI inference costs together, or separately?
If separately — engineering makes architectural decisions that drive 40-60% of the invoice, and finance receives the invoice without understanding what drove it. The teams where engineering and finance review costs together make different architectural decisions. Better ones.

The thesis:

→ 2020-2024: "software costs = headcount × per-seat price"
→ 2025: "software costs = headcount × per-seat price + unpredictable AI usage charges"
→ 2026: "software costs = governed resource allocation across seats, tokens, outcomes, and agent compute — requiring infrastructure that most enterprises haven't built"

The per-seat era gave CFOs a budget they could trust. The AI era gave CFOs an invoice they can't read. GitHub's June 1 migration is the most visible signal of this shift. But it's happening across every AI-enabled SaaS product simultaneously. The commercial infrastructure that ran enterprise software for 20 years is being rebuilt in real time.

The teams that built AI cost governance 6 months ago are now passing budget reviews and scaling AI deployments. The teams that didn't are about to discover that "we'll figure out the billing later" was the most expensive sentence in their AI strategy.

Per-seat pricing is dying. Usage-based billing is arriving. Outcome-based pricing is next. And the CFO still can't read the invoice. The boring billing infrastructure work wins. It always does. Especially when the exciting AI agent just generated an invoice nobody can explain.
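The cost arithmetic in the thread above (retries multiplying tokens, RAG context billed as input, a grown context window billed in full on every call) can be sketched as per-trace accounting. This is a minimal illustration: all prices, token counts, and the trace itself are hypothetical, not actual Copilot or API rates.

```python
# Hypothetical per-1K token prices -- illustrative only, not real rates.
PRICE_IN_PER_1K = 0.003   # $ per 1K input tokens (assumed)
PRICE_OUT_PER_1K = 0.015  # $ per 1K output tokens (assumed)

def call_cost(tokens_in: int, tokens_out: int) -> float:
    """Metered cost of a single model call at the assumed rates."""
    return (tokens_in / 1000) * PRICE_IN_PER_1K + (tokens_out / 1000) * PRICE_OUT_PER_1K

# A "trace" is every model call made while handling one task,
# including retries and retrieved RAG context -- the consumption
# that a naive per-call estimate never sees.
trace = [
    (8_000, 500),   # initial call, with retrieved chunks in the context
    (8_200, 500),   # retry 1 after a failed tool call
    (8_200, 600),   # retry 2
    (80_000, 400),  # late-conversation call billed for the full context window
]

visible = call_cost(8_000, 500)                 # what a per-call estimate tracks
actual = sum(call_cost(i, o) for i, o in trace) # what the invoice actually meters

# The thread's point: the business metric is cost per *resolved* task,
# not cost per token. A failed task has infinite cost-per-outcome.
resolved = True
cost_per_resolved_task = actual if resolved else float("inf")

print(f"naive per-call estimate: ${visible:.4f}")
print(f"actual trace cost:       ${actual:.4f}")
print(f"cost per resolved task:  ${cost_per_resolved_task:.4f}")
```

Summing over traces instead of calls is what makes the hidden consumption visible: the retries and the full-context calls dominate the total even though each looks like "one more request" in a per-call view.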
- Java (@rishabhjava) reported: @github How about the existing product stops going down first
- Dinuka (@desilvakdn) reported: my whole feed is creators 'breaking down' the X algo on github, read x amount of lines and somehow no two takes agree. they're just farming engagement. Don't waste your time on others and take a look yourself!
- Kaiju Hub AI (@Kaiju_Hub) reported: @mehulmpt That's the issue though. Being required to only use their harness. If you pay for a sub and want to use your sub for headless things like a simple GitHub commit message generation you shouldn't have to open Claude code cli up to do so if wanting to use Claude. Pure control. You shouldn't be forced to pay twice. This makes any headless cli, sdk usage gated against a separate paywall.
- Shadow (@4shadowed) reported: @alex_marples @openclaw Have you filed any GitHub issues? Helped test the betas? Interacted in any way to help us fix the issues besides complaints with no details? It's working very well for just about everybody who's given feedback, you should stop demanding things and start contributing to it, it's open source for a reason
- Aggroed Lighthacker- Peace, Prosperity, & Freedom (@Aggroed001) reported: If you're building with AI consider telling your agent to save versions like github, backup versions daily at least, and break harder problems into bite sized pieces, and tell your agent to write readme files for itself. You don't want a random reboot or reversion to kill you.
- 🇨🇵 nexgen infinity 🇨🇵 (@nexgen999) reported: Today I will work on RepoPulse-Dashboard. Fix a bad GitHub tag that broke autobackup. Fix the forgejo updater. New: auto-backup download status to monitor. Next: "release of the day" to list all new releases by day (10-day watch)
- Lazi (@algoritmii) reported: @github bro ffs fix your ******* issues stop pushing features
- Yi Zhou (@_y1zhou) reported: GitHub appears to be down again...
- Osama (@Osamadev_) reported: github is down? hello ?
- Jubilance Pheesh - Adaptives Engineer (@LuftRaptor) reported: @da_asmodai @Pirat_Nation That's not a 4th option, that's literally one of the existing options. Publishing server host binaries complies with the law. A single GitHub that's publicly accessible complies. The executable doesn't have to be self contained, just have tools up to make the game playable
- TravelerOfCode (@TravelerOfCode) reported: @aarondfrancis github actions is the unsung hero of solo native dev. signing, simulators, app store builds all flow without a build server. small teams could not ship native at all without it.
- Dewaldt Huysamen (@GodsBoy7777) reported: @sickdotdev Getting insane and better results just on medium for all of the above categories. Weirdest is Opus 4.7 fails at basic school tasks help for kids and when I do code GPT 5.5 finds issues that are found in any case on github CI checks. If use codex CI passes more than 99%
- Jan Omer (@janomerdev) reported: @github I was wondering why and how GitHub is so slow on launching an agentic coding tool...