Insights · 5 min read

GitHub Copilot Unit Economics: A $20/User Cost Analysis

GitHub Copilot lost $20/user/month for two years before restructuring. A case study in why AI cost visibility determines pricing sustainability.


Bear Lumen Team

Research

#unit-economics #ai-margins #case-study #pricing-strategy #cost-attribution

What does your AI product cost per user?

GitHub Copilot charges $10/month. Microsoft's estimated cost to serve each user is roughly $30. The Wall Street Journal reported that gap went unmeasured for two years, across 4.7 million paying subscribers and an estimated $2 billion in annual recurring revenue.

Microsoft has $80 billion in cash reserves and 200,000 engineers. It still took two years to close the gap between price and cost.

Last Updated: April 2026


The Inversion

In SaaS, your best customers cost you almost nothing. In AI, your best customers might cost you everything.

Traditional SaaS has near-zero marginal cost. Adding the 10,000th Slack seat costs almost nothing in compute. The cost structure is front-loaded: build the product, serve it cheaply at scale. Power users were the best accounts on the books. High engagement, strong retention, negligible incremental cost.

AI products invert this. Every user interaction triggers inference. Every inference costs money. Cost scales with usage, not headcount. Power users, the ones who accept the most code completions, resolve the most tickets, generate the most content, are now potentially the most expensive accounts on the books.

The relationship between engagement and profitability has flipped.


The Cascade

This is not one company's problem. Three products hit the same wall at three different scales.

At outcome scale: Intercom's Fin charges $0.99 per resolved support ticket. Inference cost per ticket ranges from $0.15 for a straightforward FAQ to $0.85 for a complex multi-turn conversation. Margin per resolution swings from roughly 85% to 14% depending on ticket complexity. The buyer pays a flat rate for a variable-cost product.
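The per-ticket margin swing is simple arithmetic. A minimal sketch using the figures above (the function name and structure are mine, not Intercom's):

```python
# Per-ticket gross margin under Fin-style outcome pricing,
# using the $0.99 price and the cost range quoted above.
PRICE_PER_RESOLUTION = 0.99  # flat price per resolved ticket

def margin_pct(inference_cost: float, price: float = PRICE_PER_RESOLUTION) -> float:
    """Gross margin as a percentage of the resolution price."""
    return (price - inference_cost) / price * 100

print(f"simple FAQ ($0.15 cost): {margin_pct(0.15):.0f}% margin")          # ~85%
print(f"complex conversation ($0.85 cost): {margin_pct(0.85):.0f}% margin")  # ~14%
```

Same price on both tickets, a 6x difference in cost, a 6x swing in margin.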

At growth scale: Cursor launched at $20/month unlimited in early 2024. Within 18 months, pricing changed four times. A credit-based overhaul in June 2025 effectively halved what $20 bought when using Claude models. Users revolted. Cursor's CEO issued a public apology and offered refunds. The product had crossed $1 billion ARR in under two years. The pricing took 18 months of iteration to catch up.

At platform scale: Copilot. $10/month price, $30/month average cost to serve. Some power users cost $80/month. The correction required a complete restructure to five pricing tiers, from a free tier up to $39/month Enterprise. By the time the restructure shipped, 4.7 million subscribers were anchored to the original rate.

Three companies. Three scales. Same discovery: flat pricing on a variable-cost product creates a margin gap that widens with adoption.
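Why the gap "widens with adoption" falls out of the arithmetic: with a flat price, the aggregate loss is linear in subscriber count. A toy sketch using the Copilot figures above; the user mix is invented, chosen only so the blended cost lands near the reported $30 average:

```python
PRICE = 10.0  # Copilot's original flat monthly price

# (monthly cost to serve, share of users) — hypothetical mix that blends
# to roughly the reported $30/user average, with power users at $80
cost_mix = [(5.0, 0.30), (25.0, 0.50), (80.0, 0.20)]

avg_cost = sum(cost * share for cost, share in cost_mix)  # $30/user
loss_per_user = avg_cost - PRICE                          # $20/user

def monthly_loss(subscribers: int) -> float:
    """Aggregate loss scales linearly with the subscriber base."""
    return subscribers * loss_per_user

print(f"blended cost per user: ${avg_cost:.0f}")
print(f"loss at 4.7M subscribers: ${monthly_loss(4_700_000) / 1e6:.0f}M/month")
```

Every new subscriber at the flat rate adds to the loss. Growth makes the problem bigger, not smaller.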


The Quiet Part

The better Copilot's code suggestions get, the more users accept them. The better Fin's answers get, the more tickets it resolves. The better Cursor's completions get, the more code it generates.

In each case, product improvement drives cost increase. The relationship between quality and margin runs backward.

Better code completion means more completions accepted per session. More acceptances means more inference calls. More inference calls means a higher provider bill per user. The product improves. The margin compresses. This is not a pricing bug. It is the cost structure working as designed.

SaaS rewarded engagement. AI charges for it.


The Trajectory

Three forces are compressing the gap. Model costs dropped: GPT-4 level inference costs roughly 90% less than it did 18 months ago. Routing got smarter: directing simple queries to cheaper models cuts inference spend 30-50%. Caching improved: repeated queries hit cache instead of triggering new inference.
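The routing idea can be sketched in a few lines. Everything here is illustrative: the per-query prices, the query mix, and the word-count heuristic are invented, and production routers use learned classifiers rather than length.

```python
CHEAP_COST = 0.002     # assumed $/query for a small model
EXPENSIVE_COST = 0.02  # assumed $/query for a frontier model

def route_cost(query: str) -> float:
    """Toy heuristic: short queries go to the cheap model."""
    return CHEAP_COST if len(query.split()) < 20 else EXPENSIVE_COST

# 50/50 mix of simple and complex queries (invented for illustration)
queries = ["reset my password"] * 50 + ["a long multi-step question " * 8] * 50
routed = sum(route_cost(q) for q in queries)
baseline = EXPENSIVE_COST * len(queries)  # always use the frontier model
print(f"inference savings from routing: {(1 - routed / baseline) * 100:.0f}%")
```

With this invented 50/50 mix the blended savings land inside the 30-50% range cited above; the real number depends entirely on how much of the traffic is genuinely simple.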

ICONIQ's 2026 data shows the result: AI company gross margins rose from 41% in 2024 to 52% in Q1 2026. The direction is positive.

But each improvement has a ceiling. Cheaper models attract more usage. Better products drive more queries. The margin paradox from the previous section applies here too: gains in efficiency get partially consumed by gains in adoption. At 52% gross margin, AI companies still need roughly twice the revenue of traditional SaaS to reach equivalent profitability. The economics are structurally different, not temporarily broken.
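One back-of-envelope way to reason about the "twice the revenue" claim is gross-profit parity. The 80% SaaS margin below is an assumed benchmark, not a figure from this article, and the multiple is sensitive to it:

```python
def revenue_multiple(ai_margin: float, saas_margin: float = 0.80) -> float:
    """Revenue multiple needed to match SaaS gross profit per dollar,
    assuming an 80% gross margin benchmark for traditional SaaS."""
    return saas_margin / ai_margin

print(f"at 41% gross margin: {revenue_multiple(0.41):.1f}x")  # ~2.0x
print(f"at 52% gross margin: {revenue_multiple(0.52):.1f}x")  # ~1.5x
```

On gross profit alone, the 2024-era 41% margin implies roughly 2x revenue; the Q1 2026 figure brings that closer to 1.5x. Reaching "equivalent profitability" also has to absorb AI products' higher operating costs, which is presumably where the "roughly twice" figure comes from.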


What the Copilot Story Actually Demonstrates

The Copilot story is not about Microsoft making a mistake. It is about cost structures that only become visible under load.

Inference costs, model routing, caching efficiency, per-customer usage variance: these factors compound invisibly until the margin report arrives. They sit across multiple provider invoices, multiple internal systems, and multiple layers of the stack that most dashboards do not connect.

The teams that see costs end to end can build pricing that meets their users where they are, sustains the value the product delivers, and funds the next iteration. Bear Lumen provides that end-to-end cost attribution layer, connecting provider costs to individual customers so pricing decisions rest on data instead of estimates. The teams that operate without it get the Copilot problem: a gap between price and cost that widens with every new subscriber.

The question is whether you discover your cost distribution on your terms, or on your customers'.

