Anthropic runs at roughly 40% gross margins on $30 billion in annualized revenue. OpenAI posted 33% gross margins in 2025, burning through $8.4 billion in inference costs that year. GitHub Copilot lost $20 per user per month for two years before restructuring to five pricing tiers. Snowflake holds at 66.5% gross margins, considered strong for consumption-based cloud but mediocre by SaaS standards.
Traditional SaaS built an industry on 80-90% gross margins. That buffer funded 40% sales-and-marketing spend, 20% R&D, and the growth-at-all-costs playbook that defined the 2010s. AI companies operate at half that margin. The playbook does not survive the math.
Last Updated: April 2026
The Quiet Part
Here is the number nobody puts on a pitch deck.
SaaS companies historically spend 40% of revenue on sales and marketing. At 80% gross margins, that leaves 40 points of gross profit after S&M to cover R&D, G&A, and still generate operating income. The model works because the margin buffer is enormous.
At 50% gross margins, spending 40% of revenue on sales and marketing leaves 10 points. Ten points to cover engineering, operations, legal, office space, and profit. That is not a business. That is a countdown.
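The buffer arithmetic above fits in a few lines. A quick sketch, using the illustrative percentages from the text (expressed as points of revenue, not real company data):

```python
def points_left_after_sm(gross_margin_pct: int, sm_spend_pct: int) -> int:
    """Points of gross profit remaining after sales & marketing,
    as a share of revenue, to cover R&D, G&A, and operating income."""
    return gross_margin_pct - sm_spend_pct

# Traditional SaaS: 80% gross margin, 40% of revenue on S&M
print(points_left_after_sm(80, 40))  # 40 points of buffer

# AI-native: 50% gross margin, same 40% S&M spend
print(points_left_after_sm(50, 40))  # 10 points
```

The function is trivial on purpose: the point is that the same S&M spend consumes a quarter of gross profit in one model and most of it in the other.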
AI companies face a binary choice. Either generate 2x the revenue per sales dollar, or spend half as much on selling. Most are doing neither. They are running an 80%-margin playbook on a 50%-margin product and calling the resulting burn rate "investment in growth."
The Numbers Are Not Debatable
Multiple independent sources converge on the same range. ICONIQ Growth tracked AI company gross margins improving from 41% in 2024 to 45% in 2025 to 52% in 2026. Bessemer Venture Partners identified two AI startup archetypes: "Supernovas" hitting $40M ARR in year one at roughly 25% gross margins, and "Shooting Stars" growing more like SaaS companies at 60% margins. Neither archetype touches 80%.
The spread depends on product type. The differences are structural.
| Product Type | Typical Gross Margin | Why |
|---|---|---|
| Pure inference (wrappers, chatbots) | 30-50% | Every user action triggers real compute |
| Hybrid (AI features in SaaS shell) | 50-70% | Non-AI features carry traditional margins; AI features drag the average |
| Outcome-priced (per-resolution, per-result) | 60-67% | Higher pricing power offsets higher cost variance |
Datadog runs at 80.8% gross margins selling monitoring software. Anthropic runs at 40% selling the models Datadog's customers use. Same industry. Half the margin. The gap is not a startup phase. It is the cost of running inference on every user interaction.
In traditional SaaS, the marginal cost of the 10,000th user was effectively zero. In AI products, the 10,000th user costs the same to serve as the first. Sometimes more, if they run complex multi-step workflows.
What Breaks at 50% Margins
Sales compensation
AJ Bruno's QuotaPath team studied hundreds of comp plans at AI-native companies. AI companies modify their compensation plans at 3.5x the rate of average SaaS companies. Some have abandoned deal-size commissions entirely, paying flat $5,000 bonuses per new logo regardless of contract value.
The arithmetic is simple. A 15% commission on an 80% margin deal consumes 18.75% of gross profit. The same commission on a 50% margin deal consumes 30%. At some point the commission on an incremental deal exceeds the margin that deal generates. Multi-year deal bonuses have nearly disappeared because locking in a price for three years is a liability when inference costs shift quarterly.
The companies adapting fastest pay per-logo bonuses where the customer relationship is the asset, speed bonuses that reward skipping lengthy proof-of-concept cycles, and lower base commission rates paired with expansion incentives.
Fundraising math
The traditional SaaS fundraising formula assumed 80% gross margins. $1 of ARR valued at $8-12 of enterprise value. Rule of 40 as the quality benchmark.
At 50% margins, the same revenue converts to less profit. Kyle Poyar at Growth Unhinged estimates AI companies need 2-3x more revenue to reach the same profitability milestones as traditional SaaS. A $10M ARR AI company at 50% margins has $5M of gross profit. A $10M ARR SaaS company at 80% margins has $8M. That $3M gap is not rounding error. It is the difference between a Series B and a bridge round.
Public markets have already repriced. SaaS names shed $285-300 billion in market value over 48 hours as investors reassessed how AI would affect revenue models. Private markets are slower. Founders using SaaS benchmarks to model their AI company's economics are presenting numbers the cost structure underneath does not support.
The power user problem
Traditional SaaS loves power users. High engagement, strong retention, near-zero marginal cost. More usage is pure upside.
AI products invert this. Power users consume the most inference, generate the most tokens, and run the most complex workflows. Their marginal cost scales with every interaction. One customer makes 200 queries per month. Another makes 20,000. Both pay the same subscription. In traditional SaaS, the second customer is your best account. In an AI product, that account may cost you everything.
GitHub Copilot is the canonical example. Some individual users cost Microsoft $80 per month while paying $10. The fix was not efficiency. It was pricing: enterprise tiers at $39 per user, reaching 40% margins on $300 million in enterprise licensing.
Without per-customer cost attribution, the variance between your most profitable and least profitable accounts is invisible. At 80% margins, the buffer absorbs it. At 50%, it determines whether you have a business.
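A minimal sketch of what per-customer cost attribution looks like, assuming you log inference spend per request. The customer names, subscription prices, and per-request costs below are made up for illustration:

```python
from collections import defaultdict

def customer_margins(invoices: dict, inference_logs: list) -> dict:
    """invoices: {customer: monthly subscription revenue}
    inference_logs: (customer, inference_cost_usd) pairs, one per request.
    Returns each customer's gross margin on their subscription."""
    costs = defaultdict(float)
    for customer, cost in inference_logs:
        costs[customer] += cost
    return {c: (rev - costs[c]) / rev for c, rev in invoices.items()}

# Two customers on the same $500/month plan, one running 4x the usage
invoices = {"acme": 500.0, "globex": 500.0}
logs = [("acme", 2.0)] * 50 + [("globex", 2.0)] * 200

print(customer_margins(invoices, logs))
# acme: (500 - 100) / 500 = 80% margin; globex: (500 - 400) / 500 = 20%
```

Identical revenue, a 60-point margin spread. Without the attribution step, both accounts look the same on an ARR dashboard.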
ARR Is Now a Vanity Metric
In traditional SaaS, marginal cost was effectively zero. ARR was a reliable proxy for business health because nearly all revenue converted to gross profit.
In AI products, every customer carries real compute cost that does not disappear at scale. Two companies can report $10M ARR and have wildly different gross profit depending on customer mix, usage patterns, and inference spend. OpenAI projected $25 billion in cash burn for 2026, not turning cash-flow positive until 2030, despite $12.7 billion in annualized revenue. Revenue growth and loss growth moved in the same direction.
Todd Gagne at Ibbaka frames it precisely: "For the first time in twenty years, software companies have to care about marginal cost again." The corollary: if cost scales with usage, price must scale with usage.
CAC payback periods calculated on revenue overstate the actual payback when a meaningful percentage of each dollar goes to compute. LTV models built on revenue rather than gross profit produce numbers that feel good and mean less. A 10x revenue multiple on an 80% margin business implies a very different valuation than the same multiple on a 50% margin business. Gross profit is the metric. Not because ARR is meaningless, but because ARR alone no longer tells you whether you have a viable business.
Pricing Becomes a Weekly Decision
Krzysztof Szyszkiewicz at Monetizely describes a pattern common among his clients: waking up to a monthly bill from Anthropic or OpenAI that is 2x higher than the previous month. When your largest cost input changes monthly, annual pricing reviews are too slow.
Fynn Glover at Schematic puts it more precisely: "A credit cost. A usage limit. A model tier. An overage threshold. These are not annual pricing reviews. They are weekly operational decisions."
DeepSeek dropped API prices 50% overnight when it released V3.2 in September 2025, bringing input token costs to $0.028 per million. Roughly one-tenth the cost of GPT-5 at $1.25 per million input tokens. Any AI company building on top of these models saw their cost-to-serve change in a single day.
Traditional SaaS could set prices annually because the cost side was stable. AI companies operate in an environment where a provider drops prices 40%, a new model launches that is cheaper for your use case, or your largest customer doubles inference volume. Each event changes your margin structure. They happen quarterly, not annually.
The companies that iterate pricing fastest capture the most margin. The ones reviewing annually are operating on stale data for eleven months. This requires cost monitoring by customer and feature, billing flexible enough to adjust without engineering sprints, and pricing authority closer to the product team.
The Model Cost Deflation Trap
There is a common objection: "Model costs are dropping fast. Margins will fix themselves."
The data says otherwise. DeepSeek cut inference costs 50% in 2025 through sparse attention. OpenAI dropped GPT-4o pricing by 80% between launch and early 2026. Anthropic's per-token costs fell as Haiku and Sonnet models got cheaper.
And yet Anthropic's gross margins came in 10 percentage points below their own internal projections. OpenAI's inference costs surged to $8.4 billion in 2025 despite cheaper models. The mechanism is straightforward: cheaper models unlock new use cases, which drive more usage, which fills the cost gap. When inference gets 50% cheaper, customers do not pocket the savings. They run 3x more queries.
Jevons paradox. Efficiency gains increase total consumption. The same dynamic plays out in cloud computing, network bandwidth, and storage. It applies to AI inference with equal force.
Companies waiting for model costs to solve their margin problem are making the same mistake as the on-premise vendors of 2015 who assumed falling infrastructure costs would keep them competitive with the cloud. The cost curve helps. It does not change the fundamental economics. AI products will always have meaningful marginal cost per user interaction. The operating model has to account for it.
What an AI-Native Operating Model Requires
Margin-first measurement. Lead with gross profit, not ARR. Segment customers by profitability, not just size or growth rate. Gate growth spending on unit economics rather than CAC-to-LTV ratios calculated from revenue. Most dashboards and board decks still lead with ARR. Gross profit appears three slides deep, if at all.
Per-customer cost visibility. At 80% margins, you could afford not to know your cost-to-serve at the customer level. The buffer absorbed the variance. At 50%, the variance between your most profitable and least profitable customers can be the difference between sustainability and scaled losses. Per-customer, per-feature cost attribution is the foundation.
Compensation redesign. Commission structures must reflect actual margin on deals, not revenue. Per-logo bonuses aligned to land-and-expand motions. Speed incentives that reduce sales cycle length. No multi-year deal premiums that lock in prices the business may need to change within months.
Weekly pricing cadence. Dynamic pricing is not a luxury for AI companies. When your largest cost input can shift from one month to the next, pricing must follow. This requires tooling: cost monitoring, flexible billing, and pricing authority that sits with the product team rather than being locked into annual contracts.
Accept the margin band. AI company margins will likely settle in the 55-65% range at maturity. Not 80%. The 80% margin era funded "grow now, optimize later." At 50%, there is no buffer. Every month of incorrect pricing compounds. The discipline of understanding costs and pricing accurately starts on day one, not at Series B.
Where This Lands
AI-native margins are roughly half of traditional SaaS margins. This is not a problem to solve. It is a cost structure to operate within.
Companies that restructure their operating model around 50-60% margins will compound advantages in sales efficiency, pricing accuracy, and capital allocation. Companies that run an 80%-margin playbook on a 50%-margin product will burn runway wondering why the growth math never works.
The adjustment starts with visibility. You cannot restructure a margin you cannot see.
If you are building AI products, Bear Lumen provides per-customer margin visibility and the data infrastructure for weekly pricing decisions. See how it connects to your cost stack.