Convenience Subsidies, PLG, And The Coming AI Squeeze
If you run an AI PLG company, you might be living inside a convenience subsidy.
You see it with ride share.
You see it with food delivery.
You are about to feel it with AI.
The pattern is old.
The stack is new.
The convenience subsidy playbook
Step one.
Use outside money to train everyone that a hard thing should feel easy and cheap.
Step two.
Once behavior locks in, pull back the subsidy.
Raise prices.
Tighten margins on everyone who depends on you.
Step three.
Consolidate.
The platforms keep the profit.
Everyone else fights over the scraps.
Ride share: train the behavior, then raise the price
Massachusetts is a clean example.
Ride hailing exploded from about 65 million trips in 2017 to more than 90 million in 2019, roughly a 40 percent jump in two years.
Once people treated Uber and Lyft as default transportation, the pricing shifted.
A Colorado study found Uber prices up about 40 percent in April 2021 versus the year before.
At the same time, research on “take rates” shows Uber now keeping around 40 percent of the fare on average, up from roughly 32 percent before it changed its pricing system.
Same ride, same city, same driver.
Very different unit economics.
The platforms own the rails.
Drivers carry the risk.
Food delivery: off premise became the default
Restaurants went through an even more violent version of this.
Off premises dining, which used to be a side channel, is now the main channel.
Recent industry data shows nearly 75 percent of all US restaurant traffic is now off premises. Takeout, drive thru, delivery.
DoorDash sits on top of that shift.
By late 2024 it held about 60 percent of the US food delivery market, with Uber Eats a distant second.
To survive higher fees and tighter labor, restaurants reshaped their whole operating model.
Ghost kitchens.
Delivery only brands.
Menus engineered around food cost and speed.
The app owns the customer relationship.
The restaurant eats the volatility.
Who paid for all this convenience
None of this was funded by restaurant cash flow or driver pay.
Investors poured money into on demand services and quick commerce, teaching consumers that almost anything can and should show up in minutes. Analysts now talk openly about a “convenience economy” where customers accept a premium as long as the experience feels fast and easy.
During the peak funding years, the instruction to founders was simple. Grow at all costs. Profitability could wait. In the last few years that tune changed as funding tightened. Profit suddenly matters again.
But by then the behavior was locked in.
You expect tap to ride.
You expect tap to eat.
Now the subsidies are rolling off.
And everyone downstream is working twice as hard for the same dollar.
Generative AI is running the same script
Right now, AI feels cheap.
You feel it in AI tools and in your own stack.
Usage based pricing that seems gentle.
Generous free tiers.
Seat discounts that look silly on any serious cost model.
Investors are covering the gap between what the compute actually costs and what you are paying.
They are training you and your team to treat AI as ambient infrastructure.
Every week a new tool pops up that wraps the same handful of foundation models.
Different UX.
Same engines.
That party does not run forever.
As usage grows, the real bill for GPU time, storage, networking, and compliance is going to land somewhere.
Regulators are not moving fast, but they are moving.
And the endless cheap capital that funded growth at all costs is gone.
Why PLG founders using rented AI models should care right now
Your company is already exposed on three fronts.
You rely on AI inside your own product
Support assistants.
In app copilots.
Recommendation engines.
Churn models.
Pricing experiments.
You rely on AI across your GTM stack
Outbound personalization.
Lead scoring.
Routing.
Assistive note taking and summarization.
You are sold AI by your vendors
Your CRM, CDP, and analytics stack are quietly adding AI surcharges.
The more data you send, the more you pay.
On paper, all of this looks amazing.
Faster humans.
More automation.
Higher conversion.
Until the subsidy ends.
Picture your cost of AI inputs doubling over 18 to 24 months.
Model tokens.
Embedding calls.
Assistant sessions.
All of it.
If you do not own the rails, you eat the difference or you push it to customers in the form of higher prices and stripped down plans.
That is exactly what happened in ride share and food delivery.
Two kinds of AI companies
This is the part nobody likes to say out loud.
There are really two types of AI companies:
• Companies that own the models, weights, data flywheels, and infrastructure
• Companies that rent all of that from someone else
If you are in the second group, you are structurally closer to a restaurant on DoorDash than to DoorDash itself.
You sit on top of someone else’s economics.
You can be made unprofitable with a pricing email.
PLG founders who build thin wrappers over closed models are taking platform risk they do not price correctly.
It feels fine while the curve slopes down.
It will feel brutal when it turns.
What to do if you run a PLG company
This is where RevOps and product meet.
You cannot fix this only with infra, and you cannot fix it only with deals.
Here is the practical version.
Treat AI costs as COGS, not as a rounding error
Break out AI line items in your unit economics.
Per active user.
Per active workspace.
Per “meaningful action” in your PLG funnel.
Run scenarios where your effective price per thousand tokens and per model call doubles.
Look at gross margin, not at top line.
If your model shows margins collapsing, you do not have a pricing problem later.
You have a design problem now.
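A minimal sketch of that scenario run. Every number here is a hypothetical placeholder, not a benchmark; the point is the shape of the calculation, not the inputs.

```python
# Stress-test gross margin per active user against rising AI input costs.
# All figures are illustrative placeholders.

def gross_margin(price_per_user, ai_cost_per_user, other_cogs_per_user):
    """Gross margin per active user as a fraction of price."""
    cogs = ai_cost_per_user + other_cogs_per_user
    return (price_per_user - cogs) / price_per_user

price = 30.0    # monthly price per active user
ai_cost = 6.0   # current AI spend per active user (tokens, embeddings, sessions)
other = 5.0     # hosting, support, payments

for multiplier in (1.0, 1.5, 2.0):
    m = gross_margin(price, ai_cost * multiplier, other)
    print(f"AI cost x{multiplier:.1f}: gross margin {m:.0%}")
```

Run the same loop per plan and per cohort. If the 2x row pushes any segment's margin below your target, that is the design problem showing up early.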
Build a multi model abstraction layer
Never wire your product to a single model in a single region with a single pricing structure.
Create an internal interface for “completion,” “embedding,” “vision,” and “tool use.”
Route those calls through a thin orchestration layer you control.
Then you can:
• Swap providers when pricing changes
• Shift workloads to cheaper models for non critical tasks
• Mix foundation and open source models where it makes sense
This is not theoretical.
It is the AI version of having more than one cloud region.
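One way that orchestration layer can look, sketched in Python. The provider names, prices, and stub adapters are hypothetical; real adapters would wrap each vendor's SDK behind the same interface.

```python
# Minimal sketch of a provider-agnostic completion router.
# Routing lives in config, so swapping vendors is a one-line change.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float
    complete: Callable[[str], str]   # thin adapter around the vendor SDK

class ModelRouter:
    """Internal 'completion' interface; task tiers map to providers."""
    def __init__(self, providers: Dict[str, Provider], routes: Dict[str, str]):
        self.providers = providers   # every wired-up provider
        self.routes = routes         # task tier -> provider name

    def complete(self, tier: str, prompt: str) -> str:
        provider = self.providers[self.routes[tier]]
        return provider.complete(prompt)

# Hypothetical stubs standing in for real SDK calls.
providers = {
    "flagship": Provider("flagship", 10.0, lambda p: f"[flagship] {p}"),
    "budget": Provider("budget", 0.5, lambda p: f"[budget] {p}"),
}
router = ModelRouter(providers, routes={"critical": "flagship", "bulk": "budget"})
print(router.complete("bulk", "summarize this ticket"))  # served by the cheap model
```

When a provider reprices, you change the routes dict, not your product code.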
Make your data the moat, not the wrapper
If your product is PLG, your telemetry is already rich.
Events, cohorts, feature usage, seat patterns.
Your goal is to make that corpus the compounding asset.
Fine tune where it helps.
Train narrow models around your domain.
Keep the labeling pipeline and data quality work in house.
You might still rent the heavy engines from the large providers.
What you cannot afford is to have no unique advantage outside that rental contract.
Align pricing and packaging with AI reality
Do not tie your most expensive AI features to your cheapest tiers.
Do not promise “unlimited” anything that has a real compute cost.
Design plans around outcomes that matter.
Activation.
Seats that actually use the product.
Volume that maps to revenue, not vanity usage.
Make it explicit in your internal model how AI usage scales with each plan.
That keeps you from waking up one day with an SMB base that costs you money to serve.
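The internal model can be as simple as a table that compares each plan's price to its worst-case AI cost. Plan names, prices, and per-action costs below are placeholders for illustration.

```python
# Hypothetical check: can any tier's included AI usage exceed its price?
plans = {
    # name:     (monthly_price, included_ai_actions, cost_per_action)
    "free":     (0.0,   50,   0.02),
    "starter":  (15.0,  500,  0.02),
    "pro":      (49.0,  2000, 0.02),
}

for name, (price, actions, unit_cost) in plans.items():
    max_ai_cost = actions * unit_cost
    flag = "OK" if price - max_ai_cost > 0 else "LOSS-MAKING"
    print(f"{name:8s} price ${price:6.2f}  max AI cost ${max_ai_cost:5.2f}  {flag}")
```

A free tier that prints LOSS-MAKING is fine as a deliberate acquisition cost. It is not fine as a surprise.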
Use RevOps to stress test the system
RevOps should be the team running the worst case scenarios.
What happens to:
• CAC payback if AI spend per trial doubles
• Expansion revenue if you have to reprice AI intensive features
• Churn if you cap or meter features that were wide open
Treat this the same way you treat sales capacity models or PLG funnel telemetry.
You are not guessing the future, you are bounding it.
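The CAC payback scenario in that list can be bounded with a few lines. All inputs are hypothetical placeholders; plug in your own.

```python
# Worst-case sketch: CAC payback if AI spend per customer doubles.
def cac_payback_months(cac, monthly_revenue, monthly_cogs):
    """Months of gross profit needed to recover acquisition cost."""
    gross_profit = monthly_revenue - monthly_cogs
    if gross_profit <= 0:
        return float("inf")   # this customer never pays back
    return cac / gross_profit

cac, revenue, ai_cogs, other_cogs = 900.0, 100.0, 20.0, 15.0
for mult in (1.0, 2.0):
    months = cac_payback_months(cac, revenue, ai_cogs * mult + other_cogs)
    print(f"AI spend x{mult:.0f}: payback {months:.1f} months")
```

If the doubled scenario pushes payback past your board's threshold, you know the bound before the pricing email arrives.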
If you are a wrapper, you need a path out
If your company is already a classic wrapper, you do not have to burn it down.
You need a plan to get closer to the metal.
That might mean:
• Moving portions of the workload to smaller, cheaper open models
• Owning more of the data, labeling, and evaluation stack
• Building opinionated workflows and systems, not generic chat UIs
You do not have to own every part of the stack.
You have to own enough of it that a price hike from your providers hurts, but does not kill you.
Closing thought
Ride share and food delivery were not one off anomalies.
They were the training wheels for a convenience economy where platforms own the rules and everyone else rents them.
Generative AI is the industrial version of that story.
If you are a PLG founder, your choice is simple.
Either you build an operation that can survive the subsidy ending.
Or you wait until an email from a model provider quietly rewrites your margins for you.

