
March 16, 2026
Hemanth Velury
CEO & Co-Founder

If you've tried to budget for AI in the last year, you've probably felt it: that subtle unease when someone says, "Don't worry, it's just 3 million tokens a month." You nod along, open the spreadsheet, and quietly think, "What does that actually mean for my budget?"
This gap between how AI vendors price and how real people think about money is now big enough to be a strategic risk. Tokens were supposed to make everything fair and usage-based. Instead, they've created a strange kind of cognitive dissonance for companies and individuals alike, and a growing number of teams are starting to question whether this is sustainable.
VirtualSpaces is one of those teams. We're making a deliberate attempt to move back to fiat-based pricing: normal, boring, predictable currency. Not because we're anti-innovation, but because we believe clarity is the real unlock for adoption, retention, and long-term enterprise value.
Tokens didn't appear by accident; they solved a very real technical and commercial problem for early AI infrastructure providers.
From the supply side, this logic is airtight. But on the demand side (think finance teams, product leaders, individual users), the story looks very different.
Most buyers don't wake up thinking, "We need 12 billion tokens this quarter." They think, "We need to reduce support tickets by 30%," or "We want our team to save 10 hours a week." Tokens are a machine's unit of value, not a human's. When you make a non-human unit the center of your pricing story, you plant the seeds for confusion.
Cognitive dissonance shows up when what we believe and what we experience don't match. In AI pricing, the belief is that usage-based pricing is fair and transparent; the experience is a bill nobody can predict. Here's how that plays out in the real world.
A finance leader gets a forecast: "We expect to consume 80 million tokens this month." She asks the obvious follow-up: "So... in dollars?" The answer, if it exists at all, is often followed by three caveats about model changes, prompt lengths, and experimental usage.
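For concreteness, here's a minimal sketch of that translation in Python. The per-million-token rates and the 70/30 input/output split below are illustrative assumptions, not any vendor's actual pricing:

```python
# Illustrative only: rates and usage split are made-up assumptions,
# not any vendor's actual pricing.

def monthly_cost_usd(total_tokens, input_share, in_rate_per_m, out_rate_per_m):
    """Translate a token forecast into dollars, given per-million-token rates."""
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens - input_tokens
    return (input_tokens / 1e6) * in_rate_per_m + (output_tokens / 1e6) * out_rate_per_m

# "We expect to consume 80 million tokens this month,"
# assuming 70% input tokens at $3/M and 30% output tokens at $15/M.
cost = monthly_cost_usd(80_000_000, input_share=0.7, in_rate_per_m=3.0, out_rate_per_m=15.0)
print(f"${cost:,.2f}")  # the number finance actually needs
```

Even this toy version shows why the caveats pile up: change the model mix or the input/output split and the dollar figure moves, which is exactly the uncertainty the finance leader is reacting to.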
The result is not confidence; it's a coin flip in spreadsheet form. That mental discomfort shapes behavior at every level, from budgeting to day-to-day usage.
We've seen this in gaming and consumer apps before: gems, credits, coins. People spend more freely when they're not thinking directly in dollars. But enterprise AI is not a casual game; it's attached to P&Ls and headcount plans.
When you see "200,000 tokens remaining," it doesn't hit the same emotional center as "You've already spent $4,200 this week." That disconnect produces a sawtooth pattern: enthusiastic adoption, then panic, then artificial scarcity.
Product teams love tokens because they map nicely to API calls and performance metrics. Finance teams care about unit economics, cash burn, and payback periods. When the same usage is described as "1.5B tokens" in one deck and "unexpected overage" in another, tension is inevitable.
Cognitive dissonance here is subtle but powerful: everyone says they're optimizing for value, but the metrics they stare at all day pull them in different directions.
The industry's shift to tokens was framed as a clever pricing innovation. The reality on the ground is that it introduces friction exactly where AI needs trust: long-term adoption, strategic planning, and boardroom narratives.
Here are three strategic risks that matter if you're building or backing AI-native companies.
Every cloud-era investor has lived through the "we didn't expect that AWS bill" moment. Token-based AI pricing is recreating that pattern with more volatility and less transparency.
When unit prices keep falling but adoption and usage grow even faster, you get the paradox: "AI is getting cheaper per unit, but we're somehow spending more overall." That's a narrative that makes CFOs nervous and pushes boards to question the maturity of a company's AI strategy.
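The arithmetic behind that paradox fits in a few lines. A sketch with hypothetical numbers (40% cheaper per unit, 3x usage growth):

```python
# Hypothetical numbers: the unit price falls 40%, but usage grows 3x.
price_per_m_tokens_q1 = 10.0   # $ per million tokens at the start of the year
price_per_m_tokens_q4 = 6.0    # 40% cheaper per unit by year end

usage_q1 = 100   # million tokens consumed in Q1
usage_q4 = 300   # 3x growth as adoption spreads

spend_q1 = price_per_m_tokens_q1 * usage_q1   # total Q1 spend
spend_q4 = price_per_m_tokens_q4 * usage_q4   # total Q4 spend

print(f"Unit price: down {1 - price_per_m_tokens_q4 / price_per_m_tokens_q1:.0%}; "
      f"total spend: up {spend_q4 / spend_q1 - 1:.0%}")
```

Per-unit costs dropping while the total bill climbs is exactly the story that lands badly in a board deck, even when the underlying adoption is healthy.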
Complex, non-intuitive pricing doesn't just annoy users; it stretches enterprise deals.
Every extra week spent explaining the difference between input and output tokens is a week not spent on value, outcomes, and expansion opportunities.
When revenue is directly tied to tokens consumed, there's a subtle incentive to design products that encourage more usage, not necessarily more efficiency.
Innovative teams do the right thing anyway, but the underlying gravity is real. The healthiest businesses eventually re-anchor on value-based pricing, where customers pay for outcomes, not internal implementation details.
It's not just companies feeling the strain. Individual users are quietly experiencing their own version of token-induced cognitive dissonance.
Creative, high-leverage work thrives under psychological safety. A pricing model that makes users second-guess every click is structurally misaligned with the kind of deep adoption AI actually needs.
Against this backdrop, VirtualSpaces is taking a deliberately contrarian position: we're going back to normal money. Not because we don't understand tokens, but because we understand what they're doing to adoption, behavior, and trust.
Here's the core belief: the more abstract your pricing, the more concrete your friction.
Budget holders think in line items, not token counts. A VP of Operations wants to see a single, predictable number on a familiar line item.
By anchoring pricing in fiat, we reduce translation overhead. A proposal that says "₹X per workspace per month" or "$Y per team per year" is instantly legible in the boardroom.
Transparent fiat pricing doesn't mean ignoring the underlying token economics. It means absorbing that complexity internally so customers don't have to. Not because we want to; token-based billing is simply what AI providers charge us, and we choose not to pass that abstraction on.
Inside VirtualSpaces, we still model the underlying token economics in detail. But externally, we present a clean, predictable interface: normal currency, clear limits, clear upgrade paths. Over time, that kind of clarity compounds into trust, predictability, and durable adoption.
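One way to picture that split between internal modeling and the external interface is a small sketch. Every number below is a hypothetical placeholder, not VirtualSpaces' actual costs or prices:

```python
# Sketch of a two-layer pricing model. All numbers are hypothetical
# placeholders, not VirtualSpaces' actual costs or prices.

INTERNAL_COST_PER_M_TOKENS = 8.0      # what the AI provider charges us, $/M tokens
EXTERNAL_PRICE_PER_WORKSPACE = 49.0   # what the customer sees, $/workspace/month

def workspace_margin(monthly_tokens_m):
    """Internal view: margin on one workspace given its token consumption."""
    cost = monthly_tokens_m * INTERNAL_COST_PER_M_TOKENS
    return EXTERNAL_PRICE_PER_WORKSPACE - cost

def customer_quote(workspaces):
    """External view: the only number the customer has to reason about."""
    return f"${EXTERNAL_PRICE_PER_WORKSPACE * workspaces:,.2f} per month for {workspaces} workspaces"

# Internally we watch margins per usage tier; externally the quote never changes shape.
print(workspace_margin(3.0))   # a light user's margin
print(customer_quote(20))
```

The point of the sketch: efficiency work changes `workspace_margin` without ever touching `customer_quote`, which is the decoupling a two-layer model buys.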
When your revenue is not directly indexed to tokens consumed, you're free to optimize for efficiency without fear of cannibalizing your top line.
That changes how you build: every efficiency gain you ship is a win for both sides. In other words, fiat pricing lets us align our economic engine with the customer's lived experience of value.
There's a reasonable pushback: "Isn't this just how infrastructure is priced now? Won't everyone converge on token-based models anyway?"
There's some truth there: at the infrastructure layer, tokens are a reasonable metering unit. But the issue isn't tokens per se; it's pushing the token abstraction all the way up to the product and pricing layer where most customers live.
Our view is simple: meter in tokens internally, price in fiat externally. This two-layer model preserves the benefits of usage-based economics while respecting the cognitive reality of how people and organizations budget, plan, and decide.
Whenever an industry matures, pricing is one of the earliest leading indicators. We've seen this in cloud, SaaS, and consumer apps. AI will be no different.
The current token-heavy landscape suggests we're still in the "infrastructure-first" phase, where vendor internals shape how customers are asked to buy. The next wave, the enduring companies, will likely be the ones that translate that machinery into pricing customers actually understand.
In that sense, moving back to fiat isn't regression; it's a signal of maturity. It's a bet that the market will reward clarity, predictability, and alignment over clever but confusing monetization.
If you're evaluating AI tools today or designing one, here are a few questions that cut through the noise:
Can a non-technical budget owner understand the pricing in under two minutes? If not, expect friction later.
Does the pricing unit map to my outcomes, or to the vendor's internals? Tokens and credits are usually a hint it's the latter.
How easy is it to forecast spend three quarters out, given my expected adoption and use cases?
What happens if my team uses the product more because it's successful? Is that a strategic win or a budgeting headache?
Is the vendor willing to talk in fiat, with clear ceilings and floors, even if they meter in tokens behind the scenes?
Vendors who can answer those questions simply will have an easier time winning not just pilots, but durable, expansion-ready relationships.
The AI market is still early, but the capital, expectations, and deployment velocity are not. Budgets are getting real. Boards are asking harder questions. Regulators are slowly waking up.
In that environment, pricing is not a side quest; it's part of your strategic architecture.
VirtualSpaces choosing fiat is our way of making a clear bet: the companies that win this market will be the ones that make AI feel not just powerful but legible: economically, cognitively, and operationally.
We're happy to work with tokens behind the scenes. But when it comes to how our customers see, feel, and plan their spend, we'd rather talk in the language everyone already understands: real currency, clear tiers, transparent value.
If you were sketching your ideal AI pricing model from scratch today, would you choose tokens as your front-and-center unit, or would you start with the outcomes and currencies your team already trusts?
We should also be honest about our own journey. For a while, we flirted with tokens and credits because they felt like the "right" thing to do in a momentum-driven AI market, and we convinced ourselves that this abstraction was a clever way to align cost and usage. In hindsight, it added more confusion than clarity, and it created exactly the kind of cognitive dissonance we're arguing against. So this is our apology and our reset: we're moving back to simple, fiat-based pricing, owning the complexity on our side so you don't have to carry it on yours.
This piece is a work in progress, and we hope to complete it this week. We're also working on a transition model that serves our existing customers better, without them losing any value.