
It’s always the same story: a new technology arrives and everyone starts talking about how it will change everything. Capital comes rushing in, companies are created overnight, and valuations rise faster than anyone can justify. Then, months later, the warnings come, and people suddenly remember the dot-com crash or crypto.
You might have seen this before too. And if you have, you probably think AI is the next bubble. Humans are great at pattern-matching. We have evolved to see patterns, so when something familiar comes up, we instinctively map it onto the closest story we already know. We think we’ve seen this before, and we believe we know how it ends.
But that instinct can mislead us. AI feels like a bubble because we’re fitting something genuinely unprecedented into a familiar story. The idea that anything that grows fast must eventually crash seems sensible. But that doesn’t mean it’s always true.
Why do markets keep overshooting?
Every major technological shift produces the same outward symptoms: heightened expectations, followed by highly visible failures. Dot-com, mobile, and crypto all went through a phase where the world lost its sense of proportion.
Why does this keep happening? Because markets have no framework for discontinuous change. Discounted cash flow models assume stable, steady growth, and comparable-company analysis assumes the category already exists. So people assume the near future looks like the recent past, which doesn’t work when the underlying category itself is changing.
Most valuation tools are designed for incremental progress, so analysts focus on quarterly forecasts and incremental improvements. They don’t know what to do with phase changes, and they can’t model non-linear adoption.
So when you see excess capital or extreme dispersion of outcomes, it’s the market trying to value decade-long bets with quarterly logic. (Which doesn’t work.) And that’s exactly what the bubble is: a sign that no one yet knows how to price what’s coming. The uncertainty looks like irrationality, but it really highlights the limits of the existing frameworks.
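To make that concrete, here is a toy discounted-cash-flow sketch (all numbers are hypothetical, not from the article): two assets with the same first-year cash flow, one growing at a steady 10%, one inflecting along an S-curve mid-decade. The same discount rate values them very differently, yet their early years look nearly identical, which is why extrapolating recent quarters misses the phase change.

```python
# Toy sketch (hypothetical numbers): the same valuation model applied
# to a steady-growth story and to a phase-change (S-curve) story.
import math

def present_value(cash_flows, rate=0.10):
    """Discount a sequence of annual cash flows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

years = range(1, 11)

# "Quarterly logic": extrapolate last year's 10% growth forever.
steady = [100 * 1.10 ** t for t in years]

# Phase change: cash flows quintuple mid-decade along a logistic curve.
s_curve = [100 * (1 + 4 / (1 + math.exp(-(t - 5)))) for t in years]

print(round(present_value(steady)))   # value implied by steady growth
print(round(present_value(s_curve)))  # same asset if the category inflects
```

The point of the sketch is not the specific numbers but the gap: a model calibrated to the first two or three years of either series cannot distinguish them, even though their ten-year values differ by well over half.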
The category error we keep making
When something new comes, we start comparing.
AI is like electricity.
AI is like a computer.
AI is like the Internet.
AI is like mobile.
These comparisons are comfortable because those technologies all produced large-scale, economy-wide change, attracted enormous capital, and changed how things were done.
They also share something deeper. Each of those technologies enhanced human capacity without replacing human cognition. Electricity powered machines, but humans still decided what to build. Computers processed the data, but humans interpreted it. The Internet delivered information, but humans decided what mattered. Mobile put computing in your pocket, but human attention remained the scarce resource. In every case, human intelligence stayed in charge. It was also the bottleneck.
AI is different because it performs cognitive work itself. And if that makes you uncomfortable, it should. Because if AI really can think, then the things we’ve built our careers on, like our expertise and our hard-won skills, may not be as defensible as we thought. The junior engineer who spent years developing intuition now works alongside a tool that has it on tap. The financial analyst who built a reputation on variance analysis now watches a model produce it in seconds. People are no longer entirely sure where value really resides, and that’s scary.
I talk to CFOs every week. Six months ago, they asked me abstract questions like “What is AI?” and “Should we have an AI strategy?” Now the questions are concrete: “What parts of my team’s work no longer need to be done this way?” That shift has happened fast enough that it is already changing how resources are allocated.
For example, a founder I know started using an AI assistant to write SQL queries that would have taken his analyst days. Did he replace the analyst? No. He removed a bottleneck: he no longer depends on her for quick answers, and her role changed completely. She went from spending 60% of her time writing queries to spending 10% verifying them and 90% making strategic recommendations. The company cut no headcount or costs, and the analyst went from supporting three stakeholders to supporting fifteen.
This is where historical comparisons really start to fail. Tools like GitHub Copilot are compressing expertise. A junior engineer can now work at a level that once required years of experience. And every time the tool is used, it learns. A hammer doesn’t improve because you built a house with it, but AI tools do. And as tools get better through use, the rate of improvement compounds. That dynamic doesn’t fit neatly into any prior tech analogy, which is why the reflex to call it a “bubble” misses the real point.
Previous technologies left human cognition as the limit. They made us faster and stronger, but the constraint was always the same: how many smart people could we throw at a problem? AI pushes that boundary far beyond what we are accustomed to. In the past, understanding your business better usually meant one of three things: more data, more analysts, or more experienced leaders. The constraint was how much human attention and judgment you could buy. With AI, that constraint moves. When analysis that once took days appears in seconds, the new hurdle is knowing what to look for. Which questions matter? The limiting factor stops being talent and starts being decisions.
The skeptics are right about the hype, and wrong about its meaning.
Let’s take the strongest version of the bubble argument at face value. Maybe AI really is overhyped, and most of these companies will fail. Perhaps we are still early, and the real impact will take another five or ten years. All of this could be entirely true, and it still wouldn’t change the basic point, which is this:
Even if most AI startups fail, and even if adoption is slower than expected, AI is still the first technology that can perform knowledge work itself. It doesn’t disappear because the market peaks or expectations reset. The skeptics are right that the hype is exaggerated. But they are wrong that the hype makes the technology irrelevant. We’ve seen this before: the dot-com bubble was real, and Pets.com crashed and burned, but the Internet still changed everything. Both things were true at the same time.
The finance leaders I work with are beyond debating whether AI matters or not. Now they are trying to understand which workflow changes first, and how fast they need to adapt. That conversation is happening quietly, amidst all the noise.
The workflows being disrupted first share three properties:
- They require expertise, but they are repetitive.
- They are bottlenecks to strategic work.
- They are easy to verify but difficult to generate.
These workflows are important enough to be paid for, but not so strategic that automating them threatens competitive advantage. They require skill, but that skill doesn’t compound dramatically with repetition, which makes them economically vulnerable and explains why they are already being automated.
Where humans still matter (for now)
AI is great at recognizing trends, and terrible at knowing which trends actually matter. It can generate variance analysis, but it can’t tell you whether a 12% fluctuation in spending signals healthy growth or a deeper problem. It can draft strategies, but it can’t tell you which strategy is right for this market and this team at this exact moment. Decisions under uncertainty and high-stakes trade-offs, where the downside is devastating, remain human responsibilities. For now.
When the constraint is no longer “do we have enough smart people,” the problem becomes one of priorities. What deserves attention? What’s worth building next? This is where I see many founders get stuck. They ask whether it’s a bubble and whether they’re too early, but those aren’t the most useful questions. The better one is: “What can I build over the next year that will create real value, whatever the valuations do?”
The companies that last will quietly iterate, embedding AI into real workflows that solve real problems. Take those CFOs: they’re buying AI because their boards want faster variance analysis and they’re tired of hiring analysts who leave after six months. That is a real problem companies need solved.
And the same is true for investors. The people who succeed in the long term will be those who tolerate uncertainty long enough to see what really works.
This time is really different
In the short term, AI will disappoint. Many use cases won’t deliver on their promise, and many companies built in this wave won’t survive. But the technology will still be there. And in the long term, AI will reshape every sector that depends on knowledge work. Not all at once, and not evenly, but a decade from now it will be hard to find a knowledge-based industry that looks like it does today.
AI is different because intelligence, historically the main constraint on human progress, is now scalable. That is an observable fact with measurable results. The bubble conversation will fade, as it always does, and what will remain is the companies that quietly adapted while everyone else argued about valuations. The skeptics will be right about the excesses and wrong about what really matters, because five years from now we will probably look back at today’s panic the way we look at those who dismissed the Internet because some companies failed. The winners will be the ones who kept building through the noise.
In the end, those are the stories people remember.
Siqi Chen is the co-founder and CEO of Runway.