This is Stepback, a weekly newsletter telling one essential story from the tech world. Follow David Pierce for more on AI coding and the vibe-coding boom. Stepback lands in subscribers’ inboxes at 8AM ET. Opt in to Stepback here.
Before anyone was really talking about AI, writing code was a killer app for AI. In the spring of 2021, 18 months before the world knew the word “ChatGPT,” Microsoft introduced the first product of its partnership with a nonprofit called OpenAI: a tool called GitHub Copilot, which watched developers write code and tried to autocomplete snippets and lines for them. It wasn’t all that great, and it was only a “restricted technical preview,” but more than a million developers signed up to try it anyway.
Large language models seemed obviously poised to make software development simpler and faster. Most code is relatively structured and straightforward; coding languages are generally extremely well documented; and there is an enormous amount of code available online for use in training models (albeit sometimes acquired in questionable ways). Unlike a lot of the other information you get from an LLM, you can also check the quality of code by trying to run it. At first, some companies thought, LLMs might make writing code easier and faster by predicting the next word, the way Google autocompletes your searches. But pretty soon, they hoped, the models might be able to do some of the coding for you. Maybe, eventually, all of it.
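That “run it and see” property is easy to demonstrate. Below is a minimal Python sketch of the check-by-execution loop described above; the `candidate` string is a hand-written stand-in for model output (there is no real model call here), and the function names are my own, not any vendor’s API:

```python
import subprocess
import sys
import tempfile

def passes_check(candidate_code: str, test_code: str) -> bool:
    """Run model-generated code plus a test in a fresh subprocess.

    A zero exit code means the code at least parses, runs, and
    passes the assertion; this is the mechanical check that prose
    answers from an LLM can't get.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code + "\n")
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, timeout=30
    )
    return result.returncode == 0

# A stand-in for something an LLM might emit:
candidate = "def add(a, b):\n    return a + b"
test = "assert add(2, 3) == 5"
print(passes_check(candidate, test))  # True
```

A real tool would loop: if the check fails, feed the error output back to the model and ask for another attempt.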
For years, companies across the tech industry had also embraced the idea of “low code” and “no code” software. Instead of offering users endless lists of settings and impenetrable menus, the idea was to let people effectively create the software themselves. For a long time, it was pretty hacky: you got things like Zapier and Apple Shortcuts, which were effectively super-complex workflow builders; or you got software like Notion and Airtable, which were extremely flexible at the cost of being quite difficult to figure out.
Even in those early days, it was clear why AI coding tools could one day be a good business. Developers are expensive; building products takes a long time. Any tool that could let companies employ fewer developers, or help developers become more productive, would be an easy pitch to software companies around the world. If the technology ever worked, the products would practically sell themselves. Companies like Cursor and Windsurf raised huge sums of money to build businesses around AI coding tools, while OpenAI, Google, Anthropic, and others began building new products for developers.
For years, though, AI coding tools couldn’t quite be trusted. They might complete a few lines of code, but their work always needed checking. In late 2023, Simon Willison, a programmer and blogger, called LLMs “weird coding interns.” He wondered whether those interns would make coders more versatile and powerful than ever, or eventually begin to replace them.
In early 2025, Anthropic released a product called Claude Code that would soon make that question much more urgent for many more people.
In late 2025, Anthropic released a new version of its Claude LLM, called Opus 4.5. By Anthropic’s benchmarks it was the best Claude model yet, but it didn’t represent some groundbreaking advance in AI technology. A few weeks later, though, a lot of developers with a few free hours over the holidays started testing the new model in Claude Code, and almost universally reached the same conclusion: it just works. Suddenly, a tool you previously had to prompt carefully and review carefully could turn a few sentences into a working prototype. Boris Cherny, the creator of Claude Code, said that 100 percent of his code is now written by AI. “It was as surprising to me as it was to everyone else,” he told The Verge earlier this year. In a way that had seemed impossible for a coding tool, Claude Code went viral.
Claude Code may have captured the imagination of much of the software world, but Anthropic’s competition hasn’t been far behind. OpenAI’s Codex, which launched a few months after Claude Code in 2025, has received a steady stream of updates and is also a powerful and popular tool for writing code. Google has launched a command-line interface for its Gemini models and has lately been putting more coding features into its AI Studio app.
Increasingly, AI coding looks like the first truly mainstream AI use case, not to mention the first potentially great AI business. The Claude Code moment coincided with an absolute explosion in Anthropic’s revenue; one of OpenAI’s top executives recently told his team to stop doing “side quests” and focus instead on competing with Anthropic and Claude Code. Both OpenAI and Anthropic are reportedly planning to go public this year, which means both companies will need something to show for the billions raised in capital and the billions spent on compute. It seems everyone’s best idea is writing code.
In fairness, it seems like a pretty reasonable bet. Companies around Silicon Valley are suddenly seeing employees compete to use the most tokens, using GPU access as a recruiting tool, and publicly bragging about their AI bills. Nvidia CEO Jensen Huang recently said he’d be worried about any highly paid engineer who isn’t spending $250,000 a year on AI tokens. Even as developers fear that AI coding tools could derail their careers and livelihoods, the race is on to adopt them as quickly as possible. One 2025 study found that 98 percent of respondents used AI coding tools “several times a week.”
It’s not just developers. In February 2025, Andrej Karpathy, a veteran of the AI industry, coined the term “vibe coding.” “I’m building a project or webapp,” he wrote on X, “but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”
It’s a shame Karpathy didn’t come up with a catchier phrase, because vibe coding stuck: the phenomenon as well as the name. Lots of people who didn’t or couldn’t write code were suddenly turning out workable software. For many of those who would otherwise have made a slide deck or a Figma mockup, a barely functional prototype was enough, and these coding tools have proven more than capable of producing barely functional prototypes. Vibe coding does come with risks, though, both from the problems bad code can cause and from giving these tools access to your computer and your data. It’s one thing to trust a system when you can verify its output, and quite another to do so when you can’t speak its language.
The software developer crisis is just beginning. Companies around Silicon Valley are laying off thousands of employees, often citing AI as the reason. “Using the tools we are building, a significantly smaller team can do more and do it better,” Block CEO Jack Dorsey wrote in a memo. “And intelligence tool capabilities are growing exponentially every week.” In Block’s case, as in many others, AI is probably at least partly a cover for pandemic-era overhiring, but the tech industry is clearly counting on AI to increase productivity and reduce headcount.
As AI coding tools continue to improve, they could reshape the rest of the software business, too. Why pay big bucks for someone else’s software when Claude Code can build it for you, exactly the way you want it? Some are calling it the SaaSpocalypse and predicting a fundamental rethink of how we value software. Others think we’re due for a new generation of successful startups offering AI-native ways to do everything. Still others think it’s all overblown and that Salesforce will be just fine. Whatever the outcome, the software industry, which has grown to such unimaginable heights and valuations, suddenly feels to many like it’s on unstable ground.
At the other end of the developer spectrum are the vibe coders. For most people, even the simplest current AI coding tools are too much. They ask you to read code; they require terminal access; they ask questions that hardly anyone should be expected to know how to answer. AI coding still comes with plenty of bugs, big privacy questions, and lots of ways for bad actors to exploit both.
With products like Claude Cowork, Anthropic has started testing whether it can make the technology behind Claude Code a little more accessible and less intimidating: you just give it access to a bunch of files on your computer and let it do the work. Products like Perplexity Computer are exploring whether people will give LLMs access to everything on their devices, letting AI tools organize files, respond to messages, even buy things on their behalf. The underlying technology is starting to work, but it’s not yet clear how people should use it, or whether they’ll even want to.
- If you haven’t tried vibe coding yourself, you really should: it’s awesome to see how these tools work. I’d recommend starting with Claude Cowork. Give it access to your Downloads folder, and let it organize everything for you.
- Right now, most people are paying either $20 or $200 per month for AI coding tools. OpenAI just announced a middle tier at $100 per month, geared specifically toward heavy Codex users. As these companies look for ways to make money, don’t expect a $20 plan to get you very far.
- OpenAI, Anthropic, and Google are all desperate for you to use their own apps for everything: they’re all trying to build AI super apps, and all of them see coding as a core part of the offering. Anthropic recently tried to effectively ban OpenCode; expect more moves from these companies to shut down the broader ecosystem and force you back into their apps.
- We wrote about the Claude Code moment in early 2026, and its ripples are still spreading.
- Paul Ford wrote a great piece for The New York Times about how coders feel about AI coding (and then came on The Vergecast to talk about it). Clive Thompson also wrote an excellent story for the Times’ magazine.
- Reddit’s vibe-coding subreddit offers a fascinating look at what people are building, and how. The joke is that everyone is making a habit tracker… because everyone is making a habit tracker.
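For a sense of what that Downloads-folder suggestion actually involves, here is a rough Python sketch of the kind of file-sorting script a tool like Cowork might write on request. The bucket names and extension mapping below are my own illustrative choices, not anything Anthropic ships:

```python
from pathlib import Path
import shutil

# Extension-to-folder mapping: an illustrative choice, not Cowork's behavior.
BUCKETS = {
    ".pdf": "Documents", ".docx": "Documents", ".txt": "Documents",
    ".png": "Images", ".jpg": "Images", ".heic": "Images",
    ".zip": "Archives", ".dmg": "Installers",
}

def organize(folder: str) -> None:
    """Move every file in `folder` into a subfolder based on its extension."""
    root = Path(folder).expanduser()
    for item in list(root.iterdir()):  # snapshot, since we create folders as we go
        if item.is_file():
            dest = root / BUCKETS.get(item.suffix.lower(), "Other")
            dest.mkdir(exist_ok=True)
            shutil.move(str(item), str(dest / item.name))

# organize("~/Downloads")  # uncomment to sort your own Downloads folder
```

The appeal of the agentic tools is that you describe this outcome in a sentence instead of writing the twenty lines yourself; the risk is that you then run the result without reading it.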