2026-04-22
Today is our fundraise announcement day. As is the nature of writing for a large audience, it is a formal, reserved statement. As it should be. Writing for everyone must necessarily be somewhat impersonal. But I would like to write something personal about why I am doing this. What is the goal of creating Exe.dev? I’m already the co-founder of a startup that’s doing great work, selling a product I love just as much as I did when I first helped design and build it.
How could I have the strength to go through so much trouble to start another company? Some fellow founders looked at me in disbelief, shocked that I would throw myself back into the frying pan. (Worse, experience tells me that most of the pain is still in my future.) It has been a hard question to answer, and I began by searching for a “bigger” cause: a principle, a social need, some reason or motivation beyond the challenge itself. But I believe the truth is far simpler, and for some people, I’m sure, almost as unbelievable.
I like computers.
In some technical circles, this is an unusual statement. (“In this house, we curse computers!”) I get it, computers can be really frustrating. But I like computers. I always have. Getting things done with computers is really fun. Painful, sure, but the results are worth it. Little microcontrollers are fun, desktops are fun, phones are fun, and servers are fun, whether they’re in your basement or in a data center around the world. I like them all.
So it’s no small thing for me when I admit: I don’t like clouds today.
I want to. Computers are great, whether it’s BSD installed directly on a PC or a Linux VM. I can enjoy Windows, BeOS, Novell NetWare; I even installed OS/2 Warp once and had a great time with it. Linux is particularly powerful today and is a source of infinite possibilities. And underneath all the pages of products, the cloud is just Linux VMs. Better: they are API-driven Linux VMs. I should be in heaven.
But every cloud product I’ve tried is wrong. Some are better than others, but I’m constantly hampered by the choices cloud vendors make, making it difficult to get computers to do what I want them to do.
These issues go beyond UX or poor API design. Some of the basic building blocks of today’s clouds are the wrong shape. VMs are mis-sized because they tie together CPU and memory resources. I want to buy some CPU, memory, and disk, and then run VMs on it. A Linux VM is a process running in another Linux’s cgroup; I should be able to run as many as I want on the computer I have. The only way to do this easily on today’s clouds is to take over isolation yourself with gVisor or nested virtualization on a single cloud VM, pay the nesting performance penalty, and then be left with the task of running and managing, at minimum, a reverse proxy on your own VM. This is because the cloud abstraction is the wrong shape.
Clouds have attempted to solve this with “PaaS” systems: abstractions that are inherently less powerful than a computer and specific to a particular provider. You learn a new way of writing software for each compute vendor, only to discover halfway into your project that something easy on a normal computer is nearly impossible, because some obscure limitation of the platform is buried so deeply you can’t find it unless you are already committed to the project. I’ve repeatedly said “this is the one,” only to feel betrayed by some half-baked, half-implemented idea. No thanks.
Consider the disk. Cloud providers want you to use a remote block device (or something even more limited and slow, like S3). When remote block devices were introduced they made sense, because computers used hard drives. If the buffering implementation is good, remote sequential read/write does not hurt performance. Random seeks on hard drives take 10ms, so a 1ms RTT is a fine price to pay for an Ethernet connection to remote storage. This is a great product for hard drives, and it makes a cloud vendor’s life a lot easier because it removes an entire dimension from their standard instance types.
But then we all moved to SSDs. Seek time dropped from 10 milliseconds to 20 microseconds. Heroic engineering has actually reduced network RTT somewhat for good remote block systems, but the IOPS overhead of remote storage has grown from about 10% with hard drives to more than 10x with SSDs. It’s a lot of work to configure an EC2 VM for 200k IOPS, and you’ll pay $10k/month for the privilege. My MacBook does 500k IOPS. Why are we burdening our cloud infrastructure with slow disks?
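The latency arithmetic above can be checked with a quick back-of-the-envelope sketch. The 10ms, 20µs, and 1ms figures are the round numbers from the text, not measurements:

```python
# Back-of-the-envelope: how much does a ~1ms network round trip add to
# each random read, relative to the local device's own access time?
HDD_SEEK_S = 10e-3   # ~10 ms random seek on a spinning disk
SSD_SEEK_S = 20e-6   # ~20 us access time on a modern SSD
NET_RTT_S  = 1e-3    # ~1 ms round trip to a remote block device

hdd_overhead = NET_RTT_S / HDD_SEEK_S   # fraction added per random read
ssd_overhead = NET_RTT_S / SSD_SEEK_S   # multiple added per random read

print(f"HDD: +{hdd_overhead:.0%} per random read")   # ~10% slower
print(f"SSD: +{ssd_overhead:.0f}x per random read")  # ~50x slower
```

The same network hop that cost a hard drive 10% of its performance costs an SSD a factor of fifty, which is why a remote block device that was a fair trade in 2010 is a millstone today.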
Then there is networking. Hyperscalers have great networks. They charge you a lot for them and make it a pain to deal with other vendors. The going rate for 1GB of egress from a cloud provider is 10x what you pay when racking a server in an ordinary data center. The multiplier at moderate volume is even worse. Sure, if you spend $XXm/month with a cloud the prices get much better, but most of my projects want to spend $XX/month, without the little m. The underlying technology here is fine; this is simply where limits are placed on you to ensure that nothing you build can be economical.
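To make that 10x concrete, here is a sketch with illustrative numbers. Both per-GB rates are assumptions for the sake of the arithmetic (a commonly quoted hyperscaler list price on one side, a flat-rate colo port amortized over typical usage on the other), not quotes from any provider:

```python
# Illustrative egress cost comparison (assumed round numbers, not quotes).
CLOUD_PER_GB = 0.09    # typical hyperscaler list price, $/GB egress
COLO_PER_GB  = 0.009   # flat-rate colo bandwidth amortized, $/GB

monthly_egress_gb = 10_000   # 10 TB/month, a modest production service

cloud_bill = monthly_egress_gb * CLOUD_PER_GB
colo_bill  = monthly_egress_gb * COLO_PER_GB

print(f"cloud: ${cloud_bill:.0f}/mo, colo: ${colo_bill:.0f}/mo, "
      f"multiplier: {cloud_bill / colo_bill:.0f}x")
```

At 10 TB a month the gap is already hundreds of dollars; the shape of the problem is that the multiplier stays roughly constant as your traffic grows, so the absolute penalty scales with your success.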
Finally, there are the painful APIs of the clouds. That’s where projects like Kubernetes come in, abstracting the pain away so that engineers have a little less to lose by using a cloud. But VMs are hard for Kubernetes because the cloud makes you do it all yourself with clunky nested virtualization. Disks are hard because not even Google had truly usable remote block devices when Kubernetes was being designed, and even if you could find a common pattern to paper over today’s clouds, it would be slow. Networking is hard because if it were easy you would run a private link to some system in a neighboring DC and shave a zero off your cloud spend. It’s tempting to dismiss Kubernetes as a scam, an artificial construct designed to avoid making the actual product work, but the truth is worse: it’s a product attempting to solve an impossible problem, making clouds portable and usable. It cannot be done.
You can’t solve fundamental problems with cloud abstractions by building new abstractions on top. It is inherently impossible to make Kubernetes look good; the project amounts to putting (admittedly high-quality) lipstick on a pig.
We have been wallowing in the mud with these pathetic clouds for the last 15 years, doing what we do with every obnoxious part of our software stack: holding our nose whenever we have to deal with it and trying to minimize how often that happens.
However, it’s time to fix this.
This is the moment because something has changed: now we have agents. (In fact, my co-founder Josh and I started tinkering because we wanted to use LLMs for programming. It turned out the LLMs needed better traditional abstractions.) Agents make it easier to write code, which means there will be a lot more software; economists would call this an instance of the Jevons paradox. Each of us will write more programs for fun and for work. We need private places to run them, easy sharing with friends and colleagues, and minimal overhead.
With more software in our lives, the cloud goes from an annoying pain to a huge one. We need so much compute that it has to be easy to manage. Agents help, to a point. If you trust them with your credentials they will happily drive the AWS API for you (though occasionally this will drain your production DB). But agents struggle with the fundamental limitations of the abstraction just as much as we do. They spend more tokens than necessary and produce worse results than they should. Every bit of context window an agent spends figuring out how to make Classic Cloud work is context window not spent solving your problem.
So we’re going to fix it. What we launched today on exe.dev solves the VM resource isolation problem: instead of provisioning separate VMs, you buy CPU and memory and run the VMs you want on it. We handle a TLS proxy and an authentication proxy, because I really don’t want my new VMs dumped directly onto the internet. Your disk is local NVMe, with blocks replicated asynchronously off the machine. We have regions all over the world, so your machines can be where you want them. Your machines sit behind an anycast network to give all your global users a low-latency entry point to your product (and so we can build some exciting new things soon).
There’s a lot more to build here, ranging from obvious things like static IPs to UX challenges like how to give you access to our automatic historical disk snapshots. It will all be built. And at the same time we’re going back to the very beginning: racking computers in data centers, thinking through every layer of the software stack, exploring all the options for how we wire up the network.
So, I’m making a cloud. One I actually want to use. I hope it will be useful for you.