The announcement of Anthropic's Mythos is the first time in my life I have felt truly poor. Maybe it's because I grew up on the Internet, a permissionless place that rewards untapped exploration and ambition and gives anyone a chance. That is now changing, as a gap opens between the models that are publicly available and the ones reserved for the wealthy and pre-established.
In 1893, Frederick Jackson Turner argued that everything distinctive about America was shaped by the existence of free land in the West from which anyone could make a start, and that this condition imbued America with its characteristic freedom, egalitarianism, rejection of feudal hierarchy, self-reliance, and ambition.
Since the days when the fleet of Columbus sailed into the waters of the New World, America has been another name for opportunity… But never again will such gifts of free land offer themselves… Each frontier did indeed furnish a new field of opportunity, a gate of escape from the bondage of the past… And now, four centuries from the discovery of America, at the end of a hundred years of life under the Constitution, the frontier has gone, and with its going has closed the first period of American history. – Frederick Jackson Turner, The Significance of the Frontier in American History, 1893
We are witnessing the closing of another frontier. Even as the American dream was nearly dead, one somewhat accessible escape route remained wired in, one that provided economic mobility and nurtured individual agency. You may never own a home, but when it comes to technology, a poor person and the wealthiest person in the world have access to the same Internet, the same phones, the same encryption protocols (my TLS connection doesn't get downgraded to AES-ECB-Quant-8 while yours uses AES-GCM-512).
In the physical world, a 16-year-old kid with neither credentials nor capital can't do much. The world of bits offered the freedom to create without being bogged down by arbitrary constraints, in a way that didn't require amassing capital or reputation or connections, where your creativity and work could speak for themselves and you had agency. That is a precious thing, and we should try to preserve it for as long as possible, because there is still so much potential left. We have only begun to scratch the surface of what's possible to build and how best to harness the intelligence of powerful models.
I feel this most acutely in the walling off of frontier models from public access, although the argument also applies to the broader replacement of labor and intelligence with capital. Rudolf Laine expresses this well in his essay Capital, AGI, and human ambition.
When labor-replacing AI takes off, those holding significant capital will have a lasting advantage. Upstarts won't be able to beat them, because capital now translates into superhuman labor in any field. – Rudolf Laine, 2024
George Hotz calls it, more bluntly, neofeudalism.
It's not like nuclear weapons; it's intelligence itself. Nuclear weapons can only destroy; intelligence is the greatest creative power in the world. If a small group of people has a monopoly on it, you are a permanent underclass, just like animals. – George Hotz, 2026
The Manhattan Project comparison, which the labs frequently draw, is a long-time favorite of theirs. But nuclear nonproliferation worked to the extent that it did because nuclear weapons are instruments of mass destruction and the laws were written in blood. Intelligence is economically valuable in a completely different way. Every country will pursue it as hard as it can, and given the multipolar world we have returned to, and our recent track record with treaties and commitments, I do not think there will be global alignment on risk reduction. At least not before there is blood.
Anthropic has said that it does not plan to make Mythos generally available. It's one thing to not release the model at all and keep it under complete control, or to hold it back for a restricted period and then release it for public use after further testing.
Today we're announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world's most critical software. – Anthropic
But it's another thing entirely to share access only with enterprise partners like CrowdStrike, Cisco, and Microsoft, companies known for regularly suffering large-scale security incidents. How dangerous is it, from a security perspective, if the private capability gap widens rapidly (already happening with iterative self-improvement) and the world only pays the price later, when one of these labs or their partners is breached? Or if a foreign lab ships something close to an equivalent model with minimal access restrictions? Limited compute availability admittedly plays a big role in the access calculus here too.
They are not the only organizations with security needs. I am not arguing that the model should be made publicly available to anyone via API. But structurally speaking, a private company has created the most capable AI model in the world and has unilaterally decided who gets access and who is worthy of protection. They and their established partners are now sitting on zero-day generators, accumulating private knowledge of exploitable flaws in everyone else's infrastructure: capabilities that once belonged to nation states, now privatized for a handful of well-connected organizations. These are state-level capabilities without state-level accountability. If you believe in democracy, we created three branches of government for a reason. Anthropic is simultaneously manufacturer, regulator, and appeals court, with no on-ramp even for those willing to pay and undergo robust KYC.
API access may not be full ownership, but at least it's a programmable surface that doesn't foreclose possibility. Locking it down against abuse and "unapproved" uses certainly prevents some harm, but it also stifles innovation. Public access also brings latent capabilities out into the open, which, given how evaluation-aware models have become (the Mythos alignment report calls evaluation awareness "a significant challenge") and the limits of artificial red-teaming, is better from a security perspective. Fail fast and fix, rather than stockpiling capabilities that have never been tested in the real world. It's hard enough for the world to adjust to and understand AI capabilities when half the American population thinks AI is useless because they are forced to use Copilot at work.
The reaction to AI that can find security vulnerabilities also seems overblown. Security has always been an arms race. A decade ago, fuzzers like American Fuzzy Lop looked like a boon to attackers, but security-conscious projects built fuzzing into their CI pipelines and now catch most of those bugs before release. I wrote about this symmetry in my post on the death of security through obscurity. Here again, frontier-model access will let more people build defenses, which will improve security across the board. For too long, organizations have been careless about security and have put their customers' data at risk through poor practices. The transition will be painful, but this is a period of upheaval along many dimensions; why would we expect this one to be painless?
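As a minimal sketch of what "fuzzing in CI" means in practice, here is a toy harness using Google's Atheris fuzzer for Python; `parse_config` is a hypothetical stand-in for whatever code in your project handles untrusted input, and the time bound in the comment is just one way a CI job might cap the run:

```python
import sys
import json

import atheris

# Stand-in target: in a real project this would be your own parser,
# deserializer, or anything else that handles untrusted input.
@atheris.instrument_func
def parse_config(blob: bytes) -> dict:
    return json.loads(blob.decode("utf-8", errors="replace"))

def test_one_input(data: bytes) -> None:
    # Atheris calls this with mutated byte strings; any unexpected
    # exception or crash is a finding.
    try:
        parse_config(data)
    except ValueError:
        pass  # malformed input is expected here, not a bug

if __name__ == "__main__":
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()  # in CI, bound the run with e.g. -max_total_time=300
```

A few minutes of this on every build is cheap, and it shifts the bug-finding advantage from whoever fuzzes your release binaries to you.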
And the people who would actually do rigorous safety research on these models may not get access to them. A few weekends ago I was at the MATS research symposium. MATS is one of the most serious AI safety programs, and nearly two-thirds of the posters used Chinese open-source models. Many experiments require white-box access, and these researchers cannot get it anywhere else. Meanwhile, the mainstream AI safety position is that open-source models are dangerous. Most projects were limited to small models because of compute constraints, leaving open whether their results would hold at frontier scale. Thank God for open-source models, because if meaningful safety research depended on the generosity of the labs, or on being hired by one, the picture would be bleak.
You can generate your own electricity with solar panels (think local models), but most people will prefer to pay the utility bill. And the power company does not decide who deserves electricity on the basis of lineage. Intelligence should work similarly: the capabilities you can access may scale with scrutiny and due process, but there must be baseline access to inference. Add safety guardrails to restrict hazardous use; make them extremely trigger-happy at first if necessary and tune them over time, as in the sketch below. But the default should be to allow access.
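Here is a rough sketch of that shape of policy. Every name in it is hypothetical: `hazard_score` stands in for whatever safety classifier a provider actually runs, and the thresholds are arbitrary. The structure is what matters, default-allow, with strictness that relaxes as the caller accumulates scrutiny:

```python
from dataclasses import dataclass

@dataclass
class Caller:
    verified_identity: bool   # e.g. has passed KYC / due process
    track_record_days: int    # history of non-abusive use

def hazard_score(prompt: str) -> float:
    # Stand-in for a real safety classifier (0.0 benign .. 1.0 hazardous);
    # a toy keyword heuristic here just so the sketch runs end to end.
    flagged = ("synthesize a pathogen", "working exploit for", "zero-day")
    hits = sum(phrase in prompt.lower() for phrase in flagged)
    return min(1.0, 0.5 * hits)

def allow_request(caller: Caller, prompt: str) -> bool:
    # Default-allow, trigger-happy at first: an unverified new caller
    # only clears a low hazard threshold...
    threshold = 0.3
    # ...while scrutiny and due process buy back capability over time.
    if caller.verified_identity:
        threshold += 0.3
    threshold += 0.2 * min(caller.track_record_days, 365) / 365
    return hazard_score(prompt) < threshold
```

The point is the shape, not the numbers: access is the default, denial is an exception triggered by the request itself, and the burden loosens as the caller accrues legible trust.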
If you have government-level capabilities, it's time to start acting like a government. There should be due process: publicly disclosed criteria for who gets access and why, and a clear appeals mechanism that amounts to more than emailing the trust-and-safety team and pleading. And when you cut someone off, you should be required to explain why, because having your frontier-model access revoked will be the new being unbanked. From an audit perspective, there should be FOIA-style obligations to disclose work in security-critical areas.
There's something perverse about training a model on all of humanity's data and then locking it up for the benefit of a few well-connected organizations you happen to have relationships with. Perhaps you'll notice another historical pattern here: extract value from a population that cannot meaningfully consent, concentrate the returns in a small inner circle, and then offer some version of charity back to those you extracted from, as moral cover for the system. The pattern repeats with labs that promise UBI after AGI or encourage EA philanthropy while racing toward frontier capability. I'm not saying the intent is malicious; I think many of these people are trying to do their best. I'm just observing the pattern.
If we're lucky, none of this will matter. This could be the mainframe era of AI, a waypoint on the road to personal computing. When the Apple II came out it was far less powerful than a mainframe and was mostly adopted by hobbyists and enthusiasts. Compared to that gap, open-source models are already quite impressive, lagging the frontier by 3-12 months depending on the dimension. So perhaps hardware supply chains will scale up, chips and energy will become abundant, and intelligence will become too cheap to meter.
The city is cutting down twenty-year-old ficus trees in my neighborhood because they might fall on someone during a storm and the city doesn't want to get sued. Storms that severe hit San Francisco maybe once a year. I hope we don't end up doing the same thing here.