The AI Doomers Who Are Playing With Fire


After OpenAI’s ChatGPT came out in late 2022, it didn’t take long for mainstream America to hear the warnings. Executives from top AI companies told us they were building a revolutionary new technology that posed imminent risks to society. And it wasn’t just about digital security. AI had the power to destroy the entire world.

From the jump, it was clear that these warnings were as much a sales tactic as they were a serious prediction of how AI would behave and what effects it would have. AI executives even testified before Congress about how scary it all was, practically begging for regulation while selling their wares to the government. Now, those same executives are the ones asking everyone to remain calm.

Chris Lehane, OpenAI’s global policy lead, sat down for an interview with the San Francisco Standard this week in the wake of at least one attack on the home of CEO Sam Altman.

“Some of the conversation out there is not necessarily responsible,” Lehane told the Standard. “And when you put forward some of those ideas and notions, they have consequences.”

Lehane was referring to the man who reportedly threw a Molotov cocktail at Altman’s home a week earlier. Twenty-year-old Daniel Moreno-Gama of Texas was accused of throwing an incendiary device at Altman’s home before driving to OpenAI’s headquarters, where he smashed the glass doors with a chair.

According to police, Moreno-Gama was carrying an anti-AI “document” that suggested his motivations were related to artificial intelligence and existential threats. The Wall Street Journal reports that he asked to “Luigi some tech CEOs”, a reference to Luigi Mangione, who has been charged with murder for the killing of UnitedHealth’s CEO.

The second incident, just two days later, in which two men reportedly fired a gun near Altman’s home, is still under investigation, although the initial suspects have been released from jail.

Lehane divides the world into two groups of people: those who think AI is the greatest thing ever and will inevitably lead to a world of abundance and comfort; and those he calls doomers, who “have a very negative and dark outlook toward humanity.”

Lehane argues that so-called AI doomsayers simply haven’t been properly sold on the benefits of this new technology. “Our job at OpenAI and in the AI field – and we need to do a much better job – is to explain to people why… this is really going to be good for them, for their families, and for society at large,” Lehane told the Standard.

But it’s hard to take that argument seriously given what people like Altman have said. And this talk didn’t start in late 2022. Back in 2015, Altman said, “I think AI will probably lead to the end of the world. But in the meantime, great companies will be built with serious machine learning.”

How do you hear such a thing from a powerful person and just accept it? You have two choices: You can dismiss Altman as unserious and assume humanity needs to do nothing. Or you can take the tech CEO at his word that the technology he’s building could end the world. Which leaves you with the question of what you can do about it.

No fate but what we make

We know what happens in dystopian fiction. In Terminator 2: Judgment Day, Sarah Connor decides that she needs to stop Skynet by killing the researcher most responsible for the rise of the machines. She cannot bring herself to do it, but after she explains what will happen in the future, the researcher helps destroy the technology himself.

Altman has also warned that AI could be used to “design new biological pathogens” and signed a letter about the “risk of extinction” if AI is not tamed. But he has also tried to claim that the US needs to develop these potentially destructive technologies because leaving it to geopolitical adversaries is risky in itself.

Altman wrote in 2023, “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”

I turned to Altman’s own product, ChatGPT, to ask about his comments on existential threats to humanity. Specifically, I asked whether Altman had talked about rogue AI or the end of the world on the Joe Rogan podcast. Hilariously, ChatGPT said he had never appeared on Rogan. Altman, in fact, appeared in episode 2044 of The Joe Rogan Experience, which was first released on October 6, 2023.

I corrected ChatGPT, and it did the now-clichéd “you’re right, etc., etc.” routine. Among the quotes it gave me:

  • “There are risks … if this technology goes wrong, it could go very wrong.”
  • “The thing I worry about is that we lose control of the system…”
  • “This could go really, really wrong… like the lights being off.”

As far as I can tell, that last quote is not accurate. It does not appear in YouTube’s transcript for the episode. But Altman said something very close to it in an interview with the StrictlyVC podcast. “The worst case — and I think this is important to say — is, like, the lights are out for all of us,” Altman explained to the room full of people. Close, but not exact, which perhaps says something about how these AI tools fail people in everyday use.

Anthropic CEO Dario Amodei made similar statements earlier this year, telling Axios that, “Humanity is about to be entrusted with almost unimaginable power, and it is deeply unclear whether our social, political and technological systems have the maturity to wield it.” Amodei claims that “AI-enabled totalitarianism terrifies me.”

Amodei also warned that anyone with a STEM degree could create biological weapons with the help of AI models, and called for guardrails. Some of those guardrails have landed Anthropic in trouble: the Pentagon has blacklisted the company and is in the process of removing Claude from its systems. Amodei had refused to remove protections that bar the use of Claude for large-scale domestic surveillance and autonomous weapons systems.

If someone testified that they had created a device that could potentially end the world, you would expect that person to be immediately handcuffed and escorted out. That’s an observation I first came across a few years ago, and I wish I knew who originally made it. But it is spot on.

Think about it in another context. Someone says they have created a weapon that could go rogue and literally end life on planet Earth. Does the federal government respond with light regulations that tinker around the edges? Or do the executives of that company get arrested and jailed for making terroristic threats?

Threatening to completely destroy livelihoods is a threat to human life

In addition to the rise of Skynet, there is obviously a serious issue of job displacement. Many companies have cited AI as a reason for layoffs in the past year, even if they sometimes have an incentive to use it as a convenient excuse. But it cannot be denied that AI is now good enough at writing and other white-collar tasks to cause some kind of disruption in the labor market.

AI CEOs are eager to let everyone know that these disruptions are coming, insisting that the government should do something about it, while also lobbying that same government to stay out of their hair. Perhaps no one exemplifies this attitude better than Elon Musk, whose company xAI creates the Grok AI chatbot.

“Universal high income through checks issued by the federal government is the best way to combat AI-caused unemployment,” Musk wrote on Friday. “AI/robotics will produce more goods and services than the money supply increases, so there will be no inflation.”

I’ve argued before that it’s ridiculous for Musk to say we’ll have a world of utopian abundance provided by government. During Musk’s tenure as President Trump’s henchman last year, the billionaire helped completely dismantle USAID, cut funding to vital programs, and took action against people he claimed were sabotaging the system.

His so-called Department of Government Efficiency (DOGE) helped purge nearly 300,000 federal employees, and he made it his mission to say that incompetent people don’t deserve government assistance. Now this is the guy who says you shouldn’t worry about AI because the government is going to give away free money? Absurd.

Why would anyone try to sell a product to the public on the premise that it will cost them their job? Because this pitch isn’t for the public. It’s for investors, governments, and the people who buy enterprise software for companies. You, meanwhile, are supposed to be busy making your avatar look like a Studio Ghibli character.

An unelected ruling class making decisions for everyone

All the AI elites are selling their products as indispensable. Part of their sales pitch is that you can’t do anything to stop it. And the public needs to accept this while finding ways to work in a system where AI leads to job losses. These oligarchs – and they are very much oligarchs, vying to become favored members of the ruling class – were not elected. But they will still determine what your life will be like in the next year, five years, or 20 years, if you’re lucky enough to survive the robot rebellion.

Altman himself had written a blog post a week earlier, in the wake of the attack on his home. He shared a photo of his husband and child, “in the hopes that this might stop the next person from throwing a Molotov cocktail at our house, no matter what they think of me.” It seems Altman is doing his best to humanize himself to head off more potential attacks.

Whatever the case, it feels like AI executives have backed themselves into a corner. They told everyone that their product has the potential to destroy everything. They were doomers, if we want to call them that, at least when it was convenient. And now we seem to be entering a different era, where the same people who warned us about the dangers of AI want us to pay special attention to what they claim will be huge benefits to society. So far, there is little to show for it.

It’s not clear how you put that doomsday genie back in the bottle.


