Anthropic’s Daniela Amodei Believes the Market Will Reward Safe AI

One might think that regulation is crippling the AI industry, but one of the industry's biggest players disagrees.

At WIRED’s Big Interview event on Thursday, Anthropic president and co-founder Daniela Amodei told WIRED editor Steven Levy that even though Trump’s AI and crypto czar, David Sacks, tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she is confident that her company’s commitment to addressing the potential dangers of AI is strengthening the industry.

“We were very vocal from day one that we thought there was incredible potential for AI,” Amodei said. “We really want to be able to make the whole world realize the possibilities, the positive benefits that come from AI, and to do that, we have to get the hard things right. We have to make the risks manageable. And that’s why we talk about it so much.”

More than 300,000 startups, developers, and companies use some version of Anthropic’s Claude model, and Amodei said that, through the company’s dealings with those customers, Anthropic has learned that, while customers want their AI to be able to do great work, they also want it to be reliable and secure.

“No one says, ‘We want a less secure product,’” Amodei said, comparing Anthropic’s reporting on its models’ limitations and jailbreaks to a car company releasing a crash-test study to show how it has addressed safety concerns. It may seem shocking to see a crash-test dummy fly through a car window in a video, but knowing that an automaker updated its vehicle’s safety features as a result of that test could persuade a buyer. Amodei said the same is true for companies using Anthropic’s AI products, creating a market that is somewhat self-regulating.

“We’re setting what you might think of as minimum safety standards based on what we’re putting into the economy,” she said. Companies “are now building a lot of workflows and day-to-day tooling functions around AI, and they’re like, ‘Okay, we know this product doesn’t hallucinate that much, it doesn’t produce harmful content, and it doesn’t do all these bad things.’ Why would you go with a competitor that scores lower?”

Daniela Amodei participates in the WIRED Big Interview program.

Photograph: Annie Noelker
