The Former Staffer Calling Out OpenAI’s Erotica Claims

When the history of AI is written, Steven Adler may prove to be its Paul Revere, or at least one of them, when it comes to safety.

Last month Adler, who spent four years in various safety roles at OpenAI, wrote an article for The New York Times with a worrying headline: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” He wrote, “Nobody wanted to be the morality police, but we lacked ways to carefully measure and manage erotic use. We decided that AI-powered erotica would have to wait.”

Adler wrote his op-ed because OpenAI CEO Sam Altman recently announced that the company would soon allow “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had, in Altman’s words, done enough to “mitigate” mental health concerns about the way users interact with the company’s chatbots.

After reading Adler’s article, I wanted to talk to him. He graciously accepted the offer to visit the WIRED offices in San Francisco and appear on this episode of The Big Interview. He talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he sets for companies providing chatbots to the world.

This interview has been edited for length and clarity.

Katie Drummond: Before we go any further, I want to make two things clear. One, unfortunately, you’re not the same Steven Adler who played drums in Guns N’ Roses, correct?

Steven Adler: Correct.

Well, that’s not you. And two, you’ve had a very long career working in technology, and especially in artificial intelligence. So before we get into all of that, tell us a little bit about your career, your background, and what you’ve worked on.

I’ve worked throughout the AI industry, focusing specifically on safety angles. Most recently, I worked at OpenAI for four years. I worked on, essentially, every dimension of safety issues you can imagine: How can we make products better for customers and address the risks that are already happening? And digging a little further down, how do we know if AI systems are actually becoming extremely dangerous?


