The New York Times-OpenAI Legal Fight Is Getting Mean


In a blog post on Wednesday detailing events from legal discovery in the lawsuit brought against it (and Microsoft) by the New York Times, OpenAI described itself as “one of the most targeted organizations in the world.” The post argues that the privacy of millions of users’ sensitive chat logs is at risk, and that the Times is one of the forces endangering those users, alongside “organized criminal” groups and “state-sponsored” attackers.

The post is titled “Fighting The New York Times’ Invasion of User Privacy.”

According to OpenAI, the Times at one point tried to uncover 1.4 billion private chats. “We pushed back, and now we’re pushing back again,” the post said.

It’s worth remembering that OpenAI CEO Sam Altman gave a tense interview on the Times’ Hard Fork podcast four months ago. About 1:20 into the recording, the conversation nearly derails when Altman abruptly poses a question of his own: “Are we going to talk about where you sued us because you don’t like user privacy?”

The context for the blog post is that on Wednesday, OpenAI submitted a filing asking the U.S. District Court for the Southern District of New York to overturn an order requiring it to hand over 20 million ChatGPT user conversations for review by The New York Times and its lawyers.

These, OpenAI says, are private conversations, more than 99.99% of which it believes have nothing to do with the case. It adds, “This data comes from ChatGPT users around the world—families, students, teachers, government officials, financial analysts, programmers, lawyers, doctors, therapists, and even journalists.”

The New York Times put it differently in an equally heated statement to Ars Technica. It once again accuses OpenAI of “stealing millions of copyrighted works to create a product that competes directly with The Times”, and describes the blog post’s message as “yet another attempt to cover up its illegal conduct” that “intentionally misleads its users and omits the facts.”

“The privacy of any ChatGPT user is not at risk,” the statement continues, adding that OpenAI must “provide a sample of the chats anonymized by OpenAI under a legal protective order.”

The judge’s reasoning for ordering the documents released apparently referenced the case of Concord v. Anthropic, a precedent OpenAI calls “misleading”; about half the filing is devoted to arguing the point in detail. Essentially, OpenAI contends that what Anthropic produced was far narrower and less invasive than what is now being demanded of OpenAI, and that producing those documents was at least partly Anthropic’s own idea.

Nonetheless, the Times statement rejects the dire framing of the OpenAI blog post. “This fear-mongering is even more dishonest, given that OpenAI’s own terms of service allow the company to train its models on users’ chats and hand over chats for litigation,” it said.


