
The New York Times-OpenAI Legal Fight Is Getting Mean

In a blog post Wednesday about events surrounding legal discovery in the lawsuit brought against it (and Microsoft) by the New York Times, OpenAI claims to be “one of the most targeted organizations in the world.” The post makes the case that the privacy of millions of sensitive chat logs is under threat, and that the Times is one of the forces menacing its users, alongside attacks from “organized criminal” groups and “state-sponsored” actors.

The post is called “Fighting the New York Times’ invasion of user privacy.”

In OpenAI’s telling, the Times at one point sought to expose 1.4 billion private chats. “We pushed back, and we’re pushing back again now,” the post says.

It’s worth remembering that OpenAI CEO Sam Altman gave a tense interview to the Times’ Hard Fork podcast four months ago. About 1:20 into the interview, the conversation almost goes off the rails when Altman abruptly jumps in with his own question: “Are we gonna talk about where you sue us because you don’t like user privacy?”

The context for the blog post is that on Wednesday, OpenAI submitted a filing that asked the US District Court for the Southern District of New York to overturn a requirement that it hand over 20 million ChatGPT user conversations for perusal by the New York Times and its lawyers.

These are, it says, private conversations “more than 99.99% of which plaintiffs concede have nothing to do with this case.” It continues, saying, “This data belongs to ChatGPT users all over the world—families, students, teachers, government officials, financial analysts, programmers, lawyers, doctors, therapists, and even journalists.”

The New York Times framed this differently in an equally heated statement provided to Ars Technica. It once again accuses OpenAI of “stealing millions of copyrighted works to create products that directly compete with The Times,” and characterizes the blog post as “another attempt to cover up its illegal conduct” that “purposely misleads its users and omits the facts.”

“No ChatGPT user’s privacy is at risk,” the statement continues, adding that OpenAI is supposed to “provide a sample of chats, anonymized by OpenAI itself, under a legal protective order.”

The judge’s reasoning in ordering this release of documents apparently referred to the case of Concord v. Anthropic, a comparison OpenAI calls “misleading”; about half the filing is dedicated to arguing the point in some detail. Essentially, OpenAI contends that what Anthropic provided was much briefer and less invasive than what is being required of OpenAI, and that it was at least partly Anthropic’s idea to provide those documents.

Nonetheless, the Times’ statement is dismissive of the grave language in the OpenAI blog post. “This fear-mongering is all the more dishonest given that OpenAI’s own terms of service permit the company to train its models on users’ chats and turn over chats for litigation,” the Times’ statement claims.

Original Source: https://gizmodo.com/the-new-york-times-openai-legal-fight-is-getting-mean-2000684610

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
