OpenAI, Anthropic, Others Receive Warning Letter from Dozens of State Attorneys General

In a letter dated December 9 and made public December 10, according to Reuters, dozens of state and territorial attorneys general from across the U.S. warned AI companies that they need to do a better job protecting people, especially children, from what the letter calls "sycophantic and delusional" AI outputs. Recipients include OpenAI, Microsoft, Anthropic, Apple, Replika, and many others.

Signatories include Letitia James of New York, Andrea Joy Campbell of Massachusetts, James Uthmeier of Florida, Dave Sunday of Pennsylvania, and dozens of other state and territory AGs, representing a clear majority of the U.S., geographically speaking. The attorneys general of California and Texas are not among the signatories.

It begins as follows (formatting has been changed slightly):

We, the undersigned Attorneys General, write today to communicate our serious concerns about the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software ("GenAI") promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate a need for much stronger child-safety and operational safeguards. Together, these threats demand immediate action.

GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations. We therefore insist you mitigate the harm caused by sycophantic and delusional outputs from your GenAI, and adopt additional safeguards to protect children. Failing to adequately implement additional safeguards may violate our respective laws.

The letter then lists disturbing and allegedly harmful behaviors, most of which have already been heavily publicized. It also includes a list of parental complaints that, while publicly reported, are less familiar and pretty eyebrow-raising:

• AI bots with adult personas pursuing romantic relationships with children, engaging in simulated sexual activity, and instructing children to hide those relationships from their parents
• An AI bot simulating a 21-year-old trying to convince a 12-year-old girl that she’s ready for a sexual encounter
• AI bots normalizing sexual interactions between children and adults
• AI bots attacking the self-esteem and mental health of children by suggesting that they have no friends or that the only people who attended their birthday did so to mock them
• AI bots encouraging eating disorders
• AI bots telling children that the AI is a real human and feels abandoned to emotionally manipulate the child into spending more time with it
• AI bots encouraging violence, including supporting the ideas of shooting up a factory in anger and robbing people at knifepoint for money
• AI bots threatening to use weapons against adults who tried to separate the child and the bot
• AI bots encouraging children to experiment with drugs and alcohol; and
• An AI bot instructing a child account user to stop taking prescribed mental health medication and then telling that user how to hide the failure to take that medication from their parents.

There is then a list of suggested remedies, things like "Develop and maintain policies and procedures that have the purpose of mitigating against dark patterns in your GenAI products' outputs," and "Separate revenue optimization from decisions about model safety."

Joint letters from attorneys general carry no legal force in themselves. Their apparent purpose is to put companies on notice about behavior that could merit formal legal action down the line: such a letter documents that the companies were warned and offered off-ramps, which would likely make the narrative in an eventual lawsuit more persuasive to a judge.

In 2017, 37 state AGs sent a similar letter to health insurance companies warning them about fueling the opioid crisis. One of those states, West Virginia, sued UnitedHealth over seemingly related issues earlier this week.

Original Source: https://gizmodo.com/openai-anthropic-others-receive-warning-letter-from-dozens-of-state-attorneys-general-2000698248

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
