California state senator Steve Padilla, a Democrat from San Diego, introduced a bill in the California State Senate on Monday that would place a 4-year moratorium on the sale of toys with artificial intelligence chatbot capabilities to kids under the age of 18, according to a new report from TechCrunch. The goal of the legislation, known as Senate Bill 867, is to provide enough time to develop safety regulations that protect kids from AI-powered toys that engage in inappropriate conversations and tell children how to harm themselves.
“Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children,” Senator Padilla said in a statement posted online.
“Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology does. Pausing the sale of these chatbot integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow. Our children cannot be used as lab rats for Big Tech to experiment on,” Padilla continued.
There have been several horror stories in recent months of AI-enabled toys talking inappropriately with kids. Kumma, a teddy bear made by FoloToy, started talking about sexual fetishes with kids last year until OpenAI cut off the company's access to GPT-4o. The teddy bear would also tell kids where to find knives.
Mattel announced a partnership with OpenAI in June 2025 that was supposed to see the company make an AI-assisted toy, but that hasn't happened yet. The consumer advocacy group Public Interest Research Group (PIRG) Education Fund also tested some AI toys and found that many have limited parental controls and could tell kids where to find dangerous objects like guns and matches. One of the key takeaways was that guardrails seemed to fail more often the longer someone interacted with an AI toy.
AI chatbots have come under fire in a variety of contexts recently, especially as a number of people have taken their own lives after engaging with them. Gizmodo filed a Freedom of Information Act request last year with the Federal Trade Commission for consumer complaints about OpenAI’s ChatGPT that included examples of AI-induced psychosis. A complaint from one woman in Utah told of how the chatbot instructed her son not to take his medication and insisted his parents were dangerous. Putting that kind of capability into a teddy bear obviously would pose even bigger problems.
President Donald Trump issued an executive order last month that ostensibly bans states from passing their own laws to regulate AI. And while Trump's power to do that through an executive order is itself questionable, the EO does carve out exceptions for laws around child safety protections.
It’s unclear whether Padilla’s new legislation will pass. But even if it sails through the state legislature, it could still be vetoed by Gov. Gavin Newsom, a Democrat who’s an ally of Big Tech and loves to veto bills that might be too good for humanity. Back in October, Newsom vetoed the No Robo Bosses Act, which would have stopped companies from automating firing and discipline decisions for workers.
Original Source: https://gizmodo.com/california-could-get-a-4-year-ban-on-toys-with-ai-chatbots-2000706416
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
