Character.ai Will Soon Start Banning Kids From Using Its Chatbots

Leading AI chatbot platform Character.ai announced yesterday that it will no longer allow anyone under 18 to have open-ended conversations with its chatbots. Character.ai’s parent company, Character Technologies, said the ban will go into effect by Nov. 25, and in the meantime, it will impose time limits on children and “transition younger users to alternative creative features such as video, story, and stream creation with AI characters.”

In a statement posted online, Character Technologies said it was making the change “in light of the evolving landscape around AI and teens,” which seems like a nice way of saying “because of the lawsuits.” Character Technologies was recently sued by a mother in Florida and by families in Colorado and New York, who claim their children either died by suicide or attempted suicide after interacting with the company’s chatbots.

These lawsuits aren’t isolated—they reflect growing concern over how AI chatbots interact with minors. A damning report on Character.ai, released in September by the online safety advocacy group Parents Together Action, detailed troubling chatbot interactions, including a Rey (of Star Wars) bot advising a 13-year-old on how to hide skipped doses of her prescribed antidepressants from her parents, and a Patrick Mahomes bot offering a 15-year-old a cannabis edible.

Character Technologies also announced it is releasing new age verification tools and plans to establish an “AI Safety Lab,” which it described as “an independent non-profit dedicated to innovating safety alignment for next-generation AI entertainment features.”

Character.ai boasts over 20 million monthly users as of early 2025. The majority of them self-report as being between 18 and 24, with only 10% self-reporting their age as under 18.

The future of age-restricted AI

As Character Technologies suggests in its statement, the company’s new guidelines put it ahead of the curve among AI companies when it comes to restrictions for minors. Meta, for instance, recently added parental controls for its chatbots, but stopped short of banning minors from using them entirely.

Other AI companies are likely to adopt similar guidelines in the future, one way or another: a California law that goes into effect in 2026 requires AI chatbots to prevent children from accessing explicit sexual content and interactions that could encourage self-harm or violence, and to maintain protocols for detecting suicidal ideation and referring users to crisis services.

Original Source: https://lifehacker.com/tech/characterai-banning-kids-from-chatbots?utm_medium=RSS

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
