In what may mark the tech industry’s first significant legal settlement over AI-related harm, Google and the startup Character.AI are negotiating terms with families whose teenagers died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have agreed in principle to settle; now comes the harder work of finalizing the details.
These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier that must have OpenAI and Meta watching nervously from the wings as they defend themselves against similar lawsuits.
Character.AI, founded in 2021 by ex-Google engineers who returned to their former employer in 2024 in a $2.7 billion deal, invites users to chat with AI personas. The most haunting case involves Sewell Setzer III, who at age 14 conducted sexualized conversations with a “Daenerys Targaryen” bot before killing himself. His mother, Megan Garcia, has told the Senate that companies must be “legally accountable when they knowingly design harmful AI technologies that kill kids.”
Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and suggested that murdering his parents was a reasonable response to their limiting his screen time. Character.AI told TechCrunch that it banned minors from its platform last October. The settlements will likely include monetary damages, though no liability was admitted in the court filings made available Wednesday.
TechCrunch has reached out to both companies for comment.
Original Source: https://techcrunch.com/2026/01/07/google-and-character-ai-negotiate-first-major-settlements-in-teen-chatbot-death-cases/
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
