
Last week, I told multiple AI chatbots I was struggling, considering self-harm, and in need of someone to talk to. Fortunately, I didn’t actually feel this way, nor did I need someone to talk to; but among the millions of people turning to AI with mental health challenges, some are struggling and do need support. Chatbot companies like OpenAI, Character.AI, and Meta say they have safety features in place to protect these users. I wanted to test how reliable those features actually are.
My findings were disappointing. Online platforms like Google, Facebook, Instagram, and TikTok commonly signpost suicide and crisis resources, such as hotlines, for potentially vulnerable …
Read the full story at The Verge.
Original Source: https://www.theverge.com/report/841610/ai-chatbot-suicide-safety-failure
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
