
It turns out my parents were wrong. Saying “please” doesn’t get you what you want; poetry does. At least, it does if you’re talking to an AI chatbot.
That’s according to a new study from Italy’s Icaro Lab, an AI evaluation and safety initiative run by researchers at Rome’s Sapienza University and the AI company DexAI. The findings indicate that framing a request as poetry can skirt the safety features designed to block explicit or harmful content, such as child sexual abuse material, hate speech, and instructions for making chemical and nuclear weapons. Bypassing those guardrails is a process known as jailbreaking.
The researchers, whose work has not been peer reviewed …
Read the full story at The Verge.
Original Source: https://www.theverge.com/report/838167/ai-chatbots-can-be-wooed-into-crimes-with-poetry
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
