
Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. It’s also extending its data retention policy to five years – again, for users who don’t opt out.
All users must make a decision by September 28th. For users who click “Accept” now, Anthropic will immediately begin training its models on their data and retaining that data for up to five years, according to a blog post published by Anthropic on Thursday.
The setting applies to “new or resumed chats and coding sessions.” Even if you do agree to Anthropic training its AI models on your …
Read the full story at The Verge.
Original Source: https://www.theverge.com/anthropic/767507/anthropic-user-data-consumers-ai-models-training-privacy
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
