As first reported by Bloomberg, China’s Central Cyberspace Affairs Commission issued a document Saturday that outlines proposed rules for anthropomorphic AI systems. The proposal solicits public comments through January 25, 2026.
The rules are written in general terms rather than legalese. They are clearly meant to encompass chatbots, though the document never uses that term, and its scope appears broader than chatbots alone: it covers the behaviors and overall values of AI products that engage with people emotionally using simulations of human personalities delivered via “text, image, audio, or video.”
The products in question should be aligned with “core socialist values,” the document says.
Gizmodo translated the document to English with Google Gemini. Gemini and Bloomberg both translated the phrase “社会主义核心价值观” as “core socialist values.”
Under these rules, such systems would have to clearly identify themselves as AI, users would have to be able to delete their history, and people’s data could not be used to train models without consent.
The document proposes prohibiting AI personalities from:
- Endangering national security, spreading rumors, or inciting what it calls “illegal religious activities”
- Spreading obscenity, violence, or crime
- Producing libel and insults
- Making false promises or producing material that damages relationships
- Encouraging self-harm or suicide
- Engaging in emotional manipulation that convinces people to make bad decisions
- Soliciting sensitive information
Providers would not be allowed to make intentionally addictive chatbots or systems intended to replace human relationships. Elsewhere, the proposed rules call for a pop-up at the two-hour mark of a marathon session reminding users to take a break.
These products would also have to be designed to pick up on intense emotional states and hand the conversation over to a human if a user threatens self-harm or suicide.
Original Source: https://gizmodo.com/draft-chinese-ai-rules-outline-core-socialist-values-for-ai-human-personality-simulators-2000703772
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
