China has moved to strengthen oversight of consumer-facing artificial intelligence by releasing draft regulations targeting AI systems that simulate human personalities and elicit emotional responses from users.
On Saturday, the Cyberspace Administration of China issued the proposed rules for public consultation, signalling Beijing’s intent to shape the rapid expansion of human-like AI through tighter safety and ethical standards.
The draft framework covers AI products and services offered to the public in China that mimic human traits, thinking patterns, and communication styles. These systems often interact with users through text, images, audio, or video and are designed to build emotional engagement.
China issues draft rules to regulate AI with human-like interaction https://t.co/qeJvKGDy6U
— Reuters Tech News (@ReutersTech) December 27, 2025
Under the proposal, AI providers must actively warn users about excessive use and intervene when signs of dependency emerge. The rules place clear responsibility on service operators to manage safety across the entire product lifecycle, covering algorithm governance, data security, and the protection of personal information.
The regulator has also highlighted psychological risks linked to emotionally responsive AI. Companies would need to assess users’ emotional states and levels of reliance on the service. When users show extreme emotional responses or addictive behaviour, providers must take timely corrective action.
In addition, the draft sets firm boundaries on acceptable content. AI services must not produce material that threatens national security, spreads misinformation, or promotes violence or obscenity.
The proposal reflects China’s broader strategy to balance innovation with control as AI tools become more integrated into daily life. By targeting emotionally interactive systems, regulators aim to reduce potential harm while guiding the responsible development of advanced AI technologies.