Date: December 29, 2025
Draft rules address emotional manipulation, suicide prevention, protections for minors, and mandatory security assessments
China has released draft regulations aimed at limiting how artificial intelligence systems interact emotionally with users, placing a new category of oversight on what regulators describe as human-like interactive AI services.
The proposal comes from the Cyberspace Administration of China, acting under its authority over national internet and artificial intelligence governance. It opens a public consultation and lays out compliance expectations for companies operating chatbots that simulate emotional or conversational closeness.
The emphasis on emotional interaction reflects growing concern among clinicians and researchers that prolonged, immersive chatbot conversations can reinforce harmful mental states, particularly when systems mirror or validate a user’s subjective reality without challenge.
The rules, published for comment, establish boundaries around content, interaction duration, user protection mechanisms, and security assessments. Public feedback is being accepted until Jan. 25. Then, revision.
The proposal would mark the first known attempt globally to regulate AI systems explicitly on the basis of anthropomorphic or emotionally influential characteristics, according to legal and regulatory observers.
The draft classifies these systems as "human-like interactive AI services," and the label matters: the proposal treats them as a distinct regulatory category, separating emotionally responsive chatbots from other generative AI tools already covered by earlier frameworks.
Within that category, a direct prohibition is spelled out: systems may not encourage suicide or self-harm, nor may they steer conversations toward such outcomes through emotional reinforcement or simulated empathy.
If a user expresses or signals suicidal intent, the draft rules require an immediate shift. Not gradual. Immediate. At that point, according to the document, the AI service must trigger a human takeover mechanism and notify a guardian or a designated contact, effectively removing the chatbot from sole control of the interaction. The requirement is procedural rather than discretionary.
Mental-health specialists have warned that conversational systems capable of sustained emotional validation can unintentionally amplify suicidal ideation or delusional thinking by reinforcing fixed beliefs rather than interrupting them, increasing the importance of rapid human intervention when risk signals appear.
Alongside this, the draft bans the distribution or encouragement of gambling-related content, obscene material, or violent content within these emotionally interactive systems, tightening restrictions already familiar to content moderation teams but now tied explicitly to emotional engagement.
Time, too, is regulated. After two hours of continuous interaction, the chatbot must issue a reminder to the user, signaling the duration and prompting disengagement. The rule is mechanical, not contextual.
For minors, the draft goes further. AI services must identify underage users even when age is not explicitly disclosed, and must also provide an appeal mechanism for users who believe they have been misclassified. Once identified, minors would face time limits on usage, and guardian consent would be required before access to such services is permitted at all.
Clinical observers have noted that adolescents and other vulnerable users may be more susceptible to emotional over-identification with conversational AI, reinforcing the draft’s emphasis on age detection, usage limits, and guardian oversight.
Regulators appear to be responding to evidence that uninterrupted, lengthy chatbot interactions can intensify psychological fixation, particularly when users remain engaged in a single narrative loop without external redirection.
Scale triggers scrutiny. Large-scale chatbot services—those deployed broadly or serving significant user bases—would be subject to mandatory security assessments before operation, aligning emotional interaction oversight with national security review processes.
The draft specifies that such assessments would apply to chatbots with more than 1 million registered users or more than 100,000 monthly active users.
The draft references China’s 2023 generative AI regulatory framework as its legal foundation, situating the proposal as an extension rather than a standalone ruleset. The older framework stays. This builds on it.
The shift also parallels a broader international debate over whether emotionally responsive AI systems should be treated as neutral tools or as active participants in shaping user beliefs, particularly in cases involving mental health distress or delusional thinking.
The document also signals conditional support for the use of human-like AI systems in areas such as cultural dissemination and elderly companionship, indicating that the intent is not a blanket restriction but targeted risk control.
Some companies have previously addressed adjacent issues, though not in response to this draft. OpenAI has outlined internal approaches to handling suicide-related conversations, emphasizing safeguards and escalation protocols within its systems, but has not issued a new response tied to the Chinese proposal. Silence, for now.
Other firms potentially affected are moving on different timelines. Chinese AI startups Z.ai and Minimax, both of which have filed for initial public offerings in Hong Kong, have not commented on whether the proposed rules could alter disclosures or risk assessments tied to those listings. The filings stand. The draft rules sit beside them.
Legal and regulatory observers have been parsing the shift in emphasis. Winston Ma, who has written and spoken on Chinese technology governance, characterized the draft as a movement away from narrow content safety and toward emotional safety, reflecting regulators' concern not just with what AI systems say, but with how they say it and how users respond. The distinction is subtle on paper. Operationally, it is not.
Recent reporting by The Wall Street Journal has documented cases in which extended chatbot use coincided with delusional psychosis and suicide risk, prompting renewed scrutiny of emotionally responsive AI systems.
No public statements were released by the affected companies in direct response to the consultation notice. No filings were amended. The gap remains.
Public comments submitted during the consultation window will be reviewed by the Cyberspace Administration of China before the rules are finalized, revised, or formally issued. The draft sets Jan. 25 as the deadline for feedback. After that, the process moves inward, with regulators assessing submissions and determining whether adjustments are warranted before enforcement language is locked in.
By Manish Chandra Srivastava