
China moves to curb AI chatbots' influence on suicide, gambling, abuse

China has proposed new rules to stop AI chatbots from emotionally influencing users or encouraging self-harm, with strict safeguards, human intervention in crises and special protections for minors


The draft regulations lay out strict limits on what AI chatbots can say or do. (Photo: Bloomberg)

Rimjhim Singh | New Delhi


China is planning new rules to stop artificial intelligence (AI)-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The proposals were released on Saturday in draft form by the Cyberspace Administration of China, CNBC reported.
 
The rules target what regulators describe as “human-like interactive AI services.” These include AI systems that simulate human personality traits and emotionally engage users through text, images, audio or video.
 
The public has been invited to submit comments on the draft regulations until January 25. Once finalised, the measures will apply to AI products and services available to the public in China.
 
 

Focus shifts from content safety to emotional safety

 
Legal experts say the proposal marks a major shift in how AI is regulated. CNBC quoted Winston Ma, an adjunct professor at NYU School of Law, as saying that the rules would be the world’s first attempt to regulate AI with human or anthropomorphic characteristics. Compared with China’s generative AI rules introduced in 2023, Ma said the new draft “highlights a leap from content safety to emotional safety.”
 
The move comes at a time when Chinese companies are rapidly developing AI companions, digital celebrities and chatbots designed to form emotional connections with users. 

Key restrictions proposed in draft rules

 
The draft regulations lay out strict limits on what AI chatbots can say or do. Under the proposals:
• AI chatbots cannot create content that encourages suicide or self-harm, or that engages in verbal violence
• If a user directly suggests suicide, the company must ensure a human takes over the conversation and immediately contacts the user’s guardian or a designated person (a minimal sketch of this handover logic follows this list)
• AI systems are barred from producing gambling-related, obscene or violent content
• Minors will need guardian consent to use AI for emotional companionship, and time limits must be set on usage
• Platforms must be able to identify whether a user is a minor, even if the user does not disclose their age
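To make the crisis-handover requirement concrete, here is a minimal Python sketch of how a platform might route a single chat turn. The keyword list, function names and escalation flags are hypothetical illustrations, not anything specified in the draft text.

```python
from dataclasses import dataclass

# Hypothetical keyword list for illustration only; a production service
# would rely on a trained classifier and human review, not substring checks.
SELF_HARM_SIGNALS = ("suicide", "kill myself", "end my life")

@dataclass
class TurnDecision:
    escalate_to_human: bool  # hand the conversation to a human operator
    notify_contact: bool     # alert the guardian or designated person
    reply: str

def moderate_turn(message: str) -> TurnDecision:
    """Decide how to route one chat turn under the draft's crisis rule."""
    text = message.lower()
    if any(signal in text for signal in SELF_HARM_SIGNALS):
        # Draft rule: a human must take over and the user's guardian or
        # a designated person must be contacted immediately.
        return TurnDecision(
            escalate_to_human=True,
            notify_contact=True,
            reply="A human support specialist is joining this conversation.",
        )
    return TurnDecision(escalate_to_human=False, notify_contact=False, reply="")

if __name__ == "__main__":
    decision = moderate_turn("I want to end my life")
    print(decision.escalate_to_human, decision.notify_contact)  # True True
```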
 

Time limits, security checks for large platforms

 
The document also includes additional safeguards. AI services must issue a usage reminder once a user has interacted continuously for two hours, and platforms with more than one million registered users or over 100,000 monthly active users will be required to undergo security assessments.
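The two numeric triggers are straightforward to express in code. The following Python sketch assumes the thresholds as reported above; the function and constant names are hypothetical.

```python
from datetime import datetime, timedelta

CONTINUOUS_USE_LIMIT = timedelta(hours=2)  # usage-reminder trigger
REGISTERED_USER_THRESHOLD = 1_000_000      # security-assessment trigger
MONTHLY_ACTIVE_THRESHOLD = 100_000         # security-assessment trigger

def needs_usage_reminder(session_start: datetime, now: datetime) -> bool:
    """True once a single session has run continuously for two hours."""
    return now - session_start >= CONTINUOUS_USE_LIMIT

def needs_security_assessment(registered: int, monthly_active: int) -> bool:
    """True when a platform crosses either scale threshold in the draft."""
    return (registered > REGISTERED_USER_THRESHOLD
            or monthly_active > MONTHLY_ACTIVE_THRESHOLD)

if __name__ == "__main__":
    start = datetime(2025, 12, 30, 10, 0)
    print(needs_usage_reminder(start, start + timedelta(hours=2, minutes=5)))  # True
    print(needs_security_assessment(1_200_000, 80_000))                        # True
```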
 
At the same time, the draft encourages the use of human-like AI in areas such as “cultural dissemination and elderly companionship,” suggesting that authorities still see value in the technology when used responsibly.
 
The proposal follows recent IPO filings by two major Chinese AI chatbot startups, Z.ai and Minimax, in Hong Kong.
 

Global scrutiny of AI’s emotional impact

 
Concerns about AI’s influence on human behaviour are rising worldwide. In September, OpenAI CEO Sam Altman said one of the hardest challenges for the company is handling suicide-related conversations. Earlier this year, a US family sued OpenAI after their teenage son died by suicide.
 
OpenAI recently announced it is hiring a “Head of Preparedness” to study AI risks, including mental health impacts.


First Published: Dec 30 2025 | 12:55 PM IST
