Amid the rising circulation of fake content generated by artificial intelligence (AI) around sensitive issues, China is revising its internet safety laws, effective January next year, according to a report by Nikkei Asia.
The law was initially meant to improve the security of telecommunications infrastructure; however, the country revised it to cover risk management and safety monitoring of AI. The revisions were approved by the Standing Committee of the National People's Congress on October 28.
Why is China revising internet rules?
The move comes amid a surge in AI-generated content on earthquake damage and kidnappings, leading to widespread public confusion. Here are some of the recent incidents that prompted the government to revise the laws:
- After a 6.8-magnitude earthquake in Tibet in January, an image of a baby trapped under rubble went viral. It turned out to be AI-generated, and the person who shared it was detained.
- A man in Zhejiang province faked a kidnapping story using an online photo.
- A woman in Shanxi province posted fabricated photos of earthquake damage that never occurred.
China has more than 515 million AI users, and the number is growing fast. As fake content spreads, Beijing is tightening its grip to ensure AI supports government values and doesn’t threaten social stability.
What are China’s key concerns?
The government’s main worries include:
- The spread of misinformation, such as exaggerated or fake disaster images, which could harm the country’s reputation.
- Damage to the image of senior leaders, as false or misleading content could lead to public distrust or political instability.
What do China’s existing AI laws say?
Since August 2023, China has had a law to control how generative AI (like chatbots or image generators) is used. The law says AI must follow socialist values and cannot create content that causes social unrest or challenges the government. Because of this, Chinese AI tools like DeepSeek avoid political topics.
What does China’s new AI law require?
According to the report, the revised law will:
- Punish people who use AI to spread fake or harmful content.
- Make AI companies label all AI-generated images and videos clearly.
- Crack down on apps that create nude or altered faces and voices.
- Require AI platforms and social media companies to monitor and remove fake or misleading content.
Authorities have already forced corrections in 3,500 apps and 960,000 posts, showing they’re taking stronger control of AI use.
How are other countries responding to fake content?
Countries around the world, including India, are proposing stricter rules around AI-generated content to tackle the spread of deepfakes and misinformation. Last month, the Indian government ruled that all AI-generated content should be clearly labelled.
Even the European Union has introduced the AI Act, the world’s first major law to regulate artificial intelligence. The law divides AI systems into categories based on risk levels. High-risk systems, including those used in healthcare, education, or law enforcement, must follow strict rules and be checked regularly.
Meanwhile, AI systems that manipulate people, conduct social scoring, or perform facial recognition in public places are banned. Generative AI tools, such as ChatGPT, must clearly label AI-created content, prevent illegal material, and follow copyright laws.