ChatGPT has won the right to self-rule but it might go badly; here's why

Rishi Sunak's shift from "AI safety" champion to regulation sceptic reflects a wider retreat by governments betting on AI-led growth, even as real-world harms mount and self-policing looks shaky

ChatGPT may be the fastest-growing software of all time, regularly used by 10 per cent of the global population just three years after launch | (Photo: Reuters)
Bloomberg
5 min read Last Updated : Dec 03 2025 | 11:39 AM IST

By Parmy Olson
Former British Prime Minister Rishi Sunak once thought artificial intelligence so risky that in 2023 he organised the world’s first “AI Safety Summit,” inviting policymakers and longtime AI doomer Elon Musk to talk up guardrails for the boom sparked by ChatGPT. Two years later, his view has softened considerably.
“The right thing to do here is not to regulate,” he told me last month at Bloomberg’s New Economy Forum, saying companies like OpenAI were “working really well” with security researchers in London who tested their models for potential harms. Those firms were volunteering to be audited. When I pointed out they might change their minds in the future, Sunak replied, “So far we haven’t reached that point, which is positive.” But what happens when we do?
 
Sunak’s U-turn from once saying Britain should be the “home of AI safety regulation” to wanting no legislation at all reflects a broader shift among governments around the world. Behind it is an urge to capitalise on tech that could revitalise stagnant economies, and a sense that strict rules aren’t needed without clear evidence of widespread harm.
 
But waiting for catastrophe before regulating is a gamble when new technology is spreading so quickly. ChatGPT may be the fastest-growing software of all time, regularly used by 10 per cent of the global population just three years after launch. It may also be reshaping our brains. Its owner OpenAI has been sued by the families of multiple people who’ve had delusional spirals or become suicidal after spending hours on ChatGPT. One campaign group has collected stories from more than 160 people who say it harmed their mental health. AI is meanwhile wreaking havoc on school homework, entrenching stereotypes, sparking a novel kind of dependency and engaging in artistic theft.
 
All of this has faded into the background amid a tech-hype cycle that even former safety advocates have jumped on. Sunak, for one, has taken advisory roles at AI companies Anthropic PBC and Microsoft Corp., and while he has pledged his salary to charity, those relationships will be valuable should he leave politics. Musk, who fretted about AI’s existential risks, has gone quiet on the subject since founding xAI Corp., the firm behind chatbot Grok. But throwing caution aside in a chase for uncertain economic benefits may come back to haunt governments.
 
Both the West and Asia seem to have entered this age of regulatory leniency. The US went from issuing an executive order under President Joe Biden to build safer AI in 2023, to rescinding that order under Donald Trump. The current administration is fast-tracking data centres and chip exports to beat China, and trying to block state-level AI laws so tech businesses can thrive. Silicon Valley billionaires such as Marc Andreessen have at the same time committed tens of millions of dollars to lobbying against any future AI restrictions.
 
The UK has a track record of creating quick and sensible tech regulation, but it also looks unlikely to crack down on generative AI. The European Union has delayed some of the strictest provisions of its AI Act until 2027, while that law’s Code of Practice has been postponed. Europe’s digital privacy rules were once a template for other governments, yet the so-called Brussels effect looks unlikely to trouble AI.
 
China is no exception to this laissez-faire trend. The country’s Communist Party has rolled out policies aimed at helping domestic AI companies flourish. Despite strict rules requiring social media firms to register their algorithms to prevent social unrest, only lighter standards apply to chatbots and AI tools that generate images or videos. These businesses must label deepfakes and test their tools to make sure they don’t generate illegal or politically sensitive content.
 
But mass-market consumer chatbots are only a slice of China’s AI market. The country’s biggest AI sectors are in areas such as industrial automation, logistics, e-commerce and AI infrastructure. Companies working on this get generous research and development tax deductions, VAT rebates and lower corporate tax, according to a 2025 research paper by Angela Zhang, a professor of law at the University of Southern California and a leading authority on Chinese tech regulation.
 
China’s softer approach to AI firms is down to the CCP also being a major customer for their tools, particularly surveillance tech such as facial recognition. Beijing has too much invested in AI to smother its development, and US export restrictions on chips and a nationwide economic slump have only pushed China further toward growth over regulation.
 
That “offers little protective value to the Chinese public,” Zhang argues. She and others working in security research have warned of AI-enabled disasters sparked by China’s historically lax approach to hazards, from AI-designed pathogens to the disruption of electrical grids and oil pipelines.
 
The prevailing wisdom among governments is that AI companies should be left to self-govern. “Look, I don’t think anyone wants to put something into the world which they think would genuinely cause significant harm,” Sunak said. But unintended consequences often arise even when technologists start off with the best intentions for humanity. Self-regulation works, until it doesn’t.
Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper.