European Union unveils AI rules to boost safety, transparency and trust

Makers of the most advanced artificial intelligence systems will face new obligations for transparency, copyright protection and public safety. The rules are voluntary to start

NYT
4 min read Last Updated : Jul 10 2025 | 10:49 PM IST

By Adam Satariano
European Union officials unveiled new rules on Thursday to regulate artificial intelligence. Makers of the most powerful AI systems will have to improve transparency, limit copyright violations and protect public safety.
The rules, which are voluntary to start, come during an intense debate in Brussels about how aggressively to regulate a new technology seen by many leaders as crucial to future economic success in the face of competition with the United States and China. Some critics accused regulators of watering down the rules to win industry support. 
The guidelines apply only to a small number of tech companies like OpenAI, Microsoft and Google that make so-called general-purpose AI. These systems underpin services like ChatGPT, and can analyze enormous amounts of data, learn on their own and perform some human tasks.
The so-called code of practice represents some of the first concrete details about how EU regulators plan to enforce a law, called the AI Act, that was passed last year. Tech companies played a major role in drafting the rules, which will be voluntary when they take effect on Aug. 2, before becoming enforceable in August 2026, according to the European Commission, the executive branch of the 27-nation bloc. 
The European Commission said companies that agreed to the voluntary code of practice would benefit from a “reduced administrative burden and increased legal certainty.” Officials said those that do not would have to prove compliance through other means, which could potentially be more costly and time-consuming. 
It was not immediately clear which companies would join. Google and OpenAI said they were reviewing the final text. Microsoft declined to comment. Meta, which had signaled that it would not agree to the code of practice, did not have an immediate comment. Amazon and Mistral, a leading AI company in France, did not respond to requests for comment.
Under the guidelines, tech companies will have to provide detailed summaries of the content used to train their algorithms, something long sought by media publishers concerned that their intellectual property is being used to train the AI systems. Other rules would require the companies to conduct risk assessments to see how their services could be misused for things like creating biological weapons that pose a risk to public safety.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied the suit’s claims.) 
What is less clear is how the law will address issues like the spread of misinformation and harmful content. This week, Grok, a chatbot created by Elon Musk’s artificial intelligence company, xAI, shared several antisemitic comments on X, including praise of Hitler.
 
Henna Virkkunen, the European Commission’s executive vice president for tech sovereignty, security and democracy, said the policy was “an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent.” 
The guidelines introduced on Thursday are just one part of a sprawling law that will take full effect over the next year or more. The act was intended to prevent the most harmful effects of artificial intelligence, but European officials have more recently been weighing the consequences of regulating such a fast-moving and competitive technology. 
Leaders across the continent are increasingly worried about Europe’s economic position against the United States and China. Europe has long struggled to produce large tech companies, making it dependent on services from foreign corporations. Tensions with the Trump administration over tariffs and trade have intensified the debate. 
Groups representing many European businesses have urged policymakers to delay implementation of the AI Act, saying the regulation threatens to slow innovation, while putting their companies at a disadvantage against foreign competition. 
“Regulation should not be the best export product from the EU,” said Aura Salla, a member of the European Parliament from Finland who was previously a top lobbyist for Meta in Brussels. “It’s hurting our own companies.”

First Published: Jul 10 2025 | 10:34 PM IST
