EU's code of practice sets key benchmarks for regulating AI development
Most countries have still not been able to catch up. India, for instance, has no dedicated AI law
As artificial intelligence (AI) evolves, governments are drafting rules to govern how AI is built, trained, and deployed. Yet regulators across the world are struggling to keep pace, and there is a growing recognition that AI, especially generative AI, does not recognise national borders. The European Union (EU) is leading the way in crafting a structured framework. Its AI Act came into force in August last year, and the recently released Code of Practice for general-purpose AI sets important benchmarks on transparency, copyright compliance, and systemic-risk management, helping firms comply with those norms and offering legal clarity to those that adopt it. The code encourages AI companies to document their models, respect rights over scraped content, and monitor risks from harmful outputs, though signing up remains voluntary.

Transparency measures require AI developers to disclose model documentation, training methods, and intended use cases, helping downstream providers and regulators alike. The copyright chapter mandates respect for digital rights, the use of lawful data sources, and safeguards against infringing AI output. Most notably, the safety and security framework demands lifecycle assessments, post-market monitoring, and serious-incident reporting for models that pose systemic risks. This is clearly the most comprehensive AI-governance effort yet, combining precaution with support for innovation.
Most countries have still not been able to catch up. India, for instance, has no dedicated AI law. The Digital Personal Data Protection Act, passed in 2023, offers only a partial safeguard and is not built to address the complexities of model training, open-source proliferation, or cross-border data scraping. Meanwhile, companies building these models — OpenAI, Google, or Meta — are operating at a global scale. Their crawlers scour the web, collecting information, often with little regard for copyright or consent. There is barely any regulation that governs how this data is collected. Most countries are trying to adapt old laws — copyright, privacy, and intermediary liability — to fit this new technology.
The assumption that Big Tech will regulate itself and deploy responsible, ethical AI, especially in the face of profit pressures and geopolitical competition, is naïve at best. Even at the first international summit on AI safety, held at Bletchley Park (United Kingdom) in 2023, founders and chief executives of large technology companies could not reach a consensus on the severity of the long-term risks posed by AI. Despite these challenges, it is encouraging that the EU has produced a common framework to contain some of the technology's potential harms. India stands to gain from global standards: it has a large pool of technology professionals who can help develop solutions for the world. Developing global standards and regulations will thus be critical, but it will not be easy. There is a case for allowing AI to be developed with as few restrictions as possible. Such applications should, however, undergo risk assessment to determine their appropriate risk category before deployment. Additionally, every AI system should maintain an audit trail of its decision-making to facilitate investigations and forensics, ensuring transparency and accountability.