EU's code of practice sets key benchmarks for regulating AI development
Most countries have yet to catch up; India, for instance, has no dedicated AI law.
The assumption that Big Tech will regulate itself and that companies will deploy responsible and ethical AI, especially in the face of profit pressures and geopolitical competition, is naïve at best.
As artificial intelligence (AI) evolves, governments are drafting rules to govern the way AI is built, trained, and deployed. Yet regulators across the world are struggling to keep pace, and there is a growing recognition that AI, especially generative AI, does not recognise national borders. The European Union (EU) is leading the way in crafting a structured framework. Its AI Act came into force in August last year, and the recently released Code of Practice for general-purpose AI sets important benchmarks on transparency, copyright compliance, and systemic-risk management, helping firms comply with those norms and offering legal clarity to those that adopt it. The code encourages AI companies to document their models, respect rights over scraped content, and monitor risks from harmful outputs, though signing up remains voluntary.

The code's transparency measures require AI developers to disclose model documentation, training methods, and intended use cases, helping downstream providers and regulators alike. Its copyright chapter mandates respect for digital rights, the use of lawful data sources, and safeguards against infringing AI output. Most notably, the safety and security framework demands lifecycle assessments, post-market monitoring, and serious-incident reporting for models that pose systemic risks. This is clearly the most comprehensive AI governance effort yet, combining precaution with support for innovation.