As artificial intelligence (AI) evolves, governments are drafting rules to govern the way AI is built, trained, and deployed, yet regulators across the world are struggling to keep pace. There is a growing recognition that AI, especially generative AI, does not respect national borders. The European Union (EU) is leading the way in crafting a structured framework: its AI Act came into force in August 2024. Meanwhile, the recently released Code of Practice for general-purpose AI sets important benchmarks on transparency, copyright compliance, and systemic risk management, helping firms comply with those norms and offering legal clarity to those that adopt it. Though signing up remains voluntary, the code encourages AI companies to document their models, respect rights over scraped content, and monitor risks from harmful outputs.

Its transparency measures require AI developers to disclose model documentation, training methods, and intended use cases, helping downstream providers and regulators alike. The copyright chapter mandates respect for digital rights, the use of lawful data sources, and safeguards against infringing AI output. Most notably, the safety and security framework demands lifecycle assessments, post-market monitoring, and serious incident reporting for models that pose systemic risks. Taken together, this is the most comprehensive AI governance effort yet, combining precaution with support for innovation.
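To make the transparency requirement concrete, the following is a minimal sketch of what a machine-readable model documentation record might look like. The schema, field names, and model name here are illustrative assumptions for this article, not the Code's actual template.

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative schema only; the actual Code of Practice documentation
# form is more detailed and legally specified.
@dataclass
class ModelDocumentation:
    model_name: str
    version: str
    training_data_summary: str           # provenance of training data
    training_methods: str                # e.g. pretraining + fine-tuning
    intended_use_cases: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the disclosure so downstream providers and
        regulators can all consume the same record."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry.
doc = ModelDocumentation(
    model_name="example-gpm",
    version="1.0",
    training_data_summary="Licensed corpora and publicly available web text",
    training_methods="Self-supervised pretraining followed by instruction tuning",
    intended_use_cases=["text summarisation", "drafting assistance"],
    known_limitations=["may produce inaccurate output"],
)
print(doc.to_json())
```

Publishing such a record in a standard, machine-readable format would let downstream providers and regulators work from one disclosure rather than bespoke paperwork.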
The assumption that Big Tech will regulate itself and that companies will deploy responsible, ethical AI, especially in the face of profit pressures and geopolitical competition, is naïve at best. Even at the first international summit on AI safety, held at Bletchley Park in the United Kingdom in 2023, founders and chief executives of large tech companies could not reach a consensus on the severity of the long-term risks posed by AI. Against this backdrop, it is encouraging that the EU has produced a common framework to contain some of AI's potential harms. Developing global standards and regulations will be critical, though far from easy, and India stands to gain from them: it has a large pool of tech professionals who can help build compliant solutions for the world. There is a legitimate case for developing AI with the fewest possible restrictions. Such applications should, however, undergo a risk assessment to determine their appropriate risk category before deployment. Additionally, every AI system should maintain an audit trail of its decision-making processes to facilitate investigations and forensics, ensuring transparency and accountability.
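As a sketch of how those last two recommendations might work in practice, the snippet below pairs a toy pre-deployment risk triage (loosely echoing the EU AI Act's tiered approach) with a hash-chained, append-only audit trail of model decisions. The tier names, the assess_risk heuristic, and the AuditTrail class are hypothetical illustrations, not prescribed by any regulation.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative tiers loosely modelled on the EU AI Act's categories;
# a real classification turns on the system's use case and context.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

def assess_risk(use_case: str) -> str:
    """Toy pre-deployment triage: map a declared use case to a tier.
    A real assessment would be a documented, multi-factor review."""
    high_risk_domains = {"credit scoring", "hiring", "medical triage"}
    return "high" if use_case in high_risk_domains else "limited"

class AuditTrail:
    """Append-only log of model decisions. Each entry is hash-chained
    to its predecessor so tampering is detectable in an investigation."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis hash

    def record(self, model_id: str, inputs: str, output: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "inputs": inputs,
            "output": output,
            "prev_hash": self._last_hash,
        }
        # Hash the entry (including the previous hash) to extend the chain.
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)

    def entries(self) -> list[dict]:
        return list(self._entries)

# Usage: triage the system before deployment, then log every decision.
tier = assess_risk("hiring")    # -> "high": stricter obligations apply
trail = AuditTrail()
trail.record("example-gpm", "candidate CV text", "shortlist: yes")
print(tier, trail.entries()[0]["hash"][:12])
```

Hash-chaining each entry to its predecessor is one simple design choice that makes after-the-fact tampering detectable, which is precisely what forensic investigators need from an audit trail.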