Last Updated: Nov 25, 2025 | 10:53 PM IST
Prime Minister Narendra Modi’s call at the G20 summit in Johannesburg for a “global compact on artificial intelligence (AI)” comes at a time when nations are trying to keep pace with the rapid expansion of AI and the risks that accompany it. Mr Modi highlighted the need for strong human oversight, safety-by-design systems, transparency, and firm bans on the use of AI for deepfakes and for criminal and terrorist activities. These concerns are not abstract: There are warnings that by next year, nearly 90 per cent of online content may be AI-generated, sharply increasing the risk of misinformation and making it ever harder for people to distinguish fact from manipulation.
It is, therefore, encouraging that some movement towards multilateral coordination has begun. The United Nations (UN) recently launched a universal platform for a “Global Dialogue on AI Governance” to promote safe, secure, and trustworthy AI systems, strengthen cooperation between fragmented governance frameworks, and encourage open, inclusive innovation. India, too, has released detailed AI governance guidelines this month, emphasising safety, accountability, transparency, and responsible innovation.
Yet the world faces steep hurdles in establishing effective and fair AI rules. Hundreds of guides, frameworks, and principles have been published by governments, corporations, and international organisations, but none is truly global or comprehensive. The UN reports that while seven countries participate in all major AI governance initiatives, over 100 countries, mostly in the Global South, are part of none, leaving much of the world without a voice in decisions that directly affect them. Meanwhile, only a handful of nations control the computing power required to build advanced AI models, deepening inequalities and limiting the ability of developing countries to participate meaningfully in global rule-making.
This scattered landscape creates three core governance challenges: Poor representation, weak coordination, and limited implementation. Many countries and organisations continue to design their own rules, which often overlap with or contradict one another. Without better coordination, the world risks fragmented regulatory regimes or, worse, a “race to the bottom” in which safety and rights are compromised for competitiveness. And representation and coordination alone are not enough: Real accountability depends on implementation, which requires capacity-building, technical support, and systems that help small and medium enterprises adapt responsibly.
To address these gaps, the world needs shared safety standards, coordinated independent audits for high-risk AI systems, transparent data practices, and firm bans on harmful uses. Developing countries must have access to open-source models, affordable computing, and strong digital public infrastructure so that they are not left behind. As India prepares to host the AI Impact Summit in February next year, it has an opportunity to shape an inclusive global approach and translate its national leadership into international influence. India stands to benefit significantly from global standards: Its large pool of engineers, researchers, and technology professionals can contribute solutions not just for India but for the world. But a meaningful compact will materialise only if major powers set aside narrow strategic interests and commit themselves to shared responsibilities. In a world where AI can be weaponised as easily as it can be used for the public good, the cost of inaction could be heavy.