India’s “Governance Guidelines for Artificial Intelligence (AI)”, released recently by the Ministry of Electronics and Information Technology, mark an important milestone in how the country envisions AI governance. In the absence of a dedicated AI law, the guidelines represent a pragmatic step towards anticipatory governance, one that seeks to manage risks without stifling innovation. Rather than replicate Western models of precautionary regulation, India is crafting a homegrown framework that treats trust, innovation, and inclusion as mutually reinforcing principles. At the heart of this new architecture are the seven guiding sutras, or principles: trust; people-first design; innovation over restraint; fairness and equity; accountability; understandable by design; and safety, resilience and sustainability.
Supporting these principles are six structural pillars: infrastructure, capacity building, policy and regulation, institutional design, accountability, and risk mitigation. Each pillar addresses a distinct capability gap. Expanding access to computing infrastructure through 38,000 graphics processing units, broadening the AI Kosha repository of local datasets, and setting up an AI Safety Institute are foundational steps. They signal India’s intent to build hardware, data, and human expertise before legislating.

These guidelines come at a critical juncture. A recent NITI Aayog study estimated that AI could add $1.4 trillion-1.9 trillion to India’s gross domestic product by 2035, with productivity gains and the reallocation of human effort to higher-value tasks generating $500 billion-600 billion and innovation contributing another $280 billion-475 billion.
The guidelines propose embedding accountability directly within code. Techniques such as watermarking, provenance tracking, and consent-management application programming interfaces (APIs) can make content traceable and consent enforceable through digital architecture. Institutional coordination is another pillar of the design: the proposed AI Governance Group and Technology and Policy Expert Committee may harmonise efforts across sectoral regulators. Such cross-regulatory cooperation recognises that AI is not a single-sector phenomenon but a general-purpose technology. Complementing this is a planned AI-incident database, which will collect real-world evidence of harms caused by, and failures of, AI systems. So far, India has relied on existing laws to govern digital platforms and address AI misuse.
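To make the idea of accountability embedded in code concrete, here is a minimal illustrative sketch, in Python and using only the standard library, of how a signed provenance record might be attached to AI-generated content. The field names, generator identifier, and signing key are hypothetical illustrations, not drawn from the guidelines; a production system would rely on proper key management and established standards such as C2PA rather than this simplified scheme.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the content generator; in practice this
# would live in a key-management service, never hard-coded.
SIGNING_KEY = b"example-provenance-key"


def make_provenance_record(content: bytes, generator_id: str, consent_token: str) -> dict:
    """Build a signed provenance record for a piece of AI-generated content.

    Binds a SHA-256 digest of the content to the generating system, a
    timestamp, and a consent token, then signs the record with an HMAC
    so later tampering is detectable.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator_id": generator_id,      # which AI system produced it
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "consent_token": consent_token,    # reference to the user's consent
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is intact."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and record["content_sha256"] == hashlib.sha256(content).hexdigest())


if __name__ == "__main__":
    text = b"An AI-generated news summary."
    rec = make_provenance_record(text, generator_id="model-x", consent_token="consent-123")
    print(verify_provenance_record(text, rec))          # True
    print(verify_provenance_record(b"tampered", rec))   # False
```

In this sketch, any alteration of the content or the record invalidates the signature, which is the property that makes provenance enforceable through digital architecture rather than through after-the-fact auditing alone.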
For India, the stakes are high. Exercising regulatory restraint while promoting innovation requires a delicate balance, and voluntary compliance will hold only if companies act responsibly. In the absence of strong oversight, poorly governed AI systems may deepen existing social biases and erode public trust. Though the new guidelines aim to prevent such outcomes by pairing technological advancement with ethical discipline, robust data governance will remain indispensable. Clear frameworks must guide the collection, curation, anonymisation, encryption, sharing, and deletion of data, alongside mechanisms for bias detection, validation of synthetic data, and adoption of open standards. Without transparent and secure data pipelines, AI could amplify systemic fragilities instead of correcting them. Equally, the government must engage meaningfully with industry bodies that raise legitimate concerns over AI governance, rather than remaining rigid in its prescriptions. Ultimately, principles and guidelines are merely the starting point; success will depend on sustained investment in skills and institutional accountability.