Trai’s recommendations are notable for their acknowledgement that AI systems are still evolving and for citing various international practices as ready references for India’s AI regulatory framework. In a nod to the European Union’s recent AI Act, the regulator noted the importance of regulating specific AI use cases that could directly affect humans within a risk-based framework. However, experts have cautioned against a centralised regulatory body and argued for guardrails in lieu of rigid rules: regulatory strictness can discourage young tech firms from entering the AI market, leading to the dominance of established tech giants. A consensus has also quickly emerged that AI needs human guidance and that regulations must make space for a clear framework for human-AI collaboration, including the goals and limits of such collaboration.
Here, Trai’s reference to the EU’s risk-based framework, which establishes obligations for both providers and users in certain AI-related activities, is crucial. Under the EU’s AI Act, AI-driven systems classified as posing an “unacceptable risk” will be considered a threat to people and banned. These include the cognitive behavioural manipulation of people or specific vulnerable groups; the social scoring of people based on behaviour, socioeconomic status or personal characteristics; and real-time, remote biometric identification systems, such as facial recognition. AI systems deemed “high risk” will be divided into two categories. The first will include AI systems used in products falling under the EU’s product safety legislation, such as toys, aviation and cars. The second will include AI systems deployed in eight specific areas, such as biometric identification, the management and operation of critical infrastructure, and law enforcement; these will have to be registered in an EU database. The EU parliament lists similar preliminary precautionary measures for the “generative AI” (GAI) and “limited risk” classifications as well.
Such a classification helps clear the clutter of an expanding basket of AI-driven tools and services. As AI innovators discover new ways of harnessing GAI or large language model-based tools, most such systems can be reliably classified under these risk-based subheads. For India, therefore, more crucial than establishing an AIDAI is implementing such preliminary cataloguing, which can lay the groundwork for most future AI regulation, however rapidly the ecosystem evolves.