Wednesday, March 19, 2025 | 08:30 PM ISTहिंदी में पढें
Business Standard
Notification Icon
userprofile IconSearch

Implementing robust legal guardrails: How to make AI safe and trusted

AI systems need ongoing monitoring and control mechanisms that can detect and prevent potential misuse or intended/unintended harm

AI, Artificial Intelligence

The concerns of misuse and harm from AI is real. Compounding these worries is that AI tech is increasingly concentrated in a few countries and companies. | (Photo: Shutterstock)

Rajeev Chandrasekhar New Delhi

Listen to This Article

AI is the coming wave and we are living in the age of AI. 
This is the last piece of a 3-part series The first two pieces are recommended reading before you step into this one.
 
First, I wrote about need for Sovereign AI
   
And here, I write about the need for a framework that will ensure AI is safe and trusted.
 
In November 2023, at the New Delhi Global Partnership for AI conference, Prime Minister Narendra Modi ji laid out a remarkably clear vision for AI.
 
“In India, we are witnessing an AI innovation spirit. India is committed to responsible and ethical use of AI. There is no doubt that AI is transformative, but it is up to us to make it more and more transparent. Trust on AI will grow only when related ethical, economic and social aspects are addressed. We have to work together to prepare a global framework for the ethical use of AI. Can a Software Watermark be introduced to mark any information or product as AI-generated. Explore an audit mechanism that can categorise AI tools into red, yellow or green as per their capabilities”.
 
 
PM Modi ji was the first global leader who brought into focus the misuse of AI and the need to ensure its responsible and ethical use to ensure that these AI systems are safe and trusted.
 
Misuse of AI
 
The concerns of misuse and harm from AI is real. Compounding these worries is that AI tech is increasingly concentrated in a few countries and companies. As artificial intelligence systems become increasingly powerful and pervasive across society, the need for robust safety measures and guardrails has never been more urgent. These safeguards aren’t just technical requirements — They represent a crucial social and/or legal contract between AI developers and the public, one that will determine whether AI systems earn genuine trust and acceptance.
 
The worry of tech misuse is real, history is replete with them. Take the case of Aadhaar when launched during UPA, it was so poorly guardrailed legally and process wise, that millions of illegal immigrants gatewayed into new Indian identities. The history of tech teaches us this- tech will invariably and inevitably land up in bad hands e.g. AQ Khan and atom bomb centrifuge tech, AI chips in China despite US export control, missiles in Korea etc. Consider AI in that same category.
 
While many in the innovation ecosystem will argue that these guardrails are not required and fear regulatory interference – it is clear from our experiences of broader internet platforms and the absence of guardrails, that user harm becomes a significant issue for users and huge systemic problems are created down the road.
 
At its core, AI safety requires multiple layers of protection working in concert. It begins during development and training – where bad data sets or bad algorithms or both can cause models to be trained incorrectly. AI systems must be trained on carefully curated datasets that reflect appropriate values and behaviour. Includes strict filters against harmful content and building in fundamental constraints that prevent the system from engaging in deceptive or manipulative behaviour.
 
Training isn’t enough. AI systems need ongoing monitoring and control mechanisms that can detect and prevent potential misuse or intended/unintended harm. This includes rate limiting to prevent abuse, content filtering to block inappropriate outputs, and kill switches that can immediately halt operation if problems arise.
 
So the three questions that this article pose:
 
What framework will ensure responsible and ethical use of AI?
 
There are two broad models for this – one a social contract i.e. self-regulation and second legally prescribed guardrails for safety and trust. First model which Big Tech loves and loves to violate and puts the onus on user or the government to point out the violations, for platforms to then decide to act or not - has failed in the case social media. The alternative is that of legal guardrails that make it necessary for AI systems to comply with a legal framework that will ensure safety and trust i.e. to not cause user harm.
 
What or who will regulate the use of Data that is used to train these AI models?
 
With the passing off the Digital Data Protection Bill by PM Narendra Modi - an important marker has been set – that personal data can be used with only consent. However, to ensure data sovereignty, we need legislation to deal with other side of the data coin – the non-personal data/anonymised and publicly scraped data which increasingly are the fuel for training big AI platforms like Googles Gemini (that uses Search Engine, Android and Gmail harvested data), Meta (that uses their social media harvested data), Open AI that is using publicly scraped data and other sources that are unknown. The legal framework that deals with Safety Trust obligations could also deal with this important issue of data sovereignty.
 
Government must build the capabilities to audit/test AI systems
 
Government, through its private sector/academic network, must create capabilities to test AI models. This is certainly not trivial and is a medium-term project that is also AI research. The testing framework should involve diverse stakeholders, including ethicists, security researchers, and representatives from potentially affected communities.
 
Safety & trust must be evolving to be futureproof
 
As AI capabilities advance, safety and trust frameworks must evolve in parallel and not be left to individual organizations or platforms definitions and interpretation of responsibility and ethics. Today’s guardrails may prove insufficient for tomorrow’s systems, and we must anticipate future safety challenges and develop more sophisticated protections and/or rapidly evolving rule framework.
 
The goal isn’t just to prevent harm—it is to ensure a framework that allows innovation to push the frontiers of AI, but at same time create AI systems that are genuinely beneficial and trustworthy partners in human endeavours. Building safe and trusted AI systems isn’t optional —it’s a fundamental requirement for responsible deployment and use of AI. By implementing robust legal guardrails alone can we ensure that AI platforms and systems that are deployed and available in India are always safe and trusted and safe & trusted AI defines India’s AI.
 

 
The writer is a former Union minister. This is the conclusion of a 3-part series exclusively for Business Standard
Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper

Don't miss the most important news and views of the day. Get them on our Telegram channel

First Published: Jan 14 2025 | 1:32 PM IST

Explore News