India needs specific AI regulations for Grok-like issues: Experts

The Grok controversy has exposed gaps in India's tech laws, reviving calls for AI-specific regulation, conditional safe harbour, and stronger safeguards against misuse of generative tools

Aashish Aryan & Shivani Shinde | New Delhi/Mumbai
3 min read Last Updated : Jan 11 2026 | 10:36 PM IST
The controversy around Elon Musk-owned X’s artificial intelligence (AI)-powered chatbot, Grok, generating sexually explicit and objectionable images of women, has highlighted serious legal gaps in existing Indian laws, according to experts.
 
“Issues with AI chatbots like Grok have highlighted significant gaps and mismatches in how existing laws, such as the Indian IT Act, apply to AI. There is an urgent need for the government to reconsider its position and consider separate, AI-specific regulations,” said Salman Waris, founder and managing partner, TechLegis.
 
Last week, female users of X in India and several other countries reported the misuse of their images after other users generated sexually explicit and objectionable photos of them using prompts to Grok.
While the UK and Indonesia were considering banning the chatbot and X, India was also weighing legal action against the platform for failing to comply with laws governing obscenity.
 
According to other experts, though the current legal framework may help, it is not comprehensive enough to cover evolving issues such as AI.
 
“While existing laws such as the Information Technology Act and the Digital Personal Data Protection Act provide a foundational regulatory structure, they were not drafted with autonomous, self-learning systems in mind,” pointed out Rahul Mehta, partner at King Stubb & Kasiva.
 
As these AI tools become increasingly embedded in business-critical decision-making, issues relating to accountability, bias, transparency, data governance, and liability remain only partially addressed, he added.
 
Even as the Indian government believes that the safe harbour protections of platforms such as X should be withdrawn in such cases, some experts caution that such a step could also hurt platform innovation.
 
“In the case of X, there is growing concern that immunity protections are being stretched well beyond their original intent. Yet, withdrawing safe harbour protections altogether would be a blunt response — one that risks undermining innovation, chilling free expression, and weakening the very openness that made digital platforms transformative in the first place,” said Jaspreet Bindra, cofounder of AI & Beyond.
 
Experts said that to prevent a similar situation in the future, the Indian government should adopt a conditional safe harbour that provides immunity to platforms only if they demonstrate responsibility for the content they host.
 
“This would require clear evidence of proactive moderation, transparency in enforcement, and heightened safeguards for AI-amplified content. Immunity in this model is not automatic; it is earned and continuously justified,” Bindra said.
 
Others, such as Mehta, believe that a calibrated, risk-based AI governance framework, aligned with global best practices, yet tailored to India’s economic and social context, will be essential to provide legal certainty, while continuing to support responsible AI adoption at scale.
 
There should also be mandatory safeguards, including pre-market risk assessments and robust safety guardrails for high-risk AI systems that are capable of generating deepfakes, Waris said.
 
“The challenges presented by rapidly evolving AI suggest that a comprehensive, forward-looking legal architecture is ultimately necessary to balance innovation with public safety and accountability. Hence, while the Indian government currently may be trying to work within the existing legal framework and issue advisories, the rapid development of AI technologies and their consequences on society would ultimately force them to consider some form of new regulation in the near future,” he added.
