MeitY's amendments on fake content to up risks for social media platforms
Experts say MeitY's amendment to the IT Rules for AI-generated content, which cuts takedown timelines to three hours, will increase compliance costs and risks for platforms
Last Updated : Feb 11 2026 | 6:55 PM IST
Social media platforms face a sharp increase in compliance pressure after the Ministry of Electronics and Information Technology (MeitY) amended the IT Rules for AI-generated content, cutting takedown timelines from 36 hours to three and tightening due diligence requirements under the threat of losing safe harbour protections.
MeitY’s amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mark a shift from a notice-and-takedown model to an active, rules-based governance model, experts said.
Though Big Tech companies such as Meta, X, and Google are likely to face an increased compliance burden due to the significantly reduced timelines for taking down unlawful content, the government has also relaxed some norms on the labelling of synthetically generated information (SGI), industry experts said.
“The move to three-hour takedown obligations and two-hour action on impersonation and intimate imagery effectively requires round-the-clock compliance operations. The real challenge for platforms will lie in designing proportionate, verifiable systems that can meet these timelines without over-removal or chilling legitimate speech,” said Ankit Sahni, a partner at law firm Ajay Sahni & Associates.
Though the government has, for the first time, clearly defined what kinds of content will be classified under the SGI umbrella, some compliance obligations on social media intermediaries seem excessive, an industry executive said.
“In particular, the due diligence requirements extend beyond a standard of ‘reasonable efforts’ and move towards a more hard-coded obligation, failure of which may result in loss of safe harbour,” said Huzefa Tavawalla, partner and head of digital disruption at law firm Cyril Amarchand Mangaldas.
Additionally, the IT ministry’s mandate to take down unlawful or flagged content within three hours, instead of the currently allowed 36-hour timeline, is likely to lead to enforcement errors by companies, another industry executive said.
Salman Waris, partner at TechLegis, believes that takedown deadlines being reduced to three hours (or two hours for sensitive content) will force reliance on automated systems rather than human review.
“This marks a departure from the Shreya Singhal precedent, which limited intermediary liability to actual knowledge and rejected proactive monitoring. This effectively shifts platforms from passive hosts to proactive gatekeepers by imposing strict timelines and constructive knowledge-based liability,” he pointed out.
Waris also noted that failure to detect or label deepfakes, even without explicit notice, can result in loss of safe harbour protection under Section 79 of the IT Act.
“For new platforms and players, including homegrown platforms, this significantly increases the cost of doing business and barriers to entry. Finally, the time period given to comply with these rules is not long — these provisions become applicable in barely 10 days,” said Vikram Jeet Singh, partner at law firm BTG Advay.
How will compressed takedown timelines affect platforms?
These extremely compressed timelines could also carry the risk of precautionary removal by social media intermediaries, said Divye Agarwal, co-founder of Binge Labs.
“The focus should therefore remain on addressing misuse decisively, while preserving due process and safeguarding legitimate expression within the digital ecosystem,” Agarwal said.
What do the new IT Rules say on synthetic and AI content?
For the first time, the rules also distinguish synthetic content from routine, exempted uses of AI. For instance, routine photo editing or the use of filters, transcribing videos, removing background noise, making presentations, and using AI to generate diagrams and graphs are permitted without labelling. However, the rules also require social media platforms to ensure that content generated purely with AI is declared as such by the creator.
However, experts caution that the labelling, traceability, and user disclosure requirements under the 2026 IT Rules face significant technical and operational challenges at scale.
“While platforms like Meta and YouTube already use visible labels or watermarks, the requirement for labels to cover 10 per cent of visual content (in the draft) was relaxed due to industry pushback. Final rules allow more flexibility, but automated, real-time labelling across 22+ languages and formats remains complex,” added Waris.
Then there are the metadata embedding requirements. Permanent, tamper-proof metadata or unique identifiers (for example, C2PA-style provenance) may be technically feasible but are not universally adopted, he said, adding that smaller platforms may lack the infrastructure to implement such a mandate.