These companies will also be required to remove non-consensual intimate imagery from their platforms within two hours, down from the 24-hour window provided now, and to clearly label all AI-generated and synthetic content, the Ministry of Electronics and Information Technology said on Tuesday.
The ministry notified these changes, which will come into effect from February 20, as part of amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Platforms like Facebook, YouTube, Instagram, and X will be impacted by the new rules.
The timelines have been compressed following feedback from various stakeholders that the previously mandated takedown windows of 36 hours and 24 hours were too long, especially for sensitive content, and could not prevent such content from going viral.
“Tech companies have an obligation now to remove unlawful content much more quickly than before. They certainly have the technical means to do so,” a senior IT ministry official said.
In the latest amendments, the IT ministry mandated that platforms that enable users to generate content using AI must ensure such content is clearly identified or labelled through visible disclosures stating that it has been synthetically generated or modified.
In addition to placing visible disclaimers on such AI-generated content, intermediaries must also, wherever possible, embed permanent metadata or other such identifiers to help trace the origin of the content.
The ministry defines synthetically generated information (SGI) as any audio, visual or audio-visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that makes it appear real, authentic or true. It has, however, exempted “good faith” editing of content using AI tools from the definition of SGI.
The newly notified amendments also state that as soon as an intermediary is made aware of the misuse of its tools to create, host or disseminate SGI, it must deploy “reasonable” and “appropriate” technical measures to prevent such content from remaining on its platform.
The new amendments are materially broader than what was circulated in the draft for consultation, said Aman Taneja, a partner at Delhi-based law firm Ikigai Law.
“While the government has sharpened the definition of synthetic content and moved away from prescriptive requirements such as mandatory 10 per cent visual watermarking, it has simultaneously reduced takedown timelines across all categories of content to just a few hours. This significantly raises the compliance bar. For large platforms, meeting these timelines at scale will be operationally challenging and could push companies towards over-removal,” Taneja said.
Other experts, however, believe that the amendments mark a more calibrated approach to regulating AI-generated deepfakes.
“By narrowing the definition of synthetically generated information, easing overly prescriptive labelling requirements, and exempting legitimate uses like accessibility, the government has responded to key industry concerns while still signalling a clear intent to tighten platform accountability,” said Rohit Kumar, founding partner at the public policy firm The Quantum Hub.