AI ads face ASCI test as draft rules target deepfakes, misleading claims

The guidelines also state that AI-driven paid or sponsored product suggestions would need to be specifically labelled as "sponsored by". ASCI has sought feedback by June 13

Akshita Singh New Delhi
3 min read Last Updated : May 12 2026 | 4:48 PM IST
The Advertising Standards Council of India (ASCI) on Tuesday released draft guidelines proposing responsible labelling norms for AI-generated content in advertising, as the use of artificial intelligence in brand campaigns becomes more widespread.
 
The self-regulatory body, in a statement, said the draft framework has been aligned with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, amended on February 10, and aims to ensure transparency without causing “consumer label fatigue” around synthetically generated content.
 
ASCI said the proposed framework follows a “risk-based approach”, focusing on consumer impact rather than regulating the technology itself.
 
According to the draft guidelines, AI use in advertising would be considered misleading or harmful only when it creates “unfulfillable expectations” that could exploit vulnerable consumers, depict unsafe situations, or replicate a real person’s likeness without consent.
 
The draft guidelines divide AI-generated advertising content into three categories based on the potential impact on consumers: high risk, medium risk and low risk.
 
Under the “high risk” category, advertisements that are illegal, misleading, infringe rights, or violate the ASCI Code would remain prohibited even if AI disclosures are added.
 
Examples listed by ASCI include fabricated testimonials, exaggerated product claims or visuals, fake but realistic locations, unauthorised deepfakes, use of copyrighted work without consent, and AI-generated fictional authority figures such as fake doctors endorsing products.
 
The “medium risk” category covers advertisements where AI-generated content could materially influence consumer decisions and where non-disclosure may mislead audiences. In such cases, labelling would be mandatory.
 
ASCI said this category includes virtual influencers, AI-generated likeness or voice replication, synthetic visuals demonstrating product performance, entirely AI-created events or situations, demonstrations of products that do not yet exist, and exaggerated AI-generated sound effects linked to core product features.
 
The guidelines also state that AI-driven paid or sponsored product suggestions would need to be specifically labelled as “sponsored by”.
 
For “low risk” content, ASCI said no disclosure would be necessary where AI is used only for limited modifications that do not materially affect consumer understanding. These include routine editing, colour correction, background effects, ambient sound, fantasy elements that the audience recognises as not depicting reality, and administrative uses such as generating ad copy or accessibility descriptions.
 
Where disclosure is required, ASCI said advertisers may use labels such as “Audio/Video created using AI” or “Audio/Video enhanced using AI”, or any other wording that accurately informs consumers. It added that disclaimers must comply with the ASCI Code on disclaimer guidelines wherever applicable.
 
The draft guidelines have been opened for public consultation till June 13, 2026. ASCI said feedback from industry bodies, consumer groups and other stakeholders can be submitted before the finalisation process begins.


First Published: May 12 2026 | 4:48 PM IST
