The advent of Artificial Intelligence (AI) has led to an increase in deepfake-related issues across sectors.
The Gartner report titled “Detect Deepfakes to Guard Against Impersonation and Disinformation” found that 62 per cent of chief executive officers (CEOs) and senior business executives globally think deepfakes will create at least some operating costs and complications for their organisations in the next three years, while 5 per cent consider them an existential threat.
“Deepfake content is on the rise, posing a problem for organisational security processes and acting as a vector for disinformation. Executive leaders must understand how deepfakes create risk, how to detect them, and how to protect against potential financial, reputational, and operational harm,” said the report.
Indian firms, too, are feeling the impact of the illicit use of AI to create hyper-realistic deepfake content.
Companies that Business Standard spoke to did not see deepfakes as an existential threat at this point, but acknowledged that the technology was capable of causing financial losses through attempts to malign their leaders or the brand itself.
“Deepfakes are increasingly becoming a significant challenge, especially for organisations focused on user identity or consumer verification. The impact is profound, particularly for companies engaged in transactions, such as e-commerce platforms, banks, telecom operators, and food delivery apps,” said Saurabh Gupta, chief executive officer and founder, VeriSmart AI, a digital identity verification platform.
Explaining the implications of deepfakes, Gupta cited an example: “If a telecom operator fails to verify a user and inadvertently allows a deepfake to access their services, it could lead to regulatory scrutiny, penalties, and even the shutdown of digital onboarding systems.”
“The magnitude of this impact can be in the double-digit percentage range, affecting both operational efficiency and business continuity. Therefore, deepfakes pose a severe threat to daily operations, especially in industries where user verification is paramount,” he added.
The author of the Gartner study also pointed out some serious risks that deepfakes can pose for an enterprise, including bypassing security firewalls, stealing intellectual property, and planting ransomware in a company’s systems.
“An attacker could impersonate the chief financial officer’s (CFO) voice or face and instruct an employee to transfer money into accounts belonging to the attacker. This could lead to significant financial losses for the company. Another example could be attackers submitting deepfake content to abuse a company’s processes. For instance, an insurance company might receive images of damage to a car which are deepfake images, and they could lose money by paying out on that fraudulent claim,” said Akif Khan, vice-president analyst, Gartner.
Deepfakes are synthetic media, typically videos or images, created using AI to convincingly alter or fabricate content, often making it appear as if someone said or did something they never actually did.
According to Adobe’s “Future of Trust Study for India,” 81 per cent of Indians believe that the content they see online has been altered in some way, indicating a lack of trust in online information.
Governments, regulatory bodies, and digital rights groups have been stressing the urgent need to curb this issue, terming the rise of deepfakes a threat to democratic governance, corporates, public awareness, and trust.
The issue of deepfakes is commonly linked to reputational damage caused by digital impersonation, but a recent Gartner report highlighted that these manipulations might also cause significant financial losses for companies.
Experts said deepfakes primarily cause two types of damage, monetary fraud and mass hysteria, and that organisations can guard against both by strengthening verification methods, which may involve deploying advanced anti-AI tools.
“Technologies include liveness checks, which assess whether a user’s behaviour in a video is consistent with a live human rather than an AI-generated deepfake. Further, anti-AI tools are also being deployed to detect AI-generated content by analysing attributes like eye movement, which can differ significantly between real and AI-generated images,” said Gupta of VeriSmart AI.
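For readers curious what such a liveness signal looks like in practice, the sketch below implements one classic cue: the eye aspect ratio (EAR) blink test, which flags a clip as suspicious if no natural blinking is observed. This is a minimal illustration, not any vendor's product; it assumes eye landmark coordinates are supplied by an external face-landmark model (such as MediaPipe FaceMesh), and the threshold values and function names are illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) eye landmarks, ordered per Soukupova & Cech (2016).

    eye[1]/eye[5] and eye[2]/eye[4] are vertical landmark pairs; eye[0] and
    eye[3] are the horizontal corners. EAR drops sharply when the eye closes.
    """
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def looks_live(per_frame_eyes, blink_thresh: float = 0.21,
               min_blinks: int = 1) -> bool:
    """Crude liveness heuristic: expect at least one blink in the clip.

    per_frame_eyes: sequence of (6, 2) landmark arrays, one per video frame,
    produced by any face-landmark model (assumed external to this sketch).
    """
    blinks, closed = 0, False
    for eye in per_frame_eyes:
        ear = eye_aspect_ratio(np.asarray(eye, dtype=float))
        if ear < blink_thresh:
            closed = True            # eye is currently shut
        elif closed:
            blinks += 1              # eye re-opened: count one blink
            closed = False
    return blinks >= min_blinks
```

Production systems combine several such cues (blink rate, head pose, skin texture artefacts) precisely because any single signal can be spoofed by a sufficiently good generator.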
He further said that companies should strategically integrate these technologies into their existing processes to prevent heavy damage from deepfakes, even though doing so may incur additional costs.
Experts also suggested that government action could be another big deterrent to the proliferation of deepfakes: enforcing stricter penalties for deepfake creation and mandating clear tagging of AI-generated content, so that users can differentiate between real and AI-made material.
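As a rough illustration of what “clear tagging” could mean in practice, the sketch below attaches a tamper-evident provenance label to generated content and verifies it before display. The HMAC-based scheme, key handling, and field names are assumptions made for illustration only; real-world provenance efforts, such as the C2PA Content Credentials standard that Adobe backs, are considerably more elaborate.

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # hypothetical key held by the content issuer

def tag_ai_content(content: bytes, generator: str) -> dict:
    """Attach a tamper-evident 'AI-generated' label to a piece of content."""
    label = {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return label

def verify_tag(content: bytes, label: dict) -> bool:
    """Check that the label is intact and actually describes this content."""
    claimed = dict(label)
    sig = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

# Usage: tag a generated image's bytes at creation, verify before display.
image_bytes = b"...generated pixels..."
label = tag_ai_content(image_bytes, generator="example-model-v1")
assert verify_tag(image_bytes, label)
```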
Additionally, regular audits and accountability measures for large language models and AI technologies are essential to prevent misuse, they said.
“There should be frameworks in place to ensure that large language models and other AI technologies are audited regularly, with accountability measures to prevent their abuse. Without such oversight, the unchecked growth of deepfake technology could lead to a significantly more dangerous digital environment,” Gupta added.