India has emerged as a key source of identity theft, a cybercrime that has surged in the last three years. Criminals, now armed with artificial intelligence (AI), including generative AI (GenAI), are using sophisticated tools to blur the line between what’s real and what’s fake, making it harder than ever to protect personal information.
Employees of Indian companies have had their official emails or logins taken over by threat actors who use the information to attack other enterprises. The crime compounds the security challenges for companies, which also have to deal with ransomware, phishing and malware.
India is among the top 10 sources of identity takeovers, according to data provided by Proofpoint. “This is due to the use of botnets and account takeovers to issue attacks globally,” said Sumit Dhawan, chief executive officer (CEO) of the California-based cybersecurity company. “It has been quite phenomenal because India has never been a source for global attacks,” he added, referring to cybercrimes.
Increasing cases of identity theft have prompted banks, information technology and financial services companies to strengthen their threat defences. It is difficult to say whether the people carrying out identity thefts are based in India, but the impact is global. The crime often involves social engineering that builds a sense of trust and urgency, pushing an organisation’s employees to click on a malicious link.
Identity theft attacks also involve profiling a person based on their date of birth, family and places of travel, and typically target system administrators, data centre managers and decision-makers at an organisation.
Social media and GenAI technologies allow threat actors to quickly research targets and mount large, sophisticated attacks. Vinayak Godse, CEO of the Data Security Council of India, told Business Standard recently that the combination of a large exposed digital footprint and AI has accentuated such attacks.
AI has become a double-edged sword for organisations, according to cybersecurity experts. While it helps them identify threats quickly, AI-enabled applications also create ways for threat actors to break in.
“When you look at AI for security, it helps me identify patterns much faster. If I had a security analyst sitting and looking for certain alerts or incidents, correlating them and making sense out of that, it can be completely taken over by AI,” said Muraleekrishnan Nair, managing director of CyberProof, a cybersecurity company owned by UST.
“AI makes much better judgment compared to a human agent because it can understand patterns and make decisions quickly. AI is helping me in understanding and acting on threats quickly. But at the same time, the way the attacks are coming has become much stronger and much wider because of AI. I need to have defence systems which can stop that,” he said.
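To make the pattern-correlation idea concrete, below is a minimal, purely illustrative sketch of how an anomaly detector might flag suspicious login events. Every feature, threshold and tool choice here is an assumption made for illustration; it does not describe CyberProof’s or any vendor’s actual system.

```python
# Illustrative sketch only: a toy anomaly detector over login events,
# loosely in the spirit of the AI-assisted alert triage Nair describes.
# All features and parameters are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" login telemetry: [hour_of_day, failed_attempts, new_device]
normal = np.column_stack([
    rng.normal(10, 2, 500),        # logins cluster around office hours
    rng.poisson(0.2, 500),         # occasional failed attempts
    rng.binomial(1, 0.05, 500),    # rarely from a new device
])

# A handful of suspicious events: odd hours, many failures, unseen devices
suspicious = np.array([
    [3.0, 8, 1],
    [2.5, 12, 1],
    [4.0, 6, 1],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for event in suspicious:
    score = model.decision_function([event])[0]   # lower = more anomalous
    verdict = "ALERT" if model.predict([event])[0] == -1 else "ok"
    print(f"event={event} score={score:.3f} -> {verdict}")
```

In production, such models typically run over far richer telemetry and feed alerts to human analysts rather than acting alone.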
“These attacks are written in proper language. You can't just read it and say ‘oh this looks like someone is trying to spoof me’ because the language is not written right. Second, a lot of research and data is available on social media. There are tools that actually can be sent to scour the internet and create automated attacks using GenAI,” said Dhawan.
Cybercrimes, including identity theft, are likely to have cost the world $9.5 trillion in 2024, according to Gartner. Synthetic-identity-based impersonation attacks, where a fraudster creates a new, fake identity by combining real and fabricated personal information, are causing reputational and financial damage.
Experts say the risks are even higher for Indian enterprises that use third-party authentication services for identity verification. Gartner’s research predicts that by 2027, AI agents will halve the time needed to exploit account takeovers, giving organisations even less time to respond to threats. The agents will automate more steps in the account takeover process, including using deep-fake voices, to make social engineering more convincing.
“In India, we are seeing these trends accelerate at a unique pace. The sheer volume of data and the speed at which it moves through a digital ecosystem creates a perfect environment for disinformation campaigns,” said Apeksha Kaushik, principal analyst at Gartner.
“The digital attack surface, including an organisation’s assets, leadership profiles and sensitive data, is an easy target for adversaries. Publicly accessible assets such as social media accounts, executive photos and corporate logos are especially vulnerable to misuse in impersonation attacks,” said Kaushik.
As criminals misuse AI, organisations are using the same technology to protect themselves. AI and machine learning now power deep-fake detection, and AI has become crucial for automated identity verification and document checks.
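One published research idea gives a flavour of how such detection can work: GAN-generated fakes often show abnormal decay in their frequency spectrum, so a classifier over spectral features can separate them from genuine images. The sketch below is a toy illustration of that idea using synthetic stand-in data; it is an assumption for illustration, not the detector any company quoted here uses.

```python
# Illustrative sketch of one published deep-fake detection idea:
# classify images by their azimuthally averaged power spectrum.
# The "real" and "fake" data below are synthetic stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LogisticRegression

def spectral_profile(img, n_bins=32):
    """Azimuthally averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(r.ravel(), minlength=n_bins)
    return np.log1p(sums[:n_bins] / np.maximum(counts[:n_bins], 1))

rng = np.random.default_rng(0)
size = 64

# Stand-ins: "real" images keep natural high-frequency content,
# "fake" images are low-pass filtered, mimicking GAN spectral decay.
real = [rng.random((size, size)) for _ in range(200)]
fake = [gaussian_filter(rng.random((size, size)), sigma=2) for _ in range(200)]

X = np.array([spectral_profile(img) for img in real + fake])
y = np.array([0] * len(real) + [1] * len(fake))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```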
Even so, the best defence remains training employees and customers to recognise these vulnerabilities. “This is a more difficult process than the technology process. I can hire experts, buy tools, and seal these holes. I can do vulnerability audits and find out problems and then fix them. But then the other side of making people aware and controlling their actions, that is the complex thing,” said Nair.