Global governance gap: How AI is powering next wave of money laundering
As AI reshapes finance, it is also enabling money laundering, deepfake fraud and regulatory forum shopping, underscoring urgent gaps in global AI governance
Last Updated : Feb 16 2026 | 1:46 PM IST
The global financial architecture is facing a ‘dual-use’ paradox. While artificial intelligence (AI) is being hailed as a transformative force for governance and productivity, it is simultaneously opening new avenues for fraudsters, money launderers and terrorist financiers. To stay ahead of them and protect the integrity of the global financial system, regulators and other stakeholders need to embrace the responsible use of AI.
Risks from AI technologies
A recent horizontal scan on AI and deepfakes published by the Financial Action Task Force (FATF) finds that different forms of AI technology pose distinct new risks of money laundering and terrorist financing. Criminals can exploit ‘predictive AI’ models, known for detecting patterns and making predictions, to bypass the traditional systems banks use to flag suspicious transactions.
'Generative AI', on the other hand, can create deepfakes, such as realistic videos, audio clips, invoices and IDs, that can be used to circumvent anti-money-laundering due diligence, especially through fake KYC documentation.
'Agentic AI' can provide launderers with autonomous systems for the layering and integration of illicit gains. For instance, agentic AI can operate millions of mule accounts and use them to perform high-frequency, low-value transfers that layer funds without creating detectable patterns. In another scenario, multiple AI agents could play against each other on online gambling platforms using illegally obtained funds, with the winners cashing out the proceeds as legitimate gambling winnings, thereby obfuscating the origin of the money. Speculators can also deploy agentic AI in stock markets to run rapid ‘pump-and-dump’ schemes and manipulate the market.
‘General AI’, which is still under development, is expected to reason like humans and could launder funds in ways so complex that Law Enforcement Authorities (LEAs) would find it difficult to follow the money and collect the evidence needed to mount a prosecution. General AI would make it easier to create accommodation entries in books of account, and far harder for tax inspectors to determine the source of funds.
Risk of forum shopping
While all countries are required to implement FATF’s anti-money laundering standards in their jurisdictions, they do so with varying levels of effectiveness. Generative AI can be trained on the laws, regulatory texts and country context of different jurisdictions to identify weaknesses, and can then design layering strategies that exploit the gaps between them. Such forum shopping could frustrate a country’s efforts to investigate cross-border transactions.
Such AI can also be used to design complex corporate structures with multi-jurisdictional splits: for example, a company incorporated in one jurisdiction and resident in another, with its bank accounts held in a third and its ownership hidden from all three.
AI could also be used to develop an elaborate, but fake, paper trail to legitimise black money. Funds moved through multiple bank accounts in different jurisdictions could be backed up with realistic invoices, shipping documents and payment instructions, all deepfakes. Agentic AI can create fake businesses with operational websites and dummy email correspondence between them in minutes.
Need for governance
If agentic AI can be trained to play a launderer, then it can also be trained to play a financial crime buster. It is now being acknowledged that banks and other financial institutions, supervisory bodies, financial intelligence units, tax authorities and LEAs need to embrace AI to stay ahead of criminals in this game. However, these stakeholders are constrained by the lack of a consistent or enforceable global standard for AI regulation. Only a few jurisdictions have developed standards for AI regulation, leaving most parts of the world exposed.
This fragmentation creates ‘regulatory grey zones’ where criminals can map out jurisdictions with the lowest enforcement risk. Due to the increasingly cross-border nature of financial crimes, only AI governance protocols standardised across jurisdictions can be effective in developing AI-driven controls to check AI-driven laundering.
Moreover, the ‘black box’ nature of AI systems presents a hurdle for prosecution. When AI agents execute a layering strategy across multiple jurisdictions in seconds, traditional laws struggle to assign human liability or to collect evidence of the mens rea (‘guilty mind’) behind the laundering.
Global governance standards are required to standardise the audit of AI systems and establish evidentiary standards for AI-driven money laundering. Operational standards also need to be developed for digital KYC processes. And countries need to commit to universal ‘common goods’ so that low-capacity countries have access to the same deepfake detection tools as international financial centres.
The dialogue around the global governance gap in AI use is expected to take centre stage at the India AI Impact Summit 2026, being held in New Delhi this week. It is interesting to note that one of the seven chakras of this summit is ‘Safe and Trusted AI’, which seeks to create interoperable safety and governance frameworks and provide countries of the Global South with equal access to AI safety testing, evaluation tools and transparency mechanisms.
(The author is co-chair of the working group on money laundering, terrorism financing, and proliferation financing risks in FATF)(Disclaimer: These are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)