Artificial intelligence may soon become a fixture in terrorist arsenals, warns a new joint report by the United Nations Counter-Terrorism Centre (UNCCT) and the United Nations Interregional Crime and Justice Research Institute (UNICRI).
Titled Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes, the May 2025 report outlines how AI could be used in cyberattacks, autonomous weapons, deepfake propaganda, and terrorist financing. Though no confirmed cases of AI being used in terrorist attacks have emerged so far, the report urges immediate global action, citing signs of growing interest and experimentation.
Expert survey underscores perceived threat
A survey of 27 experts conducted for the report found that 44 per cent believe AI-based terrorism is “very likely”, with the rest calling it “somewhat likely”.
The UN agencies highlight four core concerns: the availability of open-source AI tools; scalability of attacks; the asymmetric advantage terrorists enjoy due to fewer legal constraints; and society’s increasing dependence on digital infrastructure.
ISIL’s early experiments with emerging tech
The early efforts of the Islamic State of Iraq and the Levant (ISIL) illustrate the trajectory. In 2016, the group reportedly tested self-driving cars in Syria and later developed a drone unit called the “Unmanned Aircraft of the Mujahedeen”. In 2020, an ISIL supporter shared a video showing how facial recognition might identify targets despite attempts to obscure identity—suggesting a basic awareness of AI’s potential.
AI applications across cyber, finance and weapons
The report warns of AI being used to automate password cracking, enhance ransomware, and deploy drone swarms. In financial operations, deepfake videos could be used to impersonate trusted figures, while AI-powered bots may support fraudulent crowdfunding and obscure cryptocurrency flows.
Propaganda and recruitment in the age of AI
In the propaganda space, generative AI and social bots could reinforce extremist narratives by mimicking real users and amplifying echo chambers. These tools could improve online recruitment by simulating peer validation and ideological affinity.
Call for regulation and global coordination
The report calls for pre-emptive responses from governments, legal bodies, and private tech companies. It also urges stronger intergovernmental collaboration and tighter regulation of open-source AI platforms.
“The potential for the malicious use of Artificial Intelligence for terrorist purposes merits the close attention of the international community,” the report concludes.
