The hyperwar is here

How predictive AI is changing outcomes in conflict

21 min read
Updated On: Apr 10 2026 | 1:39 PM IST

Airman 1st Class Gerald Mack monitors cyberattacks during Exercise Southern Strike at Camp Shelby in Mississippi, US, on April 21, 2023 (Photo: Staff Sgt. Renee Seruntine/US Army National Guard)

In November 2020, a convoy of vehicles was making its way to Absard on the outskirts of Tehran. As it neared the town, it was ambushed by an ordinary-looking pickup that was carrying a weapon that had never been used before. It was a Belgian-made machine gun, but no human pulled the trigger — it was controlled remotely by satellites, guided by artificial intelligence (AI) and programmed to recognise just one man’s face.
 
When the car being driven by Mohsen Fakhrizadeh, Iran’s leading nuclear scientist, approached to make a U-turn, the system locked on to him with precision and riddled him with bullets, according to an account published by the New York Times. The mechanised assassin was being remotely controlled by members of Israel's spy agency Mossad thousands of kilometres away — the first known use of AI in an assassination.
 
Throughout history, human intelligence (HUMINT) has often been used to target and kill people. Ironically, the very word ‘assassin’ can be traced back to Persia (modern-day Iran), where an 11th-13th century sect called the Hashashin was notorious for targeted killings carried out with precision.
 
Assassinations were shunned in the past because they were seen as unviable and not the best means to achieve a political objective. However, beginning with the killing of Archduke Franz Ferdinand in 1914, which sparked off World War I, assassinations became more common in the 20th century, not least in India, where two prime ministers have been assassinated.
 
Both state and non-state actors have long relied on networks of informants, surveillance and patient intelligence work to find and assassinate individuals. Back when Fakhrizadeh was killed, the use of technology appeared to represent an outlier, an advanced but one-time operation. In retrospect, it was an early signal of a broader shift.
 
Israel’s Mossad has built up its reputation as a ruthless spy agency, partly because of its well-publicised success in hunting down enemies. Particularly well-chronicled is how it took revenge for the so-called Munich massacre, when eight members of the Israeli team were killed during the 1972 Olympics by the Palestinian militant organisation Black September.
 
On the instructions of former Israeli prime minister Golda Meir, Mossad launched Operation Wrath of God, a covert operation to find and kill militants linked with the attacks. Operatives carried out a string of targeted assassinations across Europe and West Asia through a mix of human intelligence, intercepted calls and travel details.
 
Over the past few years, intelligence, surveillance and targeting have begun to converge into systems that do not merely assist human operators but reshape how targets are generated, evaluated and acted upon.
 
Although Mossad is thought to have used a remotely detonated telephone bomb to kill one of the reported participants of the Munich massacre in 1973, the more recent advent of AI has overhauled the entire process of assassinations. Where it once depended on fragmentary information assembled over time, it is now shaped by systems designed to ingest large volumes of data and produce usable outputs at speed.
 
AI models are trained to observe, analyse and predict behavioural patterns that can help determine the target’s location and movement. In the assassination of Iran’s supreme leader, Ayatollah Ali Khamenei, on February 28, Israeli and United States (US) intelligence used predictive intelligence by fusing hacked surveillance feeds, communication data and long-term behavioural analysis in order to ascertain his whereabouts.
 
“In this case, however, top Iranian officials were already implicated in assassinations or assassination attempts and enough of them wished to destroy Israel itself, that the usual guardrails were removed,” said Michael O’Hanlon, director of research of foreign policy at the Washington, DC-based Brookings Institution.
 
As detailed in the book Target Tehran, by Israeli journalists Ilan Evyatar and Yonah Jeremy Bob, Israel has for years used sabotage, cyberwarfare, targeted assassinations, and clandestine intelligence-gathering to disrupt Iran’s nuclear programme and leadership networks.
 
Israel has combined HUMINT with sophisticated technical capabilities, recruiting local dissidents while simultaneously intercepting communications, hacking into phones and surveillance systems such as closed-circuit television networks, and even deploying Farsi (Persian) radio channels to coordinate with spies on the ground.
 
Between 2010 and 2020, several Iranian nuclear scientists were assassinated in operations by Israel. This campaign was supported by cyber operations such as Stuxnet, developed in coordination with the US to cripple centrifuges at Iran's Natanz facility.
 
This fusion of HUMINT and digital intelligence has delivered tangible results: Israeli operations have demonstrated an ability to rapidly degrade or decapitate segments of Iran’s military and political leadership.
 
So pervasive is this surveillance architecture that Iranian officials and operatives have reportedly been forced to abandon routine use of mobile phones and internet-based communication, reverting instead to encrypted or traditional methods to evade detection and targeting.
 
“Artificial intelligence has sped up the targeting cycle,” said Herb Lin, a senior research scholar at Stanford University’s Centre for International Security and Cooperation in the US.
 
“It does it by accounting for many more different sources of intelligence and integrating information that would otherwise remain unanalysed or be lost in the noise, and it does it at machine speed rather than human speed.”
 
Nowhere is this shift more visible than in the reported use of AI systems by Israel in Gaza.
 

Target architecture

 
Throughout the Gaza War (2023-25), Israel used multiple AI programmes for the assassination of Palestinian militants and the targeting of infrastructure. What was deployed in Gaza is not a single system but a layered stack of AI software that turns raw data into bombing targets.
 
A set of AI-enabled platforms commonly referred to as Gospel, Lavender, Fire Factory, and Alchemist work together to process vast quantities of data and generate targets at scale.
 
“AI models are becoming more powerful but also more efficient: Any given size of model tends to get easier to run with less computing power in a smaller space. That allows increasing possibilities of deploying AI on the ‘edge’, i.e., on hardware on the frontline or in the field, rather than just on servers at home,” Shashank Joshi, defence editor at The Economist and senior fellow at King’s College London, told Blueprint.
 
“That could allow a single drone conducting surveillance to make powerful inferences or an intelligence officer in the field to have a powerful agentic tool that could assist with human intelligence operations, even in remote or denied areas.”
 
At the core of Israeli tech is Alchemist, a data integration platform that pulls together surveillance feeds, intercepted communications, biometric records and historical intelligence into a single operational picture. Data that would otherwise remain fragmented is assembled automatically by this tool.
 
 
At the core of Israeli technology is Alchemist, a data integration platform that combines surveillance feeds and intercepted communications (Photo: Reuters)
This data is then processed by systems like Fire Factory, which sorts and categorises potential targets as militant cells, weapons sites, tunnels and buildings linked to operatives.
 
These are not final targets yet, but organised possibilities.
 
From here, AI moves into target generation. Gospel produces strikeable locations like buildings, infrastructure, even high-rise structures, along with suggested munitions and estimates of collateral damage.
 
Lavender focuses on individuals, using historical and real-time data to generate lists of suspected operatives and link them to specific locations, often their homes. Targeted killings of operatives tracked to their homes were facilitated by “Where’s Daddy?”, a system designed to confirm the target’s presence before a strike.
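The layered flow described above — integrate, categorise, generate — can be sketched in miniature. This is purely an illustration of the architecture the reporting describes, not the logic of any real system; every name, field and rule below is invented:

```python
# Illustrative sketch of a layered targeting pipeline: fuse fragmented
# feeds, categorise sites, then emit candidates. All data is invented.
records = [
    {"source": "cctv", "site": "warehouse_7", "tag": "weapons"},
    {"source": "intercept", "site": "apartment_3", "tag": "operative"},
    {"source": "biometric", "site": "apartment_3", "tag": "operative"},
]

# Step 1 (Alchemist-like): fuse feeds into one picture per site
fused = {}
for r in records:
    fused.setdefault(r["site"], []).append(r["tag"])

# Step 2 (Fire Factory-like): assign each site its dominant category
categorised = {site: max(set(tags), key=tags.count) for site, tags in fused.items()}

# Step 3 (Gospel/Lavender-like): emit candidates, not final decisions
candidates = [site for site, cat in categorised.items() if cat in {"weapons", "operative"}]
print(candidates)  # ['warehouse_7', 'apartment_3']
```

The point of the sketch is the division of labour the article describes: each stage narrows the data, and the final output is a list of candidates rather than decisions.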
 
What ties all of this together is speed and scale. The scale, however, is not sustained by Israeli systems alone.
 
An investigation by the Associated Press found that US tech giants such as Microsoft and Google have been providing the cloud infrastructure and AI systems used to power the computing needed to store and analyse vast amounts of intelligence information.
 
Microsoft’s Azure has been used to analyse, transcribe and translate large volumes of information, and to access advanced AI models developed by OpenAI, which are used together with Israel’s in-house targeting systems.
 
Amazon and Google are also embedded in this system under Project Nimbus, a $1.2 billion cloud deal signed in 2021.
 
“Like any digital technology, AI is a tool; it can be used in accordance with the law or not,” said Professor Peter Asaro of The New School University based in New York City and a leading scholar on AI ethics and armed conflict.
 
“The real issue is the degree of human control and judgment. If you put data into a system and it produces a target list, how do you know those targets are valid? Just because the system is said to be reliable, that’s not sufficient.”
 
Asaro argued that the problem is not simply whether AI identifies targets correctly, but how those outputs are used.
 
The general presumption is that AI systems make military targeting more ‘efficient’, but the evidence from the Gaza war shows that efficiency does not necessarily mean accuracy or fairness.
 
The scale of destruction in Gaza is difficult to reconcile with the promise of precision. By the time the ceasefire came into effect, at least 70,000-75,000 Palestinians had been killed, according to converging estimates from Gaza health authorities, Israeli officials, and independent studies. Some analyses suggest the true toll could be significantly higher, possibly above 100,000, when undercounting and indirect deaths are included.
 
What is striking is not just the scale but the composition: A majority of those killed are women, children and non-combatants, with some estimates putting the civilian toll as high as 70-80 per cent.
 
The question, therefore, is whether AI systems are fair and accurate when specifying or recommending a target. Experts say that the fault might not lie with the AI system but with human intelligence and judgment.
 
“There are people who are tempted to say that the use of artificial intelligence is associated with more killing of innocent civilians,” Lin said.
 
“It’s clear that bad information was somehow fed into the system. But, of course, I point out that we don’t need AI to make mistakes because of bad information. So, if you give an AI system bad information, of course, the AI system is going to make a bad decision.”
 
The outcomes suggest that faster targeting is not the same as more discriminating targeting. In practice, these systems appear to expand the number of potential targets, compress decision-making time and, in some cases, normalise higher acceptable levels of collateral damage.
 
This creates what analysts describe as an “association problem”, where civilians risk being classified as targets based on patterns rather than verified roles. In the case of the Palestinian militant group Hamas, for instance, a large number of civil-society workers — including doctors, traffic police and teachers — who may be associated with the civil or political wing of the militant organisation are labelled as legitimate targets.
 
“These systems are based on associational networks — that’s how they work. But say you’re just the pizza delivery guy and the local Hamas office orders a pizza every week. You’re making regular deliveries, maybe even getting payments from them, so the system might then say you’re deeply associated with Hamas. But you’re just delivering a pizza once a week. It can’t tell the difference,” Asaro said.
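The failure mode Asaro describes can be made concrete with a toy example. A naive guilt-by-association scorer that flags anyone whose contact frequency with an already-flagged number crosses a threshold will rank the weekly pizza deliverer above an occasional caller. All names, records and the threshold here are invented for illustration:

```python
# Toy sketch of the "association problem": flagging people purely on how
# often they contact a number already linked to the organisation.
from collections import Counter

flagged = {"office_line"}  # a number already linked to the organisation

# (caller, callee) records over some observation window — all invented
call_log = [
    ("delivery_guy", "office_line"),  # weekly pizza orders
    ("delivery_guy", "office_line"),
    ("delivery_guy", "office_line"),
    ("delivery_guy", "office_line"),
    ("operative", "office_line"),     # rare, careful contact
    ("operative", "office_line"),
]

contacts = Counter(caller for caller, callee in call_log if callee in flagged)

THRESHOLD = 3  # arbitrary cut-off
suspects = {person for person, n in contacts.items() if n >= THRESHOLD}
print(suspects)  # {'delivery_guy'} — the deliverer is flagged, the operative is not
```

Frequency of contact is a proxy, not a verified role: the scorer flags the innocent frequent contact and misses the careful operative, which is exactly the distinction Asaro says such systems cannot make.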
 
Analysis from the United Kingdom’s Royal United Services Institute suggests that AI in conflicts like the Gaza Strip is being used less to refine targeting and more to accelerate it. These systems process vast streams of intelligence, but their outputs are inherently probabilistic rather than definitive.
 
Lavender reportedly generated around 37,000 potential targets during the early weeks of Israel's assault on Gaza. Even if the system operates with high accuracy, an error rate of even 10 per cent would translate into thousands of misidentifications. At this scale, small margins of error can accumulate.
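The arithmetic behind that caution is simple and worth stating explicitly. The target count below is the figure reported in the article; the 10 per cent error rate is the hypothetical used above:

```python
# Back-of-envelope calculation using the figures cited in the article.
potential_targets = 37_000   # targets Lavender reportedly generated
error_rate = 0.10            # hypothetical 10 per cent misidentification rate

misidentified = round(potential_targets * error_rate)
print(misidentified)  # 3700 — thousands of people, even at a nominally high accuracy
```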
 
At the same time, targeting categories themselves can expand. An investigation by Israeli media outlets +972 Magazine and Local Call describes the use of “power targets”, buildings that are not strictly military in nature but are considered relevant by Israel for exerting pressure. AI systems can identify such targets, but their inclusion reflects a strategic choice rather than technical necessity.
 
Data and deception
 
Data and human intelligence are the lifeblood of predictive intelligence, and when it comes to targeted assassinations, the data and intelligence gathered can span months, even years.
 
Targeted assassinations do not rely on a single method. They are often the result of combining data analysis with direct access to an adversary’s systems.
 
The killing of Lebanese paramilitary group Hezbollah’s secretary-general Hassan Nasrallah, for instance, required years of intelligence work, including tracking movements, monitoring communications, and confirming his location before the strike.
 
In an interview with Dan Senor on the Call Me Back podcast, former Israeli defence minister Yoav Gallant revealed that the operation was based on assessing the probability that Nasrallah was inside his underground headquarters at a specific moment.
 
Once that probability reached a high level of confidence, the strike was executed. Israeli aircraft hit the site in Beirut on September 27, 2024, using 83 bunker-penetrating bombs to destroy the underground command centre. The location, beneath a densely built area, required a concentrated strike to ensure the target was eliminated.
 
“Predictive models at the individual level are newer and less established. But there is no reason to think these could not outperform human analysts in, for instance, predicting how a target might respond to a certain approach psychologically or predicting which route they might choose to take under certain conditions,” Joshi said.
 
Another AI defence project of the US military, Project Maven, was designed to analyse surveillance imagery from drones.
 
The critical aspect is not what it looks at but how quickly it can act on it. Maven evolved into a full battlefield AI system, officially adopted as a Pentagon “programme of record” in 2026. In Iraq and Afghanistan, these systems were used to identify insurgent networks and their movement patterns.
 
Maven incorporates Anthropic’s Claude AI model, which is embedded in its system. The software enabled the US to hit more than 1,000 targets in Iran in the first 24 hours of the Iran war, according to The Washington Post. Although Anthropic has denied permission for military use of its model, the US Department of Defense still deployed Claude in combat operations in the current war.
 
Private companies are also embedded within the targeting process, influencing how data is collected, organised, and presented. Systems built by Palantir Technologies are a case in point; its platforms, Palantir Gotham and Palantir Foundry, are used to pull together surveillance feeds, intercepted communications, drone images and operational data into a single view.
 
During the current war, Palantir’s systems reportedly supported 2,000 strikes on Iran in 48 hours by fusing intelligence and operational data.
 
The US Tomahawk cruise missile strike near Shajareh Tayyebeh elementary girls’ school in Minab in southern Iran, where more than 100 girls aged 7-12 years were killed, has been cited in ongoing discussions on modern targeting. The incident has caused global outrage.
 
While there is no confirmed evidence that AI systems selected the target, a preliminary investigation suggests the site may have been misidentified in part because of its proximity to a facility linked to Iran’s Islamic Revolutionary Guard Corps (IRGC).
 
Over 120 Democratic lawmakers sent a letter to US Defense Secretary Pete Hegseth demanding information on the Pentagon’s use of AI in target selection and asked whether safeguards were in place to prevent civilian casualties.
 
Investigators have pointed to the risks of relying on layered datasets that are not always updated in real time. In fast-moving operations, where large volumes of information are processed and acted upon quickly, such gaps become harder to detect.

Smoke rises after Israeli strikes in Beirut, Lebanon, on March 12, 2026 (Photo: Reuters)
 
In one way, targeted strikes are delivering the US and Israel results: The supreme leader of Iran, senior IRGC officials and many scientists were killed in the first days of the war. Decapitation as a tool has long been seen as a way to disrupt adversaries quickly. What has changed is not the logic of the strategy, but the tools available to execute it.
 
However, the use of decapitation remains deeply contested. Academic research points out that although leadership targeting can in some cases be effective in neutralising adversaries, it is by no means a magic bullet and does not necessarily translate into the desired political results.
 
“There is a discernible connection between the American and Israeli decapitation operations and AI systems,” said Ingvild Bode, professor of international relations and director at the Centre for War Studies at the University of Southern Denmark.
 
On whether the capabilities displayed during the assassination of Ali Khamenei, from analysing substantial volumes of data to reportedly predicting the movements of the targets, would necessarily lower the threshold for such decapitation strikes going forward, Bode said this was more a question of doctrine than of capability.
 
Despite the killing of its senior leaders, Iran’s military apparatus has continued to function and wage war. At times, this strategy can even have the opposite effect by increasing the determination of the group, hastening the rise of even more radical successors, and triggering bigger retaliation.
 
“There is fear of retaliation. If I try to kill you, as a leader of my country, and you of yours, whether I succeed or not, your people are likely to try to kill me,” O’Hanlon said.

The USS Frank E Petersen Jr (DDG 121) is deployed in West Asia during Operation Epic Fury, which started on February 28, 2026 (Photo: US Navy)
 
AI-enabled targeting systems are designed to process huge swaths of data. If data is the foundation, it is also its most significant vulnerability. Intelligence systems that rely on large datasets are inherently exposed to manipulation, whether through deliberate deception, noise, or incomplete information.
 
“The more data that you use, the more opportunity you have to mess with the data,” Lin said.
 
“So there are ways of corrupting the analytical process because it’s so dependent on data. And the other part of the problem is that you'll never know.”
 
In a conflict, adversaries are constantly trying to take advantage of these weaknesses. False signals can be injected, patterns can be obscured, and behaviour can be changed to evade detection. Even non-state actors can manipulate highly sophisticated AI software and battle networks.
 
In the war in Ukraine, electronic warfare has shown how misleading signals and communications have been used to distort the interpretation of battlefield activity. Even when visibility appears high, data-level deception can still create gaps and blind spots.
 
A report by the US think tank New America noted that despite AI making the “battlespace more visible”, both sides repeatedly surprised each other through deception.
 
What stands out in the use of AI for targeting is that it is not evenly distributed. The ability to run these kinds of operations depends on more than just having access to software. It requires infrastructure, data, and a doctrine that ties it all together.
 
Bode, who is also a principal investigator on the European Union-funded AutoNorms project that is investigating the impact of weaponised AI on standards of appropriateness, pointed out that the use of AI in military operations needs to be seen in the global context.
 
“Will more states, apart from the US and Israel, adopt such a strategy? Perhaps it will be more tempting. But, historically, it really has been only these two states that have been conducting such operations,” she said.
 
She added that this is not just about access to technology. “We haven’t seen similar doctrinal development by other states.” The US and Israel, she pointed out, have built the infrastructure to gather intelligence from both “good old spies” and “cutting-edge electronic surveillance”.
 
That, in a nutshell, is the difference these systems make: Scale in terms of collection and processing. Another factor is political intent. The intent behind the use is as important as the tools themselves.
 
The US drone strike that killed senior Iranian military commander Qassem Soleimani in 2020 is a case in point. It was the first time a high-ranking state official had been killed in an openly acknowledged drone strike, and it demonstrated the difficulty of predicting how far these tools would be taken once they were in use.
 
For countries observing these conflicts from the outside, the question is not whether AI will be used but how.
 
“Many countries uninvolved in the Iran conflict, including but not limited to India, are learning from what they observe. For instance, how AI may allow new combinations of mass and precision for strategic effect,” said Antoine Levesques, senior fellow for South and Central Asian defence and strategy at the London-based International Institute for Strategic Studies.
 
He noted that some of these systems are beginning to incorporate feedback and learning functions. Over time, that could allow them to improve, based on past operations, making them more effective in identifying and striking targets.
 
But adopting such systems will not be straightforward. The ability to move quickly will depend on the quality and quantity of data available and on whether countries have the capability to deploy high-performance AI systems at scale without constraints.
       
A US Air Force F-35 takes flight in West Asia during Operation Epic Fury, on March 2, 2026 (Photo: US Air Force)

Lessons for India

India has already begun making moves in this direction, though these are at a nascent stage.
 
During Operation Sindoor, the Indian Army deployed an electronic intelligence coalition application developed by the Directorate General of Information Systems. The system was trained on nearly 26 years of data collected by Indian agencies and the armed forces. It included information on adversary sensors, their frequencies, movement patterns and the units they were attached to.
 
In the course of the conflict with Pakistan, the model was able to predict and track these systems with a high degree of accuracy. “We achieved over 90 per cent accuracy,” Rajiv Kumar Sahni, who served as the Director General of Information Systems during the operation, told Blueprint.
 
This was one of several indigenous AI tools used during the May clashes last year to improve battlefield awareness and speed up decision-making. These systems are now being expanded, including into a military-specific large language model.
 
As AI-enabled predictive intelligence advances by the day, where does the buck stop?
 
“They’re going to continue integrating, and then that’s going to get accepted as a kind of normalisation. What is emerging is not an exception, but a new baseline, where AI-mediated targeting, backed by private data ecosystems, becomes routine,” Asaro said.
 
In September 2024, in an unprecedented Israeli operation, thousands of pagers used by Hezbollah members exploded almost simultaneously across Lebanon.
 
The operation involved tampering with the devices at a point deep within the supply chain. While some saw it as a model of precision, many others saw it as setting a dangerous precedent.
 
“They’re collecting communications data and geolocation, and then essentially using the data from your cell phone to determine whether you’re a member of Hamas. That seems like a terrible precedent for any society,” Asaro added.
 
Despite clear implications for international law, if the use of AI-enabled targeting is not regulated by international frameworks and if adversaries do not play by the well-established rules of warfare, this development opens a new Pandora’s box — one that will be very difficult to close.

Written By:

Mohammad Asif Khan

Mohammad Asif Khan is a Senior Correspondent at Business Standard, where he covers defence, security, and strategic affairs.

Bhaswar Kumar

Bhaswar Kumar has over seven years of experience in journalism. He has written on India Inc, corporate governance, government policy, and economic data. Currently, he covers defence, security and geopolitics, focusing on defence procurement policies, defence and aerospace majors, and developments in India’s neighbourhood.
First Published: Apr 10 2026 | 5:00 AM IST
