Project Ire: Know about Microsoft's AI agent to detect malicious software
Microsoft's Project Ire is an AI-powered agent that can reverse-engineer unknown software, analyse its behaviour, and autonomously classify it as malicious or benign, without human intervention
3 min read | Last Updated: Aug 07 2025, 2:48 PM IST
Microsoft has unveiled a prototype AI agent called Project Ire that can autonomously reverse-engineer software and identify cybersecurity threats like malware, without any human input. The company shared details of this research project in a recent blog post, calling it a step forward in using AI to analyse and classify software more efficiently.
What is Microsoft’s Project Ire?
Project Ire is a prototype developed by researchers from Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum. It’s designed to act like a digital analyst that can inspect unknown software, understand how it works, and determine if it’s harmful or not.
The system is built on the same underlying framework as Microsoft’s earlier Discovery platform. It uses large language models (LLMs) and a set of advanced tools that specialise in reverse engineering, the process of taking apart a software program to figure out what it does.
According to Microsoft, Project Ire can investigate a software file even if there’s no information about where it came from or what it’s supposed to do. It pulls apart the code using decompilers and other technical tools, analyses the output, and decides whether the software is safe or malicious.
Microsoft said that its Defender products currently scan over a billion devices every month for threats. But when software looks suspicious, it often takes a security expert to investigate it. That process is slow and painstaking, and it tends to burn analysts out, since it involves combing through countless alerts and making judgment calls without clear right answers.
That's where Project Ire comes in. Unlike many other AI systems used in cybersecurity, it does not simply react to known threats; it makes informed decisions from complex signals even when there is no obvious answer. For instance, some programs include anti-reverse-engineering protections not because they are malicious, but simply to guard their developers' intellectual property.
Project Ire tackles this ambiguity by working as an autonomous agent. It starts by scanning a file with automated tools that identify its type, structure and anything unusual. It then reconstructs how the software works internally, mapping out its functions and control flow using reverse-engineering tools such as Ghidra and angr.
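As an illustration of this function-mapping step, the short Python sketch below uses the open-source angr framework, one of the tools named above, to load a binary and list the functions it recovers. The file path is a placeholder and this is not Microsoft's own tooling, only a minimal example of the kind of analysis described.

import angr

# Load the binary on its own, without pulling in shared libraries
proj = angr.Project("sample.bin", auto_load_libs=False)

# Build a fast control-flow graph to recover functions and call edges
cfg = proj.analyses.CFGFast()

# List the recovered functions with their entry addresses
for addr, func in cfg.kb.functions.items():
    print(hex(addr), func.name)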
From there, the AI model digs deeper. It calls on a variety of tools through an application programming interface (API) to inspect specific parts of the code, summarise key functions, and build a detailed “chain of evidence” that explains every step it took to reach a conclusion.
At the end of the process, the system generates a final report and classifies the file as either benign or malicious. It can even cross-check its findings against expert-validated data to reduce errors.
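To make the idea of a "chain of evidence" concrete, here is a hypothetical Python sketch of how an agent might record individual findings and roll them up into a verdict. The field names and the toy classify() rule are assumptions for illustration only; Microsoft has not published Project Ire's internal formats.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    tool: str      # which analysis tool produced the finding
    finding: str   # what the tool observed
    hint: str      # "suspicious", "benign" or "neutral"

@dataclass
class AnalysisReport:
    sha256: str
    evidence: List[Evidence] = field(default_factory=list)

    def classify(self) -> str:
        # Toy rule: flag the file if any tool reported suspicious behaviour;
        # the real system weighs far richer and more ambiguous signals.
        if any(e.hint == "suspicious" for e in self.evidence):
            return "malicious"
        return "benign"

report = AnalysisReport(sha256="<file hash>")
report.evidence.append(Evidence("decompiler", "injects code into another process", "suspicious"))
report.evidence.append(Evidence("string scan", "contains a hard-coded command-and-control domain", "suspicious"))
print(report.classify())  # prints "malicious"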
How will Microsoft use Project Ire?
In tests using real-world malware data from Microsoft Defender, Project Ire was able to correctly identify many malicious files while keeping false alarms to a minimum, with a false-positive rate of just four per cent, according to Microsoft.
Thanks to this strong performance, Microsoft says it will begin integrating the technology into its Defender platform under the name “Binary Analyzer.” The goal is to scale the system to work quickly and accurately across all types of software, even those it’s never seen before.
Ultimately, Microsoft wants Project Ire to become capable of detecting brand-new malware directly in memory, at scale.