AI agents go mainstream, but most firms lack proper oversight: Report
Microsoft's Cyber Pulse Report shows AI agents are now widely used across enterprises, but fewer than half of firms have GenAI security controls in place
A Microsoft Cyber Pulse report highlights how AI agents are being adopted across enterprises, even as oversight and security controls remain uneven (Illustration: Ajaya Mohanty)
Microsoft has released its Cyber Pulse Report, outlining how the security landscape is changing as enterprises adopt generative AI tools and AI agents across their operations. The report shows that these systems are no longer limited to pilot projects or experimental use, but are now being deployed widely inside large organisations.
According to the report, 80 per cent of Fortune 500 companies now have active AI agents built using low-code or no-code tools. At the same time, the findings point to gaps in oversight, with many of these agents described as unsanctioned, unobserved, or over-privileged, and only 47 per cent of organisations saying they have GenAI-specific security controls in place.
AI agents move into mainstream operations
According to the report, the rapid spread of low-code and no-code platforms has made it easier for teams outside central IT to build and deploy AI agents for specific tasks. This has pushed AI agents into day-to-day use across large organisations, often without the same review processes that apply to traditional software deployments.
The report notes that this ease of deployment also reduces organisational visibility into where agents are being used, what systems they connect to, and what data they can access. As a result, many agents are running without being fully tracked or reviewed by security teams.
Unsanctioned and over-privileged use
One of the main risks highlighted in the Cyber Pulse Report is the scale of unsanctioned usage. According to the findings, 29 per cent of employees admit to using unsanctioned AI agents at work.
The report characterises many of the agents currently in use as over-privileged, meaning they have broader access to systems or data than their intended role requires. This increases the risk of unintended data exposure or misuse inside enterprise environments.
Security controls lag behind adoption
While AI agents are becoming more common, the report suggests that security frameworks have not kept pace. Only 47 per cent of organisations surveyed said they have GenAI-specific security controls in place.
This means that more than half of the organisations surveyed are deploying or permitting AI agents without dedicated policies or technical safeguards designed for the risks of generative AI systems. The report frames this as a widening gap between how widely AI agents are used and how consistently they are governed.
The Cyber Pulse Report also provides a sector-level view of adoption. Financial services account for around 11 per cent of all active AI agents globally, making it one of the largest contributors to enterprise AI agent usage.
India context: high AI usage, rising exposure
The governance gaps highlighted by Microsoft’s report align with trends already visible in enterprise AI usage in India. In a separate study, the Zscaler ThreatLabz 2026 AI Security Report found that Indian enterprises are among the heaviest users of AI and machine-learning tools globally, with India emerging as the second-largest source of enterprise AI/ML traffic after the US.
That report, which analysed nearly one trillion AI/ML transactions, showed that enterprises worldwide sent 18,033 terabytes of data to AI/ML applications in a year, while India recorded 82.3 billion transactions and over 300 per cent year-on-year growth. It also found that India accounted for 46.2 per cent of all AI/ML traffic in the Asia-Pacific and Japan region.
The same study flagged a sharp rise in data leakage incidents linked to mainstream AI tools, including hundreds of millions of data loss prevention violations tied to services such as ChatGPT and a year-on-year increase in leakage linked to coding assistants. Together, these findings point to a pattern in which AI adoption is expanding faster than oversight and controls.
First Published: Feb 13 2026 | 3:12 PM IST