Business Standard

Bots dominate web traffic as AI reshapes online threat landscape: Report

Bots account for 53 per cent of global web traffic in 2025, with AI making them harder to detect. The shift is reshaping how systems distinguish legitimate activity from malicious automation


Bots now account for 53 per cent of global web traffic, with AI making them harder to detect (Image: Magnific)

Harsh Shivam New Delhi


For a long time, the internet was built around a simple assumption: most of the activity flowing through websites and applications came from people. That assumption no longer holds as bots accounted for 53 per cent of all web traffic globally in 2025, overtaking human activity, according to the Thales Bad Bot Report 2026.
 
But the shift is not just about volume. It is about how that traffic behaves, and what that means for systems that were designed to separate legitimate users from malicious automation.

Bots are now the default, not the exception

According to the report, bots made up roughly 53 per cent of total internet traffic in 2025, pushing human activity into the minority.
 
 
This is not entirely new. Automated traffic has been growing steadily for years, rising from 38 per cent in 2018 to 51 per cent in 2023, the year it first overtook human traffic. That growth has likely been driven by everything from search engine indexing to monitoring tools and enterprise automation.
 
But the more telling detail lies within that split. Of the total traffic in 2025, 40 per cent is driven by bad or unverified bots, while 13 per cent comes from verified bots performing legitimate functions. This means that two in every five requests on the internet are generated by a potentially malicious automated system.
 
What this creates is a layered environment where legitimate and malicious automation coexist. The report stated that from the perspective of a system receiving that traffic, both can look structurally similar — requests coming through expected channels, interacting with applications in ways that do not immediately appear abnormal.
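The report's point about structural similarity can be illustrated with a minimal sketch (not taken from the report): a request from a legitimate monitoring tool and one from a scraper can arrive looking byte-for-byte alike, so a naive check on visible request structure gives both the same label. The request contents and the check below are hypothetical.

```python
# Illustrative sketch: two automated requests, one from a legitimate
# monitoring bot and one from a scraper, can be structurally identical
# at the point a server receives them.

LEGIT_REQUEST = {
    "method": "GET",
    "path": "/api/products?page=1",
    "headers": {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept": "application/json",
    },
}

SCRAPER_REQUEST = {
    "method": "GET",
    "path": "/api/products?page=1",
    "headers": {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept": "application/json",
    },
}

def naive_bot_check(request: dict) -> str:
    """Label a request using only its visible structure."""
    ua = request["headers"].get("User-Agent", "")
    if not ua or "bot" in ua.lower():
        return "automated"
    return "looks human"

# Both requests receive the same label: structure alone cannot separate them.
print(naive_bot_check(LEGIT_REQUEST), naive_bot_check(SCRAPER_REQUEST))
# → looks human looks human
```

This is why modern detection leans on behavioural signals rather than request structure alone.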
 
The share of bad bots within that split has also been rising: unverified bots accounted for 33 per cent of all internet traffic in 2023, climbing to 37 per cent in 2024.

AI is reshaping how bots operate

The report also points to artificial intelligence as a factor influencing how bots are evolving. This is not just about increasing the volume of automated traffic, but about changing how that traffic behaves.
 
Bots are increasingly able to follow expected user flows, generate realistic interaction patterns, and operate within the normal boundaries of applications. As a result, traditional indicators that once helped identify automation are becoming less reliable. Requests can appear valid, interactions can be consistent, and behaviour can align with what systems expect from legitimate users.
 
The report also noted that the rise of AI has driven the emergence of a third category of automated traffic: AI agents.
These agents are designed to interact directly with applications and APIs, retrieving data and performing tasks on behalf of users. Unlike traditional bots, they are often embedded within browsers, search platforms, and enterprise tools. This changes how automated activity appears at a system level. Interactions that would previously have been flagged as unusual are increasingly treated as expected behaviour.
 
This is where AI begins to reshape the threat landscape. The distinction between automated and human-driven activity becomes less visible, not because systems lack data, but because the behaviour itself is designed to blend in.

AI-driven attacks are scaling rapidly

The report goes beyond behavioural changes to point to a sharp rise in the scale of AI-driven attacks.
 
In 2025, the average number of AI-driven bot attacks mitigated rose 12.5-fold compared with the previous year, with organisations blocking an average of 25 million such attacks per day.
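Taken together, the two figures quoted from the report imply a rough prior-year baseline, which can be back-calculated directly:

```python
# Back-of-the-envelope check on the report's figures: if 2025 averaged
# 25 million blocked AI-driven attacks per day, a 12.5x year-on-year
# rise implies a 2024 average of about 2 million per day.

attacks_per_day_2025 = 25_000_000
growth_factor = 12.5

attacks_per_day_2024 = attacks_per_day_2025 / growth_factor
print(f"Implied 2024 average: {attacks_per_day_2024:,.0f} attacks/day")
# → Implied 2024 average: 2,000,000 attacks/day
```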
 
This is not just a gradual increase. It reflects a phase where AI is being used to deploy automated activity at a much larger scale than before.
 
At the same time, the report makes a distinction that shifts how this growth should be understood. While the increase in AI-powered attacks is significant, the larger change in 2025 is the normalisation of AI and automation within internet infrastructure itself.
 
AI-driven activity is no longer limited to specific use cases or attack types. It is now being observed across industries and geographies, indicating that automation powered by AI is becoming a consistent layer within global internet traffic.

Distribution of AI-driven attacks

The report shows that retail is the most targeted sector, accounting for 20 per cent of AI bot activity, making it the single largest focus area. This is followed by business services and financial services at 14 per cent each, indicating that both enterprise platforms and financial systems are seeing sustained automated pressure.
Other sectors, such as travel (9 per cent), education (8 per cent), and law and government (7 per cent), also form a significant share of targeted activity, while industries like news and automotive account for 5 per cent each.
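The sector shares quoted above cover most, but not all, of observed AI bot activity; a quick tally shows how much falls to the remaining, smaller sectors:

```python
# Sector shares of AI bot activity as quoted from the report (per cent).
sector_share = {
    "retail": 20,
    "business services": 14,
    "financial services": 14,
    "travel": 9,
    "education": 8,
    "law and government": 7,
    "news": 5,
    "automotive": 5,
}

named = sum(sector_share.values())
print(f"Named sectors: {named}%  |  all other sectors: {100 - named}%")
# → Named sectors: 82%  |  all other sectors: 18%
```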
 
The distribution matters because it reflects intent. Sectors that handle transactions, user data, or pricing systems are more likely to be targeted, not just because they are high-value, but because they rely heavily on structured workflows that bots can interact with.

What can be seen is only part of the picture

Even as AI-driven traffic becomes more visible, the report highlights a gap between what can be detected and what actually exists.
 
Analysis in the report is based on detectable AI traffic, meaning systems that either identify themselves or trigger existing security controls. However, a much larger portion of AI-driven automation remains unverified. This creates a visibility gap: organisations respond only to the traffic they can see, while a parallel layer of automated activity may remain outside that view.
 
The report further notes that attackers can deploy self-hosted or modified large language models that do not identify themselves as AI agents and can be fine-tuned for specific use cases, including malicious ones. This means that what is observable today likely represents only a fraction of the total attack surface.
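The detection gap the report describes can be sketched in a few lines. Published AI crawlers do announce themselves with User-Agent tokens (for example, OpenAI's "GPTBot" and Anthropic's "ClaudeBot"); the token list and the sample requests below are illustrative, not a complete detection rule, and a self-hosted model can simply present a plain browser User-Agent and slip past such a check entirely.

```python
# Sketch of the visibility gap: detection that relies on AI agents
# declaring themselves via known User-Agent tokens misses a self-hosted
# model that presents a generic browser User-Agent.

DECLARED_AI_TOKENS = ("gptbot", "claudebot", "ccbot")  # illustrative subset

def is_declared_ai_agent(user_agent: str) -> bool:
    """Return True only if the client announces itself as an AI crawler."""
    ua = user_agent.lower()
    return any(token in ua for token in DECLARED_AI_TOKENS)

declared = "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"
self_hosted = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"

print(is_declared_ai_agent(declared))     # True: identifies itself
print(is_declared_ai_agent(self_hosted))  # False: invisible to this check
```

Everything this check misses lands in the "unverified" layer the report warns about.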


First Published: Apr 30 2026 | 3:48 PM IST
