India's CERT-In warns of AI-led cyber threats, lists protection steps
New CERT-In advisory highlights how frontier AI systems are enabling large-scale scams, deepfakes, and automated cyberattacks with minimal human intervention
CERT-In has issued a high-severity advisory warning that newer “frontier” AI systems are significantly increasing cyberattack capabilities, allowing attacks to be carried out faster, at scale, and with less human effort. According to the agency, these AI systems can identify vulnerabilities, generate exploits, and execute multi-stage attacks autonomously. While such capabilities also have defensive applications, CERT-In said their dual-use nature raises risks for individuals, as cyberattacks could become more automated, more convincing, and harder to detect.
What are frontier agentic AI models
CERT-In’s advisory focuses on a new generation of AI systems often referred to as frontier agentic models — tools that go beyond answering queries and can instead plan, take actions, and complete multi-step tasks on their own.
Models like GPT-5.5 are cited as examples of this shift. Unlike earlier AI systems that relied on step-by-step instructions, these models can handle messy, multi-part prompts, decide how to approach a task, use digital tools, and continue working until the task is complete.
The advisory also refers to systems such as Anthropic’s Mythos, which represent similar advances in autonomous AI behaviour. Mythos recently made headlines for reportedly uncovering 271 previously unknown, exploitable vulnerabilities in Mozilla Firefox. These were issues that had gone undetected despite years of development and audits.
Unlike traditional tools, Mythos doesn’t just scan code; it interacts with it, executing functions, testing inputs, and learning from each outcome in a continuous loop. This allows it to trace how different parts of a system interact, identify deeper flaws, and even validate whether vulnerabilities can be practically exploited, significantly accelerating how security gaps are discovered.
Notably, Mythos-powered capabilities are being rolled out under the company’s Project Glasswing as a tightly controlled cybersecurity system and remain in limited testing and restricted deployment. Anthropic has clarified that access will be limited to select companies and will not be open to the general public.
The main risk these models pose stems from their dual-use nature: the same tools that help companies find and fix loopholes in their systems can equally be used by attackers to exploit those vulnerabilities.
What is changing with AI-driven cyber threats
CERT-In said advanced AI models are now capable of performing tasks that previously required skilled cybersecurity professionals. These include analysing large codebases to identify vulnerabilities, conducting automated reconnaissance of systems, and generating phishing or impersonation content.
The advisory notes that AI can also plan and execute multi-stage attacks, including credential harvesting, privilege escalation, and lateral movement within networks. Importantly, these actions can happen at a speed and scale that was not possible earlier, increasing the likelihood of rapid and widespread cyber incidents.
Why this matters for everyday users
According to CERT-In, individuals are increasingly becoming direct targets as AI tools make it easier to create highly convincing scams. These include phishing emails, fake websites, and impersonation attempts that can mimic trusted individuals or organisations.
The agency also warned about AI-generated voice and video content, which can be used for deepfake-based fraud. Users may encounter messages or calls that appear legitimate but are designed to extract sensitive information or prompt urgent financial actions.
What kind of risks are involved
The advisory highlights several potential impacts of AI-driven cyberattacks, including unauthorised access to accounts, identity compromise, financial fraud, and data theft. It also points to the possibility of service disruptions and broader system-level compromises.
CERT-In added that such attacks could be executed at lower cost and with greater automation, lowering the barrier for malicious actors and increasing the frequency of attacks targeting both individuals and organisations.
What users are advised to do
CERT-In has outlined a detailed set of precautions for individuals, focusing on strengthening basic cyber hygiene and staying alert to AI-enabled threats. Users are advised to keep operating systems, browsers, and applications updated, enable automatic updates, and install patches quickly, as AI-driven exploits can spread rapidly.
The agency recommends avoiding downloads from unverified sources and using strong, unique passwords across all accounts, along with enabling multi-factor authentication wherever possible. Users should be cautious when dealing with unsolicited emails, messages, links, or attachments, especially those that create urgency or ask for sensitive information.
CERT-In also emphasised verifying the authenticity of voice calls, video messages, and urgent requests, particularly those involving financial transactions, as AI-generated deepfakes and impersonation attempts can be highly convincing. Users are advised to carefully check links before clicking, remain sceptical of “too good to be true” offers, and avoid sharing sensitive personal or financial information through unverified channels.
Additionally, individuals should use strong Wi-Fi passwords with WPA3 encryption where available, avoid public Wi-Fi for sensitive activities or use a VPN when necessary, and regularly review privacy and security settings across platforms. The advisory also recommends backing up important data regularly, maintaining secure copies, and staying informed about emerging AI-related threats through trusted sources.
Bigger shift in cyber risk landscape
CERT-In said organisations and individuals must adapt to a changing threat environment where AI can accelerate cyberattacks. The advisory emphasises maintaining strong cyber hygiene and vigilance, noting that personal devices, accounts, and data are now part of the broader attack surface.
First Published: Apr 28 2026 | 5:06 PM IST
