I remember last year, while writing about what to watch out for in AI in 2025, being particularly excited about AI agents. Not because they sounded futuristic, but because they hinted at a shift in how we interact with technology. Around that time, Google had just introduced Project Astra, showing how Gemini could use a smartphone’s camera to understand visual context and offer real-time assistance. It felt ambitious, almost ahead of its time.
Fast forward to 2025, and that idea quietly became part of everyday use. Camera sharing through Gemini Live stopped feeling like a demo feature and became something I actually relied on. The same applied across devices. AI wasn’t just answering questions anymore; it was helping me get things done across my phone, tablet, laptop and TV, and even in the car.
That, more than anything else, defined AI in 2025. Not bigger models or flashier demos, but a change in behaviour. AI stopped waiting for instructions and started acting with intent. It began handling multi-step tasks, moving across apps, understanding context and fitting itself more naturally into daily routines.
What is agentic AI and why it feels different
At its simplest, agentic AI refers to systems that don’t just respond, but follow through. Instead of answering a question and stopping there, these systems can observe what’s happening, plan a series of steps and then act on a user’s behalf.
In practical terms, this means AI no longer feels limited to suggestions. It can read webpages, pull out relevant details, compare information across sources and then take the next step — saving notes, drafting a message or completing a form — without needing constant guidance.
This shift became possible as models grew better at reasoning and handling longer context, while interfaces expanded beyond text. Screens, cameras and system-level tools gave AI a clearer picture of what users were actually doing. As a result, AI started feeling less like a search replacement and more like something closer to an assistant — one that could execute intent, not just interpret it.
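To make that observe-plan-act idea concrete, here is a rough sketch in Python. Everything in it is hypothetical: the step names and the plan_steps helper stand in for the reasoning a real model would supply. The point is only the structure, and where a plain question-answering bot would stop compared to an agent.

```python
# A minimal, hypothetical sketch of an agentic loop: observe, plan, act.
# None of this is a real product's API; plan_steps() stands in for a
# model call that turns a goal plus context into concrete steps.

def observe(context: dict) -> dict:
    """Gather what the agent can currently see (screen, page, files)."""
    return {"open_page": context.get("url"), "selection": context.get("selection")}

def plan_steps(goal: str, observation: dict) -> list[dict]:
    """Stand-in for a model call: break the goal into ordered actions."""
    return [
        {"action": "read_page", "target": observation["open_page"]},
        {"action": "extract", "what": "prices and dates"},
        {"action": "draft_note", "title": goal},
    ]

def act(step: dict) -> str:
    """Execute one step via whatever tools the agent is allowed to use."""
    return f"done: {step['action']}"

def run_agent(goal: str, context: dict) -> list[str]:
    observation = observe(context)
    results = []
    for step in plan_steps(goal, observation):
        results.append(act(step))  # a plain chatbot would stop before this line
    return results

print(run_agent("Compare hotel prices", {"url": "https://example.com/hotels"}))
```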
2025 was the year AI stopped waiting for prompts
What changed most noticeably in 2025 was the effort required from users. AI systems began asking less and assuming more — not recklessly, but contextually.
Multi-step tasks became easier to hand off. Planning travel no longer meant bouncing between apps. Managing emails and calendars felt less manual. Even organising information became simpler, as AI could retain context across interactions and suggest what to do next rather than waiting to be told.
Features like live screen sharing, camera input and persistent context played a major role here. By seeing what was on the screen or in front of the camera, AI reduced the need for carefully worded prompts. The shift didn’t arrive with a single announcement. It settled in gradually. By the end of the year, asking AI to handle something — rather than explain it — felt surprisingly normal.
Agentic AI reaches the browser
One of the clearest signs of agentic AI going mainstream in 2025 was its arrival in the browser.
This wasn’t just about smarter search results. AI-powered browsers and browser-based agents began interacting with the web the way people do — scrolling through pages, clicking links, filling forms and working across multiple tabs. Tasks that once involved dozens of open pages could now be guided end to end, with AI moving from reading information to acting on it.
That shift changed how the web was used. Booking travel, comparing products or compiling research stopped being entirely manual processes. The browser became an execution layer, not just a place to consume information. At the same time, this new power surfaced real concerns. Prompt injection, malicious webpages and unclear trust boundaries became harder to ignore once AI could act rather than simply observe.
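What "acting in the browser" means mechanically can be sketched with Playwright, a real browser-automation library, though no specific product works exactly this way. The allowlist guard below is an invented example of the kind of trust boundary the paragraph above describes: the agent simply refuses to act on hosts it has not been granted, however persuasive a page’s injected instructions might be.

```python
# Sketch of one browser-agent step using Playwright
# (pip install playwright; then: playwright install chromium).
# The allowlist check is illustrative, not any vendor's actual
# defence against prompt injection.
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

ALLOWED_HOSTS = {"example.com"}  # hypothetical trust boundary

def allowed(url: str) -> bool:
    return urlparse(url).hostname in ALLOWED_HOSTS

def visit_and_extract(url: str) -> str:
    if not allowed(url):
        raise PermissionError(f"agent not permitted to act on {url}")
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        heading = page.inner_text("h1")  # 'read' the page like a user would
        browser.close()
    return heading

print(visit_and_extract("https://example.com"))
```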
Agentic AI goes ambient
By the second half of 2025, agentic AI no longer felt confined to apps or chat windows. It started appearing across devices in ways that were quieter, but more meaningful.
On smartphones, assistants moved beyond voice commands and isolated features. With deeper system access, they began helping across messaging apps, email, navigation, media and system settings. Tasks that once took multiple taps — sharing information between apps, searching across screenshots, pulling context from what was on screen — became more fluid. AI worked alongside the interface instead of sitting on top of it.
On PCs, this shift felt even more natural. AI tools embedded into operating systems and productivity software started handling routine actions like organising files, summarising meetings and preparing drafts. Instead of the user opening an assistant to ask for help, assistance often arrived on its own.
Cars and living-room devices followed a similar path. In vehicles, assistants became better at understanding intent rather than rigid commands. On TVs and smart displays, AI grew more aware of viewing habits and content context, making interactions feel less mechanical and more situational.
Interoperability becomes the missing link
As agentic AI spread across apps and devices, its limitations became easier to spot. An assistant that worked well in one place quickly felt constrained when it couldn’t move across services or carry context safely.
This is where interoperability emerged as a central theme in 2025. For agentic AI to be genuinely useful, it needed controlled access to calendars, files, browsers, messaging apps and other tools — without creating new privacy or security risks.
Efforts like Model Context Protocol (MCP) reflected this shift. MCP isn’t something users interact with directly. Instead, it represents an attempt to standardise how AI systems connect to external tools and services. It’s background infrastructure, but it plays a critical role in determining how reliably agents can operate across ecosystems.
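For a feel of what that plumbing looks like: MCP messages are JSON-RPC 2.0 under the hood, and the sketch below builds an illustrative tools/call request, the method a client uses to ask an MCP server to run a tool. The tool name and arguments are invented for this example; only the envelope shape follows the published spec.

```python
import json

# Illustrative MCP tool invocation. MCP uses JSON-RPC 2.0 framing;
# "tools/call" asks a server to run one of its exposed tools.
# The tool name and arguments below are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calendar.create_event",  # hypothetical tool
        "arguments": {
            "title": "Flight to Delhi",
            "start": "2025-12-18T09:30:00+05:30",
        },
    },
}

print(json.dumps(request, indent=2))
```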
Where agentic AI still falls short
Even with these advances, 2025 made it clear that agentic AI is far from settled.
Reliability remains a major concern. When AI systems are allowed to act, small mistakes carry larger consequences. A misunderstood instruction or an incorrect assumption can lead to the wrong action, not just a wrong answer. That’s why most consumer-facing agents still rely on confirmations for critical steps, even if it slows things down.
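That confirmation pattern, a human gate in front of consequential actions, might look roughly like this. It is a hypothetical sketch; real products implement it with richer interfaces and policy, but the trade-off is the same: safety in exchange for an extra pause.

```python
# Hypothetical confirmation gate: irreversible or costly actions pause
# for explicit user approval before the agent proceeds.
CRITICAL_ACTIONS = {"send_email", "make_payment", "delete_file"}

def execute_with_confirmation(action: str, detail: str) -> str:
    if action in CRITICAL_ACTIONS:
        answer = input(f"Agent wants to {action}: {detail!r}. Proceed? [y/N] ")
        if answer.strip().lower() != "y":
            return "cancelled by user"
    return f"executed {action}"

print(execute_with_confirmation("make_payment", "₹4,500 to TravelCo"))
```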
Privacy and trust are equally unresolved. Agentic systems need access to emails, files, calendars and personal data to be useful. While safeguards are improving, the idea of AI operating across sensitive services still makes many users uneasy. Transparency around what an agent can see, store and act on has improved, but it remains inconsistent.