Google is reportedly developing a new feature called “Live for AI” mode, which could soon be integrated into Google Lens. According to 9To5Google, this mode will allow users to share their screens with Google, enabling the artificial intelligence (AI) to provide real-time information based on what is being displayed, marking a step forward from Lens' current camera-only functionality.
Earlier, Google introduced AI functionality in Lens, allowing users to point their camera at an object to receive relevant information. The upcoming Live for AI mode takes that capability further by enabling screen sharing for similar contextual responses.
How Live for AI mode compares to Project Astra
The feature bears resemblance to Google’s Project Astra, also known as Gemini Live, which allows contextual interactions based on both camera views and on-screen elements. However, Live for AI mode is reportedly more search-focused, while Project Astra is designed as a more comprehensive AI assistant.
As per 9To5Google, the distinction lies in the core function: Live for AI is geared towards identifying and retrieving information, whereas Astra leans more towards real-time assistance and interaction.
Limitation of the feature
According to Android Central, the Live for AI mode comes with a notable limitation: users will not be able to ask follow-up questions. Each query will be treated as a new, independent search, which may limit conversational continuity and context retention.
What is Google’s Project Astra?
Project Astra is a multimodal AI agent developed by Google DeepMind, designed to engage users through text, voice, images, and video. It aims to serve as a universal AI assistant, capable of delivering real-time responses by drawing on visual and contextual inputs.
By combining real-world cues with internet-based information, Astra is intended to offer a more natural and responsive interaction experience, simulating human-like understanding and communication.
