Google tests Thinking Mode, experimental AI tools for Gemini Live: Report
Gemini Live may soon get the ability to offer more detailed replies through Thinking Mode, along with early-access tools like visual cues and agent-style phone controls
Google Gemini Live may get new features soon
Google is reportedly preparing major changes to Gemini Live, its real-time, voice-based AI assistant. A report by Android Authority stated that beta versions of the Google app contain references to new “Labs” features, including a “Live Thinking Mode” and a set of “Live Experimental Features.” These changes suggest Gemini Live could soon offer more detailed responses when needed, while also gaining early access to upcoming AI features.
The new features are not live yet, but code strings in a recent beta of the Google app show that Google is actively testing them behind the scenes.
What’s new coming to Gemini Live
Live Thinking Mode:
One of the new options found in the app is called “Live Thinking Mode.” Its description says it will “take time to think and provide more detailed responses.”
Right now, Gemini Live runs on the Gemini 2.5 Flash model, which is designed for quick replies. Thinking Mode suggests Google wants to give users a choice: faster responses when speed matters, and slower but more thoughtful replies when accuracy or depth is more important. This mirrors how Google already offers “Fast” and “Thinking” modes in the regular Gemini chat experience.
If this comes to Gemini Live, it would mean voice conversations that sometimes pause longer before answering, but give more complete or carefully reasoned replies.
Live Experimental Features:
Google also reportedly plans to offer users early access to some of its upcoming AI features. The code strings suggest that enabling a toggle called “Live Experimental Features” will offer:
- Better noise handling during voice conversations
- Automatic responses from Gemini Live “when it sees something”
- Multimodal memory, where Gemini Live remembers things across voice and visual inputs
- More personalised results using data from Google apps
Deep Research:
The code also mentions a “Deep Research” option with the description: “Delegate complex research tasks.”
Gemini already has a Deep Research mode in text chat that can break big topics into steps and gather information over time. This suggests Google may be bringing similar long-form research abilities to Gemini Live.
UI Control:
The report stated that the code also references a feature called “UI Control,” described as “Agent controls phone to complete tasks.” This points to a more agent-like version of Gemini that can interact with apps directly — tapping buttons, opening apps, and completing actions on users’ behalf. Google already showed early versions of this idea with Gemini Agent, which was mostly tied to web browsing. This looks like a step toward letting Gemini control the phone’s interface.
First Published: Jan 20 2026 | 10:41 AM IST