Google introduces Gemini 3.1 Flash Live AI model as it expands Search Live
Google introduces Gemini 3.1 Flash Live with faster voice responses, multilingual support, and improved conversations, powering Search Live, Gemini Live, and AI features across apps and services
Gemini 3.1 Flash (Image: Google)
Google has introduced Gemini 3.1 Flash Live, a new audio and voice AI model designed to make real-time conversations more natural and responsive. The model powers several Google services, including Search Live and Gemini Live. According to the company, it understands and responds to voice queries better, making interactions smoother, faster, and more conversational. Gemini 3.1 Flash Live also supports multiple languages, helping expand voice-based AI features such as Search Live to more users.
Gemini 3.1 Flash Live: What’s new
Gemini 3.1 Flash Live is Google’s latest voice-focused AI model built for real-time conversations. As per the company, it is designed to respond quickly while maintaining a more natural flow in dialogue. The new model is being rolled out across different platforms. For regular users, it powers features like Search Live and Gemini Live, enabling voice-based interactions within Google apps. Developers can access it through the Gemini Live API in Google AI Studio, while businesses can use it via Gemini Enterprise tools.
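For context, a minimal sketch of how a developer might open a real-time session with such a model through the Gemini Live API, using Google's `google-genai` Python SDK. The model identifier `gemini-3.1-flash-live` is an assumption based on this announcement, not a confirmed API name, and the exact call shapes may differ from the current SDK.

```python
# Hypothetical sketch: connecting to a live voice model via the Gemini Live API.
# Assumes the google-genai Python SDK (`pip install google-genai`) and a
# GEMINI_API_KEY environment variable; the model name below is illustrative.
import asyncio
import os

MODEL = "gemini-3.1-flash-live"          # assumed model identifier
CONFIG = {"response_modalities": ["AUDIO"]}  # ask the session for spoken replies


async def chat() -> None:
    # Import deferred so this sketch parses even without the SDK installed.
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    async with client.aio.live.connect(model=MODEL, config=CONFIG) as session:
        # Send one user turn as text; the model streams back audio chunks.
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Hello, what can you do?"}]}
        )
        async for response in session.receive():
            if response.data:
                pass  # raw audio bytes; feed these to an audio player


if __name__ == "__main__":
    asyncio.run(chat())
```

In a real application the audio bytes would be streamed to a speaker as they arrive, which is what allows the low-latency, back-and-forth feel the announcement describes.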
Google said that all audio generated by Gemini 3.1 Flash Live includes a SynthID watermark, which is embedded directly into the sound in a way that users cannot hear. According to Google, this watermark helps identify AI-generated audio and is aimed at reducing the risk of misinformation.
Improved performance and reliability
Google said that the new model performs better in handling complex voice-based tasks. As per Google’s blog, the model has shown improved results in benchmarks that test multi-step instructions and real-world conversational challenges. This means it can better understand longer queries, follow instructions more accurately, and respond more consistently during conversations.
Better for real-time conversations
According to Google, the focus of Gemini 3.1 Flash Live is on making AI conversations feel more natural. It delivers quicker responses and can maintain the context of a conversation for longer periods. This allows users to continue discussions without repeating themselves, across both simple queries and more detailed interactions.
Multilingual support and global reach
As mentioned in the blog, the model is built to support multiple languages, which helps Google expand its AI features globally. With this, voice-based search and conversation tools are now available to users in more than 200 countries and regions. This expansion makes it easier for people to interact with AI in their preferred language, using both voice and, in some cases, visual inputs.
Search Live expands
With the new model, Google’s Search Live is now rolling out globally to users in regions where AI Mode is available. The feature lets users interact with Search using voice, allowing them to ask questions out loud and receive spoken responses in real time. It also supports camera input, so users can point their phone at objects or situations for better context. In addition, Search Live works with Google Lens, enabling users to start a live, back-and-forth conversation based on what they see through their camera.
The feature was initially limited to the US and later expanded to more regions, including India, with support for the Hindi language.
First Published: Mar 27 2026 | 11:54 AM IST
