Meta AI gets Gemini Live-like conversational feature, live visual feed
Meta AI's new capabilities allow the chatbot to hold real-time contextual conversations with the user and take a live camera feed as input, similar to what Google offers with its Gemini Live interface
Meta AI gets a real-time audio-video chat feature
Meta has introduced a new feature for Meta AI that will allow users to have more natural conversations with the AI chatbot. It will also let users share a live camera feed as input and help with tasks such as shopping, trip planning and finding information online. The feature is powered by Meta’s new Muse Spark model, which the company detailed recently. Meta is gradually rolling out the new AI capabilities in the US across its apps and AI glasses.
The feature seems similar to what Google offers on Android devices with its Gemini Live interface.
Voice conversations and live camera support
Meta AI is getting upgraded voice conversations through Muse Spark. Users can talk to the assistant more naturally, interrupt mid-conversation, switch topics, or even change languages while speaking. The assistant can also generate images and pull information from Reels, maps, and other services during conversations.
The company is also adding live AI camera support. Users can point their phone camera or AI glasses at objects, places, or scenes and ask questions in real time. Meta explained that the feature can help identify landmarks, products or everyday objects around the house.
How it compares to Gemini Live
Google’s Gemini Live is the live conversational interface of the Gemini AI chatbot, integrated into the Gemini app on mobile devices. Similar to Meta AI’s new capabilities, it allows users to hold long and contextual conversations with the AI chatbot, interrupt mid-conversation and even search the web to fetch answers to queries.
Gemini Live also offers live camera and screen-sharing capabilities. Using multimodal processing, Gemini can understand what’s on the screen or in the camera view and offer relevant, real-time help.
While both Gemini Live and Meta AI’s live conversational features are positioned as real-time AI assistants, the two were built for very different ecosystems.
Meta AI’s features are based on the Muse Spark frontier AI model developed by Meta Superintelligence Labs and are mainly designed around Meta’s platforms such as WhatsApp, Instagram, Facebook, Messenger, and Ray-Ban smart glasses. Gemini Live, meanwhile, is powered by Gemini models and is more deeply integrated into Android and Google services, including Gmail, Calendar, Photos, Chrome, Google TV, and connected cars.
In practical use, Gemini Live is designed to function as a system-level assistant across Android devices and Google apps, while Meta AI’s conversational capabilities are more focused on social interactions, messaging, content discovery, and Meta’s connected platforms.
Other new Meta AI features
New shopping features inside Meta AI
Meta is also expanding shopping tools within Meta AI. Users can now search Facebook Marketplace listings alongside products from across the web in a single interface. The system can show nearby listings on a map and allows filtering based on price, style, or distance. The company is also testing ways to browse products directly from creators and brands by tagging them inside conversations or searches.
Muse Spark across Meta apps and smart glasses
Meta says Muse Spark will support a wider range of AI capabilities across its products, including smart glasses, messaging apps, and social platforms.
Across apps such as WhatsApp, Instagram, Messenger, Facebook, and Threads, Meta is testing features like “side chats,” where users can privately ask Meta AI questions inside group chats without interrupting the conversation. Meta is also testing @meta.ai mentions inside Threads posts and replies.
First Published: May 13 2026 | 5:55 PM IST
