Last Updated: Feb 06 2025 | 2:42 PM IST
Google is updating its Gemini AI chatbot with its second-generation AI models. As part of the update, the Gemini 2.0 Flash model is now available to all users, while an experimental version of the flagship Gemini 2.0 Pro model is rolling out for Gemini Advanced subscribers. Additionally, Google has introduced the Gemini 2.0 Flash-Lite model, which it claims is its most cost-efficient AI model to date.
Google is also bringing its experimental Gemini 2.0 Flash Thinking mode to the Gemini app. Introduced in December last year in Google AI Studio and Vertex AI, this mode offers improved reasoning capabilities compared to the Gemini 2.0 Flash model.
Google Gemini 2.0: What is new
Gemini 2.0 Flash wider roll-out
Gemini 2.0 Flash initially launched as an experimental model in December; its stable version began rolling out last week. Google has now expanded its availability across its AI products, including the Gemini mobile app. The company also stated that text-to-speech and image generation capabilities for this model will be introduced soon.
Beyond the Gemini app, 2.0 Flash is also accessible through the Gemini API in Google AI Studio and Vertex AI, providing developers with integration options.
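For developers, access via the Gemini API amounts to a single REST call. Below is a minimal sketch using only Python's standard library; the endpoint and request shape follow Google's public v1beta REST documentation, and an API key from Google AI Studio (read here from the `GEMINI_API_KEY` environment variable, an assumption of this sketch) is required for a live call:

```python
# Minimal sketch of a Gemini API generateContent call using only the
# standard library. Endpoint and JSON shapes follow Google's public
# v1beta REST docs; GEMINI_API_KEY is assumed to hold an AI Studio key.
import json
import os
import urllib.request

API_ROOT = "https://generativelanguage.googleapis.com/v1beta/models"


def build_request(prompt: str) -> dict:
    """Build the JSON body for a generateContent request."""
    return {"contents": [{"parts": [{"text": prompt}]}]}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to the given model and return the first text reply."""
    key = os.environ["GEMINI_API_KEY"]
    url = f"{API_ROOT}/{model}:generateContent?key={key}"
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The reply text sits under candidates -> content -> parts.
    return data["candidates"][0]["content"]["parts"][0]["text"]


if __name__ == "__main__" and "GEMINI_API_KEY" in os.environ:
    print(generate("gemini-2.0-flash", "Summarise Gemini 2.0 Flash in one line."))
```

The same call works against Vertex AI with different authentication; only the model string changes when targeting 2.0 Flash-Lite or the Pro experimental model.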
Gemini 2.0 Pro Experimental
Google has introduced an experimental version of Gemini 2.0 Pro, its latest flagship model, designed to handle complex prompts with enhanced reasoning and world knowledge. Google also says it delivers the best coding performance of any Gemini model to date.
Gemini 2.0 Pro features a 2 million token context window, allowing it to process and analyse large datasets efficiently. It can also leverage additional tools, including Google Search and code execution, to enhance its capabilities.
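The tool use described above is opted into per request. A minimal sketch of such a request body follows; the field names `google_search` and `code_execution` are assumptions drawn from the public v1beta REST schema and should be checked against the current Gemini API documentation:

```python
# Hedged sketch: a generateContent request body that enables the Google
# Search and code-execution tools for Gemini 2.0 Pro. Field names are
# assumptions based on the v1beta REST schema; verify against the docs.

def build_tool_request(prompt: str) -> dict:
    """Build a request body that lets the model call external tools."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        # Each entry in "tools" opts the model into one capability.
        "tools": [{"google_search": {}}, {"code_execution": {}}],
    }
```

With tools enabled, the model decides per prompt whether to ground its answer via search or run generated code before replying.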
This experimental model is available for Gemini Advanced subscribers via the model-picker drop-down menu on both desktop and mobile. It is also accessible to developers in Google AI Studio and Vertex AI.
Gemini 2.0 Flash-Lite
Google's new 2.0 Flash-Lite model is the most cost-efficient Gemini model to date. It maintains the same speed and pricing as the 1.5 Flash model but offers a 1 million token context window and multimodal input support, similar to the 2.0 Flash model.
Gemini 2.0 Flash-Lite is now in public preview through Google AI Studio and Vertex AI.
2.0 Flash Thinking Experimental in Gemini app
Google is bringing the Gemini 2.0 Flash Thinking Experimental mode to the Gemini app, making it available in the model dropdown on desktop and mobile. Previously limited to Google AI Studio and Vertex AI, this mode is designed to enhance reasoning by explicitly displaying its thought process.
When solving complex problems, the Thinking mode presents a step-by-step breakdown of how it approaches a question. This structured explanation shows how the AI divides problems into smaller components to reach a final conclusion, making it easier for users to follow its reasoning.