Google has expanded its open-model line-up by announcing Gemma 3, the next generation of its Gemma AI models. In a blog post, Google said Gemma 3 is "built from the same research and technology that powers" the Gemini 2.0 models and is available in 1B, 4B, 12B, and 27B sizes. The US technology giant has claimed that Gemma 3 is the "world's best single-accelerator model", meaning one that runs on a single GPU or TPU host, and that it outperforms Llama-405B, DeepSeek-V3, and o3-mini on the benchmarking platform LMArena.
According to the blog post, Gemma models, which are essentially open models for developers, are designed to run fast directly on devices, from phones and laptops to workstations. Notably, Gemma 1 was released in February 2024 and the second model followed in May. Now, the third generation arrives around 10 months later.
What's special about Gemma 3?
Google is highlighting the enhanced reasoning capabilities of its latest AI model, which can process text, images, and short videos in its variants of 4 billion parameters and above. It offers a 128,000-token context window and supports more than 35 languages out of the box, with pre-trained support for over 140 languages.
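For readers who want to see what multimodal use looks like in practice, the snippet below is a minimal sketch using the Hugging Face Transformers library, which lists Gemma 3 checkpoints; the model ID `google/gemma-3-4b-it`, the `image-text-to-text` pipeline task, and the placeholder image URL are assumptions based on Hugging Face conventions rather than details from Google's announcement.

```python
from transformers import pipeline

# Sketch: image + text chat with an instruction-tuned Gemma 3 checkpoint.
# Model ID and pipeline task are assumed, not taken from Google's blog post.
pipe = pipeline("image-text-to-text", model="google/gemma-3-4b-it")

messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
        {"type": "text", "text": "Describe this photo in one sentence."},
    ]}
]

out = pipe(text=messages, max_new_tokens=64)
# With chat-style input, generated_text holds the conversation; the last
# entry is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```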
Some key advancements include:
- AI-powered automation with function calling: Gemma 3 supports function calling and structured output, allowing developers to automate workflows and build intelligent, responsive AI applications (a minimal sketch of this pattern follows after this list).
- Faster performance with optimised models: The latest version introduces official quantised models, which reduce model size and computational demands while maintaining accuracy (see the quantised-loading sketch below).
- Enhanced safety measures: Google has introduced ShieldGemma 2, a 4B image safety checker designed to detect and label unsafe content across three categories: dangerous content, sexually explicit material, and violence. Google also said the model aligns with its safety policies and has gone through rigorous testing to ensure responsible deployment.
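Google's post does not spell out an API for function calling, so the sketch below only illustrates the prompt-and-parse pattern such support typically enables: the model is asked to reply with a JSON object naming a hypothetical `get_weather` tool, and the application parses that output and dispatches the call. The model ID and prompt format are assumptions, not Google's documented interface.

```python
import json
from transformers import pipeline

# Load a text-only, instruction-tuned Gemma 3 checkpoint (model ID assumed).
generator = pipeline("text-generation", model="google/gemma-3-1b-it")

# Hypothetical tool the application exposes to the model.
def get_weather(city: str) -> str:
    return f"It is 21°C and sunny in {city}."

TOOLS = {"get_weather": get_weather}

# Describe the tool in the prompt and ask for structured JSON output.
messages = [
    {"role": "user", "content": (
        "You can call the function get_weather(city). "
        'Reply ONLY with JSON like {"function": "get_weather", "args": {"city": "..."}}.\n'
        "What is the weather in Paris right now?"
    )}
]
reply = generator(messages, max_new_tokens=64)[0]["generated_text"][-1]["content"]

# Parse the structured output and dispatch the call.
# (Production code should validate the JSON before executing anything.)
call = json.loads(reply)
print(TOOLS[call["function"]](**call["args"]))
```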
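As for the quantised models, the announcement gives no loading instructions, so the following is only a sketch of on-the-fly 4-bit quantisation with the widely used `bitsandbytes` integration in Transformers; Google's official quantised checkpoints would be loaded under their own model IDs, and the ID used here is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit quantisation config; Google's official quantised
# checkpoints are separate artefacts with their own model IDs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "google/gemma-3-1b-it"  # assumed text-only 1B variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Summarise the benefit of quantisation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```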
Users can try Gemma 3 directly in the browser through Google AI Studio.