Business Standard

Gemma 3n: All about Google's open model for on-device AI on phones, laptops

Google says Gemma 3n makes use of a new technique called Per-Layer Embeddings (PLE), which allows the model to consume much less RAM than similarly sized models

Google Gemma 3n

Harsh Shivam, New Delhi


At its annual Google I/O conference, Google unveiled Gemma 3n, a new addition to its Gemma 3 series of open AI models. The company said the model is designed to run efficiently on everyday devices such as smartphones, laptops, and tablets. Gemma 3n shares its architecture with the upcoming generation of Gemini Nano, the lightweight AI model that already powers several on-device AI features on Android devices, such as voice recorder summaries on Pixel smartphones.

Gemma 3n model: Details

Google says Gemma 3n makes use of a new technique called Per-Layer Embeddings (PLE), which allows the model to consume much less RAM than similarly sized models. Although the model comes in 5-billion and 8-billion parameter versions (5B and 8B), this memory optimisation brings its RAM usage closer to that of a 2B or 4B model. In practical terms, this means Gemma 3n can run with just 2GB to 3GB of RAM, making it viable for a much wider range of devices.
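The RAM saving can be illustrated with some back-of-the-envelope arithmetic. The sketch below is not Gemma 3n's actual implementation; the byte-per-parameter figure and the split between resident and offloaded parameters are assumptions chosen to mirror the numbers Google quotes (a 5B model behaving like a ~2B model in memory):

```python
# Illustrative arithmetic for how Per-Layer Embeddings (PLE) can shrink the
# resident RAM footprint of a model. All numbers here are hypothetical.

BYTES_PER_PARAM = 1  # assuming 8-bit quantised weights (an assumption)

def ram_gb(params_billion: float) -> float:
    """Approximate RAM needed to hold the weights alone, in GB."""
    return params_billion * 1e9 * BYTES_PER_PARAM / 1e9

total_params = 5.0     # the 5B-parameter variant
resident_params = 2.0  # hypothetical share that must stay in RAM;
                       # the rest is streamed per layer from fast storage

print(f"naive footprint:  ~{ram_gb(total_params):.0f} GB")    # ~5 GB
print(f"with PLE offload: ~{ram_gb(resident_params):.0f} GB") # ~2 GB
```

Under these assumed figures, the resident footprint lands in the 2GB-3GB range the article cites, even though all 5 billion parameters still exist on disk.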

Gemma 3n model: Key capabilities

  • Audio input: The model can process sound-based data, enabling applications like speech recognition, language translation, and audio analysis.
  • Multimodal input: With support for visual, text, and audio inputs, the model can handle complex tasks that involve combining different types of data.
  • Broad language support: Google said that the model is trained in over 140 languages.
  • 32K token context window: Gemma 3n supports input sequences of up to 32,000 tokens, allowing it to handle large chunks of data in one go, which is useful for summarising long documents or performing multi-step reasoning.
  • PLE caching: The model’s internal components (embeddings) can be stored temporarily in fast local storage (like the device’s SSD), helping reduce the RAM needed during repeated use.
  • Conditional parameter loading: If a task doesn’t require audio or visual capabilities, the model can skip loading those parts, saving memory and speeding up performance.
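The conditional parameter loading idea can be sketched in a few lines. This is a conceptual illustration only; the module names and parameter counts below are hypothetical and do not reflect Gemma 3n's real internals:

```python
# Conceptual sketch of "conditional parameter loading": only the sub-modules a
# task actually needs are pulled into memory. Sizes are illustrative, in
# billions of parameters, and are NOT Gemma 3n's real figures.

MODULES = {
    "text": 2.0,
    "vision": 0.5,
    "audio": 0.5,
}

def load_for_task(needs_vision: bool, needs_audio: bool) -> float:
    """Return the parameter count (in billions) loaded for a given task."""
    loaded = ["text"]  # the text stack is always required
    if needs_vision:
        loaded.append("vision")
    if needs_audio:
        loaded.append("audio")
    return sum(MODULES[m] for m in loaded)

print(load_for_task(needs_vision=False, needs_audio=False))  # 2.0 (text only)
print(load_for_task(needs_vision=True, needs_audio=True))    # 3.0 (everything)
```

A text-only request skips the vision and audio weights entirely, which is what saves memory and start-up time on constrained devices.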

Gemma 3n model: Availability

As part of the Gemma open model family, Gemma 3n ships with openly accessible weights and is licensed for commercial use, allowing developers to tune, adapt, and deploy it across a variety of applications. Gemma 3n is now available as a preview in Google AI Studio.


First Published: May 22 2025 | 5:12 PM IST
