Friday, April 03, 2026 | 10:48 AM IST
Business Standard

Google unveils Gemma 4 open models that can run on smartphones, PC: Details

Google has launched Gemma 4 open models for Android and PCs, enabling on-device AI, offline capabilities, and future support for Gemini Nano 4 across the Android ecosystem

Gemma 4 open models (Image: Google)

Aashish Kumar Shrivastava New Delhi

Google has introduced its new open AI model family, Gemma 4, designed to run across a wide range of devices, from smartphones to personal computers and developer workstations. The company says the models are built for advanced reasoning and agent-based workflows, while also being efficient enough to run locally on consumer hardware. With this release, Google is targeting developers who want to build AI applications that can function both on-device and offline, without relying entirely on cloud infrastructure. Google also detailed Gemini Nano 4 for Android, which is based on Gemma 4.

Gemma 4 models: Sizes and positioning

Google has released Gemma 4 in four configurations: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture of Experts (MoE), and 31B Dense. 
 
According to the company, the larger 26B and 31B models focus on delivering higher performance per parameter. Google added that these models are built on the same research foundation as its Gemini 3 models and are positioned as complementary to the company's proprietary AI offerings. 
A key focus of Gemma 4 is its ability to run on-device. Google says the smaller E2B and E4B models are optimised for mobile and edge devices, including Android smartphones, Raspberry Pi systems, and NVIDIA Jetson hardware, and are designed to operate offline with low latency while supporting multimodal inputs. Google adds that Android developers can begin testing these capabilities through the AICore Developer Preview, with future compatibility planned for Gemini Nano 4. 
For larger workloads, the 26B and 31B models are designed to run on personal computers, including systems equipped with GPUs. Google notes that these models can also be used locally for coding tools, AI agents, and development workflows.
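Back-of-the-envelope arithmetic helps explain this split between on-device and GPU-class models: weight storage scales with parameter count and numeric precision. The sketch below is illustrative only, using assumed bytes-per-parameter figures for common precisions, not official Gemma 4 memory requirements; real usage also depends on activations, KV cache, and runtime overhead.

```python
# Rough weight-memory estimate per model size (illustrative arithmetic,
# not official Gemma 4 figures).
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    """Approximate weight storage in GiB for a given parameter count."""
    total_bytes = params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return total_bytes / (1024 ** 3)

for name, size in [("E2B", 2), ("E4B", 4), ("26B MoE", 26), ("31B Dense", 31)]:
    print(f"{name}: ~{weight_memory_gb(size, 'int4'):.1f} GiB at 4-bit, "
          f"~{weight_memory_gb(size, 'fp16'):.1f} GiB at fp16")
```

Even at aggressive 4-bit quantisation, a 31B model needs well over 10 GiB for weights alone, which is why the larger variants target GPU-equipped PCs while the 2B and 4B variants can fit within a smartphone's memory budget.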

What Gemma 4 can do

Google says Gemma 4 goes beyond basic chat-based AI and is built for more complex use cases. Key capabilities include:
  • Advanced reasoning: Support for multi-step logic and improved performance in math and instruction-based tasks
  • Agentic workflows: Native function-calling, structured outputs, and system instructions for building autonomous AI agents
  • Code generation: Ability to generate code offline, enabling local AI coding assistants
  • Multimodal support: Native processing of images and video across all models, with audio input support on E2B and E4B variants
  • Long context handling: Up to 128K context window on smaller models and 256K on larger ones
  • Language support: Training across more than 140 languages
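Native function-calling of the kind described above generally means the model emits a structured (often JSON) tool call that the host application parses and dispatches. The sketch below shows only that host-side loop, with a hypothetical tool name and a hard-coded string standing in for actual model output; it is not the Gemma 4 API.

```python
import json

# Host-side registry of tools the model may call (hypothetical example).
def get_time(city: str) -> str:
    return f"10:48 AM in {city}"  # stub; a real tool would look this up

TOOLS = {"get_time": get_time}

def dispatch(model_reply: str) -> str:
    """Parse a structured tool call emitted by the model and run it."""
    call = json.loads(model_reply)  # e.g. {"name": ..., "args": {...}}
    fn = TOOLS[call["name"]]
    return fn(**call["args"])

# Stand-in for what a function-calling model would emit:
reply = '{"name": "get_time", "args": {"city": "New Delhi"}}'
print(dispatch(reply))  # -> 10:48 AM in New Delhi
```

In an agentic workflow this loop repeats: the tool's result is fed back to the model, which decides whether to call another tool or produce a final answer.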

Open licence and developer access

Gemma 4 is released under an Apache 2.0 licence, which Google says allows developers to use, modify, and deploy the models without restrictive conditions. The company adds that developers can access the models through platforms such as Google AI Studio and AI Edge tools, and can download model weights from repositories like Hugging Face, Kaggle, and Ollama. Support is also available across a range of development tools and frameworks. 
Developers can also fine-tune Gemma 4 for specific tasks using local hardware or cloud platforms like Google Cloud. The company adds that the models are built with the same security standards as its proprietary systems and are intended to provide a reliable base for enterprise and developer use. 

Gemini Nano 4

Google is also positioning Gemma 4 as the base for its next-generation on-device AI system, Gemini Nano 4, which is expected to arrive on Android devices later this year, according to the Android Developers Blog. The company said apps built using Gemma 4 today will remain compatible with Gemini Nano 4-enabled devices, with additional performance optimisations aimed at improving efficiency and enabling production-scale deployment across the Android ecosystem. Early access is currently available through the AICore Developer Preview. 
Google said developers will be able to test Gemma 4-based features on Android using tools such as Android Studio and the ML Kit Prompt API, with support for selecting between Gemini Nano 4 Fast (based on the E2B variant) and Gemini Nano 4 Full (based on the E4B variant) during development. Supported use cases include reasoning, math, time understanding, and image understanding.


First Published: Apr 03 2026 | 10:47 AM IST
