San Francisco: Google DeepMind has launched Gemma 4, its most powerful family of open AI models to date. The models are built on the same research as Gemini 3 and ship in four sizes under an Apache 2.0 licence, a major shift from previous Gemma terms that limited commercial use.
The four sizes run on everything from smartphones to data centre servers. The smaller edge models work on consumer laptops and Raspberry Pi devices, while the larger models target workstations and enterprise deployments. The biggest, a 31B dense model, currently ranks third on the global Arena AI open-model leaderboard.
All Gemma 4 models can process images and video natively. The smaller edge models also understand audio, enabling speech recognition out of the box. Context windows range from 128,000 tokens for edge models to 256,000 for the larger versions. Every model supports more than 140 languages.
The Apache 2.0 licence is the headline business story. Previous Gemma releases carried restrictions that blocked certain enterprise and commercial uses. The new licence removes those barriers entirely: no usage caps, no risk of access termination, and full commercial freedom for businesses building on the models.
The launch is also a competitive statement. Open-weight models from Chinese labs, including Qwen 3.5, DeepSeek V3, and Kimi K2.5, have rapidly closed the gap on Western frontier models. Gemma 4 is Google's answer, giving enterprises a domestically built alternative backed by verified benchmarks and an unrestricted licence.
Google partnered with Nvidia, Qualcomm, and MediaTek to optimise Gemma 4 across a wide range of hardware. The models are available today on Hugging Face, Kaggle, Ollama, and Google AI Studio. The two smallest models will also power Gemini Nano 4, Google’s next on-device model for Android arriving on consumer devices later this year.
Demis Hassabis, CEO of Google DeepMind, called Gemma 4 the best open models in the world for their respective sizes. Gemma has now surpassed 400 million downloads and 100,000 community variants since its first release.
