Nvidia opened CES 2026 in Las Vegas with a series of announcements spanning new AI hardware platforms, open AI models, and expanded efforts in autonomous driving, robotics, and personal computing. CEO Jensen Huang confirmed that the company’s next-generation Rubin AI platform is now in production and detailed plans to scale AI across consumer devices, vehicles, and industrial systems over the coming year.
Rubin platform:
One of the biggest announcements was Rubin, Nvidia’s next-generation AI computing platform and the successor to its Blackwell architecture. Rubin is Nvidia’s first “extreme-codesigned” platform, meaning its chips, networking and software are developed together rather than as separate parts.
According to Nvidia, Rubin is now in full production and is designed to significantly reduce the cost of generating AI outputs compared with previous platforms. The platform combines new GPUs, CPUs, networking and data-processing hardware to run large AI models more efficiently.
Alongside Rubin, Nvidia introduced an AI-focused storage system aimed at improving how large language models handle long conversations and large context windows, allowing AI systems to respond faster while using less power.
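Nvidia did not share implementation details on stage, but the motivation is easy to see with back-of-the-envelope arithmetic: the key-value (KV) cache a language model keeps for a long conversation grows linearly with context length, and parking it on fast storage frees scarce GPU memory. A minimal illustrative calculation in Python (the model dimensions below are assumptions in the range of a mid-sized open model, not Nvidia figures):

```python
# Back-of-the-envelope KV-cache sizing for a transformer LLM.
# All model dimensions here are illustrative assumptions, not Nvidia figures.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_value: int = 2) -> int:
    """Size of the key-value cache for one sequence.

    Two tensors (keys and values) per layer, each of shape
    [kv_heads, context_len, head_dim], stored in fp16 (2 bytes per value).
    """
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value

# Hypothetical mid-sized model: 64 layers, 8 KV heads, 128-dim heads.
for context in (8_192, 131_072, 1_048_576):
    size = kv_cache_bytes(layers=64, kv_heads=8, head_dim=128, context_len=context)
    print(f"{context:>9,} tokens -> {size / 2**30:6.1f} GiB per sequence")
```

At million-token contexts, a single conversation’s cache can outgrow a GPU’s memory entirely, which is why a dedicated storage tier for context can translate into faster, cheaper responses.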
Open AI models:
Nvidia also highlighted its growing portfolio of open AI models, which are trained on Nvidia’s own supercomputers and made available for developers and organisations to build upon.
These models are organised by use case, covering areas such as healthcare, climate research, robotics, reasoning-based AI and autonomous driving. The idea is to provide ready-to-use AI foundations that can be customised and deployed without starting from scratch.
Nvidia said that the portfolio spans six domains:
- Clara for healthcare
- Earth-2 for climate science
- Nemotron for reasoning and multimodal AI
- Cosmos for robotics and simulation
- GR00T for embodied intelligence
- Alpamayo for autonomous driving
For consumers, this approach is intended to speed up how quickly AI features arrive in apps, vehicles and devices, since developers can build on shared models rather than developing their own from the ground up.
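As a rough illustration of what building on shared models looks like in practice, the snippet below loads one of Nvidia’s openly published checkpoints with the widely used Hugging Face transformers library. The checkpoint name is an earlier Nemotron release used as an example; the exact IDs for the models announced at CES may differ, so treat it as a placeholder.

```python
# Sketch: reusing an open pretrained model instead of training from scratch.
# The checkpoint ID is an example from Nvidia's Hugging Face page; the
# CES-announced models may ship under different names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Nemotron-Mini-4B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "List three safety checks to run before deploying a warehouse robot."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point of the open-model strategy is exactly this: a few lines of glue code instead of a multi-month training run.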
Physical AI, robotics and simulation:
A large part of Nvidia’s presentation focused on what it calls physical AI — AI systems that interact with the real world through robots, machines and vehicles.
Nvidia showcased how robots and machines are trained inside simulated environments before being deployed in real-world settings. These simulations are used to test edge cases, safety scenarios and complex movements that would be difficult or unsafe to recreate physically.
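To make that concrete, here is a toy version of the workflow: randomise scenario parameters, run each scenario through an idealised check, and keep the failures as future test cases. This is a deliberately simplified stand-in, not Nvidia’s simulation stack:

```python
import random
from dataclasses import dataclass

# Toy stand-in for simulation-based edge-case hunting; illustrative only.

@dataclass
class Scenario:
    friction: float         # surface grip coefficient (0.1 ~ ice, 1.0 ~ dry road)
    obstacle_dist_m: float  # distance at which an obstacle appears
    speed_mps: float        # vehicle/robot speed

def stops_in_time(s: Scenario) -> bool:
    """Idealised braking distance v^2 / (2 * mu * g) vs. obstacle distance."""
    braking_dist = s.speed_mps ** 2 / (2 * s.friction * 9.81)
    return braking_dist < s.obstacle_dist_m

random.seed(0)
failures = []
for _ in range(10_000):
    s = Scenario(friction=random.uniform(0.1, 1.0),
                 obstacle_dist_m=random.uniform(2.0, 50.0),
                 speed_mps=random.uniform(1.0, 20.0))
    if not stops_in_time(s):
        failures.append(s)

print(f"{len(failures)} of 10,000 randomised scenarios ended in a collision")
# Each failure (typically low friction + high speed + short distance) becomes
# a regression test -- far cheaper than discovering it on a real road.
```

Failures found this way, such as icy-surface scenarios at high speed, are exactly the edge cases that are unsafe to stage physically.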
At the heart of this effort is Nvidia’s new Cosmos foundation model, which the company said has been trained on video, robotics data and simulation output. According to Nvidia, the model can:
- Generate realistic videos from a single image
- Synthesise multi-camera driving scenarios
- Model edge-case environments from scenario prompts
- Perform physical reasoning and trajectory prediction
- Drive interactive, closed-loop simulation
Autonomous driving and Alpamayo:
In the automotive space, Nvidia announced Alpamayo, a new open AI model portfolio designed specifically for autonomous driving. It includes the following components:
- Alpamayo R1: the first open reasoning vision-language-action (VLA) model for autonomous driving
- AlpaSim: a fully open simulation blueprint for high-fidelity autonomous vehicle (AV) testing
VLA models are built to process camera and sensor data, reason about driving situations and decide how a vehicle should respond. In simple terms, this lets an AV “think” more like a human driver, handling complex situations, such as navigating a busy intersection it has never encountered before.
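Nvidia has not published Alpamayo R1’s interface as part of this announcement, so the sketch below is purely hypothetical: the Perception, Action and infer() names are placeholders meant to show the general sense-reason-act shape of a VLA control loop, not Alpamayo’s actual API.

```python
from dataclasses import dataclass
from typing import Any, List

# Hypothetical outline of a vision-language-action (VLA) driving loop.
# None of these names come from Nvidia; they illustrate the pattern:
# perceive -> reason jointly over images and text -> emit an action.

@dataclass
class Perception:
    camera_frames: List[bytes]  # multi-camera images for the current tick
    speed_mps: float

@dataclass
class Action:
    steering_rad: float
    accel_mps2: float
    rationale: str  # the explanation trace a reasoning VLA can expose

def vla_step(model: Any, obs: Perception, instruction: str) -> Action:
    """One control tick: the model consumes images plus text, returns an action."""
    # A real VLA tokenises images and text into one sequence; infer() is a stand-in.
    return model.infer(images=obs.camera_frames,
                       prompt=f"{instruction}; current speed {obs.speed_mps:.1f} m/s")

def drive(model: Any, sensors: Any, vehicle: Any,
          instruction: str = "proceed safely to the destination") -> None:
    while not vehicle.arrived():
        obs = sensors.read()                        # 1. perceive
        action = vla_step(model, obs, instruction)  # 2. reason and decide
        vehicle.apply(action)                       # 3. act, then loop (closed loop)
```

Because the reasoning happens in language space, the rationale field is also what would let engineers audit why the vehicle chose a given manoeuvre.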
Nvidia said Alpamayo will be used within its existing autonomous vehicle software stack and that the first passenger car using this system will be the new Mercedes-Benz CLA, launching soon.
