NVIDIA wants to create the "Android" of "physical AI"

Written by: Bao Yilong

Source: Wall Street Insights

NVIDIA is fully committed to establishing the default platform in the robotics field, aiming to replicate Android’s dominance in the smartphone operating system market.

On January 5th, at CES 2026, NVIDIA announced a set of open-source foundation models that enable robots to reason, plan, and adapt across a wide range of tasks and environments. All of the models are available on the Hugging Face platform.
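For developers who want to try the models, the usual path is to pull the weights from the Hugging Face Hub. The sketch below uses the standard huggingface_hub client; the repository id is a placeholder rather than an official identifier from the announcement, so check the actual model cards for the exact names.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- substitute the exact name listed on the model's
# Hugging Face page (e.g. a Cosmos or GR00T checkpoint from this release).
REPO_ID = "nvidia/GR00T-N1.6"

# Download all model files to a local directory for offline use.
local_path = snapshot_download(repo_id=REPO_ID, local_dir="./checkpoints/groot-n1.6")
print(f"Model files downloaded to {local_path}")
```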

NVIDIA also launched the Jetson T4000 GPU, built on the next-generation Blackwell architecture, along with OSMO, an open-source command center that supports the entire robotics development workflow. The company further deepened its collaboration with Hugging Face to lower hardware requirements and technical barriers for robot training.

The move reflects a broader industry trend of artificial intelligence migrating from the cloud into the physical world. As sensor costs fall, simulation technology matures, and AI models generalize better, robots are evolving from single-task machines into more versatile systems. Companies such as Boston Dynamics and Caterpillar have already begun using NVIDIA technology, and robotics has become the fastest-growing category on the Hugging Face platform.

Building a Complete Model Matrix

The foundation models in this release form the core capability layer of NVIDIA's physical AI stack.

Cosmos Transfer 2.5 and Cosmos Predict 2.5 are world models used for synthetic data generation and robot policy evaluation, allowing robot behaviors to be verified in simulated environments.

Cosmos Reason 2 is a reasoning vision-language model that gives AI systems the ability to observe, understand, and act in the physical world.

Isaac GR00T N1.6 is a vision-language-action model built specifically for humanoid robots. It uses Cosmos Reason as its reasoning core to achieve whole-body control, letting a humanoid robot move and manipulate objects at the same time.
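As a rough illustration of what a vision-language-action model does at runtime, the sketch below shows the typical control loop: camera frames, the robot's joint state, and a language instruction go in, and a short chunk of whole-body joint commands comes out. All class and method names here are hypothetical stand-ins, not the actual GR00T N1.6 API.

```python
import numpy as np

class VLAPolicy:
    """Hypothetical wrapper around a vision-language-action model."""

    def predict_actions(self, images: list[np.ndarray], proprio: np.ndarray,
                        instruction: str) -> np.ndarray:
        # Returns a short chunk of whole-body joint targets, shape (horizon, dof).
        raise NotImplementedError

def control_loop(policy: VLAPolicy, robot, instruction: str, steps: int = 1000):
    """Observe -> reason -> act loop; `robot` is a hypothetical hardware interface."""
    for _ in range(steps):
        images = robot.get_camera_frames()    # observe the scene
        proprio = robot.get_joint_state()     # current joint positions/velocities
        actions = policy.predict_actions(images, proprio, instruction)
        for joint_targets in actions:         # execute the predicted action chunk
            robot.send_joint_targets(joint_targets)
```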

At CES, NVIDIA also introduced Isaac Lab-Arena, an open-source simulation framework hosted on GitHub, designed to address industry pain points in verifying robot capabilities.

As robots learn to handle complex tasks such as precise object manipulation and cable installation, verifying these abilities in physical environments is often costly, time-consuming, and risky.

The platform brings together resources, task scenarios, training tools, and existing benchmarks such as LIBERO, RoboCasa, and RoboTwin, establishing a common evaluation framework for an industry that previously lacked unified standards. The accompanying open-source platform OSMO acts as a command center, tying together the entire workflow from data generation to training and supporting both desktop and cloud environments.
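To make the verification problem concrete, simulation benchmarks of this kind typically score a policy by rolling it out over many episodes per task and reporting a success rate. The gymnasium-style sketch below illustrates that pattern; the task ids and the `policy` callable are hypothetical placeholders, not the actual Isaac Lab-Arena interfaces.

```python
import gymnasium as gym

TASKS = ["PickPlace-v0", "CableRouting-v0"]  # hypothetical task ids

def evaluate(policy, tasks=TASKS, episodes_per_task=50):
    """Roll out a policy in simulation and report per-task success rates."""
    results = {}
    for task in tasks:
        env = gym.make(task)
        successes = 0
        for _ in range(episodes_per_task):
            obs, info = env.reset()
            done = False
            while not done:
                action = policy(obs)  # query the trained policy
                obs, reward, terminated, truncated, info = env.step(action)
                done = terminated or truncated
            successes += int(info.get("success", False))  # task-defined outcome flag
        env.close()
        results[task] = successes / episodes_per_task
    return results
```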

Lowering Hardware Barriers

The new Jetson T4000 GPU in the Thor series, based on the Blackwell architecture, offers a cost-effective upgrade for edge computing devices, delivering up to 1,200 teraFLOPS of AI compute and 64GB of memory at a configurable power envelope of 40 to 70 watts.

NVIDIA also deepened its collaboration with Hugging Face, integrating Isaac and GR00T technologies into the LeRobot framework and connecting NVIDIA's 2 million robotics developers with Hugging Face's 13 million AI builders.

The open-source humanoid robot Reachy 2 now directly supports NVIDIA’s Jetson Thor chips, allowing developers to test different AI models without being locked into proprietary systems.

Early signs indicate that NVIDIA's strategy is bearing fruit. Robotics has become the fastest-growing category on the Hugging Face platform, with NVIDIA's models leading in downloads, and companies such as Boston Dynamics, Caterpillar, Franka Robotics, and NEURA Robotics are already using NVIDIA technology.

The strategy shows NVIDIA's intent to make robot development more accessible while positioning itself as the provider of the underlying hardware and software, much as Android serves smartphone manufacturers.

As AI shifts from the cloud to machines that learn from the physical world, cheaper sensors, more advanced simulation, and AI models that generalize across tasks are driving an industry-wide transformation.
