NVIDIA Vera Rubin

NVIDIA Vera Rubin is a rack-scale NVIDIA AI computing system that Jensen Huang describes as the successor pattern to Grace Blackwell, designed for agent-heavy workloads that call tools and therefore stress storage, CPU, networking, and rack integration differently than pure LLM inference.

Key facts

  • Type: Rack-scale AI computing system
  • Maker: NVIDIA
  • Jensen contrasts Grace Blackwell racks, optimized for LLM/MoE inference, with Vera Rubin racks, which add storage accelerators, the Vera CPU, the Rubin GPU, NVLink-72, and an additional rack component he calls Rock [src-065].
  • The system reflects a shift from serving only LLM inference toward running agents that call tools and exercise broader parts of the infrastructure [src-065].
  • Jensen says each Vera Rubin rack has roughly 1.3 to 1.5 million components and relies on hundreds of suppliers, illustrating the supply-chain complexity of AI Factories [src-065].
  • The move to NVLink-72 rack-scale computing shifts some supercomputer integration work out of the data center and into manufacturing and the supply chain [src-065].

Related entities

Related concepts

Source references

  • [src-065] Lex Fridman – “Jensen Huang: NVIDIA – The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494” (2026-03-23)