- Cadence and Nvidia are combining physics simulation with AI training models to close the ‘sim-to-real’ gap in robotics.
- The partnership leverages Nvidia’s Blackwell GPUs to deliver a 30x speedup in multi-physics simulation.
- Real-world partners including Honda R&D, Samsung, and Argonne National Lab are already deploying the system.
Training a robot to grip a coffee mug sounds trivial until you realize it needs to understand friction coefficients, thermal feedback, and stress distribution—simultaneously. Cadence Design Systems and Nvidia announced an expanded partnership on April 15 to tackle exactly that problem, combining Cadence’s physics simulation engines with Nvidia’s AI training infrastructure to build robots that actually understand how the physical world works.
The deal was announced at CadenceLIVE Silicon Valley by both CEOs—Jensen Huang and Anirudh Devgan—in what amounted to a mutual endorsement ceremony in Santa Clara. The technical core: integrating Cadence’s Reality Digital Twin Platform with Nvidia Omniverse and Isaac, Nvidia’s robotics-specific compute platform.
Why Physical Simulation Is the Bottleneck in Robotics AI
Here’s the problem nobody talks about in robotics: you can’t just throw data at a model. Unlike language models that feed on internet text, robots need training data about how physical objects behave—how metal flexes, how rubber grips, how heat dissipates through a joint. That data doesn’t exist in a convenient dataset somewhere. It has to be generated by physics simulation software, which has historically been painfully slow.
Cadence claims its multi-physics simulation now runs 30x faster on Nvidia’s Blackwell architecture. That’s not an abstract benchmark number: it’s the difference between simulating a robot arm’s movement overnight versus doing it in under an hour. For companies trying to iterate on humanoid designs or warehouse automation, that’s the difference between shipping in 2027 and shipping in 2029.
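To make the speedup claim concrete, here's the back-of-the-envelope arithmetic behind "overnight versus under an hour." The 30x figure is Cadence's claim; the 15-hour baseline run is an illustrative assumption, not a published benchmark.

```python
def accelerated_runtime_hours(baseline_hours: float, speedup: float = 30.0) -> float:
    """Runtime after applying a claimed end-to-end simulation speedup."""
    return baseline_hours / speedup

# An "overnight" 15-hour multi-physics run at the claimed 30x speedup:
print(accelerated_runtime_hours(15.0))  # 0.5 hours, i.e. "under an hour"
```

The practical point isn't the half hour itself but the iteration cadence: a design team can now run multiple simulate-train-evaluate loops per workday instead of one per night.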
The integration targets what engineers call the “sim-to-real” gap: the discrepancy between how a robot behaves in simulation versus how it stumbles, overheats, or drops things in a real factory. Cadence’s physics engines model friction, stress feedback, and thermal dynamics in real time, feeding Nvidia’s AI models with training data that actually resembles reality.
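One standard technique for narrowing the sim-to-real gap is domain randomization: rather than training against a single idealized world, the simulator varies physical parameters so the learned policy tolerates real-world messiness. The sketch below is generic and illustrative; the parameter names and ranges are assumptions for the coffee-mug example, not part of Cadence's or Nvidia's actual APIs.

```python
import random

def sample_physics_params(rng: random.Random) -> dict:
    """Randomize physical properties per simulated episode so a policy
    trained in simulation doesn't overfit to one idealized world."""
    return {
        "friction_coefficient": rng.uniform(0.3, 1.1),  # gripper vs. mug surface
        "joint_thermal_drift": rng.uniform(0.0, 0.05),  # torque loss as motors heat up
        "payload_mass_kg": rng.uniform(0.2, 0.6),       # not every "coffee mug" weighs the same
    }

def generate_training_batch(n: int, seed: int = 0) -> list[dict]:
    """Produce n randomized parameter sets; seeded for reproducibility."""
    rng = random.Random(seed)
    return [sample_physics_params(rng) for _ in range(n)]

batch = generate_training_batch(1000)
```

The value of a faster, higher-fidelity physics engine in this loop is that each of those thousand parameter sets can be simulated accurately enough that the resulting training data, in Devgan's framing, "resembles reality."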
“The more accurate [generated training data] is, the better the model will be,” said Devgan at the conference. Huang, for his part, framed accelerated computing and generative AI as the tipping point for making engineering “software-first.”
On the agentic AI front, Cadence unveiled AgentStack—a head agent designed to orchestrate semiconductor and system design workflows. Early deployments at more than 10 customers have already shown a 10x productivity boost in design and verification tasks. Nvidia is adopting AgentStack in its own chip design flows, which is either a strong validation signal or the kind of thing you do when your hardware partner asks nicely.
The partnership also extends into Cadence’s broader EDA and SDA toolchains. Cadence is leveraging Nvidia CUDA-X and AI-physics libraries to accelerate its solvers by up to 100x. Honda R&D, Samsung, SK Hynix, and Argonne National Laboratory are already using the accelerated solutions, according to the press materials.
For the robotics industry specifically, the collaboration fills a critical gap. Nvidia’s Isaac platform handles the compute side—training models, running inference—but it’s been light on high-fidelity physics feedback. Cadence’s expertise in material mechanics and electromagnetics complements that, potentially establishing what the industry might eventually call a “physical simulation standard” for autonomous systems.
The broader implication is that robotics competition is shifting from hardware specs to integrated capability stacks. Nvidia recently expanded its reach in physical AI alongside other partners, but the Cadence deal specifically targets the simulation bottleneck that’s been holding back real-world robot deployment. Whether that bottleneck breaks depends on whether 30x is fast enough to generate the training data volume these models actually need—and right now, nobody has a definitive answer to that question.
