Tech Edition, ft. NVIDIA × Disney Imagineering

NVIDIA GTC 2026 brought me pure joy as I watched Disney’s Olaf step into our physical world. Not only that, Olaf performed for 15 minutes on a moving, rotating boat at Disneyland Paris.

From a product lens, NVIDIA × Disney absolutely nailed the “user delight” factor!

And fun fact: Disney trained Olaf to physically walk in just 2 days! Do you feel the tech leap?

This is the kind of magic that happens whenever Hollywood and Silicon Valley come together, not just to make dreams come true, but to breathe life into our favorite fictional characters. I saw this firsthand while working in the R&D wings at Disney and Warner Bros. in 2015, and later throughout my time in the VR/AR industry, where media-tech is really creative-tech.

Watching Olaf interact with Jensen Huang and tech reporters was genuinely exciting! Jensen Huang, in his keynote, said, “Newton works! Omniverse works!” – Hell yeah!

But why Olaf and not some other character?

Picking Olaf as the MVP was a very intentional choice to showcase their tech:

  • Small enough to build a true-to-scale animatronic
  • Tiny and stubby to recreate the “snow shuffle” walk
  • Speaks human language, and is a chill, happy (not hyper) character

Now imagine if they picked Lilo’s Stitch; they’d have to match his chaotic, high-energy expressiveness. Or Moana’s Heihei… well, not exactly conversational! Or Aladdin’s Abu - fun, but he can’t speak human (similar to earlier attempts like the Star Wars BDX droids). Or Timon and Pumbaa, but they’d need to come as a pair, which adds complexity.

But Olaf is perfect! Built on Frozen’s popularity, still recent, and adored by all. Simple yet challenging for a workable demo performance. A perfect MVP.

So what’s the magic behind this?

They used Olaf to showcase a full stack of animatronics + AI:

  • NVIDIA Jetson, the computer inside Olaf, powering real-time AI inference at the edge.
    • Allows for “free roaming” by balancing on uneven surfaces (like a moving boat), without tracks or fixed mounts.
  • Kamino, Disney’s AI simulator built on NVIDIA Warp
    • Using reward-based reinforcement learning, where 100,000 virtual instances learned to walk and balance in only two days (on NVIDIA RTX 4090 GPUs); see the sketch after this list.
  • Newton, an open-source physics engine co-developed by Disney Research, NVIDIA, and Google DeepMind
    • Makes simulations accurately mirror real-world physics (rigid surfaces, cloth, snow, gravel).
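
Disney hasn’t published Kamino’s training code, so here’s a deliberately tiny sketch of the idea behind that Kamino bullet: simulate many instances in parallel, score each with a reward, and nudge the policy toward whatever scored well. The toy environment, the reward, and the update rule (a simple evolution-strategies variant, not necessarily Kamino’s algorithm) are all stand-ins I invented:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 512                        # parallel simulated instances (Kamino reportedly ran ~100,000)
OBS, SIGMA, LR = 2, 0.05, 0.2  # obs: [lean, lean rate]; ES noise scale; update step size

def rollout(policies, steps=200):
    """Score N toy 'balance on a rocking deck' episodes, one per policy.

    policies: (N, OBS). The dynamics are a made-up one-joint system,
    standing in for a real physics engine like Newton. The reward is
    simply 'stay upright': negative accumulated lean.
    """
    state = rng.normal(0.0, 0.1, size=(N, OBS))
    total = np.zeros(N)
    for t in range(steps):
        act = np.sum(state * policies, axis=1)         # linear policy -> motor command
        deck = 0.3 * np.sin(0.1 * t)                   # the moving, rotating boat
        state[:, 1] += 0.05 * np.sin(state[:, 0] + deck) - 0.1 * np.tanh(act)
        state[:, 0] += 0.1 * state[:, 1]
        total -= np.abs(state[:, 0])                   # reward: small lean is good
    return total

# Evolution-strategies-style loop: perturb one shared policy N ways,
# score every variant in parallel, then move toward what scored well.
policy = np.zeros(OBS)
for gen in range(30):
    noise = rng.normal(size=(N, OBS))
    rewards = rollout(policy + SIGMA * noise)
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    policy += LR * (adv @ noise) / (N * SIGMA)
    if gen % 10 == 0:
        print(f"gen {gen:2d}  mean reward {rewards.mean():9.2f}")
```

Scale that same pattern up to ~100,000 instances on GPUs and swap the toy dynamics for a real physics engine, and you get the broad shape of what the keynote described.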

And when all these come together, we get what they call Physical AI.

The Kamino simulator, integrated with Newton, enables Physical AI: robots learn complex behaviors in simulation through reward-based reinforcement learning, then apply them directly in the real world without additional training. The training transfers from the digital character to the physical character (sim-to-real transfer). That’s how they managed to make the robot walk on a moving, rotating boat, which is a challenging feat.
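
Disney hasn’t detailed how Kamino bridges the sim-to-real gap, but a standard ingredient in this kind of transfer is domain randomization: vary the physics every episode so the policy can’t overfit to one idealized world and stays robust on a real, rocking deck. A minimal sketch, where every parameter name and range is invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class SimParams:
    """Hypothetical physics knobs for one training episode.

    Real simulators (Newton, Warp) expose far richer parameters;
    these names and ranges are illustrative only.
    """
    friction: float        # foot-ground friction coefficient
    head_mass_kg: float    # the ~15 kg head, perturbed around its nominal value
    deck_tilt_hz: float    # how fast the "boat" rocks
    motor_latency_s: float # actuation delay the real hardware will have

def sample_params(rng: random.Random) -> SimParams:
    # Randomize each episode so the policy never sees the same world twice.
    return SimParams(
        friction=rng.uniform(0.4, 1.0),
        head_mass_kg=rng.uniform(13.5, 16.5),
        deck_tilt_hz=rng.uniform(0.0, 0.5),
        motor_latency_s=rng.uniform(0.0, 0.03),
    )

rng = random.Random(42)
for episode in range(3):
    params = sample_params(rng)
    # env = make_env(params); train_one_episode(policy, env)  # hypothetical hooks
    print(episode, params)
```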

The AI also learned to walk quietly and to stay thermally aware, adjusting its posture to reduce strain on the neck sensors and motors and to keep the big, heavy head (which weighs 15 kg) from overheating. We can’t have Olaf melting away!
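
The post doesn’t say how “walk quietly” and “stay cool” were specified, but in reward-based reinforcement learning such behaviors typically show up as penalty terms added to the main balance objective. A hypothetical shaping function; every weight, threshold, and signal name here is invented:

```python
def shaped_reward(lean, motor_torques, neck_torque, neck_temp_c):
    """Hypothetical per-step reward combining the behaviors described above.

    All weights and thresholds are invented for illustration.
    """
    balance = -abs(lean)                                # main objective: stay upright
    quiet = -0.01 * sum(t * t for t in motor_torques)   # hard, jerky torques are loud
    thermal = -0.1 * max(0.0, neck_temp_c - 60.0)       # penalize running the neck hot
    strain = -0.02 * abs(neck_torque)                   # posture that fights the 15 kg head
    return balance + quiet + thermal + strain

# Example step: slightly leaning, moderate torques, neck running warm.
print(shaped_reward(lean=0.05, motor_torques=[0.8, -0.3, 0.1],
                    neck_torque=2.0, neck_temp_c=65.0))
```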

And then there’s the craftsmanship.

The outer shell is spot on. They worked directly with the original animators, not just on walking style, hand movements, and expressions, but also to get the costume texture right. The soft snow glow comes from iridescent fibers woven into the costume. The folds between his chin, neck, and body feel natural.

But the moment that stood out to me most? Watching Olaf make eye contact with the audience. That subtle detail is what makes it feel real. It doesn’t feel like a robot dressed as Olaf. It feels like Olaf showed up.

For now, he’s still guided by a puppet master with a joystick controller (as animatronics traditionally are), with pre-recorded phrases and movement cues. But imagine this with GenAI layered in.

To put things in perspective, check out this video on how far the Olaf animatronic has come since 2013. In 2026, it looks more complete, natural, and alive!

This is the magic that happens when multiple disciplines (AI, physics, animation, hardware, storytelling) come together around a single idea. It’s also how characters move from film into the real world in months, not years.

As the tech matures, I can’t wait to see which characters they bring to life next.

I’d love to see Simba 🙂