
Google AI: Release Notes
Project Genie: Create and explore worlds
Jan 30, 2026
A tour of interactive, real-time world generation and the tech behind it. The team demos swapping creatures, remixing scenes, and turning photos into explorable spaces. The conversation covers compute limits, session design, style transfer, and how these worlds could train agents or reshape entertainment. They also tease personalization, shared world histories, and where the tech might go next.
AI Snips
World Models Create Interactive Video Worlds
- A world model generates an interactive environment frame by frame, one you can navigate and control.
- It extends video generation by making the scene respond to user actions in real time (see the sketch after this list).
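To make the architecture concrete, here is a minimal Python sketch of an action-conditioned, frame-by-frame generation loop. All names are hypothetical placeholders, not Genie's actual API: a real system would replace the stand-in predictor with a large learned generative model running in real time.

```python
# Minimal sketch of an action-conditioned world-model session loop.
# Hypothetical names only; not Genie's API.
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class WorldModelSession:
    history: List[np.ndarray] = field(default_factory=list)  # generated frames so far

    def predict_next_frame(self, action: str) -> np.ndarray:
        # Stand-in for the learned model: a real system would condition on
        # (self.history, action) and generate the next frame. Here we just
        # return a blank placeholder image.
        return np.zeros((256, 256, 3), dtype=np.uint8)

    def step(self, action: str) -> np.ndarray:
        frame = self.predict_next_frame(action)
        self.history.append(frame)  # the world's state lives in the frame history
        return frame


session = WorldModelSession()
for action in ["move_forward", "turn_left", "look_up"]:
    frame = session.step(action)  # one new frame per user input, navigable in real time
```

The key point the loop illustrates is that there is no separate game engine holding the world state; each frame is generated in response to the latest action and the history of frames already produced.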
Demo: From Canvas To Immersive Reef
- Jack demos building a coral reef world, switching a goldfish to a shark, and stepping into the generated environment.
- Trusted testers found that creating a 2D canvas first and then stepping into the world was a surprisingly powerful moment.
Emergent Physics From Frame Generation
- The model implicitly learns world dynamics, so objects respond realistically to interactions.
- In many cases, physics-like behavior emerges from frame-to-frame generation without an explicit physics engine (see the sketch after this list).
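One plausible way to read this emergence (a sketch under assumptions, not the team's stated training setup) is through the next-frame prediction objective itself: to predict frame t+1 from frame t and the user's action, a model has to internalize how objects move and interact, so dynamics are learned implicitly rather than coded into a physics engine. A toy PyTorch version of that objective:

```python
# Toy next-frame prediction objective; hypothetical architecture,
# not Genie's training code.
import torch
import torch.nn as nn


class NextFramePredictor(nn.Module):
    """Predicts frame t+1 from frame t plus an action embedding."""

    def __init__(self, action_dim: int = 8):
        super().__init__()
        self.encode = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.action_proj = nn.Linear(action_dim, 16)
        self.decode = nn.Conv2d(16, 3, kernel_size=3, padding=1)

    def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        h = self.encode(frame)
        h = h + self.action_proj(action)[:, :, None, None]  # fuse the action signal
        return self.decode(h)


model = NextFramePredictor()
frames = torch.rand(4, 3, 64, 64)       # frames at time t
next_frames = torch.rand(4, 3, 64, 64)  # ground-truth frames at t+1
actions = torch.rand(4, 8)              # actions taken at time t

pred = model(frames, actions)
loss = nn.functional.mse_loss(pred, next_frames)  # dynamics are learned implicitly
loss.backward()
```

Nothing in this loss mentions gravity, collisions, or momentum; any physics-like behavior the trained model shows has to come from regularities it picked up while learning to predict the next frame.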
