AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with the “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work ranges from foundation models for humanoid robots to agents for virtual worlds.
Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data, and real-world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital
Mentioned in this episode:
- World of Bits: Early OpenAI project Jim worked on as an intern with Andrej Karpathy. Part of a bigger initiative called Universe
- Fei-Fei Li: Jim’s PhD advisor at Stanford, who founded the ImageNet project in 2010 that revolutionized the field of visual recognition, led the Stanford Vision Lab and just launched her own AI startup, World Labs
- Project GR00T: Nvidia’s “moonshot effort” at a robotic foundation model, premiered at this year’s GTC
- Thinking, Fast and Slow: Influential book by Daniel Kahneman that popularized some of his teachings from behavioral economics
- Jetson Orin chip: The dedicated series of edge computing chips Nvidia is developing to power Project GR00T
- Eureka: Project by Jim’s team that trained a five-fingered robot hand to do pen spinning
- MineDojo: A project Jim did when he first got to Nvidia that developed a platform for general-purpose agents in the game of Minecraft. Won the NeurIPS 2022 Outstanding Paper Award
- ADI: Artificial dog intelligence
- Mamba: Selective State Space Models, an alternative architecture to Transformers that Jim is interested in (original paper here)
00:00 Introduction
01:35 Jim’s journey to embodied intelligence
04:53 The GEAR Group
07:32 Three kinds of data for robotics
10:32 A GPT-3 moment for robotics
16:05 Choosing the humanoid robot form factor
19:37 Specialized generalists
21:59 GR00T gets its own chip
23:35 Eureka and Isaac Sim
25:23 Why now for robotics?
28:53 Exploring virtual worlds
36:28 Implications for games
39:13 Is the virtual world in service of the physical world?
42:10 Alternative architectures to Transformers
44:15 Lightning round