
Generally Intelligent
Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
Latest episodes

Feb 9, 2023 • 1h 45min
Episode 27: Noam Brown, FAIR, on achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time
Noam Brown is a research scientist at FAIR. During his Ph.D. at CMU, he made the first AI to defeat top humans in No Limit Texas Hold 'Em poker. More recently, he was part of the team that built CICERO which achieved human-level performance in Diplomacy. In this episode, we extensively discuss ideas underlying both projects, the power of spending compute at inference time, and much more.

Jan 17, 2023 • 1h 44min
Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory
Sugandha Sharma is a Ph.D. candidate at MIT advised by Prof. Ila Fiete and Prof. Josh Tenenbaum. She explores the computational and theoretical principles underlying higher cognition in the brain by constructing neuro-inspired models and mathematical tools to discover how the brain navigates the world, or how to construct memory mechanisms that don’t exhibit catastrophic forgetting. In this episode, we chat about biologically inspired neural architectures, how memory could be implemented, why control theory is underrated and much more.

Dec 16, 2022 • 1h 49min
Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress
Nicklas Hansen is a Ph.D. student at UC San Diego advised by Prof. Xiaolong Wang and Prof. Hao Su. He is also a student researcher at Meta AI. Nicklas' research interests involve developing machine learning systems, specifically neural agents, that can learn, generalize, and adapt over their lifetime. In this episode, we talk about long-horizon planning, adapting reinforcement learning policies during deployment, why algorithms don't drive research progress, and much more!

Dec 6, 2022 • 1h 57min
Episode 24: Jack Parker-Holder, DeepMind, on open-endedness, evolving agents and environments, online adaptation, and offline learning
Jack Parker-Holder recently joined DeepMind after his Ph.D. with Stephen Roberts at Oxford. Jack is interested in using reinforcement learning to train generally capable agents, especially via an open-ended learning process where environments can adapt to constantly challenge the agent's capabilities. Before doing his Ph.D., Jack worked for 7 years in finance at JP Morgan. In this episode, we chat about open-endedness, evolving agents and environments, online adaptation, offline learning with world models, and much more.

Nov 22, 2022 • 1h 53min
Episode 23: Celeste Kidd, UC Berkeley, on attention and curiosity, how we form beliefs, and where certainty comes from
Celeste Kidd is a professor of psychology at UC Berkeley. Her lab studies the processes involved in knowledge acquisition; essentially, how we form our beliefs over time and what allows us to select a subset of all the information we encounter in the world to form those beliefs. In this episode, we chat about attention and curiosity, beliefs and expectations, where certainty comes from, and much more.

Nov 17, 2022 • 1h 38min
Episode 22: Archit Sharma, Stanford, on unsupervised and autonomous reinforcement learning
Archit Sharma is a Ph.D. student at Stanford advised by Chelsea Finn. His recent work is focused on autonomous deep reinforcement learning—that is, getting real-world robots to learn to deal with unseen situations without human intervention. Prior to this, he was an AI resident at Google Brain and interned with Yoshua Bengio at Mila. In this episode, we chat about unsupervised, non-episodic, autonomous reinforcement learning and much more.

Nov 3, 2022 • 40min
Episode 21: Chelsea Finn, Stanford, on the biggest bottlenecks in robotics and reinforcement learning
Chelsea Finn is an Assistant Professor at Stanford and part of the Google Brain team. She's interested in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction at scale. In this episode, we chat about some of the biggest bottlenecks in RL and robotics—including distribution shifts, Sim2Real, and sample efficiency—as well as what makes a great researcher, why she aspires to build a robot that can make cereal, and much more.

Oct 14, 2022 • 1h 47min
Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting
Hattie Zhou is a Ph.D. student at Mila working with Hugo Larochelle and Aaron Courville. Her research focuses on understanding how and why neural networks work, starting with deconstructing why lottery tickets work and most recently exploring how forgetting may be fundamental to learning. Prior to Mila, she was a data scientist at Uber and did research with Uber AI Labs. In this episode, we chat about supermasks and sparsity, coherent gradients, iterative learning, fortuitous forgetting, and much more.

Jul 19, 2022 • 1h 54min
Episode 19: Minqi Jiang, UCL, on environment and curriculum design for general RL agents
Minqi Jiang is a Ph.D. student at UCL and FAIR, advised by Tim Rocktäschel and Edward Grefenstette. Minqi is interested in how simulators can enable AI agents to learn useful behaviors that generalize to new settings. He is especially focused on problems at the intersection of generalization, human-AI coordination, and open-ended systems. In this episode, we chat about environment and curriculum design for reinforcement learning, model-based RL, emergent communication, open-endedness, and artificial life.

Jul 11, 2022 • 2h 1min
Episode 18: Oleh Rybkin, UPenn, on exploration and planning with world models
Oleh Rybkin is a Ph.D. student at the University of Pennsylvania and a student researcher at Google. He is advised by Kostas Daniilidis and Sergey Levine. Oleh's research focus is on reinforcement learning, particularly unsupervised and model-based RL in the visual domain. In this episode, we discuss agents that explore and plan (and do yoga), how to learn world models from video, what's missing from current RL research, and much more!