Generally Intelligent

Kanjun Qiu
Mar 1, 2023 • 1h 35min

Sergey Levine, UC Berkeley: The bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems

Sergey Levine, an assistant professor of EECS at UC Berkeley, is one of the pioneers of modern deep reinforcement learning. His research focuses on developing general-purpose algorithms that enable autonomous agents to learn to solve any task. In this episode, we talk about the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems.
Feb 9, 2023 • 1h 45min

Noam Brown, FAIR: Achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time

Noam Brown is a research scientist at FAIR. During his Ph.D. at CMU, he made the first AI to defeat top humans in No Limit Texas Hold 'Em poker. More recently, he was part of the team that built CICERO, which achieved human-level performance in Diplomacy. In this episode, we extensively discuss the ideas underlying both projects, the power of spending compute at inference time, and much more.
Jan 17, 2023 • 1h 44min

Sugandha Sharma, MIT: Biologically inspired neural architectures, how memories can be implemented, and control theory

Sugandha Sharma is a Ph.D. candidate at MIT advised by Prof. Ila Fiete and Prof. Josh Tenenbaum. She explores the computational and theoretical principles underlying higher cognition in the brain, constructing neuro-inspired models and mathematical tools to discover how the brain navigates the world and how to build memory mechanisms that don't exhibit catastrophic forgetting. In this episode, we chat about biologically inspired neural architectures, how memory could be implemented, why control theory is underrated, and much more.
Dec 16, 2022 • 1h 49min

Nicklas Hansen, UCSD: Long-horizon planning and why algorithms don't drive research progress

Nicklas Hansen is a Ph.D. student at UC San Diego advised by Prof. Xiaolong Wang and Prof. Hao Su, and a student researcher at Meta AI. His research focuses on developing machine learning systems, specifically neural agents, that can learn, generalize, and adapt over their lifetime. In this episode, we talk about long-horizon planning, adapting reinforcement learning policies during deployment, why algorithms don't drive research progress, and much more!
Dec 6, 2022 • 1h 57min

Jack Parker-Holder, DeepMind: Open-endedness, evolving agents and environments, online adaptation, and offline learning

Jack Parker-Holder recently joined DeepMind after completing his Ph.D. with Stephen Roberts at Oxford. Jack is interested in using reinforcement learning to train generally capable agents, especially via an open-ended learning process in which environments adapt to constantly challenge the agent's capabilities. Before his Ph.D., Jack worked for seven years in finance at JP Morgan. In this episode, we chat about open-endedness, evolving agents and environments, online adaptation, offline learning with world models, and much more.
Nov 22, 2022 • 1h 53min

Celeste Kidd, UC Berkeley: Attention and curiosity, how we form beliefs, and where certainty comes from

Celeste Kidd is a professor of psychology at UC Berkeley. Her lab studies the processes involved in knowledge acquisition; essentially, how we form our beliefs over time and what allows us to select a subset of all the information we encounter in the world to form those beliefs. In this episode, we chat about attention and curiosity, beliefs and expectations, where certainty comes from, and much more.
Nov 17, 2022 • 1h 38min

Archit Sharma, Stanford: Unsupervised and autonomous reinforcement learning

Archit Sharma is a Ph.D. student at Stanford advised by Chelsea Finn. His recent work focuses on autonomous deep reinforcement learning—that is, getting real-world robots to learn to handle unseen situations without human intervention. Prior to this, he was an AI resident at Google Brain and interned with Yoshua Bengio at Mila. In this episode, we chat about unsupervised, non-episodic, autonomous reinforcement learning and much more.
Nov 3, 2022 • 40min

Chelsea Finn, Stanford: The biggest bottlenecks in robotics and reinforcement learning

Chelsea Finn is an Assistant Professor at Stanford and part of the Google Brain team. She's interested in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction at scale. In this episode, we chat about some of the biggest bottlenecks in RL and robotics—including distribution shifts, Sim2Real, and sample efficiency—as well as what makes a great researcher, why she aspires to build a robot that can make cereal, and much more.
Oct 14, 2022 • 1h 47min

Hattie Zhou, Mila: Supermasks, iterative learning, and fortuitous forgetting

Hattie Zhou is a Ph.D. student at Mila working with Hugo Larochelle and Aaron Courville. Her research focuses on understanding how and why neural networks work, starting with deconstructing why lottery tickets work and most recently exploring how forgetting may be fundamental to learning. Prior to Mila, she was a data scientist at Uber and did research with Uber AI Labs. In this episode, we chat about supermasks and sparsity, coherent gradients, iterative learning, fortuitous forgetting, and much more.
Jul 19, 2022 • 1h 54min

Minqi Jiang, UCL: Environment and curriculum design for general RL agents

Minqi Jiang is a Ph.D. student at UCL and FAIR, advised by Tim Rocktäschel and Edward Grefenstette. Minqi is interested in how simulators can enable AI agents to learn useful behaviors that generalize to new settings. He is especially focused on problems at the intersection of generalization, human-AI coordination, and open-ended systems. In this episode, we chat about environment and curriculum design for reinforcement learning, model-based RL, emergent communication, open-endedness, and artificial life.
