
Generally Intelligent

Latest episodes

Feb 28, 2022 • 1h 59min

Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

Andrew Lampinen is a Research Scientist at DeepMind. He previously completed his Ph.D. in cognitive psychology at Stanford. In this episode, we discuss generalization and transfer learning, how to think about language and symbols, what AI can learn from psychology (and vice versa), mental time travel, and the need for more human-like tasks. [Podcast errata: Susan Goldin-Meadow accidentally referred to as Susan Gelman @00:30:34] 
Dec 21, 2021 • 1h 25min

Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity

Yilun Du is a graduate student at MIT advised by Professors Leslie Kaelbling, Tomas Lozano-Perez, and Josh Tenenbaum. He's interested in building robots that can understand the world like humans and construct world representations that enable task planning over long horizons.
Oct 15, 2021 • 1h 26min

Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory

Martín Arjovsky did his Ph.D. at NYU with Leon Bottou. Some of his well-known works include the Wasserstein GAN and a paradigm called Invariant Risk Minimization. In this episode, we discuss out-of-distribution generalization, geometric information theory, and the importance of good benchmarks.
Sep 24, 2021 • 1h 27min

Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement

Yash Sharma is a Ph.D. student at the International Max Planck Research School for Intelligent Systems. He previously studied electrical engineering at Cooper Union and has spent time at Borealis AI and IBM Research. Yash’s early work was on adversarial examples, and his current research interests span a variety of topics in representation disentanglement. In this episode, we discuss robustness to adversarial examples, causality vs. correlation in data, and how to make deep learning models generalize better.
Sep 10, 2021 • 1h 21min

Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

Jonathan Frankle is finishing his PhD at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks.

Highlights from our conversation:
🕸 "Why is sparsity everywhere? This isn't an accident."
🤖 "If I gave you 500 GPUs, could you actually keep those GPUs busy?"
📊 "In general, I think we have a crisis of science in ML."
Jun 18, 2021 • 60min

Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement

Jacob Steinhardt is an assistant professor at UC Berkeley. His main research interest is designing machine learning systems that are reliable and aligned with human values. Some of his specific research directions include robustness, reward specification and reward hacking, and scalable alignment.

Highlights:
📜 “Test accuracy is a very limited metric.”
👨‍👩‍👧‍👦 “You might not be able to get lots of feedback on human values.”
📊 “I’m interested in measuring the progress in AI capabilities.”
May 20, 2021 • 1h 10min

Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI

Vincent Sitzmann, a postdoc at MIT, specializes in neural scene representations for computer vision. He discusses the crucial shift from 2D to 3D representations in AI, emphasizing how our understanding of vision should mirror the 3D nature of the world. Topics include the complexities of neural networks, the relationship between human perception and AI, and advancements in training techniques like self-supervised learning. Sitzmann also explores innovative applications of implicit representations and shares insights on effective research strategies for budding scientists.
May 12, 2021 • 1h 32min

Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI

Dylan Hadfield-Menell recently finished his PhD at UC Berkeley and is starting as an assistant professor at MIT. He works on the problem of designing AI algorithms that pursue the intended goals of their users, designers, and society in general. This is known as the value alignment problem.

Highlights from our conversation:
👨‍👩‍👧‍👦 How to align AI to human values
📉 Consequences of misaligned AI -> bias & misdirected optimization
📱 Better AI recommender systems
Apr 2, 2021 • 1h 12min

Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization

Drew Linsley is a Paul J. Salem senior research associate at Brown, specializing in computational models of the visual system. He discusses how neuroscience can enhance AI, particularly in machine vision, by integrating neural-inspired inductive biases. The conversation delves into challenges in panoptic segmentation and the limitations of current models like feedforward networks. Linsley also highlights the importance of theoretical innovation coupled with empirical validation, alongside the evolving role of motion recognition in neural networks.
Mar 27, 2021 • 1h 9min

Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations

Giancarlo Kerg is a PhD student at Mila, supervised by Yoshua Bengio and Guillaume Lajoie. He is working on out-of-distribution generalization and modularity in memory-augmented neural networks.

Highlights from our conversation:
🧮 Pure math foundations as an approach to progress and structural understanding in deep learning research
🧠 How a formal proof that self-attention mitigates vanishing gradients when capturing long-term dependencies in RNNs led to a relevancy screening mechanism resembling human memory consolidation
🎯 Out-of-distribution generalization through modularity and inductive biases
