Generally Intelligent

Kanjun Qiu
Jul 11, 2022 • 2h 1min

Oleh Rybkin, UPenn: Exploration and planning with world models

Oleh Rybkin is a Ph.D. student at the University of Pennsylvania and a student researcher at Google. He is advised by Kostas Daniilidis and Sergey Levine. Oleh's research focuses on reinforcement learning, particularly unsupervised and model-based RL in the visual domain. In this episode, we discuss agents that explore and plan (and do yoga), how to learn world models from video, what's missing from current RL research, and much more!
Feb 28, 2022 • 1h 59min

Andrew Lampinen, DeepMind: Symbolic behavior, mental time travel, and insights from psychology

Andrew Lampinen is a Research Scientist at DeepMind. He previously completed his Ph.D. in cognitive psychology at Stanford. In this episode, we discuss generalization and transfer learning, how to think about language and symbols, what AI can learn from psychology (and vice versa), mental time travel, and the need for more human-like tasks. [Podcast errata: Susan Goldin-Meadow accidentally referred to as Susan Gelman @00:30:34] 
Dec 21, 2021 • 1h 25min

Yilun Du, MIT: Energy-based models, implicit functions, and modularity

Yilun Du is a graduate student at MIT advised by Professors Leslie Kaelbling, Tomas Lozano-Perez, and Josh Tenenbaum. He's interested in building robots that can understand the world like humans and construct world representations that enable task planning over long horizons.
Oct 15, 2021 • 1h 26min

Martín Arjovsky, INRIA: Benchmarks for robustness and geometric information theory

Martín Arjovsky did his Ph.D. at NYU with Léon Bottou. Some of his well-known works include the Wasserstein GAN and a paradigm called Invariant Risk Minimization. In this episode, we discuss out-of-distribution generalization, geometric information theory, and the importance of good benchmarks.
Sep 24, 2021 • 1h 27min

Yash Sharma, MPI-IS: Generalizability, causality, and disentanglement

Yash Sharma is a Ph.D. student at the International Max Planck Research School for Intelligent Systems. He previously studied electrical engineering at Cooper Union and has spent time at Borealis AI and IBM Research. Yash’s early work was on adversarial examples and his current research interests span a variety of topics in representation disentanglement. In this episode, we discuss robustness to adversarial examples, causality vs. correlation in data, and how to make deep learning models generalize better.
Sep 10, 2021 • 1h 21min

Jonathan Frankle, MIT: The lottery ticket hypothesis and the science of deep learning

Jonathan Frankle is finishing his Ph.D. at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks.

Highlights from our conversation:
🕸 "Why is sparsity everywhere? This isn't an accident."
🤖 "If I gave you 500 GPUs, could you actually keep those GPUs busy?"
📊 "In general, I think we have a crisis of science in ML."
Jun 18, 2021 • 60min

Jacob Steinhardt, UC Berkeley: Machine learning safety, alignment and measurement

Jacob Steinhardt is an assistant professor at UC Berkeley. His main research interest is in designing machine learning systems that are reliable and aligned with human values. Some of his specific research directions include robustness, reward specification and reward hacking, and scalable alignment.

Highlights:
📜 "Test accuracy is a very limited metric."
👨‍👩‍👧‍👦 "You might not be able to get lots of feedback on human values."
📊 "I'm interested in measuring the progress in AI capabilities."
May 20, 2021 • 1h 10min

Vincent Sitzmann, MIT: Neural scene representations for computer vision and more general AI

Vincent Sitzmann, a postdoc at MIT, specializes in neural scene representations for computer vision. He discusses the crucial shift from 2D to 3D representations in AI, emphasizing how our understanding of vision should mirror the 3D nature of the world. Topics include the complexities of neural networks, the relationship between human perception and AI, and advancements in training techniques like self-supervised learning. Sitzmann also explores innovative applications of implicit representations and shares insights on effective research strategies for budding scientists.
May 12, 2021 • 1h 32min

Dylan Hadfield-Menell, UC Berkeley/MIT: The value alignment problem in AI

Dylan Hadfield-Menell recently finished his Ph.D. at UC Berkeley and is starting as an assistant professor at MIT. He works on the problem of designing AI algorithms that pursue the intended goals of their users, designers, and society in general. This is known as the value alignment problem.

Highlights from our conversation:
👨‍👩‍👧‍👦 How to align AI to human values
📉 Consequences of misaligned AI -> bias & misdirected optimization
📱 Better AI recommender systems
Apr 2, 2021 • 1h 12min

Drew Linsley, Brown: Inductive biases for vision and generalization

Drew Linsley is a Paul J. Salem senior research associate at Brown, specializing in computational models of the visual system. He discusses how neuroscience can enhance AI, particularly in machine vision, by integrating neural-inspired inductive biases. The conversation delves into challenges in panoptic segmentation and the limitations of current models like feedforward networks. Linsley also highlights the importance of theoretical innovation coupled with empirical validation, alongside the evolving role of motion recognition in neural networks.
