The Thesis Review

Latest episodes

Feb 5, 2021 • 1h 26min

[18] Eero Simoncelli - Distributed Representation and Analysis of Visual Motion

Eero Simoncelli is a Professor of Neural Science, Mathematics, Data Science, and Psychology at New York University. His research focuses on the representation and analysis of visual information. Eero's PhD thesis is titled "Distributed Representation & Analysis of Visual Motion", which he completed in 1993 at MIT. We discuss his PhD work on optical flow, the ideas and methods that have stayed with him throughout his career, making biological connections with machine learning models, and how Eero's perspective on vision has evolved. Episode notes: https://cs.nyu.edu/~welleck/episode18.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.patreon.com/thesisreview or www.buymeacoffee.com/thesisreview

Jan 22, 2021 • 1h 36min

[17] Paul Middlebrooks - Neuronal Correlates of Meta-Cognition in Primate Frontal Cortex

Paul Middlebrooks is a neuroscientist and host of the Brain Inspired podcast, which explores the intersection of neuroscience and artificial intelligence. Paul's PhD thesis is titled "Neuronal Correlates of Meta-Cognition in Primate Frontal Cortex", which he completed at the University of Pittsburgh in 2011. We discuss Paul's work on meta-cognition - informally, thinking about thinking - then turn to neuroscience for AI and AI for neuroscience. Episode notes: https://cs.nyu.edu/~welleck/episode17.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at https://www.patreon.com/thesisreview

Jan 8, 2021 • 1h 19min

[16] Aaron Courville - A Latent Cause Theory of Classical Conditioning

Aaron Courville is a Professor at the University of Montreal. His research focuses on the development of deep learning models and methods. Aaron's PhD thesis is titled "A Latent Cause Theory of Classical Conditioning", which he completed at Carnegie Mellon University in 2006. We discuss Aaron's work on the latent cause theory during his PhD, talk about how he moved into machine learning and deep learning research, chart a path to today's deep learning methods, and discuss his recent work on systematic generalization in language. Episode notes: https://cs.nyu.edu/~welleck/episode16.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview

Dec 22, 2020 • 1h 7min

[15] Christian Szegedy - Some Applications of the Weighted Combinatorial Laplacian

Christian Szegedy, a Research Scientist at Google, delves into his journey from pure mathematics to groundbreaking machine learning. He shares insights on his PhD work, focusing on the Weighted Combinatorial Laplacian and its surprising applications in chip design. Szegedy explores the philosophical debate of whether mathematics is invented or discovered, and discusses the challenges of implementing mathematical reasoning in AI. His passion for meaningful projects over mere productivity offers inspiration for aspiring researchers.

Dec 10, 2020 • 1h 4min

[14] Been Kim - Interactive and Interpretable Machine Learning Models

Been Kim is a Research Scientist at Google Brain. Her research focuses on designing high-performance machine learning methods that make sense to humans. Been's PhD thesis is titled "Interactive and Interpretable Machine Learning Models for Human Machine Collaboration", which she completed in 2015 at MIT. We discuss her work on interpretability, including the Bayesian Case Model and its interactive version from the thesis, as well as connections to her subsequent work on black-box interpretability methods used in many real-world applications. Episode notes: https://cs.nyu.edu/~welleck/episode14.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview

Nov 26, 2020 • 1h 8min

[13] Adji Bousso Dieng - Deep Probabilistic Graphical Modeling

Adji Bousso Dieng is currently a Research Scientist at Google AI and will be starting as an Assistant Professor at Princeton University in 2021. Her research focuses on combining probabilistic graphical modeling and deep learning to design models for structured high-dimensional data. Her PhD thesis is titled "Deep Probabilistic Graphical Modeling", which she completed in 2020 at Columbia University. We discuss her work on combining graphical models and deep learning, including models and algorithms, the value of interpretability and probabilistic models, as well as applications and making an impact through research. Episode notes: https://cs.nyu.edu/~welleck/episode13.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview

Nov 12, 2020 • 1h 9min

[12] Martha White - Regularized Factor Models

Martha White is an Associate Professor at the University of Alberta. Her research focuses on developing reinforcement learning and representation learning techniques for adaptive, autonomous agents learning on streams of data. Her PhD thesis is titled "Regularized Factor Models", which she completed in 2014 at the University of Alberta. We discuss the regularized factor model framework, which unifies many machine learning methods and led to new algorithms and applications. We talk about sparsity and how it also appears in her later work, as well as the common threads between her thesis work and her research in reinforcement learning. Episode notes: https://cs.nyu.edu/~welleck/episode12.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview

Oct 29, 2020 • 1h 20min

[11] Jacob Andreas - Learning from Language

Jacob Andreas is an Assistant Professor at MIT, where he leads the language and intelligence group, focusing on language as a communicative and computational tool. His PhD thesis is titled "Learning from Language", which he completed in 2018 at UC Berkeley. We discuss compositionality and neural module networks, the intersection of RL and language, and translating a neural communication channel called 'neuralese', which can lead to more interpretable machine learning models. Episode notes: https://cs.nyu.edu/~welleck/episode11.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview

Oct 15, 2020 • 52min

[10] Chelsea Finn - Learning to Learn with Gradients

Chelsea Finn is an Assistant Professor at Stanford University, where she leads the IRIS lab, which studies intelligence through robotic interaction at scale. Her PhD thesis is titled "Learning to Learn with Gradients", which she completed in 2018 at UC Berkeley. Chelsea received the prestigious ACM Doctoral Dissertation Award for the thesis. We discuss machine learning for robotics, focusing on learning-to-learn - also known as meta-learning - and her work on the MAML algorithm during her PhD, as well as the future of robotics research. Episode notes: https://cs.nyu.edu/~welleck/episode10.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview

Oct 1, 2020 • 1h 21min

[09] Kenneth Stanley - Efficient Evolution of Neural Networks through Complexification

Kenneth Stanley is a researcher at OpenAI, where he leads the team on Open-endedness. Previously, he was a Professor of Computer Science at the University of Central Florida, co-founder of Geometric Intelligence, and head of Core AI research at Uber AI Labs. His PhD thesis is titled "Efficient Evolution of Neural Networks through Complexification", which he completed in 2004 at the University of Texas. We talk about evolving increasingly complex structures and how this led to the NEAT algorithm that he developed during his PhD. We discuss his research directions related to open-endedness, how the field has changed over time, and how he currently views algorithms that were developed over a decade ago. Episode notes: https://cs.nyu.edu/~welleck/episode9.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview
