
The Thesis Review
Each episode of The Thesis Review is a conversation centered around a researcher's PhD thesis, giving insight into their history, revisiting older ideas, and providing a valuable perspective on how their research has evolved (or stayed the same) since then.
Latest episodes

Feb 5, 2021 • 1h 26min
[18] Eero Simoncelli - Distributed Representation and Analysis of Visual Motion
Eero Simoncelli is a Professor of Neural Science, Mathematics, Data Science, and Psychology at New York University. His research focuses on representation and analysis of visual information.
Eero's PhD thesis is titled "Distributed Representation and Analysis of Visual Motion", which he completed in 1993 at MIT. We discuss his PhD work on optical flow, the ideas and methods that have stayed with him throughout his career, making biological connections with machine learning models, and how Eero's perspective on vision has evolved.
Episode notes: https://cs.nyu.edu/~welleck/episode18.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.patreon.com/thesisreview or www.buymeacoffee.com/thesisreview

Jan 22, 2021 • 1h 36min
[17] Paul Middlebrooks - Neuronal Correlates of Meta-Cognition in Primate Frontal Cortex
In this engaging discussion, Paul Middlebrooks, a neuroscientist and the host of the Brain Inspired podcast, delves into his PhD research on meta-cognition in primate frontal cortex. He explores the intricate connections between consciousness and decision-making, sharing insights on the challenges of studying these processes in both monkeys and humans. The conversation also highlights the evolving relationship between neuroscience and artificial intelligence, revealing how each field can inform and inspire the other. Plus, Paul shares his journey from PhD to podcasting, emphasizing the importance of taking action in research.

Jan 8, 2021 • 1h 19min
[16] Aaron Courville - A Latent Cause Theory of Classical Conditioning
Aaron Courville, a Professor at the University of Montreal, dives into his PhD thesis on latent cause theory in classical conditioning. He explores the pitfalls of complexity in hypothesis testing, advocating for simplicity. Courville shares his journey from Cornwall to deep learning, discussing how cognitive frameworks shift our understanding of reinforcement. The conversation also touches on generative models and their evolution, alongside the intersection of language and machine learning dynamics, emphasizing the importance of thorough research during one's PhD journey.

Dec 22, 2020 • 1h 7min
[15] Christian Szegedy - Some Applications of the Weighted Combinatorial Laplacian
Christian Szegedy, a Research Scientist at Google, delves into his journey from pure mathematics to groundbreaking machine learning. He shares insights on his PhD work, focusing on the Weighted Combinatorial Laplacian and its surprising applications in chip design. Szegedy explores the philosophical debate of whether mathematics is invented or discovered, and discusses the challenges of implementing mathematical reasoning in AI. His passion for meaningful projects over mere productivity offers inspiration for aspiring researchers.

Dec 10, 2020 • 1h 4min
[14] Been Kim - Interactive and Interpretable Machine Learning Models
Been Kim is a Research Scientist at Google Brain. Her research focuses on designing high-performance machine learning methods that make sense to humans.
Been's PhD thesis is titled "Interactive and Interpretable Machine Learning Models for Human Machine Collaboration", which she completed in 2015 at MIT. We discuss her work on interpretability, including her work in the thesis on the Bayesian Case Model and its interactive version, as well as connections with her subsequent work on black-box interpretability methods that are used in many real-world applications.
Episode notes: https://cs.nyu.edu/~welleck/episode14.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.buymeacoffee.com/thesisreview

Nov 26, 2020 • 1h 8min
[13] Adji Bousso Dieng - Deep Probabilistic Graphical Modeling
Adji Bousso Dieng is currently a Research Scientist at Google AI, and will be starting as an Assistant Professor at Princeton University in 2021. Her research focuses on combining probabilistic graphical modeling and deep learning to design models for structured high-dimensional data.
Her PhD thesis is titled "Deep Probabilistic Graphical Modeling", which she completed in 2020 at Columbia University. We discuss her work on combining graphical models and deep learning, including models and algorithms, the value of interpretability and probabilistic models, as well as applications and making an impact through research.
Episode notes: https://cs.nyu.edu/~welleck/episode13.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.buymeacoffee.com/thesisreview

Nov 12, 2020 • 1h 9min
[12] Martha White - Regularized Factor Models
Martha White is an Associate Professor at the University of Alberta. Her research focuses on developing reinforcement learning and representation learning techniques for adaptive, autonomous agents learning on streams of data.
Her PhD thesis is titled "Regularized Factor Models", which she completed in 2014 at the University of Alberta. We discuss the regularized factor model framework, which unifies many machine learning methods and led to new algorithms and applications. We talk about sparsity and how it also appears in her later work, as well as the common threads between her thesis work and her research in reinforcement learning.
Episode notes: https://cs.nyu.edu/~welleck/episode12.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.buymeacoffee.com/thesisreview

Oct 29, 2020 • 1h 20min
[11] Jacob Andreas - Learning from Language
Jacob Andreas is an Assistant Professor at MIT, where he leads the language and intelligence group, focusing on language as a communicative and computational tool.
His PhD thesis is titled "Learning from Language", which he completed in 2018 at UC Berkeley. We discuss compositionality and neural module networks, the intersection of RL and language, translating a neural communication channel called 'neuralese', and how this can lead to more interpretable machine learning models.
Episode notes: https://cs.nyu.edu/~welleck/episode11.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.buymeacoffee.com/thesisreview

Oct 15, 2020 • 52min
[10] Chelsea Finn - Learning to Learn with Gradients
Chelsea Finn is an Assistant Professor at Stanford University, where she leads the IRIS lab that studies intelligence through robotic interaction at scale.
Her PhD thesis is titled "Learning to Learn with Gradients", which she completed in 2018 at UC Berkeley. Chelsea received the prestigious ACM Doctoral Dissertation Award for her work in the thesis. We discuss machine learning for robotics, focusing on learning-to-learn - also known as meta-learning - and her work on the MAML algorithm during her PhD, as well as the future of robotics research.
Episode notes: https://cs.nyu.edu/~welleck/episode10.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.buymeacoffee.com/thesisreview

Oct 1, 2020 • 1h 21min
[09] Kenneth Stanley - Efficient Evolution of Neural Networks through Complexification
Kenneth Stanley, a leading researcher at OpenAI and former AI professor, dives into the evolution of neural networks through complexification. He explains his NEAT algorithm, which enhances neural architectures alongside weights, revealing its parallels to human cognitive development. Stanley shares insights on open-endedness in AI, contrasting traditional methods with evolutionary approaches. He also discusses innovative concepts like 'historical markings' and the future of procedural content generation, emphasizing the importance of creativity and equitable access in AI research.