
The Thesis Review

Latest episodes

Sep 25, 2020 • 1h 1min

[08] He He - Sequential Decisions and Predictions in NLP

He He is an Assistant Professor at New York University. Her research focuses on enabling reliable communication in natural language between machines and humans, including topics in text generation, robust language understanding, and dialogue systems. Her PhD thesis is titled "Sequential Decisions and Predictions in NLP", which she completed in 2016 at the University of Maryland. We talk about the intersection of language with imitation learning and reinforcement learning, her work in the thesis on opponent modeling and simultaneous translation, and how it relates to recent work on generation and robustness. Episode notes: https://cs.nyu.edu/~welleck/episode8.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview
Sep 11, 2020 • 1h 4min

[07] John Schulman - Optimizing Expectations: From Deep RL to Stochastic Computation Graphs

John Schulman, a Research Scientist and co-founder of OpenAI, co-leads efforts in reinforcement learning, focusing on algorithms that learn through trial and error. He shares insights on the evolution from TRPO to PPO and the role of stochastic computation graphs. Schulman discusses the challenges of generalization in RL and how OpenAI Five leveraged these techniques for its Dota 2 victories. The conversation also touches on navigating AI alignment challenges and the significance of integrating human intuition into machine learning.
Aug 28, 2020 • 1h 6min

[06] Yoon Kim - Deep Latent Variable Models of Natural Language

Yoon Kim is currently a Research Scientist at the MIT-IBM Watson AI Lab, and will be joining MIT as an assistant professor in 2021. Yoon’s research focuses on machine learning and natural language processing. His PhD thesis is titled "Deep Latent Variable Models of Natural Language", which he completed in 2020 at Harvard University. We discuss his work on uncovering latent structure in natural language, including continuous vector representations, tree structures, and grammars. We cover learning and variational inference methods that he developed during his PhD, and he offers a look at where latent variable models will be heading in the future. Episode notes: https://cs.nyu.edu/~welleck/episode6.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview
Aug 14, 2020 • 1h 12min

[05] Julian Togelius - Computational Intelligence and Games

Julian Togelius is an Associate Professor at New York University, where he co-directs the NYU Game Innovation Lab. His research is at the intersection of computational intelligence and computer games. His PhD thesis is titled "Optimization, Imitation, and Innovation: Computational Intelligence and Games", which he completed in 2007. We cover his work in the thesis on AI for games and games for AI, and how it connects to his recent work on procedural content generation. Episode notes: https://cs.nyu.edu/~welleck/episode5.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at https://www.buymeacoffee.com/thesisreview
Jul 31, 2020 • 1h 45min

[04] Sebastian Nowozin - Learning with Structured Data: Applications to Computer Vision

Sebastian Nowozin, a researcher at Microsoft Research Cambridge, delves into the fascinating world of probabilistic deep learning and its ties to computer vision. He discusses the significance of structured data and innovative loss functions in image analysis. Listeners will learn about the evolution from traditional methods in object recognition to advanced, machine learning-driven techniques. The conversation also covers the challenges and insights in Bayesian deep learning, highlighting the importance of stability in models and sound programming practices in the field.
Jul 17, 2020 • 1h 25min

[03] Sebastian Ruder - Neural Transfer Learning for Natural Language Processing

Sebastian Ruder is currently a Research Scientist at DeepMind. His research focuses on transfer learning for natural language processing, and on making machine learning and NLP more accessible. His PhD thesis is titled "Neural Transfer Learning for Natural Language Processing", which he completed in 2019. We cover transfer learning from philosophical and technical perspectives, and talk about its societal implications, focusing on his work on sequential transfer learning and cross-lingual learning. Episode notes: https://cs.nyu.edu/~welleck/episode3.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Jul 3, 2020 • 1h 16min

[02] Colin Raffel - Learning-Based Methods for Comparing Sequences

Colin Raffel is currently a Senior Research Scientist at Google Brain, and soon to be an assistant professor at the University of North Carolina. His recent work focuses on transfer learning and learning from limited labels. His thesis is titled "Learning-Based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching", which we discuss along with the connections to his later work, and plans for the future. Episode notes: https://cs.nyu.edu/~welleck/episode2.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Jun 18, 2020 • 58min

[01] Gus Xia - Expressive Collaborative Music Performance via Machine Learning

Gus Xia is an assistant professor at New York University Shanghai. His research explores machine learning for music, with a goal of building intelligent systems that understand and extend musical creativity and expression. His PhD thesis is titled "Expressive Collaborative Music Performance via Machine Learning", which we discuss in depth along with his ongoing research at the NYU Shanghai Music X Lab. Gus Xia's homepage: https://www.cs.cmu.edu/~gxia/ Thesis: http://reports-archive.adm.cs.cmu.edu/anon/ml2016/CMU-ML-16-103.pdf Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Jun 13, 2020 • 2min

[00] The Thesis Review Podcast - Introduction

A brief introduction to The Thesis Review podcast by host Sean Welleck.
