

Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a doctorate from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Episodes

May 19, 2020 • 1h 27min
The Lottery Ticket Hypothesis with Jonathan Frankle
Jonathan Frankle, lead author of the lottery ticket hypothesis paper, shares his insights on sparse neural networks and the techniques used to prune them. He delves into the implications of the lottery ticket hypothesis for improving neural network efficiency and discusses related ideas such as linear mode connectivity. Frankle also explores the intersection of AI technology and policy, emphasizing the importance of ethical decision-making in AI development. Listeners will also hear about his journey into deep learning research and the challenges he faced in academia.
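For listeners who want the core idea in code, here is a minimal sketch of one round of lottery-ticket-style magnitude pruning with weight rewinding, assuming PyTorch; the model, the sparsity level, and the commented-out train/retrain steps are placeholders for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def magnitude_prune_masks(model: nn.Module, sparsity: float) -> dict:
    """Keep the largest-magnitude weights in each weight matrix; mask the rest."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases; prune weight matrices only
            continue
        keep = int(param.numel() * (1 - sparsity))
        threshold = param.abs().flatten().kthvalue(param.numel() - keep).values
        masks[name] = (param.abs() > threshold).float()
    return masks

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = {k: v.clone() for k, v in model.state_dict().items()}  # save the init

# train(model)  # hypothetical: train to convergence before pruning
masks = magnitude_prune_masks(model, sparsity=0.8)

# Rewind surviving weights to their original initialization (the "winning ticket").
model.load_state_dict(init_state)
with torch.no_grad():
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name])
# retrain(model)  # hypothetical: retrain, keeping masked weights clamped at zero
```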

May 19, 2020 • 1h 40min
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
The conversation dives into large-scale transfer learning in NLP. Key highlights include the impact of the T5 model and the importance of dataset size and fine-tuning strategies. The trio also explores embodied cognition and meta-learning, pondering the very nature of intelligence. They discuss the evolution of transformers and the intricacies of training paradigms, all while navigating the challenges of benchmarking and chatbot systems. The discussion is packed with insights into advancing AI technologies and their real-world applications.
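As a quick illustration of the text-to-text framing discussed here, the sketch below loads the public t5-small checkpoint via the Hugging Face transformers library; the prompts are illustrative, and a real setup would choose checkpoints and decoding settings to match the task.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Every task is "text in, text out"; the task is selected by a plain-text prefix,
# with no task-specific heads on the model.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

for prompt in [
    "translate English to German: The house is wonderful.",
    "summarize: MLST is a podcast covering current affairs in AI ...",
]:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```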

May 2, 2020 • 1h 15min
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
Aravind Srinivas, a member of technical staff at OpenAI and PhD candidate at Berkeley, dives deep into the CURL paper he co-authored. The approach leverages contrastive unsupervised learning to improve data efficiency in reinforcement learning, nearly matching the performance of state-based methods while learning directly from pixels. The conversation covers the pivotal role of pixel inputs for robotic control, challenges in sample efficiency, and the evolving relationship between unsupervised and supervised learning. Srinivas' insights shed light on where representation learning for RL is headed.
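To make the contrastive objective concrete, here is a minimal sketch of the InfoNCE-style bilinear loss CURL builds on, with random tensors standing in for the query/key encoder outputs; the encoder architectures, image augmentations, and momentum update are omitted.

```python
import torch
import torch.nn.functional as F

def curl_loss(q, k, W):
    """q, k: (batch, dim) embeddings of two augmentations of the same frames;
    W: (dim, dim) learned bilinear matrix. Positives lie on the diagonal."""
    logits = q @ W @ k.t()
    logits = logits - logits.max(dim=1, keepdim=True).values  # numerical stability
    labels = torch.arange(q.size(0))  # row i's positive key is column i
    return F.cross_entropy(logits, labels)

batch, dim = 32, 50
q = torch.randn(batch, dim)                    # query encoder output
k = torch.randn(batch, dim)                    # key (momentum) encoder output
W = torch.randn(dim, dim, requires_grad=True)
curl_loss(q, k, W).backward()
```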

Apr 24, 2020 • 1h 13min
Exploring Open-Ended Algorithms: POET
Mathew Salvaris is a research scientist specializing in computer vision. He dives into the concept of open-ended algorithms, likening their evolution to natural selection. POET-style systems co-evolve agents with the environments they learn in, continually generating new challenges just beyond the agents' current abilities. The conversation explores how such algorithms can reach solutions that direct optimization misses, fostering adaptability and improved performance. Salvaris also discusses the implications for future AI development and the collaborative relationship between humans and machines.
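The sketch below is a deliberately toy rendering of a POET-style outer loop, with scalar "skills" and "difficulties" standing in for neural controllers and terrain parameters; it only illustrates the co-evolution and transfer pattern, not the actual algorithm.

```python
import random

def score(agent, env):
    """How well an agent's skill matches an environment's difficulty."""
    return agent - env

population = [(0.0, 0.1)]  # (agent_skill, env_difficulty) niches
for _ in range(100):
    new_pop = []
    for agent, env in population:
        agent += 0.05 * random.random()  # inner loop: improve the agent in its env
        # When an env is solved, mutate it into a harder child niche ...
        if score(agent, env) > 0.2 and len(population) + len(new_pop) < 8:
            child_env = env + 0.1 * random.random()
            # ... and transfer in the current agent best suited to the child env.
            best = max(population, key=lambda p: score(p[0], child_env))[0]
            new_pop.append((best, child_env))
        new_pop.append((agent, env))
    population = new_pop

print(f"{len(population)} niches; hardest difficulty: "
      f"{max(env for _, env in population):.2f}")
```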


