Machine Learning Street Talk (MLST)

May 19, 2020 • 1h 27min

The Lottery Ticket Hypothesis with Jonathan Frankle

Jonathan Frankle, author of The Lottery Ticket Hypothesis, shares his insights on sparse neural networks and pruning techniques. He delves into the implications of the lottery ticket hypothesis for improving neural network efficiency and discusses related findings such as linear mode connectivity. Frankle also explores the intersection of AI technology and policy, emphasizing the importance of ethical decision-making in AI development. Listeners will appreciate his journey in deep learning research and the challenges faced in academia.
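
For listeners who want the core idea in code, below is a minimal sketch of the iterative magnitude pruning procedure the episode revolves around: train the network, prune its smallest-magnitude weights, rewind the survivors to their original initialization, and repeat. The `train_fn` callable, the pruning fraction, and the round count are illustrative placeholders, not Frankle's exact experimental setup.

```python
# Minimal sketch of iterative magnitude pruning with weight rewinding,
# assuming a user-supplied train_fn(weights) -> trained weights of the same shape.
import numpy as np

def find_winning_ticket(init_weights, train_fn, rounds=3, prune_frac=0.2):
    """Train, prune the smallest surviving weights, rewind to init, repeat."""
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        trained = train_fn(np.where(mask, init_weights, 0.0))          # train the masked network
        threshold = np.quantile(np.abs(trained[mask]), prune_frac)     # cutoff among surviving weights
        mask &= np.abs(trained) > threshold                            # drop the lowest-magnitude fraction
    # The "winning ticket": the original initialization under the final sparse mask.
    return np.where(mask, init_weights, 0.0), mask
```
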
May 19, 2020 • 1h 40min

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

The conversation dives into the fascinating world of large-scale transfer learning in NLP. Key highlights include the innovative T5 model's impact and the importance of dataset size and fine-tuning strategies. The trio also explores embodied cognition and meta-learning, pondering the very nature of intelligence. They discuss the evolution of transformers and the intricacies of training paradigms, all while navigating the challenges of benchmarking and chatbot systems. This lively discussion is packed with insights into advancing AI technologies and their real-world applications.
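
As a quick illustration of the text-to-text framing the episode discusses, every task (translation, classification, summarization) is cast as mapping one input string to one output string, so a single model and loss cover them all. The pairs below are invented for illustration and only loosely follow the paper's task-prefix style.

```python
# Toy (input, target) pairs in a text-to-text style; the prompts and targets
# here are invented illustrations, not examples taken from the T5 paper.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("summarize: Heavy rain flooded several streets downtown on Monday, "
     "closing schools and delaying trains.", "Downtown flooding closes schools."),
]
for source, target in examples:
    print(f"{source}\n  -> {target}\n")
```
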
May 2, 2020 • 1h 15min

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

Aravind Srinivas, a technical staff member at OpenAI and PhD candidate at Berkeley, dives deep into the CURL paper he co-authored. This approach leverages contrastive unsupervised learning to improve data efficiency in reinforcement learning, nearly matching the performance of traditional methods. The conversation covers the pivotal role of pixel inputs for robotic control, challenges in sample efficiency, and the evolving dynamics between unsupervised and supervised learning. Srinivas's insights shed light on the future of machine learning.
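
For a sense of the mechanics, here is a rough numpy sketch of the contrastive (InfoNCE-style) objective that CURL builds on: two augmented crops of the same observation are encoded as a query and a key, and the query must identify its matching key among the other keys in the batch. The encoders are omitted and the bilinear matrix `W` is shown abstractly; treat this as an assumption-laden sketch, not the paper's exact implementation.

```python
# Rough sketch of an InfoNCE-style contrastive loss over query/key encodings;
# queries and keys are assumed to come from two augmentations of the same batch.
import numpy as np

def infonce_loss(queries, keys, W):
    """queries, keys: (batch, dim) arrays; W: (dim, dim) bilinear similarity matrix."""
    logits = queries @ W @ keys.T                       # (batch, batch) similarity scores
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                 # positive pairs lie on the diagonal
```
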
Apr 24, 2020 • 1h 13min

Exploring Open-Ended Algorithms: POET

Mathew Salvaris is a research scientist specializing in computer vision. He dives into the revolutionary concept of open-ended algorithms, likening their evolution to natural selection. These AI-generating algorithms autonomously create their own learning pathways, presenting increasingly complex challenges. The conversation explores how these algorithms can lead to innovative solutions beyond traditional methods, fostering adaptability and improved performance. Excitingly, Salvaris also discusses the potential implications for future AI development and the collaborative relationship between humans and machines.
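
To make the open-endedness concrete, the sketch below caricatures one iteration of a POET-style loop: a population of paired (environment, agent) entries is optimized, environments are occasionally mutated to spawn harder variants, and agents are transferred to environments where they score better. Every callable here (`mutate_env`, `optimize`, `score`) is a placeholder, and the real algorithm adds minimal-criterion and novelty checks that are omitted for brevity.

```python
# Caricature of one POET-style iteration over a population of (env, agent) pairs;
# mutate_env, optimize, and score are user-supplied placeholders.
import random

def poet_step(population, mutate_env, optimize, score, spawn=False):
    if spawn:  # occasionally grow the population with a mutated, harder environment
        env, agent = random.choice(population)
        population = population + [(mutate_env(env), agent)]
    # Optimize each agent against its own paired environment.
    population = [(env, optimize(agent, env)) for env, agent in population]
    # Transfer step: adopt another agent if it outperforms the incumbent in this environment.
    result = []
    for env, agent in population:
        best = max((a for _, a in population), key=lambda a: score(a, env))
        result.append((env, best if score(best, env) > score(agent, env) else agent))
    return result
```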
