Machine Learning Street Talk (MLST)

May 25, 2020 • 1h 38min

Harri Valpola: System 2 AI and Planning in Model-Based Reinforcement Learning

Harri Valpola, the CEO and Founder of Curious AI, specializes in optimizing industrial processes through advanced AI. In this discussion, he dives into the fascinating world of System 1 and System 2 thinking in AI, illustrating the balance between instinctive and reflective reasoning. Valpola shares insights from his recent research on model-based reinforcement learning, emphasizing the challenges of real-world applications like water treatment. He also highlights innovative approaches using denoising autoencoders to improve planning in uncertain environments.
May 22, 2020 • 2h 34min

ICLR 2020: Yoshua Bengio and the Nature of Consciousness

Yoshua Bengio, a pioneer in deep learning and Professor at the University of Montreal, dives into the intriguing intersection of AI and consciousness. He discusses the role of attention in conscious processing and explores System 1 and System 2 thinking as outlined by Daniel Kahneman. Bengio raises profound questions about the nature of intelligence and self-awareness in machines. He also addresses the implications of sparse factor graphs and the philosophical dimensions of consciousness, offering fresh insights into how these concepts can enhance AI models.
May 19, 2020 • 2h 12min

ICLR 2020: Yann LeCun and Energy-Based Models

Yann LeCun, a pioneer in machine learning and AI, discusses the latest in self-supervised learning and energy-based models (EBMs). He compares how humans and machines learn concepts, advocating for methods that mimic human cognition. The conversation dives into EBMs' applications in optimizing labels and addresses challenges in traditional models. LeCun also explores the potential of self-supervised learning techniques for enhancing AI capabilities, such as in natural language processing and image recognition.
May 19, 2020 • 1h 27min

The Lottery Ticket Hypothesis with Jonathan Frankle

Jonathan Frankle, author of The Lottery Ticket Hypothesis, shares his insights on Sparse Neural Networks and their pruning techniques. He delves into the implications of the lottery ticket hypothesis for improving neural network efficiency and discusses innovative strategies like linear mode connectivity. Frankle also explores the intersection of AI technology and policy, emphasizing the importance of ethical decision-making in AI development. Listeners will appreciate his journey in deep learning research and the challenges faced in academia.
May 19, 2020 • 1h 40min

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

The conversation dives into the fascinating world of Large-scale Transfer Learning in NLP. Key highlights include the innovative T5 model's impact and the importance of dataset size and fine-tuning strategies. The trio also explores embodied cognition and meta-learning, pondering the very nature of intelligence. They discuss the evolution of transformers and the intricacies of training paradigms, all while navigating the challenges of benchmarking and chatbot systems. This lively discussion is packed with insights into advancing AI technologies and their real-world applications.
May 2, 2020 • 1h 15min

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

Aravind Srinivas, a technical staff member at OpenAI and PhD candidate at Berkeley, dives deep into the CURL paper he co-authored. The approach uses contrastive unsupervised learning to improve data efficiency in reinforcement learning from pixels, nearly matching the performance of methods trained on direct state inputs. The conversation covers the pivotal role of pixel inputs for robotic control, challenges in sample efficiency, and the evolving relationship between unsupervised and supervised learning. Srinivas' insights shed light on the future of machine learning.
Apr 24, 2020 • 1h 13min

Exploring Open-Ended Algorithms: POET

Mathew Salvaris is a research scientist specializing in computer vision. He dives into the revolutionary concept of open-ended algorithms, likening their evolution to natural selection. These AI-generating algorithms autonomously create their own learning pathways, presenting increasingly complex challenges. The conversation explores how these algorithms can lead to innovative solutions beyond traditional methods, fostering adaptability and improved performance. Excitingly, Salvaris also discusses the potential implications for future AI development and the collaborative relationship between humans and machines.
