Machine Learning Street Talk (MLST)

Jul 8, 2020 • 1h 46min

Robert Lange on NN Pruning and Collective Intelligence

In a fascinating conversation, Robert Lange, a PhD student at the Technical University of Berlin, delves into the realms of multi-agent reinforcement learning and cognitive science. He shares insights on the intersection of economics and machine learning, exploring how behavioral research informs models of decision-making. Robert also discusses his groundbreaking work on neural network pruning, highlighting the lottery ticket hypothesis and innovative strategies for optimizing networks. With a knack for making complex ideas accessible, he reflects on the nature of intelligence and the future of AI.
Jun 30, 2020 • 1h 58min

WelcomeAIOverlords (Zak Jost)

Zak Jost, an ML research scientist at Amazon and the creator behind the WelcomeAIOverlords YouTube channel, shares his insightful journey into machine learning. He discusses contrastive learning methods and the innovative concepts in his 'bring your own latent' paper. Zak highlights the crucial role of knowledge graphs in applications like fraud detection and the intricacies of deploying AutoML solutions. With engaging anecdotes, he contrasts content creation across formats, touching on authenticity in communication and the evolution of machine learning methodologies.
Jun 24, 2020 • 1h 3min

Facebook Research - Unsupervised Translation of Programming Languages

Marie-Anne Lachaux, Baptiste Roziere, and Guillaume Lample are talented researchers at Facebook AI Research in Paris, specializing in the unsupervised translation of programming languages. They discuss their groundbreaking method that leverages shared embeddings and tokenization to improve programming language interoperability. The conversation highlights the balance between human insight and machine learning in coding, the challenges of structural differences in languages, and the collaborative culture that fuels innovation at FAIR.
Jun 19, 2020 • 2h 34min

Francois Chollet - On the Measure of Intelligence

Francois Chollet, an AI researcher renowned for creating Keras, dives deep into defining intelligence in both humans and machines. He critiques traditional AI models for their reliance on mere skill rather than true intelligence and proposes a new framework emphasizing generalization. The discussion also touches on the integration of human-like priors into AI, the evolution of intelligence over a century, and the complexities of evaluating AI systems. Chollet's insights challenge listeners to rethink what it means to measure and understand intelligence in a rapidly advancing technological landscape.
Jun 6, 2020 • 1h 52min

OpenAI GPT-3: Language Models are Few-Shot Learners

Yannic Kilcher, a YouTube AI savant, and Connor Shorten, a machine learning contributor, dive into the revolutionary GPT-3 language model. They discuss its jaw-dropping 175 billion parameters and how it performs various NLP tasks with zero fine-tuning. The duo unpacks the differences between autoregressive models like GPT-3 and BERT, as well as the complexities of reasoning versus memorization in language models. Additionally, they tackle the implications of AI bias, the significance of transformer architecture, and the future of generative AI.
Jun 3, 2020 • 1h 13min

Jordan Edwards: ML Engineering and DevOps on AzureML

Jordan Edwards, Principal Program Manager for AzureML at Microsoft, dives into the world of ML DevOps and the challenges of deploying machine learning models. He discusses how to bridge the gap between science and engineering, emphasizing model governance and testing. Jordan shares insights from the recent Microsoft Build conference, highlighting innovations like FairLearn and GPT-3. He also introduces his maturity model for ML DevOps and explores the complexities of collaboration in machine learning workflows, making for a thought-provoking conversation.
Jun 2, 2020 • 2h 29min

One Shot and Metric Learning - Quadruplet Loss (Machine Learning Dojo)

Join Eric Craeymeersch, a software engineer and innovation director with a focus on machine learning and computer vision, as he dives into the fascinating world of metric learning and one-shot learning. Discover the shift toward quadruplet loss over triplet loss and its implications for more efficient clustering and classification. Eric discusses the intricacies of Siamese networks, hard mining strategies, and the importance of experimentation in data science, sharing valuable insights that can propel your understanding of cutting-edge machine learning techniques.
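To make the triplet-to-quadruplet shift discussed in the episode concrete: triplet loss pulls an anchor toward a positive and away from one negative, while a common quadruplet formulation adds a second term involving two distinct negative classes, pushing negative pairs apart even when the anchor is not involved. A minimal NumPy sketch (the function name and margin values here are illustrative, not taken from the episode):

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """Quadruplet loss sketch for metric learning.

    anchor and positive share a class; neg1 and neg2 come from two
    different classes, both distinct from the anchor's class. The first
    term is the familiar triplet hinge; the second enforces that the
    anchor-positive distance is smaller than the distance between the
    two negatives, tightening intra-class clusters.
    """
    d = lambda a, b: float(np.sum((a - b) ** 2))  # squared Euclidean distance
    term1 = max(0.0, d(anchor, positive) - d(anchor, neg1) + margin1)
    term2 = max(0.0, d(anchor, positive) - d(neg1, neg2) + margin2)
    return term1 + term2
```

With well-separated embeddings both hinge terms vanish and the loss is zero; with degenerate (identical) embeddings the loss collapses to the sum of the two margins.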
May 25, 2020 • 1h 38min

Harri Valpola: System 2 AI and Planning in Model-Based Reinforcement Learning

Harri Valpola, the CEO and Founder of Curious AI, specializes in optimizing industrial processes through advanced AI. In this discussion, he dives into the fascinating world of System 1 and System 2 thinking in AI, illustrating the balance between instinctive and reflective reasoning. Valpola shares insights from his recent research on model-based reinforcement learning, emphasizing the challenges of real-world applications like water treatment. He also highlights innovative approaches using denoising autoencoders to improve planning in uncertain environments.
May 22, 2020 • 2h 34min

ICLR 2020: Yoshua Bengio and the Nature of Consciousness

Yoshua Bengio, a pioneer in deep learning and Professor at the University of Montreal, dives into the intriguing intersection of AI and consciousness. He discusses the role of attention in conscious processing and explores System 1 and System 2 thinking as outlined by Daniel Kahneman. Bengio raises profound questions about the nature of intelligence and self-awareness in machines. He also addresses the implications of sparse factor graphs and the philosophical dimensions of consciousness, offering fresh insights into how these concepts can enhance AI models.
May 19, 2020 • 2h 12min

ICLR 2020: Yann LeCun and Energy-Based Models

Yann LeCun, a pioneer in machine learning and AI, discusses the latest in self-supervised learning and energy-based models (EBMs). He compares how humans and machines learn concepts, advocating for methods that mimic human cognition. The conversation dives into EBMs' applications in optimizing labels and addresses challenges in traditional models. LeCun also explores the potential of self-supervised learning techniques for enhancing AI capabilities, such as in natural language processing and image recognition.
