

Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Episodes

Sep 14, 2020 • 1h 28min
SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (Mathilde Caron)
In a fascinating discussion, Mathilde Caron, a research scientist at Facebook AI Research, dives into her groundbreaking work on the SwAV algorithm for unsupervised visual learning. Together with Sayak Paul, a machine learning expert, she explores innovative techniques such as online clustering and multi-crop data augmentation. The conversation highlights challenges in reproducing algorithms and the evolving landscape of self-supervised learning. They also discuss the implications of clustering strategies for image recognition and the balance between data and inductive priors in machine learning.
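
For readers who want the gist of the method, here is a minimal PyTorch sketch of SwAV's swapped-prediction objective, loosely following the paper; the simplified Sinkhorn routine and names such as prototypes and temp are our own illustrative choices, not the FAIR implementation.

    import torch
    import torch.nn.functional as F

    def sinkhorn(scores, eps=0.05, iters=3):
        # Toy Sinkhorn-Knopp: turn prototype scores into soft cluster codes
        # whose usage is roughly balanced across the batch (the "online
        # clustering" step).
        q = torch.exp(scores / eps).T              # (K, B)
        q = q / q.sum()
        K, B = q.shape
        for _ in range(iters):
            q = q / q.sum(dim=1, keepdim=True) / K   # balance cluster usage
            q = q / q.sum(dim=0, keepdim=True) / B   # balance sample mass
        return (q * B).T                           # (B, K), rows sum to ~1

    def swav_loss(z1, z2, prototypes, temp=0.1):
        # Swapped prediction: view 1 predicts the codes of view 2 and vice
        # versa; codes are targets, so no gradient flows through them.
        s1, s2 = z1 @ prototypes.T, z2 @ prototypes.T
        with torch.no_grad():
            q1, q2 = sinkhorn(s1), sinkhorn(s2)
        p1 = F.log_softmax(s1 / temp, dim=1)
        p2 = F.log_softmax(s2 / temp, dim=1)
        return -0.5 * ((q2 * p1).sum(1) + (q1 * p2).sum(1)).mean()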

Sep 7, 2020 • 1h 35min
UK Algoshambles, Neuralink, GPT-3 and Intelligence
Delve into the chaotic impact of algorithmic grading in the UK, where an algorithm used to moderate teacher-assessed grades triggered a crisis of trust in educational metrics. Engage in a lively discussion about the balance between traditional schooling and vocational training in shaping job readiness. Explore the fascinating realms of intelligence, both human and artificial, alongside the potential and pitfalls of GPT-3 and Neuralink. The podcast wraps up with a deep dive into philosophical considerations surrounding consciousness, skills, and the evolving nature of learning.

Jul 17, 2020 • 1h 36min
Sayak Paul
Sayak Paul, a prominent figure in deep learning and Google Developer Expert, shares insights from his vibrant career in machine learning. He discusses the AI landscape in India and the nuances of unsupervised representation learning. The conversation dives into data augmentation and contrastive learning techniques, emphasizing their importance in performance improvement. Sayak further explores the complexities of explainability and interpretability in AI, suggesting ethical responsibilities for engineers. The talk wraps up with advanced topics on pruning and the lottery ticket hypothesis in neural networks.
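
As a concrete anchor for the contrastive-learning discussion, here is a hedged PyTorch sketch of the NT-Xent loss used by SimCLR-style methods (variable names are ours): two augmented views of the same image attract, while everything else in the batch repels.

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temp=0.5):
        # z1, z2: embeddings of two augmentations of the same batch, (B, D).
        z = F.normalize(torch.cat([z1, z2]), dim=1)    # (2B, D)
        sim = z @ z.T / temp                           # scaled cosine similarity
        sim.fill_diagonal_(float("-inf"))              # mask self-similarity
        n = z.shape[0]
        # Row i's positive is the other view of the same image: (i + B) mod 2B.
        targets = torch.arange(n, device=z.device).roll(n // 2)
        return F.cross_entropy(sim, targets)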

Jul 8, 2020 • 1h 46min
Robert Lange on NN Pruning and Collective Intelligence
In a fascinating conversation, Robert Lange, a PhD student at Technical University Berlin, delves into the realms of multi-agent reinforcement learning and cognitive science. He shares insights on the intersection of economics and machine learning, exploring how behavior influences decision-making. Robert also discusses his groundbreaking work on neural network pruning, highlighting the lottery ticket hypothesis and innovative strategies for optimizing networks. With a knack for making complex ideas accessible, he reflects on the nature of intelligence and the future of AI.
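
To make the pruning discussion concrete, below is a minimal sketch of the magnitude-pruning step at the heart of lottery-ticket experiments; the full procedure also rewinds the surviving weights to their initialization values and retrains. Shapes and names are illustrative.

    import torch

    def magnitude_mask(weights, sparsity):
        # Keep the largest-magnitude weights, zero out the rest.
        keep = int(weights.numel() * (1 - sparsity))
        threshold = weights.abs().flatten().kthvalue(weights.numel() - keep).values
        return (weights.abs() > threshold).float()

    w = torch.randn(256, 256)
    mask = magnitude_mask(w, sparsity=0.9)    # prune 90% of the weights
    w_pruned = w * mask                       # a candidate "winning ticket"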

Jun 30, 2020 • 1h 58min
WelcomeAIOverlords (Zak Jost)
Zak Jost, an ML research scientist at Amazon and the creator behind the WelcomeAIOverlords YouTube channel, shares his insightful journey into machine learning. He discusses contrastive learning methods and the key ideas in the 'Bootstrap Your Own Latent' (BYOL) paper. Zak highlights the crucial role of knowledge graphs in applications like fraud detection and the intricacies of deploying AutoML solutions. With engaging anecdotes, he contrasts content creation across formats, touching on authenticity in communication and the evolution of machine learning methodologies.
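
One mechanism from the BYOL paper worth seeing in code is the slowly moving target network, which removes the need for negative pairs. A minimal PyTorch sketch with a stand-in encoder (the real method wraps a ResNet plus projection and prediction heads):

    import copy
    import torch

    online = torch.nn.Linear(128, 64)     # stand-in for the online encoder
    target = copy.deepcopy(online)        # target starts as an exact copy
    for p in target.parameters():
        p.requires_grad_(False)           # the target is never backpropagated

    @torch.no_grad()
    def ema_update(online, target, tau=0.996):
        # target <- tau * target + (1 - tau) * online, once per training step.
        for po, pt in zip(online.parameters(), target.parameters()):
            pt.mul_(tau).add_((1 - tau) * po)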

Jun 24, 2020 • 1h 3min
Facebook Research - Unsupervised Translation of Programming Languages
Marie-Anne Lachaux, Baptiste Roziere, and Guillaume Lample are talented researchers at Facebook AI Research in Paris, specializing in the unsupervised translation of programming languages. They discuss their groundbreaking method that leverages shared embeddings and tokenization to improve programming language interoperability. The conversation highlights the balance between human insight and machine learning in coding, the challenges of structural differences in languages, and the collaborative culture that fuels innovation at FAIR.
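
A toy illustration of the shared-vocabulary idea discussed here (not FAIR's actual BPE tokenizer): because every language is tokenized against one vocabulary, tokens the languages have in common (keywords, digits, operators) land on the same embedding rows, which gives the model an anchor between languages without any parallel data.

    import re

    shared_vocab = {}   # one vocabulary shared by every language

    def tokenize(source):
        # Crude split into identifiers and punctuation, then shared-ID lookup.
        tokens = re.findall(r"\w+|[^\w\s]", source)
        return [shared_vocab.setdefault(t, len(shared_vocab)) for t in tokens]

    py_ids = tokenize("for i in range(10): total += i")
    java_ids = tokenize("for (int i = 0; i < 10; i++) total += i;")
    # "for", "i", "10", "+" and "=" receive identical IDs in both sequences,
    # so they would share embeddings in a downstream model.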

Jun 19, 2020 • 2h 34min
Francois Chollet - On the Measure of Intelligence
Francois Chollet, an AI researcher renowned for creating Keras, dives deep into defining intelligence in both humans and machines. He critiques traditional AI benchmarks for measuring mere skill rather than true intelligence and proposes a new framework emphasizing generalization. The discussion also touches on the integration of human-like priors into AI, how definitions of intelligence have evolved over the past century, and the complexities of evaluating AI systems. Chollet's insights challenge listeners to rethink what it means to measure and understand intelligence in a rapidly advancing technological landscape.
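
Chollet's definition can be compressed to a slogan-level formula (a loose paraphrase; the paper's full definition adds weighting and curriculum terms): intelligence is skill-acquisition efficiency, roughly

    Intelligence  ∝  average over tasks T of
        GeneralizationDifficulty(T) / (Priors + Experience(T))

i.e. the more a system can generalize while starting from fewer priors and less experience, the more intelligent it is.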

Jun 6, 2020 • 1h 52min
OpenAI GPT-3: Language Models are Few-Shot Learners
Yannic Kilcher, a YouTube AI savant, and Connor Shorten, a machine learning contributor, dive into the revolutionary GPT-3 language model. They discuss its jaw-dropping 175 billion parameters and how it performs various NLP tasks with zero fine-tuning. The duo unpacks the differences between autoregressive models like GPT-3 and BERT, as well as the complexities of reasoning versus memorization in language models. Additionally, they tackle the implications of AI bias, the significance of transformer architecture, and the future of generative AI.
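
The "few-shot" framing from the paper's title is easy to show concretely: the task is specified entirely in the prompt, with a handful of worked examples and no gradient updates. The snippet below mirrors the paper's English-to-French demonstration; the completion shown is illustrative.

    # Few-shot prompting: the "training examples" live in the prompt itself.
    prompt = (
        "Translate English to French.\n\n"
        "sea otter => loutre de mer\n"
        "peppermint => menthe poivree\n"
        "cheese =>"
    )

    # An autoregressive model like GPT-3 simply continues the pattern,
    # e.g. producing " fromage", with no fine-tuning of the weights.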

Jun 3, 2020 • 1h 13min
Jordan Edwards: ML Engineering and DevOps on AzureML
Jordan Edwards, Principal Program Manager for AzureML at Microsoft, dives into the world of ML DevOps and the challenges of deploying machine learning models. He discusses how to bridge the gap between science and engineering, emphasizing model governance and testing. Jordan shares insights from the recent Microsoft Build conference, highlighting innovations like Fairlearn and GPT-3. He also introduces his maturity model for ML DevOps and explores the complexities of collaboration in machine learning workflows, making for a thought-provoking conversation.
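
As one concrete building block of the governance story, here is a minimal sketch of registering a versioned model with the azureml-core (v1) Python SDK; the workspace config, file path, and names are illustrative, not taken from the episode.

    from azureml.core import Workspace
    from azureml.core.model import Model

    ws = Workspace.from_config()               # reads a local config.json
    model = Model.register(
        workspace=ws,
        model_path="outputs/model.pkl",        # local artifact to upload
        model_name="demand-forecast",          # the registry assigns a version
        tags={"stage": "dev", "framework": "sklearn"},
    )
    print(model.name, model.version)           # governance: named and versioned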

Jun 2, 2020 • 2h 29min
One Shot and Metric Learning - Quadruplet Loss (Machine Learning Dojo)
Join Eric Craeymeersch, a software engineer and innovation director with a focus on machine learning and computer vision, as he dives into the fascinating world of metric learning and one-shot learning. Discover the shift toward quadruplet loss over triplet loss and its implications for more efficient clustering and classification. Eric discusses the intricacies of Siamese networks, hard-mining strategies, and the importance of experimentation in data science, sharing valuable insights into cutting-edge machine learning techniques.
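
For reference, a hedged PyTorch sketch of the quadruplet loss in the form popularized by Chen et al. (margin values and names are ours): it keeps the standard triplet term and adds a second term that pushes apart two negatives drawn from classes different from the anchor's and from each other's.

    import torch
    import torch.nn.functional as F

    def quadruplet_loss(anchor, positive, neg1, neg2, a1=1.0, a2=0.5):
        d_ap = F.pairwise_distance(anchor, positive)   # anchor-positive
        d_an = F.pairwise_distance(anchor, neg1)       # anchor-negative
        d_nn = F.pairwise_distance(neg1, neg2)         # negative-negative
        triplet_term = F.relu(d_ap - d_an + a1)        # classic triplet margin
        push_term = F.relu(d_ap - d_nn + a2)           # extra inter-class margin
        return (triplet_term + push_term).mean()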