
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Mar 16, 2021 • 37min
#048 Machine Learning Security - Andy Smith
Andy Smith, a cybersecurity expert and YouTube content creator, dives into the often-overlooked realm of security in ML DevOps. He highlights the importance of threat modeling and the complexities posed by adversarial examples. The conversation sheds light on trust boundaries in machine learning systems and the need for a collaborative approach between ML and security teams. Andy also discusses the unpredictability of a model's state space and the essential role of human oversight, advocating for a pragmatic focus on risk management to protect data integrity.

Mar 14, 2021 • 1h 40min
#047 Interpretable Machine Learning - Christoph Molnar
Christoph Molnar, an expert in interpretable machine learning and author of a notable book on the subject, dives deep into the complexities of model transparency. He discusses the crucial role of interpretability in building trust and societal acceptance. The conversation critiques common methods like saliency maps and highlights the pitfalls of relying on complex models. Molnar also emphasizes the importance of simplicity and statistical rigor in model predictions, advocating for strategies that improve understanding while addressing ethical considerations in machine learning.
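
To give a concrete taste of the model-agnostic methods Molnar covers, here is a minimal sketch of permutation feature importance in Python; the dataset and random-forest model are illustrative stand-ins, not examples from the episode.

```python
# Minimal sketch of permutation feature importance, a model-agnostic
# interpretability method covered in Molnar's book. The dataset and
# model below are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:>25s}  {result.importances_mean[idx]:.3f}")
```

Because the model itself is never retrained, the same recipe works for any black-box predictor, which is exactly the appeal of model-agnostic methods.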

Mar 6, 2021 • 1h 40min
#046 The Great ML Stagnation (Mark Saroufim and Dr. Mathew Salvaris)
Mark Saroufim, author of "Machine Learning: The Great Stagnation," joins Mathew Salvaris, a lead ML scientist at iRobot, to dissect the stagnation in machine learning. They discuss how academia’s incentive structures stifle innovation and the implications of 'state-of-the-art' chasing. They highlight the rise of the 'gentleman scientist,' the complexities of achieving measurable success, and the need for a user-focused approach in research. The duo emphasizes collaboration and the importance of embracing failures as part of the learning process.

Feb 28, 2021 • 2h 30min
#045 Microsoft's Platform for Reinforcement Learning (Bonsai)
Scott Stanfield and Megan Bloemsma from Microsoft's Autonomous Systems team dive into the ambitious Project Bonsai. They discuss its goal to simplify reinforcement learning, making it accessible for developers without PhDs. The conversation highlights the role of machine teaching in enhancing AI training, using real-world applications like balancing robots. They emphasize the need for expert guidance and domain knowledge in overcoming traditional challenges in the field. Innovations in simulation and collaboration are also spotlighted, showcasing a future where complex tasks become manageable.

Feb 25, 2021 • 52min
#044 - Data-efficient Image Transformers (Hugo Touvron)
Hugo Touvron, a PhD student at Facebook AI Research and first author of the Data-efficient Image Transformers (DeiT) paper, shares insights on revolutionizing vision models. He explains how novel training strategies and a unique distillation token dramatically improve sample efficiency. The conversation dives into the balance of data augmentation, the implications of transformers compared to CNNs, and the challenge of training transformers without enormous datasets. Hugo also reflects on his experiences in a corporate PhD program and the future prospects of transformers in computer vision.
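
As a rough illustration of the distillation-token idea, the following PyTorch sketch appends a learnable distillation token next to the class token before the transformer encoder; the dimensions and module name are assumptions for illustration, not the paper's reference code.

```python
# Conceptual sketch of DeiT's distillation token: a learnable embedding
# appended next to the class token, whose output is later trained to
# match a teacher's predictions. Dimensions are illustrative.
import torch
import torch.nn as nn

class DeiTStyleTokens(nn.Module):
    def __init__(self, embed_dim=192, num_patches=196):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim))  # DeiT's extra token
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, embed_dim))

    def forward(self, patch_embeddings):  # (B, num_patches, embed_dim)
        B = patch_embeddings.shape[0]
        cls = self.cls_token.expand(B, -1, -1)
        dist = self.dist_token.expand(B, -1, -1)
        # Sequence seen by the transformer: [CLS, DIST, patch_1, ..., patch_N]
        x = torch.cat([cls, dist, patch_embeddings], dim=1)
        return x + self.pos_embed

tokens = DeiTStyleTokens()
x = tokens(torch.randn(8, 196, 192))
print(x.shape)  # torch.Size([8, 198, 192])
```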

Feb 19, 2021 • 1h 35min
#043 Prof. J. Mark Bishop - Artificial Intelligence Is Stupid and Causal Reasoning won't fix it.
J. Mark Bishop, Professor Emeritus at Goldsmiths, University of London, critiques the idea that AI could achieve consciousness, touching on panpsychism, the view that mind is present in all things. He argues that computers cannot comprehend or feel, drawing on the limits of computation and the Chinese Room argument. The discussion explores how language shapes perception and highlights the philosophical challenges of mimicking human understanding. Bishop provocatively insists that machine intelligence will never reach the complexity of conscious experience.

Feb 11, 2021 • 1h 34min
#042 - Pedro Domingos - Ethics and Cancel Culture
Pedro Domingos, a renowned professor and author of "The Master Algorithm," dives deep into the contentious issues surrounding AI ethics and cancel culture. He critiques how cancel culture stifles necessary dialogue in machine learning, likening it to a modern form of religion. Domingos argues against ideologically driven gatekeeping in AI, cautioning that biases are often embedded in algorithmic design. He also questions the sincerity of current ethical practices in AI, advocating for a more nuanced understanding of fairness and open discourse.

Feb 3, 2021 • 1h 27min
#041 - Biologically Plausible Neural Networks - Dr. Simon Stringer
Dr. Simon Stringer, a Senior Research Fellow at Oxford University, discusses the intricate relationship between brain function and artificial intelligence. He dives into hierarchical feature binding, revealing how biologically inspired neural networks can enhance visual perception. The conversation covers the challenges of replicating human cognitive behaviors using AI and the importance of self-organization and temporal dynamics in learning. Stringer also sheds light on how insights from neuroscience can refine AI models to handle complex tasks more effectively.

Jan 31, 2021 • 1h 36min
#040 - Adversarial Examples (Dr. Nicholas Carlini, Dr. Wieland Brendel, Florian Tramèr)
Join Dr. Nicholas Carlini, a Google Brain research scientist specializing in machine learning security, Dr. Wieland Brendel from the University of Tübingen, and PhD student Florian Tramèr from Stanford as they dive into the world of adversarial examples. They explore how tiny data changes can drastically impact model predictions and discuss the inherent challenges of ensuring robust defenses in neural networks. Insights on the balance between model accuracy and security, alongside the biases present in CNNs, offer a captivating look into this crucial field of AI research.
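
For readers new to the topic, here is a minimal sketch of the fast gradient sign method (FGSM), the textbook recipe for crafting adversarial examples; the untrained model and the epsilon value are illustrative assumptions, not anything from the episode.

```python
# Minimal sketch of the fast gradient sign method (FGSM): nudge every
# input pixel a small step epsilon in the direction that increases the
# loss. The model here is an untrained stand-in; epsilon is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.1):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Perturb by the *sign* of the gradient: the change to any pixel is
    # at most epsilon, yet the loss can increase sharply.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in "image"
y = torch.tensor([3])          # stand-in label
x_adv = fgsm(x, y)
print(model(x).argmax(), model(x_adv).argmax())  # prediction may flip
```

The striking point the guests return to is that such perturbations are often imperceptible to humans while reliably fooling the model.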

Jan 23, 2021 • 1h 58min
#039 - Lena Voita - NLP
Lena Voita, a Ph.D. student and former research scientist at Yandex, shares her insights on NLP and machine translation. She discusses her research on the source and target contributions in neural translation models and explores information-theoretic probing using minimum description length. Lena also delves into the evolution of representations in Transformers and the complexities of language models, including challenges like hallucinations and exposure bias. Additionally, she highlights her comprehensive NLP course designed to foster deeper understanding in the field.
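
To sketch the online-coding flavour of MDL probing that Voita describes, the toy Python below sums the bits a linear probe needs to transmit labels block by block; the random representations, labels, and block fractions are stand-ins, not her experimental setup.

```python
# Toy sketch of online-coding MDL probing in the spirit of Voita and
# Titov's information-theoretic probing: the better a representation
# encodes a property, the fewer bits a probe needs to transmit the
# labels. Representations and labels here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
reps = rng.normal(size=(2000, 64))      # stand-in representations
labels = (reps[:, 0] > 0).astype(int)   # a property the probe can learn

def online_codelength(X, y, fractions=(0.1, 0.2, 0.4, 0.8, 1.0)):
    n_classes = len(np.unique(y))
    # The first block is sent with a uniform code (no model yet).
    first = int(len(y) * fractions[0])
    total_bits = first * np.log2(n_classes)
    for lo, hi in zip(fractions[:-1], fractions[1:]):
        i, j = int(len(y) * lo), int(len(y) * hi)
        probe = LogisticRegression(max_iter=1000).fit(X[:i], y[:i])
        # Cost of the next block under the probe trained so far.
        p = probe.predict_proba(X[i:j])
        total_bits += -np.log2(p[np.arange(j - i), y[i:j]]).sum()
    return total_bits

print(f"codelength: {online_codelength(reps, labels):.0f} bits")
```

Comparing codelengths across layers or models then gives a probe-capacity-aware measure of how accessible a linguistic property is, which is the core idea of the paper discussed in the episode.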