
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Sep 22, 2020 • 1h 14min
Computation, Bayesian Model Selection, Interactive Articles
Join Alex Stenlake, a machine learning expert, as he dives into the fascinating realms of computation and intelligence. The discussion highlights the concept of the intelligence explosion and critiques traditional statistical approaches, showcasing Bayesian model selection's advantages. They also explore the transformative power of interactive articles in science communication, emphasizing how engaging formats can enhance understanding of complex topics. A thought-provoking look at the intersection of AI, human intelligence, and societal implications unfolds throughout the conversation.
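As a flavour of the Bayesian model selection theme, here is a minimal illustrative sketch (not material from the episode): it compares polynomial regression models using BIC, a standard asymptotic approximation to the log marginal likelihood, rather than raw training error. The toy data and all names are assumptions made for the example.

```python
# Illustrative sketch: model selection via BIC (an approximation to the
# Bayesian marginal likelihood), not code from the episode.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.5 * x - 0.8 * x**2 + rng.normal(scale=0.1, size=x.shape)  # toy quadratic data

def bic_for_degree(degree):
    """Fit a degree-d polynomial by least squares and score it with BIC."""
    X = np.vander(x, degree + 1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    n, k = len(y), degree + 1                     # k counts polynomial coefficients
    sigma2 = resid @ resid / n
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(n) - 2 * log_lik            # lower BIC = better trade-off

scores = {d: bic_for_degree(d) for d in range(1, 8)}
print(min(scores, key=scores.get), scores)        # the quadratic should win
```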

Sep 18, 2020 • 1h 37min
Kernels!
Alex Stenlake, an expert in data puzzles and causal inference, dives into the fascinating world of kernel methods. He shares insights on the evolution of kernels and the crucial role they played before the rise of deep learning. The discussion reveals the significance of the Representer theorem and positive semi-definite kernels. Alex contrasts traditional techniques like SVMs with modern approaches, highlighting where kernels still shine on smaller-scale problems. He also connects kernels to neural networks and touches on their applications across a range of fields.
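To make the Representer-theorem point concrete, below is a minimal kernel ridge regression sketch (my own illustration, not code from the episode): the fitted function is a weighted sum of positive semi-definite kernel evaluations at the training points, with the toy data and hyperparameters chosen arbitrarily.

```python
# Illustrative kernel ridge regression with an RBF kernel: by the Representer
# theorem, f(x) = sum_i alpha_i k(x_i, x) over the training points.
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Positive semi-definite Gaussian (RBF) kernel matrix between rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=30)

lam = 1e-2                                              # ridge regularisation strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)    # dual coefficients

X_test = np.linspace(-1, 1, 5)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha                  # weighted kernel evaluations
print(np.round(y_pred, 2))
```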

Sep 16, 2020 • 1h 26min
Explainability, Reasoning, Priors and GPT-3
Dr. Keith Duggar, MIT PhD and AI expert, joins for a captivating discussion on explainability in machine learning. They dive into Christoph Molnar's insights on interpretability and the intricacies of reasoning in neural networks. Duggar contrasts priors with experience, touches on core knowledge, and weighs critiques of deep learning from notable figures like Gary Marcus. The conversation culminates in exploring the ethical implications and the limits of GPT-3's reasoning, highlighting broader questions about machine intelligence and the future of AI.

Sep 14, 2020 • 1h 28min
SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (Mathilde Caron)
In a fascinating discussion, Mathilde Caron, a research scientist at Facebook AI Research, dives into her groundbreaking work on the SwAV algorithm for unsupervised visual learning. Joined by Sayak Paul, a machine learning expert, they explore innovative techniques such as online clustering and multi-crop data augmentation. The conversation highlights challenges in reproducing algorithms and the evolving landscape of self-supervised learning. They also discuss the implications of clustering strategies for image recognition and the trade-off between data and inductive priors in machine learning.
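For readers curious about the mechanics, here is a heavily simplified sketch of SwAV's swapped-prediction idea with a Sinkhorn equipartition step. It illustrates the published method's core loss only; it is not the authors' implementation, and the batch size, feature dimension, and prototype count are arbitrary placeholders.

```python
# Simplified SwAV-style swapped prediction between two views (illustrative only).
import torch
import torch.nn.functional as F

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Approximately equipartition soft cluster assignments across the batch."""
    Q = torch.exp(scores / eps).T               # prototypes x batch
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K  # normalise over prototypes
        Q /= Q.sum(dim=0, keepdim=True); Q /= B  # normalise over samples
    return (Q * B).T                             # batch x prototypes, rows sum to ~1

B, D, K = 32, 128, 10                            # batch, feature dim, prototypes
prototypes = F.normalize(torch.randn(K, D), dim=1)
z1 = F.normalize(torch.randn(B, D), dim=1)       # embeddings of augmented view 1
z2 = F.normalize(torch.randn(B, D), dim=1)       # embeddings of augmented view 2

s1, s2 = z1 @ prototypes.T, z2 @ prototypes.T    # similarities to prototypes
q1, q2 = sinkhorn(s1), sinkhorn(s2)              # soft cluster "codes" per view
temp = 0.1
# each view predicts the *other* view's code: the "swapped" prediction loss
loss = -0.5 * ((q2 * F.log_softmax(s1 / temp, dim=1)).sum(dim=1)
               + (q1 * F.log_softmax(s2 / temp, dim=1)).sum(dim=1)).mean()
print(loss.item())
```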

Sep 7, 2020 • 1h 35min
UK Algoshambles, Neuralink, GPT-3 and Intelligence
Delve into the chaotic impact of algorithmic grading in the UK, where an algorithm used to moderate teacher-assessed grades triggered a crisis of trust in educational metrics. Engage in a lively discussion about the balance between traditional schooling and vocational training in shaping job readiness. Explore the fascinating realms of intelligence, both human and artificial, alongside the potential and pitfalls of GPT-3 and Neuralink. The episode wraps up with a deep dive into philosophical considerations surrounding consciousness, skills, and the evolving nature of learning.

Jul 17, 2020 • 1h 36min
Sayak Paul
Sayak Paul, a prominent figure in deep learning and a Google Developer Expert, shares insights from his vibrant career in machine learning. He discusses the AI landscape in India and the nuances of unsupervised representation learning. The conversation dives into data augmentation and contrastive learning techniques, emphasizing their importance in performance improvement. Sayak further explores the complexities of explainability and interpretability in AI, suggesting ethical responsibilities for engineers. The talk wraps up with advanced topics on pruning and the lottery ticket hypothesis in neural networks.
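As an illustration of the contrastive learning theme (not code discussed in the episode), here is a SimCLR-style NT-Xent loss in which two augmented views of the same image are treated as positives and every other image in the batch as a negative; the encoder outputs below are stand-in random tensors.

```python
# Illustrative NT-Xent (normalized temperature-scaled cross-entropy) loss.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of paired embeddings z1[i] <-> z2[i]."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D), unit-norm
    sim = z @ z.T / temperature                          # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)                 # positive = the other view

z1, z2 = torch.randn(16, 64), torch.randn(16, 64)        # stand-in encoder outputs
print(nt_xent(z1, z2).item())
```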

Jul 8, 2020 • 1h 46min
Robert Lange on NN Pruning and Collective Intelligence
In a fascinating conversation, Robert Lange, a PhD student at Technical University Berlin, delves into the realms of multi-agent reinforcement learning and cognitive science. He shares insights on the intersection of economics and machine learning, exploring how behavior influences decision-making. Robert also discusses his groundbreaking work on neural network pruning, highlighting the lottery ticket hypothesis and innovative strategies for optimizing networks. With a knack for making complex ideas accessible, he reflects on the nature of intelligence and the future of AI.
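To sketch the lottery ticket idea mentioned above (a toy linear model of my own devising, not Robert's work): train, prune the smallest-magnitude weights, rewind the survivors to their initial values, and retrain the resulting sparse "winning ticket".

```python
# Toy lottery-ticket-style magnitude pruning on a linear model (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20); true_w[:5] = rng.normal(size=5)        # only 5 useful features
y = X @ true_w + rng.normal(scale=0.1, size=200)

def train(X, y, mask, w0, lr=0.05, steps=500):
    """Gradient descent on masked weights; pruned weights stay at zero."""
    w = w0 * mask
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad * mask
    return w

w_init = rng.normal(scale=0.1, size=20)                        # remember the init
mask = np.ones(20)
w = train(X, y, mask, w_init)                                  # dense training run

keep = np.abs(w) >= np.quantile(np.abs(w), 0.75)               # prune 75% by magnitude
mask = keep.astype(float)
ticket = train(X, y, mask, w_init)                             # rewind + retrain
print(f"kept {int(mask.sum())} weights, error {np.mean((X @ ticket - y) ** 2):.4f}")
```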

Jun 30, 2020 • 1h 58min
WelcomeAIOverlords (Zak Jost)
Zak Jost, an ML research scientist at Amazon and the creator behind the WelcomeAIOverlords YouTube channel, shares his insightful journey into machine learning. He discusses contrastive learning methods and the ideas behind the 'Bootstrap Your Own Latent' (BYOL) paper. Zak highlights the crucial role of knowledge graphs in applications like fraud detection and the intricacies of deploying AutoML solutions. With engaging anecdotes, he contrasts content creation across formats, touching on authenticity in communication and the evolution of machine learning methodologies.
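For context on the BYOL idea (an illustrative sketch, not from the episode), the snippet below shows its two moving parts: an online network with a predictor head regresses onto the output of a slowly updated EMA target network, with no negative pairs at all. The architectures, dimensions, and the EMA rate here are placeholders.

```python
# Illustrative BYOL-style loss and EMA target update (not the episode's code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

online = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
predictor = nn.Linear(16, 16)                 # only the online branch has a predictor
target = copy.deepcopy(online)                # target starts as a copy, updated by EMA
for p in target.parameters():
    p.requires_grad_(False)

def byol_loss(v1, v2):
    """Online branch on view 1 predicts the target branch's output on view 2."""
    p = F.normalize(predictor(online(v1)), dim=1)
    with torch.no_grad():
        z = F.normalize(target(v2), dim=1)
    return (2 - 2 * (p * z).sum(dim=1)).mean()  # MSE between unit-norm vectors

@torch.no_grad()
def ema_update(tau=0.99):
    """Move target weights slowly toward the online weights."""
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)

v1, v2 = torch.randn(8, 32), torch.randn(8, 32)  # stand-ins for two augmented views
loss = byol_loss(v1, v2) + byol_loss(v2, v1)     # symmetrised loss
loss.backward()
ema_update()
print(loss.item())
```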

Jun 24, 2020 • 1h 3min
Facebook Research - Unsupervised Translation of Programming Languages
Marie-Anne Lachaux, Baptiste Roziere, and Guillaume Lample are researchers at Facebook AI Research in Paris working on the unsupervised translation of programming languages. They discuss their method, which leverages a shared token vocabulary and cross-lingual embeddings to translate source code between programming languages. The conversation highlights the balance between human insight and machine learning in coding, the challenges posed by structural differences between languages, and the collaborative culture that fuels innovation at FAIR.

Jun 19, 2020 • 2h 34min
Francois Chollet - On the Measure of Intelligence
Francois Chollet, the AI researcher renowned for creating Keras, dives deep into defining intelligence in both humans and machines. He critiques traditional AI benchmarks for rewarding task-specific skill rather than true intelligence and proposes a new framework emphasizing generalization. The discussion also touches on the integration of human-like priors into AI, how intelligence has been defined and measured over the past century, and the complexities of evaluating AI systems. Chollet's insights challenge listeners to rethink what it means to measure and understand intelligence in a rapidly advancing technological landscape.