
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Mar 7, 2022 • 1h 42min
#68 DR. WALID SABA 2.0 - Natural Language Understanding [UNPLUGGED]
Dr. Walid Saba, a Senior Scientist at Sorcero, critiques deep learning's approach to natural language understanding. He argues that reliance on statistical learning leads to failure, akin to memorizing infinity. Saba emphasizes the importance of symbolic logic and human cognitive processes in AI development. He explores the complexities of memory in neural networks, the distinctions between top-down and bottom-up problem-solving, and the need for hybrid models that integrate logic and prior knowledge. His insights challenge conventional methods and advocate for a deeper understanding of cognition in AI.

Mar 2, 2022 • 1h 42min
#67 Prof. KARL FRISTON 2.0
In this enlightening discussion, Karl Friston, a leading British neuroscientist from University College London, delves into the intriguing free energy principle and its impact on cognition and consciousness. He and the hosts explore the challenges of simplifying complex scientific concepts and the balance of order and chaos in existence. Friston also interrogates the essence of consciousness, questioning whether it can be recreated in silico, while tackling the complexities of free will in a deterministic universe. A thought-provoking conversation that intertwines science, philosophy, and self-discovery!

Feb 28, 2022 • 51min
#66 ALEXANDER MATTICK - [Unplugged / Community Edition]
Join Alexander Mattick, a prominent voice in Yannic's Discord community and an AI aficionado, as he dives deep into the intricacies of neural networks. He reveals fascinating insights on spline theory and the complexities of abstraction in machine learning. The discussion also touches on the balance between exploration and control in knowledge acquisition, alongside the philosophical implications of causality and discrete versus continuous modeling. Alex champions the value of a broad knowledge base, illustrating how diverse insights can enhance problem-solving.

Feb 26, 2022 • 1h 28min
#65 Prof. PEDRO DOMINGOS [Unplugged]
Pedro Domingos, a renowned professor of computer science and author of "The Master Algorithm," dives deep into the fundamentals of machine learning. He emphasizes the need for a solid understanding of AI for both professionals and the public, likening it to learning to drive. Domingos discusses the evolution of generative and discriminative models, critiques existing algorithms, and explores the interplay of entropy and reality. He also proposes a unifying 'master algorithm' while questioning the complexities of causality in learning systems, ultimately advocating for a broader understanding of AI.

Feb 24, 2022 • 52min
#64 Prof. Gary Marcus 3.0
In this engaging conversation, cognitive scientist Gary Marcus, founder of Robust AI, tackles profound questions about AI and consciousness. He expresses skepticism about AI systems claiming self-awareness and the philosophical debates around consciousness. The discussion dives into the challenges of abstract models and the importance of stable symbolic representations, particularly in self-driving technology. Marcus also reveals insights on extrapolation in high dimensions and scaling laws, emphasizing the complexities of true understanding in neural networks.

Feb 22, 2022 • 1h 33min
#063 - Prof. YOSHUA BENGIO - GFlowNets, Consciousness & Causality
Yoshua Bengio, a Turing Award recipient and a leader in AI, dives into the fascinating world of GFlowNets, which he believes can revolutionize machine learning by generating diverse training data. The discussion covers the balance between exploration and exploitation in decision-making, particularly in drug discovery and gaming. Bengio also addresses the philosophical implications of consciousness in AI, urging a cautious perspective on claims of AI sentience. His reflections on the evolution of thought in neural networks reveal a journey shaped by key insights into causal representation learning.

Feb 3, 2022 • 1h 30min
#062 - Dr. Guy Emerson - Linguistics, Distributional Semantics
Dr. Guy Emerson, a computational linguist at Cambridge, shares insights into distributional semantics and truth-conditional semantics. The conversation delves into the challenges of representing meaning in machine learning, the importance of grounding language in real-world contexts, and the interplay between cognition and linguistics. Emerson critiques traditional linguistic models, emphasizing the need for flexible frameworks. The discussion also touches on Bayesian inference in language, examining how context influences meaning and the complexities of vocabulary like 'heap' and 'tall'.

Jan 4, 2022 • 3h 20min
061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)
Yann LeCun, Meta's Chief AI Scientist and Turing Award winner, joins Randall Balestriero, a researcher at Meta AI, to dive into the complexities of interpolation and extrapolation in neural networks. They discuss how high-dimensional data challenges traditional views, presenting their paper on high-dimensional extrapolation. Yann critiques the notion of interpolation in deep learning, while Randall emphasizes the geometric principles that can redefine our understanding of neural network behavior. Expect eye-opening insights into AI's evolving landscape!

Sep 19, 2021 • 3h 33min
#60 Geometric Deep Learning Blueprint (Special Edition)
Joining the discussion are Petar Veličković from DeepMind, renowned for his work on graph neural networks, Taco Cohen from Qualcomm AI Research, specializing in geometric deep learning, and Joan Bruna, an influential figure in data science at NYU. They delve into geometric deep learning, exploring its foundations in symmetry and invariance. The conversation highlights innovative mathematical frameworks, the unification of geometries, and their implications in AI. Insights on dimensionality, algorithmic reasoning, and historical perspectives on geometry further enrich this engaging dialogue.

Sep 3, 2021 • 2h 35min
#59 - Jeff Hawkins (Thousand Brains Theory)
In this engaging discussion, neuroscientist and entrepreneur Jeff Hawkins, known for his Thousand Brains Theory, joins Connor Leahy to unravel how our brains construct reality through a multitude of models. They dive into the role of the neocortex in intelligence and sensory perception, explore Sparse Distributed Representations and their applications in AI, and highlight the key differences between traditional neural networks and Hawkins' innovative ideas. The conversation also touches on the ethical integration of AI with human values and the philosophical implications of emerging technologies.