
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we cover all of the field's main ideas with the hype surgically removed. MLST is run by Tim Scarfe, PhD (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, MIT PhD (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Jan 15, 2025 • 1h 42min
Yoshua Bengio - Designing out Agency for Safe AI
Yoshua Bengio, a pioneering deep learning researcher and Turing Award winner, delves into the pressing issues of AI safety and design. He warns about the dangers of goal-seeking AIs and emphasizes the need for non-agentic AIs to mitigate existential threats. Bengio discusses reward tampering, the complexity of AI agency, and the importance of global governance. He envisions AI as a transformative tool for science and medicine, exploring how responsible development can harness its potential while maintaining safety.

Jan 9, 2025 • 1h 27min
Francois Chollet - ARC reflections - NeurIPS 2024
Francois Chollet, AI researcher and creator of Keras, dives into the 2024 ARC-AGI competition, revealing an impressive accuracy jump from 33% to 55.5%. He emphasizes the importance of combining deep learning with symbolic reasoning in the quest for AGI. Chollet discusses innovative approaches like deep learning-guided program synthesis and the need for continuous learning models. He also highlights the shift towards System 2 reasoning, reflecting on how this could transform AI's future capabilities and the programming landscape.

Jan 4, 2025 • 2h
Jeff Clune - Agent AI Needs Darwin
Jeff Clune, an AI professor specializing in open-ended evolutionary algorithms, discusses how AI can push the boundaries of creativity. He shares insights on creating 'Darwin Complete' search spaces that encourage continuous skill development in AI agents. Clune emphasizes the challenging concept of 'interestingness' in innovation and how language models can help identify it. He also touches on ethical concerns and the potential for AI to develop unique languages, underscoring the importance of ethical governance in advanced AI research.

Dec 7, 2024 • 3h 43min
Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)
Neel Nanda, a senior research scientist at Google DeepMind, leads the mechanistic interpretability team. At just 25, he explores the complexities of neural networks and the role of sparse autoencoders in AI safety. Nanda discusses challenges in understanding model behaviors, such as reasoning and deception. He emphasizes the need for deeper insights into the internal structures of AI to enhance safety and interpretability. The conversation also touches on techniques for extracting meaningful features and open challenges in mechanistic interpretability research.

Dec 1, 2024 • 1h 46min
Jonas Hübotter (ETH) - Test Time Inference
Jonas Hübotter, a PhD student at ETH Zurich specializing in machine learning, delves into his innovative research on test-time computation. He reveals how smaller models can achieve up to 30x efficiency over larger ones by strategically allocating resources during inference. Drawing parallels to Google Earth's dynamic resolution, he discusses the blend of inductive and transductive learning. Hübotter envisions future AI systems that adapt and learn continuously, advocating for hybrid deployment strategies that prioritize intelligent resource management.

Nov 25, 2024 • 1h 45min
How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)
Professor Swarat Chaudhuri, a computer science expert from the University of Texas at Austin and researcher at Google DeepMind, shares fascinating insights into AI's role in mathematics. He discusses his innovative work on COPRA, a GPT-based theorem prover, and emphasizes the significance of neurosymbolic approaches in enhancing AI reasoning. The conversation explores the potential of AI to assist mathematicians in theorem proving and generating conjectures, all while tackling the balance between AI outputs and human interpretability.

Nov 17, 2024 • 2h 30min
Nora Belrose - AI Development, Safety, and Meaning
Nora Belrose, Head of Interpretability Research at EleutherAI, dives into the complexities of AI development and safety. She explores concept erasure in neural networks and its role in bias mitigation. Challenging doomsday fears about advanced AI, she critiques current alignment methods and highlights the limitations of traditional approaches. The discussion broadens to consider the philosophical implications of AI's evolution, including a fascinating link between Buddhism and the search for meaning in a future shaped by automation.

Nov 13, 2024 • 2h 9min
Why Your GPUs are underutilised for AI - CentML CEO Explains
Gennady Pekhimenko, CEO of CentML and associate professor at the University of Toronto, dives into the intricacies of AI system optimization. He illuminates the challenges of GPU utilization, revealing why many companies achieve only around 10% of their hardware's potential. The conversation also touches on 'dark silicon', the competition between open-source and proprietary AI, and the need for strategic refinement in enterprise AI infrastructure. Pekhimenko's insights blend technical depth with practical advice for enhancing machine learning applications in modern businesses.

Nov 11, 2024 • 4h 19min
Eliezer Yudkowsky and Stephen Wolfram on AI X-risk
Eliezer Yudkowsky, an AI researcher focused on safety, and Stephen Wolfram, the inventor behind Mathematica, tackle the looming existential risks of advanced AI. They debate the challenges of aligning AI goals with human values and ponder the unpredictable nature of AI's evolution. Yudkowsky warns of emergent AI objectives diverging from humanity's best interests, while Wolfram emphasizes understanding AI's computational nature. Their conversation digs deep into ethical implications, consciousness, and the paradox of AI goals.

Nov 6, 2024 • 2h 43min
Pattern Recognition vs True Intelligence - Francois Chollet
Francois Chollet, a leading AI expert and creator of ARC-AGI, dives into the nature of intelligence and consciousness. He argues that true intelligence is about adapting to new situations, contrasting it with current AI's memory-based processes. Chollet introduces his 'Kaleidoscope Hypothesis,' positing that complex systems stem from simple patterns. He explores the gradual development of consciousness in children and critiques existing AI benchmarks, emphasizing the need for understanding intelligence beyond mere performance metrics.