

Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, PhD (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a doctorate from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Episodes

Aug 5, 2025 • 58min
DeepMind Genie 3 [World Exclusive] (Jack Parker-Holder, Shlomi Fruchter)
Shlomi Fruchter, a Research Director at Google DeepMind, and Jack Parker-Holder, a research scientist on the open-endedness team, unveil Genie 3, a model that creates immersive, interactive 3D worlds from text prompts. It can generate environments in seconds while maintaining remarkable consistency across interactions. They discuss the evolution from Genie 2 to Genie 3, emphasizing improvements in memory and human interaction. The hosts dive into potential applications in game design and robotics, hinting at a future where AI can simulate complex environments with ease.

Jul 31, 2025 • 50min
Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)
David Krakauer, President of the Santa Fe Institute, delves into the distinction between knowledge and intelligence, advocating that true intelligence solves new problems with limited information. He critiques AI's dependence on massive data, labeling it as "really shit programming." Krakauer challenges the tech community's notion of emergence in large language models, emphasizing that genuine emergence involves profound internal changes in systems. He also discusses cultural evolution as a rapid form of adaptation, warning against over-reliance on AI that risks diminishing human cognitive skills.

Jul 21, 2025 • 1h 24min
Pushing Compute to the Limits of Physics (Guillaume Verdon)
Guillaume Verdon, founder of the thermodynamic computing startup Extropic and known online as "Beff Jezos," shares his journey from aspiring physicist to innovator. He discusses his pioneering work on thermodynamic computers that harness the natural randomness of electrons for efficient AI workloads. Verdon explains Effective Accelerationism, his case for rapid technological progress as a way to strengthen civilization, and argues for embracing exploration and innovation over fear of stagnation as humans and AI increasingly intersect.

Jul 6, 2025 • 2h 16min
The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)
Kenneth Stanley, an AI researcher known for his work on open-endedness, and Akarsh Kumar, an MIT PhD student, explore fascinating themes in AI. They discuss the Fractured Entangled Representation Hypothesis, challenging traditional views on neural networks. The duo emphasizes the significance of creativity in AI and the necessity of human intuition for true innovation. Additionally, they highlight the pitfalls of current models that mimic without understanding, and stress the value of embracing complexity and adaptability to unlock AI's full potential.

Jul 5, 2025 • 16min
The Fractured Entangled Representation Hypothesis (Intro)
In this engaging discussion, Kenneth Stanley, SVP of Open-Endedness at Lila Sciences and former OpenAI researcher, dives deep into the flaws of current AI training methods. He explains how today's AI acts like a brilliant impostor, producing impressive results despite its chaotic inner representations. Stanley proposes an alternative approach to AI development inspired by his earlier experiment, Picbreeder, advocating for an understanding-driven method that fosters creativity and modular comprehension. The conversation challenges conventional wisdom and inspires fresh perspectives on AI's potential.

Jun 24, 2025 • 2h 7min
Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)
In this enlightening discussion, Gary Marcus, a cognitive scientist and prominent AI skeptic, cautions against the cognitive shortcomings of current AI systems. Daniel Kokotajlo, a former OpenAI insider, predicts we could see AGI by 2028 based on current trends. Dan Hendrycks, director of the Center for AI Safety, stresses the urgent need for transparency and collaboration to avert potential disasters. The trio explores the alarming psychological dynamics among AI developers and the critical boundaries that must be respected in this rapidly evolving field.

Jun 17, 2025 • 1h 8min
How AI Learned to Talk and What It Means - Prof. Christopher Summerfield
Professor Christopher Summerfield of Oxford University, author of "These Strange New Minds," shares insights on how AI learned to communicate through text alone, challenging prior assumptions. He traces the philosophical underpinnings of AI, contrasting empiricist and rationalist views, and debunks myths surrounding AI's cognitive capabilities. The conversation turns to the societal implications of personalized AI, the complexities of agency and authenticity, and the relationship between AI creativity and human expression.

May 26, 2025 • 51min
"Blurring Reality" - Chai's Social AI Platform (SPONSORED)
William Beauchamp, founder of Chai, and engineer Tom Lu explore the fascinating realm of social AI. They discuss how Chai developed one of the largest AI companion ecosystems, revealing the surprising demand for AI companionship. The duo delves into innovative techniques like reinforcement learning from human feedback and model blending. They also examine the ethical challenges of user engagement, emphasizing the importance of responsible AI interactions amidst rapid advancements in conversational technology.

May 14, 2025 • 1h 14min
Google AlphaEvolve - Discovering New Science (Exclusive Interview)
Matej Balog and Alexander Novikov from Google DeepMind unveil their work on AlphaEvolve, an AI coding agent designed for advanced algorithm discovery. They discuss its ability to outperform established algorithms, including Strassen's for matrix multiplication, and to adapt to varying problem complexities. The duo explores how AlphaEvolve uses an evolutionary process to iteratively improve candidate algorithms, navigating challenges such as the halting problem, while emphasizing the need to blend AI capabilities with human insight for innovative solutions.

Apr 23, 2025 • 35min
Prof. Randall Balestriero - LLMs without pretraining and SSL
Randall Balestriero, an AI researcher known for his work on self-supervised learning, presents surprising findings about AI training. He shows that large language models can perform well on certain tasks even without extensive pre-training, and highlights deep similarities between self-supervised and supervised learning, along with their potential for improvement. He also discusses geographic biases in climate models, demonstrating the risks of relying on their predictions for vulnerable regions and the significant policy implications that follow.