Machine Learning Street Talk (MLST)

76 snips
Sep 10, 2025 • 1h 22min

Karl Friston - Why Intelligence Can't Get Too Large (Goldilocks principle)

In this enlightening discussion, Professor Karl Friston, a leading neuroscientist known for his pioneering work on the Free Energy Principle, shares his insights into intelligence and consciousness. He delves into the intricacies of epistemic foraging and structure learning, emphasizing the challenges of understanding causal relationships. Friston redefines intelligence, suggesting it transcends biology and extends even to entities like viruses. The conversation also explores how much complexity consciousness requires, offering a fascinating glimpse into the future of artificial systems.
96 snips
Sep 4, 2025 • 1h 35min

The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore)

Cristopher Moore, a Professor at the Santa Fe Institute with expertise in physics and machine learning, shares his insights on AI's capabilities and limitations. He discusses the computational complexity of puzzles like Sudoku and why AI still struggles with them where human creativity succeeds. Cristopher emphasizes the strengths of transformer models in recognizing structured data while highlighting their weaknesses in nuanced problem-solving. He also takes up the philosophical question of whether AI can achieve human-like understanding, and the quest for algorithmic justice.
149 snips
Aug 28, 2025 • 1h 6min

Michael Timothy Bennett: Defining Intelligence and AGI Approaches

Dr. Michael Timothy Bennett, a computer scientist known for his thought-provoking views on AI and consciousness, challenges conventional ideas of intelligence. He defines intelligence as 'adaptation with limited resources,' steering the conversation away from merely scaling AI models. Bennett discusses various frameworks for artificial general intelligence and the importance of understanding causality in intelligent systems. He delves into deep philosophical questions about consciousness, arguing that true adaptability, of the kind biological systems display, is key to understanding life itself.
173 snips
Aug 14, 2025 • 1h 46min

Superintelligence Strategy (Dan Hendrycks)

In this engaging discussion, Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper, argues for a cautious approach to AI, likening it to nuclear technology rather than electricity. He critiques the dangerous notion of a U.S. 'Manhattan Project' for AI, citing its risks for global stability. The conversation also dives into the complexities of AI alignment, the need for innovative benchmarks, and the philosophical implications of superintelligence, emphasizing cooperation over competition in this evolving landscape.
186 snips
Aug 5, 2025 • 58min

DeepMind Genie 3 [World Exclusive] (Jack Parker-Holder, Shlomi Fruchter)

Shlomi Fruchter, a Research Director at Google DeepMind, and Jack Parker-Holder, a research scientist on the open-endedness team, unveil Genie 3, a groundbreaking AI model that creates immersive, interactive 3D worlds from text prompts. The model can generate environments in seconds and maintains remarkable consistency across interactions. They discuss the evolution from Genie 2 to Genie 3, emphasizing improvements in memory and human interaction, and consider potential applications in game design and robotics, hinting at a future where AI can simulate complex environments with ease.
262 snips
Jul 31, 2025 • 50min

Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)

David Krakauer, President of the Santa Fe Institute, delves into the distinction between knowledge and intelligence, arguing that true intelligence solves new problems with limited information. He critiques AI's dependence on massive data, labeling it as "really shit programming." Krakauer challenges the tech community's notion of emergence in large language models, emphasizing that genuine emergence involves profound internal changes in systems. He also discusses cultural evolution as a rapid form of adaptation, warning against over-reliance on AI that risks diminishing human cognitive skills.
191 snips
Jul 21, 2025 • 1h 24min

Pushing compute to the limits of physics

Guillaume Verdon, founder of the thermodynamic computing startup Extropic and known online as Beff Jezos, shares his journey from aspiring physicist to innovator. He discusses his pioneering work on thermodynamic computers that harness the natural noise of electrons for efficient AI tasks. Verdon highlights the concept of Effective Accelerationism, advocating for rapid technological progress to enhance civilization. The importance of embracing exploration and innovation over fear of stagnation takes center stage as the conversation turns to the future intersection of humans and AI.
293 snips
Jul 6, 2025 • 2h 16min

The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)

Kenneth Stanley, an AI researcher known for his work on open-endedness, and Akarsh Kumar, an MIT PhD student, explore fascinating themes in AI. They discuss the Fractured Entangled Representation Hypothesis, challenging traditional views on neural networks. The duo emphasizes the significance of creativity in AI and the necessity of human intuition for true innovation. Additionally, they highlight the pitfalls of current models that mimic without understanding, and stress the value of embracing complexity and adaptability to unlock AI's full potential.
61 snips
Jul 5, 2025 • 16min

The Fractured Entangled Representation Hypothesis (Intro)

In this engaging discussion, Kenneth Stanley, SVP of Open Endedness at Lila Sciences and former OpenAI researcher, dives deep into the flaws of current AI training methods. He explains how today's AI is like a brilliant impostor, producing impressive results despite its chaotic inner workings. Stanley introduces a revolutionary approach to AI development inspired by his experiment, 'Picbreeder,' advocating for an understanding-driven method that fosters creativity and modular comprehension. The conversation challenges conventional wisdom and inspires fresh perspectives on AI's potential.
170 snips
Jun 24, 2025 • 2h 7min

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

In this enlightening discussion, Gary Marcus, a cognitive scientist and AI skeptic, warns that today's AI systems still suffer from fundamental cognitive shortcomings. Daniel Kokotajlo, a former OpenAI insider, predicts we could see AGI by 2028 based on current trends. Dan Hendrycks, director of the Center for AI Safety, stresses the urgent need for transparency and collaboration to avert potential disasters. The trio explores the alarming psychological dynamics among AI developers and the critical red lines that must not be crossed in this rapidly evolving field.
