Machine Learning Street Talk (MLST)

21 snips
Oct 25, 2025 • 41min

The Universal Hierarchy of Life - Prof. Chris Kempes [SFI]

In this engaging discussion, Prof. Chris Kempes, a quantitative biophysicist at the Santa Fe Institute, explores the search for a universal theory of life that transcends Earth-bound definitions. He introduces a three-level hierarchy: Materials, Constraints, and Principles, highlighting how different life forms could emerge from diverse substrates. Chris delves into the convergence of evolution, using the eye as a compelling example, and raises thought-provoking questions about whether concepts like culture and AI can also be considered forms of life.
48 snips
Oct 21, 2025 • 60min

Google Researcher Shows Life "Emerges From Code" - Blaise Agüera y Arcas

Blaise Agüera y Arcas, a pioneering scientist and author of "What Is Intelligence?", shares revolutionary ideas on the relationship between life and intelligence. He argues that DNA functions as a computer program, proposing that evolution's complexity comes from merging systems rather than just mutations. Blaise also discusses his BFF experiment, showing how self-replicating programs can emerge from randomness. He explores how both AI and human intelligence are part of a larger collective, reshaping our understanding of purpose and consciousness.
62 snips
Oct 18, 2025 • 1h 20min

The Secret Engine of AI - Prolific [Sponsored] (Sara Saab, Enzo Blindow)

Sara Saab, VP of Product at Prolific with a background in cognitive science, and Enzo Blindow, VP of Data and AI at Prolific with a background in economics, discuss the pivotal role of human feedback in AI. They stress that non-deterministic AI systems require human oversight more than ever, since optimizing for benchmarks can mislead on real usability. Exploring the ecological context of intelligence, they advocate for a participatory approach to evaluation that captures social norms and emphasizes cultural alignment.
159 snips
Oct 4, 2025 • 1h 1min

AI Agents Can Code 10,000 Lines of Hacking Tools In Seconds - Dr. Ilia Shumailov (ex-GDM)

Dr. Ilia Shumailov is a former DeepMind AI security researcher now focused on building security tools for AI agents. He delves into the unique challenges posed by AI agents operating 24/7, generating hacking tools at unprecedented speeds. Ilia emphasizes that traditional security measures fall short and discusses new adversarial threats, including prompt injection attacks. He also explores the risks of model collapse and the importance of fine-grained policies for AI behavior, warning that as AI evolves, its unpredictability could lead to significant security vulnerabilities.
163 snips
Sep 27, 2025 • 1h 8min

New top score on ARC-AGI-2-pub (29.4%) - Jeremy Berman

In this discussion, Jeremy Berman, a research scientist at Reflection AI and recent winner of the ARC-AGI v2 leaderboard, shares his insights on advancing AI reasoning. He advocates for AI systems that can synthesize new knowledge rather than merely memorizing data. Berman explores the limitations of current neural networks, emphasizing the potential of evolutionary program synthesis and natural language approaches. He discusses innovative concepts like knowledge trees and the evolution of AI models capable of true reasoning, pushing boundaries in artificial intelligence.
282 snips
Sep 19, 2025 • 2h 4min

Deep Learning is Not So Mysterious or Different - Prof. Andrew Gordon Wilson (NYU)

Professor Andrew Gordon Wilson from NYU highlights the misconceptions in AI, particularly around model complexity and the bias-variance trade-off. He challenges the traditional view that complexity leads to overfitting, arguing that larger models can actually prefer simpler functions. Wilson discusses the importance of inductive biases and how they can improve generalization. He shares insights on practical model construction, advocating for a blend of expressiveness and simplicity to enhance performance across different data scales.
166 snips
Sep 10, 2025 • 1h 22min

Karl Friston - Why Intelligence Can't Get Too Large (Goldilocks principle)

In this enlightening discussion, Professor Karl Friston, a leading neuroscientist known for his pioneering work on the Free Energy Principle, shares his insights into intelligence and consciousness. He delves into the intricacies of epistemic foraging and structure learning, emphasizing the challenges of understanding causal relationships. Friston redefines intelligence, suggesting it transcends biology and extends even to entities like viruses. The conversation also explores the complexity necessary for consciousness, offering a fascinating glimpse into the future of artificial systems.
120 snips
Sep 4, 2025 • 1h 35min

The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore)

Cristopher Moore, a professor at the Santa Fe Institute with expertise in physics and machine learning, shares his insights on AI's capabilities and limitations. He discusses the complexity of puzzles like Sudoku and how AI struggles with them compared to human creativity. Cristopher emphasizes the strengths of transformer models in recognizing structured data, while highlighting their challenges in nuanced problem-solving. He also explores the philosophical implications of human-like intelligence in AI and the quest for algorithmic justice.
163 snips
Aug 28, 2025 • 1h 6min

Michael Timothy Bennett: Defining Intelligence and AGI Approaches

Dr. Michael Timothy Bennett, a computer scientist known for his thought-provoking views on AI and consciousness, challenges conventional ideas of intelligence. He defines intelligence as 'adaptation with limited resources,' steering the conversation away from just scaling AI models. Bennett discusses various frameworks for artificial general intelligence and the importance of understanding causality in intelligent systems. He delves into deep philosophical questions about consciousness, arguing that true adaptability, much like biological systems, is key to understanding life itself.
221 snips
Aug 14, 2025 • 1h 46min

Superintelligence Strategy (Dan Hendrycks)

In this engaging discussion, Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper, argues for a cautious approach to AI, likening it to nuclear technology rather than electricity. He critiques the dangerous notion of a U.S. 'Manhattan Project' for AI, citing its risks for global stability. The conversation also dives into the complexities of AI alignment, the need for innovative benchmarks, and the philosophical implications of superintelligence, emphasizing cooperation over competition in this evolving landscape.
