Machine Learning Street Talk (MLST)

108 snips
Oct 4, 2025 • 1h 1min

AI Agents Can Code 10,000 Lines of Hacking Tools In Seconds - Dr. Ilia Shumailov (ex-GDM)

Dr. Ilia Shumailov is a former DeepMind AI security researcher now focused on building security tools for AI agents. He examines the unique challenges posed by AI agents that operate 24/7 and can generate hacking tools at unprecedented speed. Ilia argues that traditional security measures fall short against this class of adversary and discusses emerging threats, including prompt injection attacks. He also explores the risks of model collapse and the need for fine-grained policies governing AI behavior, warning that as AI evolves, its unpredictability could open significant security vulnerabilities.
147 snips
Sep 27, 2025 • 1h 8min

New top score on ARC-AGI-2-pub (29.4%) - Jeremy Berman

In this discussion, Jeremy Berman, a research scientist at Reflection AI and recent winner of the ARC-AGI v2 leaderboard, shares his insights on advancing AI reasoning. He advocates for AI systems that can synthesize new knowledge rather than merely memorizing data. Berman explores the limitations of current neural networks, emphasizing the potential of evolutionary program synthesis and natural language approaches. He discusses innovative concepts like knowledge trees and the evolution of AI models capable of true reasoning, pushing boundaries in artificial intelligence.
234 snips
Sep 19, 2025 • 2h 4min

Deep Learning is Not So Mysterious or Different - Prof. Andrew Gordon Wilson (NYU)

Professor Andrew Gordon Wilson from NYU highlights the misconceptions in AI, particularly around model complexity and the bias-variance trade-off. He challenges the traditional view that complexity leads to overfitting, arguing that larger models can actually prefer simpler functions. Wilson discusses the importance of inductive biases and how they can improve generalization. He shares insights on practical model construction, advocating for a blend of expressiveness and simplicity to enhance performance across different data scales.
130 snips
Sep 10, 2025 • 1h 22min

Karl Friston - Why Intelligence Can't Get Too Large (Goldilocks principle)

In this enlightening discussion, Professor Karl Friston, a leading neuroscientist and professor known for his pioneering work on the Free Energy Principle, shares his insights into intelligence and consciousness. He delves into the intricacies of epistemic foraging and structure learning, emphasizing the challenges of understanding causal relationships. Friston redefines intelligence, suggesting it transcends biology and includes entities like viruses. The conversation also explores the necessary complexity for consciousness, offering a fascinating glimpse into the future of artificial systems.
109 snips
Sep 4, 2025 • 1h 35min

The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore)

Cristopher Moore, a Professor at the Santa Fe Institute with expertise in physics and machine learning, shares his insights on AI's capabilities and limitations. He discusses the intriguing complexity of puzzles like Sudoku and how AI struggles with them compared to human creativity. Cristopher emphasizes the strengths of transformer models in recognizing structured data, while also highlighting their challenges in nuanced problem-solving. He explores the philosophical implications of AI's understanding of human-like intelligence and the quest for algorithmic justice.
156 snips
Aug 28, 2025 • 1h 6min

Michael Timothy Bennett: Defining Intelligence and AGI Approaches

Dr. Michael Timothy Bennett, a computer scientist known for his thought-provoking views on AI and consciousness, challenges conventional ideas of intelligence. He defines intelligence as 'adaptation with limited resources,' steering the conversation away from just scaling AI models. Bennett discusses various frameworks for artificial general intelligence and the importance of understanding causality in intelligent systems. He delves into deep philosophical questions about consciousness, arguing that true adaptability, much like biological systems, is key to understanding life itself.
221 snips
Aug 14, 2025 • 1h 46min

Superintelligence Strategy (Dan Hendrycks)

In this engaging discussion, Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper, argues for a cautious approach to AI, likening it to nuclear technology rather than electricity. He critiques the dangerous notion of a U.S. 'Manhattan Project' for AI, citing its risks for global stability. The conversation also dives into the complexities of AI alignment, the need for innovative benchmarks, and the philosophical implications of superintelligence, emphasizing cooperation over competition in this evolving landscape.
187 snips
Aug 5, 2025 • 58min

DeepMind Genie 3 [World Exclusive] (Jack Parker-Holder, Shlomi Fruchter)

Shlomi Fruchter, a Research Director at Google DeepMind, and Jack Parker-Holder, a research scientist on the open-endedness team, unveil Genie 3, a new AI model that creates immersive, interactive 3D worlds from text prompts. The model can generate environments in seconds and maintains remarkable consistency across interactions. They discuss the evolution from Genie 2 to Genie 3, emphasizing improvements in memory and human interaction, and explore potential applications in game design and robotics, hinting at a future where AI can simulate complex environments with ease.
298 snips
Jul 31, 2025 • 50min

Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)

David Krakauer, President of the Santa Fe Institute, delves into the distinction between knowledge and intelligence, advocating that true intelligence solves new problems with limited information. He critiques AI's dependence on massive data, labeling it as "really shit programming." Krakauer challenges the tech community's notion of emergence in large language models, emphasizing that genuine emergence involves profound internal changes in systems. He also discusses cultural evolution as a rapid form of adaptation, warning against over-reliance on AI that risks diminishing human cognitive skills.
191 snips
Jul 21, 2025 • 1h 24min

Pushing compute to the limits of physics

Guillaume Verdon, founder of the thermodynamic computing startup Extropic and known online as Beff Jezos, shares his journey from aspiring physicist to hardware innovator. He discusses his work on thermodynamic computers that harness the natural stochasticity of electrons for energy-efficient AI workloads. Verdon also explains Effective Accelerationism, his case for rapid technological progress as a driver of civilization, and the conversation closes on embracing exploration and innovation over fear of stagnation as humans and AI increasingly intersect.
