
Machine Learning Street Talk (MLST)

Latest episodes

184 snips
Jul 6, 2025 • 2h 16min

The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)

Kenneth Stanley, an AI researcher known for his work on open-endedness, and Akarsh Kumar, an MIT PhD student, discuss the Fractured Entangled Representation Hypothesis: the idea that conventionally trained neural networks can produce the right outputs while their internal representations remain fractured and entangled rather than organized and modular. They contrast this with the cleaner representations that can emerge from open-ended search, emphasize the role of creativity and human intuition in genuine innovation, highlight the pitfalls of models that mimic without understanding, and argue that embracing complexity and adaptability is key to unlocking AI's full potential.
54 snips
Jul 5, 2025 • 16min

The Fractured Entangled Representation Hypothesis (Intro)

In this discussion, Kenneth Stanley, SVP of Open Endedness at Lila Sciences and former OpenAI researcher, examines the flaws of current AI training methods. He argues that today's AI is like a brilliant impostor, producing impressive results despite chaotic inner workings. Drawing on his earlier Picbreeder experiment, Stanley advocates an understanding-driven approach to AI development, one that fosters creativity and builds modular comprehension rather than optimizing toward fixed objectives.
155 snips
Jun 24, 2025 • 2h 7min

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

Gary Marcus, a cognitive scientist and AI skeptic, warns that current systems still suffer from fundamental cognitive shortcomings. Daniel Kokotajlo, a former OpenAI insider, predicts AGI could arrive by 2028 if current trends hold. Dan Hendrycks, director of the Center for AI Safety, stresses the urgent need for transparency and collaboration to avert potential disasters. The trio explores the troubling psychological dynamics among AI developers and the red lines that must not be crossed in this rapidly evolving field.
324 snips
Jun 17, 2025 • 1h 8min

How AI Learned to Talk and What It Means - Prof. Christopher Summerfield

Professor Christopher Summerfield of Oxford University, author of "These Strange New Minds," explains how AI learned to communicate from text alone, overturning prior assumptions about what that would require. He surveys the philosophical underpinnings of the debate, contrasting empiricist and rationalist views, and debunks common myths about AI's cognitive capabilities. The conversation also covers the societal implications of personalized AI, the complexities of agency and authenticity, and the relationship between AI creativity and human expression.
53 snips
May 26, 2025 • 51min

"Blurring Reality" - Chai's Social AI Platform (SPONSORED)

William Beauchamp, founder of Chai, and engineer Tom Lu discuss building one of the largest AI companion ecosystems and the surprising scale of demand for AI companionship. They explain techniques behind the platform, including reinforcement learning from human feedback (RLHF) and model blending (sketched below), and examine the ethical challenges of user engagement, emphasizing responsible AI interaction amid rapid advances in conversational technology.
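One way to picture the model-blending idea, as a hedged sketch rather than Chai's actual implementation: route each conversational turn to a model sampled at random from an ensemble, so the conversation as a whole is produced by a mixture of models. The `BlendedChatbot` class and the toy responder functions below are hypothetical stand-ins, not Chai's API.

```python
import random

class BlendedChatbot:
    """Sketch of per-turn model blending: each reply is generated by a
    model sampled uniformly from an ensemble, so no single model owns
    the conversation. All names here are hypothetical."""

    def __init__(self, models):
        self.models = models  # callables: history (list[str]) -> reply (str)
        self.history = []

    def reply(self, user_message):
        self.history.append("USER: " + user_message)
        model = random.choice(self.models)  # the "blending" step
        answer = model(self.history)
        self.history.append("BOT: " + answer)
        return answer

# Toy stand-in "models" so the sketch runs end to end.
def curious(history):
    return "Tell me more about that."

def upbeat(history):
    return "That sounds wonderful!"

bot = BlendedChatbot([curious, upbeat])
print(bot.reply("I started learning piano this week."))
```

The intuition behind such a scheme is that a blend of smaller models can produce more varied conversations than any single model serving every turn.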
326 snips
May 14, 2025 • 1h 14min

Google AlphaEvolve - Discovering new science (exclusive interview)

Matej Balog and Alexander Novikov of Google DeepMind present AlphaEvolve, an AI coding agent for advanced algorithm discovery. They describe how it has improved on long-established results, including outperforming Strassen's 1969 algorithm for 4x4 matrix multiplication, and how it adapts to problems of varying complexity. The pair explain how AlphaEvolve couples an evolutionary search loop with LLM-proposed code edits to keep improving candidate algorithms, discuss challenges such as the halting problem when evaluating generated programs, and stress the need to blend AI capabilities with human insight for innovative solutions.
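To make the evolutionary framing concrete, here is a stripped-down sketch of a generic evolutionary search loop. This is a toy under stated assumptions, not AlphaEvolve itself: in the real system the candidates are programs, the mutation step is an LLM proposing code edits, and fitness comes from automated evaluation of the candidate code.

```python
import random

def evolve(seed, fitness, mutate, generations=100, pop_size=20):
    """Generic evolutionary loop: score a population, keep the best,
    refill with mutated copies of the survivors. Here seed, fitness,
    and mutate are arbitrary user-supplied objects and functions."""
    population = [seed]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: max(1, pop_size // 4)]  # truncation selection
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

# Toy instantiation: evolve an 8-vector toward all ones.
target = [1.0] * 8
fitness = lambda v: -sum((a - b) ** 2 for a, b in zip(v, target))
mutate = lambda v: [x + random.gauss(0.0, 0.1) for x in v]
print([round(x, 2) for x in evolve([0.0] * 8, fitness, mutate)])
```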
152 snips
Apr 23, 2025 • 35min

Prof. Randall Balestriero - LLMs without pretraining and SSL

Randall Balestriero, an AI researcher known for his work on self-supervised learning and geographic bias, presents several counterintuitive findings about AI training. He shows that large language models can perform well even without extensive pre-training, and that self-supervised and supervised learning are more closely related than they appear, which suggests ways to improve both. He also examines geographic biases in climate models, demonstrating the risks of relying on their predictions for vulnerable regions, a finding with significant policy implications.
269 snips
Apr 8, 2025 • 1h 17min

How Machines Learn to Ignore the Noise (Kevin Ellis + Zenna Tavares)

Prof. Kevin Ellis, an AI and cognitive science researcher at Cornell University, and Dr. Zenna Tavares, co-founder of BASIS, explore how AI can learn the way humans do: generating knowledge from minimal data through exploration and experimentation. They stress compositionality, building complex ideas from simple ones (see the sketch below), and the need for AI to form abstractions without getting lost in detail. By blending different learning methods, they envision AI that tackles real-world challenges more robustly.
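Compositionality, in the sense discussed here, is the ability to assemble complex behavior from a small library of simple, reusable pieces. A minimal sketch in Python; the primitives and pipeline are invented for illustration, not taken from the episode:

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# A tiny library of simple primitives over lists of ints.
double = lambda xs: [2 * x for x in xs]
evens  = lambda xs: [x for x in xs if x % 2 == 0]
total  = lambda xs: sum(xs)

# A "complex idea" built compositionally: the sum of the doubled evens.
pipeline = compose(total, double, evens)
print(pipeline([1, 2, 3, 4, 5]))  # 12
```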
281 snips
Apr 2, 2025 • 1h 36min

Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

Eiso Kant, CTO of Poolside AI, shares his view of the future of AI-driven coding. He explains how poolside's approach, built around reinforcement learning from code execution feedback, could transform software development, and he predicts human-level coding AI within 18 to 36 months. Kant discusses the balance between model scaling and effective customization for enterprises, emphasizes accessibility in coding, and predicts a shift toward more intuitive, collaborative interaction between developers and AI.
110 snips
Mar 30, 2025 • 1h 37min

The Compendium - Connor Leahy and Gabriel Alfour

Connor Leahy and Gabriel Alfour, AI researchers from Conjecture, dive deep into the critical issues of Artificial Superintelligence (ASI) safety. They discuss the existential risks of uncontrolled AI advancements, warning that a superintelligent AI could dominate humanity as humans do less intelligent species. The conversation also touches on the need for robust institutional support and ethical governance to navigate the complexities of AI alignment with human values while critiquing prevailing ideologies like techno-feudalism.
