
Machine Learning Street Talk (MLST)

Latest episodes

86 snips
Jan 25, 2025 • 1h 21min

Nicholas Carlini (Google DeepMind)

Nicholas Carlini, a research scientist at Google DeepMind specializing in AI security, shares compelling insights into the vulnerabilities of machine learning systems. He discusses the unexpected chess-playing prowess of large language models and the broader implications of emergent behaviors. Carlini emphasizes the need for robust security design to defend against attacks on models, as well as the ethical considerations surrounding AI-generated code. He also highlights how language models can significantly boost programming productivity, while urging users to stay mindful of their limitations.
28 snips
Jan 23, 2025 • 1h 32min

Subbarao Kambhampati - Do o1 models search?

In this engaging discussion, Professor Subbarao Kambhampati, an expert in AI reasoning systems, dives into OpenAI's o1 model. He explains how it employs reinforcement learning akin to AlphaGo and introduces the concept of 'fractal intelligence,' where models exhibit unpredictable performance. The conversation contrasts single-model approaches with hybrid systems like Google's, and addresses the balance between AI as an intelligence amplifier versus an autonomous decision-maker, shedding light on the computational costs associated with advanced reasoning systems.
164 snips
Jan 20, 2025 • 1h 18min

How Do AI Models Actually Think? - Laura Ruis

Laura Ruis, a PhD student at University College London and researcher at Cohere, discusses her groundbreaking work on the reasoning capabilities of large language models. She delves into whether these models rely on fact retrieval or procedural knowledge. The conversation highlights the influence of pre-training data on AI behavior and examines the complexities of defining intelligence. Ruis also explores the philosophical implications of AI agency and creativity, raising questions about how AI models mimic human reasoning and the potential risks they pose.
37 snips
Jan 16, 2025 • 1h 13min

Jurgen Schmidhuber on Humans co-existing with AIs

Jürgen Schmidhuber, a pioneer in generative AI and deep learning, shares his thought-provoking insights on the future of AI and humanity. He argues that superintelligent AIs will prioritize safeguarding life rather than threatening it, envisioning a cosmic collaboration rather than conflict. Schmidhuber also traces the historical roots of AI innovations, pointing out often-overlooked contributions from Ukraine and Japan. He discusses groundbreaking concepts like his 1991 consciousness model and the potential for AI to venture beyond Earth, sparking a future of shared goals between humans and machines.
56 snips
Jan 15, 2025 • 1h 42min

Yoshua Bengio - Designing out Agency for Safe AI

Yoshua Bengio, a pioneering deep learning researcher and Turing Award winner, delves into the pressing issues of AI safety and design. He warns about the dangers of goal-seeking AIs and emphasizes the need for non-agentic AIs to mitigate existential threats. Bengio discusses reward tampering, the complexity of AI agency, and the importance of global governance. He envisions AI as a transformative tool for science and medicine, exploring how responsible development can harness its potential while maintaining safety.
231 snips
Jan 9, 2025 • 1h 27min

Francois Chollet - ARC reflections - NeurIPS 2024

Francois Chollet, AI researcher and creator of Keras, dives into the 2024 ARC-AGI competition, revealing an impressive accuracy jump from 33% to 55.5%. He emphasizes the importance of combining deep learning with symbolic reasoning in the quest for AGI. Chollet discusses innovative approaches like deep learning-guided program synthesis and the need for continuous learning models. He also highlights the shift towards System 2 reasoning, reflecting on how this could transform AI's future capabilities and the programming landscape.
269 snips
Jan 4, 2025 • 2h

Jeff Clune - Agent AI Needs Darwin

Jeff Clune, an AI professor specializing in open-ended evolutionary algorithms, discusses how AI can push the boundaries of creativity. He shares insights on creating 'Darwin Complete' search spaces that encourage continuous skill development in AI agents. Clune emphasizes the challenging concept of 'interestingness' in innovation and how language models can help identify it. He also touches on ethical concerns and the potential for AI to develop unique languages, underscoring the importance of ethical governance in advanced AI research.
116 snips
Dec 7, 2024 • 3h 43min

Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

Neel Nanda, a senior research scientist at Google DeepMind, leads the mechanistic interpretability team. At just 25, he explores the complexities of neural networks and the role of sparse autoencoders in AI safety. Nanda discusses the challenges of understanding model behaviors such as reasoning and deception, and emphasizes the need for deeper insight into the internal structures of AI models to improve safety and interpretability. The conversation also touches on techniques for extracting meaningful features and the practical challenges of mechanistic interpretability research.
65 snips
Dec 1, 2024 • 1h 46min

Jonas Hübotter (ETH) - Test Time Inference

Jonas Hübotter, a PhD student at ETH Zurich specializing in machine learning, delves into his research on test-time computation. He explains how smaller models can be up to 30x more efficient than larger ones by strategically allocating compute during inference. Drawing parallels to Google Earth's dynamic resolution, he discusses the blend of inductive and transductive learning. Hübotter envisions future AI systems that adapt and learn continuously, advocating for hybrid deployment strategies that prioritize intelligent resource management.
32 snips
Nov 25, 2024 • 1h 45min

How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)

Professor Swarat Chaudhuri, a computer science expert from the University of Texas at Austin and researcher at Google DeepMind, shares fascinating insights into AI's role in mathematics. He discusses his innovative work on COPRA, a GPT-based theorem prover, and emphasizes the significance of neurosymbolic approaches in enhancing AI reasoning. The conversation explores the potential of AI to assist mathematicians in theorem proving and generating conjectures, all while tackling the balance between AI outputs and human interpretability.
