
Machine Learning Street Talk (MLST)

Latest episodes

Apr 8, 2025 • 1h 17min

How Machines Learn to Ignore the Noise (Kevin Ellis + Zenna Tavares)

Prof. Kevin Ellis, an AI and cognitive science expert at Cornell University, and Dr. Zenna Tavares, co-founder of BASIS, explore how AI can learn like humans. They discuss how machines can generate knowledge from minimal data through exploration and experimentation. The duo highlights the importance of compositionality, building complex ideas from simple ones, and the need for AI to grasp abstraction without getting lost in details. By blending different learning methods, they envision smarter AI that can tackle real-world challenges more intuitively.
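
To make the compositionality idea concrete, here is a toy sketch (not from the episode) of the basic move: simple, independently testable primitives get composed into a more complex program, the kind of building-block reuse that library-learning systems in Ellis's line of work automate.

```python
from functools import reduce

# Toy primitives over lists of ints, purely illustrative.
double = lambda xs: [2 * x for x in xs]
evens = lambda xs: [x for x in xs if x % 2 == 0]
rev = lambda xs: xs[::-1]

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# A "complex idea" assembled from simple parts: keep the evens,
# double them, then reverse the result.
program = compose(rev, double, evens)
print(program([1, 2, 3, 4, 5]))  # -> [8, 4]
```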
Apr 2, 2025 • 1h 36min

Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

Eiso Kant, CTO of Poolside AI, shares his insights on the future of AI-driven coding. He explains how the company's reinforcement-learning approach is intended to transform software development, predicting human-level coding AI within 18-36 months. Kant discusses the balance between model scaling and effective customization for enterprises, emphasizes the importance of accessibility in coding, and foresees a shift in how developers interact with AI that makes coding more intuitive and collaborative for everyone.
Mar 30, 2025 • 1h 37min

The Compendium - Connor Leahy and Gabriel Alfour

Connor Leahy and Gabriel Alfour, AI researchers from Conjecture, dive deep into the critical issues of Artificial Superintelligence (ASI) safety. They discuss the existential risks of uncontrolled AI advancements, warning that a superintelligent AI could dominate humanity as humans do less intelligent species. The conversation also touches on the need for robust institutional support and ethical governance to navigate the complexities of AI alignment with human values while critiquing prevailing ideologies like techno-feudalism.
Mar 24, 2025 • 54min

ARC Prize v2 Launch! (Francois Chollet and Mike Knoop)

Francois Chollet, the AI researcher behind Keras and the ARC benchmark, joins Mike Knoop, his co-founder on the ARC Prize, to launch the prize's second edition. They discuss how ARC v2 incorporates human calibration and adversarial selection so that even top LLMs struggle with it. The conversation covers the evolution from ARC v1 to v2, the difficulty of designing good AI tasks, and the need for rigorous testing methods to bridge the gap between human and AI intelligence on the path to artificial general intelligence.
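
For readers new to the benchmark: public ARC tasks are JSON files holding "train" (demonstration) and "test" input/output grid pairs, where a grid is a 2D array of colour indices 0-9. A minimal harness for checking a candidate solver against a task's demonstrations might look like this sketch (the `transpose` rule and the file path are hypothetical):

```python
import json

def check_solver(task_path, solve):
    """Return True if `solve` reproduces every demonstration output.

    An ARC task file holds "train" and "test" lists of
    {"input": grid, "output": grid} pairs.
    """
    with open(task_path) as f:
        task = json.load(f)
    return all(solve(pair["input"]) == pair["output"]
               for pair in task["train"])

# Hypothetical candidate rule: transpose the grid.
transpose = lambda grid: [list(row) for row in zip(*grid)]
# check_solver("path/to/task.json", transpose)
```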
Mar 22, 2025 • 1h 4min

Test-Time Adaptation: the key to reasoning with DL (Mohamed Osman)

Mohamed Osman, an AI researcher at Tufa Labs in Zurich, discusses the strategies behind his team's success in the 2024 ARC challenge. He highlights test-time fine-tuning, emphasizing its role in enhancing model performance. The conversation dives into the balance between flexibility and correctness in neural networks, as well as techniques like synthetic data and novel voting mechanisms. Osman also critiques current compute strategies and explores the need for adaptability in AI models, shedding light on the future of machine learning.
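
Test-time fine-tuning means briefly adapting a copy of the model to each test task's demonstration pairs before predicting. Below is a minimal PyTorch sketch of the idea, not the Tufa Labs code; the `augment` callable (task-preserving transforms such as rotations for grid tasks) and `loss_fn(model, x, y)` are assumptions standing in for task-specific choices:

```python
import copy
import torch

def test_time_finetune(model, demo_pairs, augment, loss_fn,
                       steps=32, lr=1e-4):
    """Adapt a throwaway copy of `model` to one task's demonstrations,
    then use it only for that task's test input."""
    adapted = copy.deepcopy(model)      # base weights stay untouched
    adapted.train()
    opt = torch.optim.AdamW(adapted.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in demo_pairs:
            xa, ya = augment(x, y)      # fresh task-preserving variant
            opt.zero_grad()
            loss_fn(adapted, xa, ya).backward()
            opt.step()
    adapted.eval()
    return adapted
```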
Mar 19, 2025 • 1h 11min

GSM-Symbolic paper - Iman Mirzadeh (Apple)

Iman Mirzadeh, an AI researcher at Apple, presents fresh insights from his GSM-Symbolic paper. He distinguishes between intelligence and achievement in AI, emphasizing that current methodologies fall short. The conversation explores the limitations of Large Language Models in genuine reasoning and the impact of integrating tools for improved AI performance. Mirzadeh advocates for rethinking benchmarks to capture true intelligence and discusses the importance of active engagement in learning processes, suggesting a paradigm shift is essential for future advancements.
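
The mechanism at the heart of GSM-Symbolic is templating: keep a word problem's reasoning structure fixed, resample names and numbers, and measure how much a model's accuracy swings across the variants. A toy sketch of the generation step (the template is illustrative, not one from the paper's benchmark):

```python
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} more on Tuesday. "
            "How many apples does {name} have now?")

def make_variant(rng):
    """Sample one surface variant; the underlying reasoning (a + b)
    never changes, so a robust reasoner's accuracy should be stable."""
    name = rng.choice(["Sara", "Liam", "Maya"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

rng = random.Random(0)
question, answer = make_variant(rng)
print(question)
print(answer)   # ground truth computed from the sampled values
```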
Mar 18, 2025 • 1h 23min

Reasoning, Robustness, and Human Feedback in AI - Max Bartolo (Cohere)

Max Bartolo, a researcher at Cohere, dives into the world of machine learning, focusing on model reasoning and robustness. He highlights the DynaBench platform's role in dynamic benchmarking and the complex challenges of evaluating AI performance. The conversation reveals the limitations of human feedback in training AI and the surprising reliance on distributed knowledge. Bartolo discusses the impact of adversarial examples on model reliability and emphasizes the need for tailored approaches to enhance AI systems, ensuring they align with human values.
Mar 12, 2025 • 1h 41min

Tau Language: The Software Synthesis Future (sponsored)

Ohad Asor, a mathematician and software developer specializing in AI, introduces the Tau language. He highlights the inability of machine learning to guarantee correctness and explains how Tau provides a logical framework for software development. Asor outlines potential applications in blockchain systems and decentralized governance. The conversation touches on program synthesis, user autonomy in software control, and the role of language in AI, advocating a future where technology aligns more closely with human intent.
Mar 10, 2025 • 55min

John Palazza - Vice President of Global Sales @ CentML (sponsored)

Join John Palazza, Vice President of Global Sales at CentML, as he delves into the vital role of infrastructure optimization for AI and machine learning. He highlights the shift from innovation to production in enterprises, emphasizing efficient GPU utilization and cost management. The conversation touches on the open-source versus proprietary debate, the rise of AI agents, and the importance of avoiding vendor lock-in. Palazza also discusses strategic partnerships with industry giants like NVIDIA that shape business strategies in a competitive cloud landscape.
Mar 8, 2025 • 1h 1min

Transformers Need Glasses! - Federico Barbero

Federico Barbero, lead author of the paper from DeepMind/Oxford, dives into the quirks of transformers and why large language models falter at tasks like counting. He reveals architectural bottlenecks that cap their performance and, drawing parallels with graph neural networks, shows how the softmax function prevents attention from making sharp decisions as inputs grow. But not all hope is lost: Federico shares 'glasses' for transformers, including input tweaks and structural modifications that improve their clarity and efficiency.
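
One way to see the softmax bottleneck discussed here (a toy illustration, not code from the paper): if the logits a trained model can produce are bounded, the largest attention weight any single token can receive decays toward uniform as the sequence grows, so attention cannot stay sharp over long inputs.

```python
import numpy as np

def max_attention_weight(n, logit_gap=5.0):
    """Largest softmax weight when one token's logit beats the
    other n - 1 logits by a fixed, bounded gap."""
    logits = np.zeros(n)
    logits[0] = logit_gap
    w = np.exp(logits - logits.max())
    return float(w[0] / w.sum())

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} tokens -> max weight {max_attention_weight(n):.4f}")
# The winning weight drifts toward 1/n as n grows: with bounded
# logits, softmax attention disperses over long sequences.
```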
