
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The podcast is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
Latest episodes

Apr 23, 2025 • 54min
Generative Benchmarking with Kelly Hong - #728
Kelly Hong, a researcher at Chroma, delves into generative benchmarking, a vital approach for evaluating retrieval systems with synthetic data. She critiques traditional benchmarks for failing to mimic real-world queries, stressing the importance of aligning LLM judges with human preferences. Kelly explains a two-step process: filtering a corpus down to relevant documents, then generating user-like queries against them to build more realistic evaluation sets. The discussion also covers the nuances of chunking strategies and the differences between benchmark and real-world queries, advocating for more systematic AI evaluation.
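The workflow Kelly describes lends itself to a compact sketch. The snippet below is a hedged illustration of the two-step process, filtering documents with an LLM judge and then generating user-like queries scored with recall@k; the `llm` and `retriever` callables are hypothetical stand-ins, not Chroma's actual API.

```python
# Hedged sketch of generative benchmarking as described in the episode.
# `llm` and `retriever` are hypothetical stand-ins, not Chroma's API.

def generative_benchmark(corpus, llm, retriever, k=10):
    # Step 1: filter, keeping only documents an LLM judge (ideally aligned
    # with human preferences) deems worth querying.
    relevant_docs = [
        doc for doc in corpus
        if "yes" in llm(f"Would a real user ask about this document? yes/no:\n{doc}")
    ]

    # Step 2: generate a realistic, user-like query for each document; the
    # source document is the ground-truth result for its query.
    pairs = [
        (llm(f"Write a short question a real user might ask, answered by:\n{doc}"), doc)
        for doc in relevant_docs
    ]

    # Score retrieval: fraction of queries whose source document appears
    # in the top-k results (recall@k).
    hits = sum(doc in retriever(query, k=k) for query, doc in pairs)
    return hits / max(len(pairs), 1)
```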

Apr 14, 2025 • 1h 34min
Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727
Emmanuel Ameisen, a research engineer at Anthropic specializing in interpretability, shares insights from his recent studies of large language models. He discusses how mechanistic interpretability methods shed light on internal processes, showing how models plan ahead in creative tasks like poetry and perform arithmetic using their own internal algorithms. The conversation traces neural pathways, revealing how hallucinations stem from separate recognition circuits. Emmanuel highlights the challenges of accurately interpreting AI behavior and the importance of understanding these systems for safety and reliability.
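For readers unfamiliar with this style of analysis, the sketch below shows the general flavor of a causal intervention on model internals: record an activation from one run, patch it into another, and measure the effect on the output. It uses generic PyTorch hooks as an assumed stand-in, not Anthropic's circuit-tracing tooling.

```python
import torch

# Generic activation-patching sketch (an assumed stand-in, not Anthropic's
# tooling): swap a "clean" run's activation into a "corrupted" run and see
# how much the target token's logit recovers.

@torch.no_grad()
def patch_and_measure(model, layer, clean_ids, corrupt_ids, target_id):
    cache = {}

    def save_hook(module, inputs, output):
        cache["act"] = output            # record the clean activation

    def patch_hook(module, inputs, output):
        return cache["act"]              # replace with the clean activation

    handle = layer.register_forward_hook(save_hook)
    model(clean_ids)                     # clean run populates the cache
    handle.remove()

    handle = layer.register_forward_hook(patch_hook)
    logits = model(corrupt_ids)          # corrupted run, patched activation
    handle.remove()

    # Assumes the model returns (batch, seq, vocab) logits.
    return logits[0, -1, target_id]
```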

Apr 8, 2025 • 52min
Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726
Maohao Shen, a PhD student at MIT specializing in AI reliability, discusses his groundbreaking work on 'Satori.' He reveals how it enhances language model reasoning through reinforcement learning, enabling self-reflection and exploration. The podcast dives into the innovative Chain-of-Action-Thought approach, which guides models in complex reasoning tasks. Maohao also explains the two-stage training process, including format tuning and self-corrective techniques. The conversation highlights Satori’s impressive performance and its potential to redefine AI reasoning capabilities.
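As a rough illustration of the ideas discussed, here is a hedged sketch of a Chain-of-Action-Thought style rollout: the model interleaves reasoning segments with meta-actions that continue, reflect, or explore, and an outcome reward reinforces trajectories whose self-corrections reach the right answer. The token names, `model.generate` interface, and reward are simplifying assumptions, not Satori's actual code.

```python
# Hedged sketch of a Chain-of-Action-Thought rollout; token names and the
# model interface are assumptions, not Satori's implementation.

META_ACTIONS = ["<|continue|>", "<|reflect|>", "<|explore|>"]
ANSWER_MARKER = "<|answer|>"

def coat_rollout(model, problem, max_steps=8):
    # Stage 1 (format tuning) teaches the model to emit this structure;
    # stage 2 refines the policy with reinforcement learning.
    trajectory = problem
    for _ in range(max_steps):
        step = model.generate(trajectory)   # reasoning segment + meta-action
        trajectory += step
        if ANSWER_MARKER in step:           # model commits to a final answer
            break
    return trajectory

def extract_answer(trajectory):
    # Hypothetical parser: take the text after the final answer marker.
    return trajectory.rsplit(ANSWER_MARKER, 1)[-1].strip()

def outcome_reward(trajectory, gold_answer):
    # Outcome-based signal: reflection steps that repair earlier mistakes
    # get reinforced because they flip the final answer to correct.
    return 1.0 if extract_answer(trajectory) == gold_answer else -1.0
```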

Mar 31, 2025 • 1h 9min
Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725
In this engaging discussion, Drago Anguelov, VP of AI foundations at Waymo, sheds light on the groundbreaking integration of foundation models in autonomous driving. He explains how Waymo harnesses large-scale machine learning and multimodal sensor data to enhance perception and planning. Drago also addresses safety measures, including rigorous validation frameworks and predictive models. The conversation dives into the challenges of scaling these models across diverse driving environments and the future of AV testing through sophisticated simulations.

Mar 24, 2025 • 51min
Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini - #724
Join Julie Kallini, a PhD student at Stanford, as she dives into the future of language models. Discover her groundbreaking work on MrT5, a model that tackles tokenization failures and enhances efficiency for multilingual tasks. Julie discusses the creation of 'impossible languages' and the insights they offer into language acquisition and model biases. Hear about innovative architecture improvements and the importance of adapting tokenization methods for underrepresented languages. A fascinating exploration at the intersection of linguistics and AI!
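To make the idea concrete, here is a hedged sketch of the kind of learned deletion gate MrT5 adds to a byte-level encoder: an early layer scores each byte position, low-scoring positions are dropped, and the remaining layers process a shorter sequence. The dimensions, threshold, and single-example batching are simplifying assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of a learned token-deletion gate in the spirit of MrT5;
# shapes, threshold, and batching are simplifying assumptions.

class DeleteGate(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.score = nn.Linear(d_model, 1)   # per-position keep score

    def forward(self, hidden: torch.Tensor, keep_threshold: float = 0.5):
        # hidden: (1, seq_len, d_model) byte-level states from an early layer
        keep_prob = torch.sigmoid(self.score(hidden)).squeeze(-1)  # (1, seq_len)
        mask = keep_prob > keep_threshold
        # Later encoder layers see only the surviving positions, so compute
        # scales with the merged length rather than the raw byte length.
        return hidden[0][mask[0]].unsqueeze(0), mask
```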

Mar 17, 2025 • 59min
Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723
Jonas Geiping, a research group leader at the ELLIS Institute and the Max Planck Institute for Intelligent Systems, discusses innovative approaches to AI efficiency. He introduces a novel recurrent depth architecture that enables latent reasoning, allowing models to predict tokens with dynamic compute allocation based on difficulty. Geiping contrasts internal and verbalized reasoning in AI, explores challenges in scaling models, and highlights the architectural advantages that enhance performance on reasoning tasks. His insights pave the way for advancements in machine learning efficiency.
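A minimal sketch of the recurrent-depth idea, assuming a prelude/core/coda decomposition like the one described in the episode, looks like this: the same core block is iterated in latent space, and the iteration count becomes a test-time compute knob. Module names are illustrative, not the paper's code.

```python
import torch.nn as nn

# Minimal sketch of latent reasoning via recurrent depth; module names are
# illustrative, not the paper's code.

class RecurrentDepthLM(nn.Module):
    def __init__(self, prelude: nn.Module, core: nn.Module, coda: nn.Module):
        super().__init__()
        self.prelude = prelude   # embeds tokens into the latent space
        self.core = core         # shared block, iterated a variable number of times
        self.coda = coda         # maps the final latent state to logits

    def forward(self, tokens, num_iterations: int):
        state = self.prelude(tokens)
        # More iterations for harder tokens: reasoning happens in latent
        # space rather than in verbalized chain-of-thought tokens.
        for _ in range(num_iterations):
            state = self.core(state)
        return self.coda(state)
```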

Mar 10, 2025 • 42min
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722
Chengzu Li, a PhD student at the University of Cambridge, unpacks his pioneering work on Multimodal Visualization-of-Thought (MVoT). He explores the intersection of spatial reasoning and cognitive science, linking concepts like dual coding theory to AI. The conversation includes insights on token discrepancy loss to enhance visual and language integration, challenges in spatial problem-solving, and real-world applications in robotics and architecture. Chengzu also shares lessons learned from experiments that could redefine how machines navigate and reason about their environment.
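The token discrepancy loss mentioned above can be sketched roughly as follows: when predicting visual tokens, errors that land on codebook entries near the ground-truth embedding should cost less than distant ones. The formulation below is an illustrative assumption, not the paper's exact implementation.

```python
import torch

# Illustrative token discrepancy loss: the expected embedding distance to
# the target token under the model's predicted distribution. Not the
# paper's exact formulation.

def token_discrepancy_loss(logits, target_ids, codebook):
    # logits:     (batch, vocab) scores over visual codebook tokens
    # target_ids: (batch,) ground-truth visual token ids
    # codebook:   (vocab, d) embedding of each visual token
    probs = torch.softmax(logits, dim=-1)          # (batch, vocab)
    target_emb = codebook[target_ids]              # (batch, d)
    dists = torch.cdist(target_emb, codebook)      # (batch, vocab) distances
    # Near-miss predictions (visually similar tokens) incur small penalties,
    # keeping generated visualizations coherent.
    return (probs * dists).mean()
```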

Mar 3, 2025 • 49min
Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721
Niklas Muennighoff, a PhD student at Stanford, dives into his work on the s1 reasoning model, designed to mimic OpenAI's o1 while costing under $50 to train. He elaborates on innovative techniques like 'budget forcing' that help the model tackle complex problems more effectively. The discussion highlights the intricacies of test-time scaling, the importance of data curation, and the differences between supervised fine-tuning and reinforcement learning. Niklas also shares insights on the future of open-source AI models.
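Budget forcing is simple enough to sketch: intervene on the end-of-thinking delimiter at decode time, appending 'Wait' to extend reasoning that stops too early and forcing the delimiter once the budget is spent. The delimiter string, `model.generate` interface, and token counting below are assumptions, not the s1 repository's code.

```python
# Hedged sketch of budget forcing; the delimiter, generate() interface, and
# token counting are assumptions, not the s1 implementation.

END_OF_THINKING = "<|end_of_thinking|>"

def len_tokens(text):
    return len(text.split())   # crude stand-in for a real tokenizer count

def budget_forced_generate(model, prompt, min_budget, max_budget):
    thinking = ""
    while True:
        chunk = model.generate(prompt + thinking, stop=END_OF_THINKING,
                               max_new_tokens=max_budget - len_tokens(thinking))
        thinking += chunk
        if len_tokens(thinking) >= max_budget:
            break              # budget spent: force the end of thinking
        if len_tokens(thinking) >= min_budget:
            break              # enough reasoning: accept the natural stop
        thinking += " Wait"    # stopped too early: suppress the delimiter
                               # and nudge the model to keep reasoning
    # Append the delimiter so the model commits to a final answer.
    return model.generate(prompt + thinking + END_OF_THINKING)
```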

Feb 24, 2025 • 1h 7min
Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720
Ron Diamant, Chief Architect for Trainium at AWS, delves into Trainium2, AWS's purpose-built chip for AI and ML acceleration. He discusses its systolic array architecture and the performance dimensions along which it outperforms traditional GPUs. The conversation highlights the ecosystem surrounding Trainium, including the Neuron SDK and its various provisioning options. Diamant also touches on customer adoption, performance benchmarks, and future prospects for Trainium, showcasing its pivotal role in AI training and inference.
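For context on the architecture discussed, the toy loop below simulates what a systolic array computes: a grid of multiply-accumulate cells through which operands stream from neighboring cells each cycle, so data is reused rather than re-read from memory. It is a conceptual illustration only, not a model of Trainium2 itself.

```python
# Toy simulation of a systolic array's dataflow for C = A @ B; conceptual
# only, not a model of Trainium2's hardware.

def systolic_matmul(A, B):
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    # One outer step per "cycle": rows of A stream in from the left, columns
    # of B from the top, and each (i, j) cell accumulates one product while
    # passing operands on to its neighbors.
    for step in range(k):
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][step] * B[step][j]
    return C
```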

Feb 18, 2025 • 53min
π0: A Foundation Model for Robotics with Sergey Levine - #719
In this discussion, Sergey Levine, an associate professor at UC Berkeley and co-founder of Physical Intelligence, dives into π0, a general-purpose robotic foundation model. He explains its architecture, which combines a vision-language model with a novel action expert. The conversation touches on the critical balance of training data, the significance of open-sourcing, and impressive demonstrations such as a robot folding laundry. Levine also highlights the exciting future of affordable robotics and the potential for diverse applications.
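The composition Levine describes can be sketched at a high level: a pretrained vision-language backbone encodes images and the instruction, and a separate action expert maps that context plus robot state to a chunk of continuous actions. All interfaces below are illustrative assumptions, not Physical Intelligence's code.

```python
import torch.nn as nn

# High-level sketch of a vision-language-action policy in the spirit of π0;
# all module interfaces are illustrative assumptions.

class VLAPolicy(nn.Module):
    def __init__(self, vlm_backbone, action_expert, horizon=50, action_dim=7):
        super().__init__()
        self.vlm = vlm_backbone             # encodes images + language instruction
        self.action_expert = action_expert  # smaller module dedicated to actions
        self.horizon, self.action_dim = horizon, action_dim

    def forward(self, images, instruction, robot_state):
        context = self.vlm(images, instruction)   # multimodal context embedding
        # The expert emits a chunk of future actions in one pass (π0 trains
        # this with flow matching; a simple regression head fits the sketch).
        actions = self.action_expert(context, robot_state)
        return actions.view(-1, self.horizon, self.action_dim)
```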