The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
May 13, 2025 • 1h 1min

From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731

Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, dives into the innovative world of reinforcement learning (RL) and its impact on AI agents. He highlights the importance of data curation and evaluation, asserting that RL outperforms traditional prompting methods. The conversation touches on limitations of supervised fine-tuning, reward-shaping strategies, and specialized models like MiniCheck for hallucination detection. Mahesh also discusses tools like Curator and the exciting future of automated AI engineering, promising to make powerful solutions accessible to all.
May 6, 2025 • 1h 7min

How OpenAI Builds AI Agents That Think and Act with Josh Tobin - #730

Josh Tobin, a member of the technical staff at OpenAI and co-founder of Gantry, dives into the fascinating world of AI agents. He discusses OpenAI's innovative offerings like Deep Research and Operator, highlighting their ability to manage complex tasks through advanced reasoning. The conversation also explores unexpected use cases for these agents and the future of human-AI collaboration in software development. Additionally, Josh emphasizes the challenges of ensuring trust and safety as AI systems evolve, making for an insightful and thought-provoking discussion.
Apr 30, 2025 • 56min

CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi - #729

In this engaging discussion, Nidhi Rastogi, an assistant professor at the Rochester Institute of Technology specializing in Cyber Threat Intelligence, dives into her project CTIBench. She explores the evolution of AI in cybersecurity, emphasizing how large language models (LLMs) enhance threat detection and defense. Nidhi discusses the challenges of outdated information and the advantages of Retrieval-Augmented Generation for real-time responses. She also highlights how benchmarks can expose model limitations and the vital role of understanding emerging threats in cybersecurity.
Apr 23, 2025 • 54min

Generative Benchmarking with Kelly Hong - #728

Kelly Hong, a researcher at Chroma, delves into generative benchmarking, a vital approach for evaluating retrieval systems with synthetic data. She critiques traditional benchmarks for failing to mimic real-world queries, stressing the importance of aligning LLM judges with human preferences. Kelly explains a two-step process: filtering relevant documents and generating user-like queries to enhance AI performance. The discussion also covers the nuances of chunking strategies and the differences between benchmark and real-world queries, advocating for a more systematic AI evaluation.
Apr 14, 2025 • 1h 34min

Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727

Emmanuel Ameisen, a research engineer at Anthropic specializing in interpretability research, shares insights from his recent studies on large language models. He discusses how mechanistic interpretability methods shed light on internal processes, showing how models plan creative tasks like poetry and calculate math using unique algorithms. The conversation dives into neural pathways, revealing how hallucinations stem from separate recognition circuits. Emmanuel highlights the challenges of accurately interpreting AI behavior and the importance of understanding these systems for safety and reliability.
Apr 8, 2025 • 52min

Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726

Maohao Shen, a PhD student at MIT specializing in AI reliability, discusses his groundbreaking work on 'Satori.' He reveals how it enhances language model reasoning through reinforcement learning, enabling self-reflection and exploration. The podcast dives into the innovative Chain-of-Action-Thought approach, which guides models in complex reasoning tasks. Maohao also explains the two-stage training process, including format tuning and self-corrective techniques. The conversation highlights Satori’s impressive performance and its potential to redefine AI reasoning capabilities.
Mar 31, 2025 • 1h 9min

Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725

In this engaging discussion, Drago Anguelov, VP of AI foundations at Waymo, sheds light on the groundbreaking integration of foundation models in autonomous driving. He explains how Waymo harnesses large-scale machine learning and multimodal sensor data to enhance perception and planning. Drago also addresses safety measures, including rigorous validation frameworks and predictive models. The conversation dives into the challenges of scaling these models across diverse driving environments and the future of AV testing through sophisticated simulations.
Mar 24, 2025 • 51min

Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini - #724

Join Julie Kallini, a PhD student at Stanford, as she dives into the future of language models. Discover her groundbreaking work on MrT5, a model that tackles tokenization failures and enhances efficiency for multilingual tasks. Julie discusses the creation of 'impossible languages' and the insights they offer into language acquisition and model biases. Hear about innovative architecture improvements and the importance of adapting tokenization methods for underrepresented languages. A fascinating exploration at the intersection of linguistics and AI!
Mar 17, 2025 • 59min

Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723

Jonas Geiping, a research group leader at the Ellis Institute and Max Planck Institute for Intelligent Systems, discusses innovative approaches to AI efficiency. He introduces a novel recurrent depth architecture that enables latent reasoning, allowing models to predict tokens with dynamic compute allocation based on difficulty. Geiping contrasts internal and verbalized reasoning in AI, explores challenges in scaling models, and highlights the architectural advantages that enhance performance in reasoning tasks. His insights pave the way for advancements in machine learning efficiency.
Mar 10, 2025 • 42min

Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722

Chengzu Li, a PhD student at the University of Cambridge, unpacks his pioneering work on Multimodal Visualization-of-Thought (MVoT). He explores the intersection of spatial reasoning and cognitive science, linking concepts like dual coding theory to AI. The conversation includes insights on token discrepancy loss to enhance visual and language integration, challenges in spatial problem-solving, and real-world applications in robotics and architecture. Chengzu also shares lessons learned from experiments that could redefine how machines navigate and reason about their environment.
