The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest episodes

96 snips
Jun 24, 2025 • 56min

Building the Internet of Agents with Vijoy Pandey - #737

Vijoy Pandey, SVP at Outshift by Cisco, shares insights on creating the "Internet of Agents" to improve collaboration among diverse agent systems from vendors like Salesforce and Microsoft. He discusses the challenges of integrating these systems and introduces AGNTCY, an open-source project aimed at enhancing interoperability. Vijoy breaks down the four phases of agent collaboration and reveals SLIM, a new transport layer ensuring secure, real-time communication. The conversation sheds light on overcoming semantic challenges and the importance of evolving communication protocols in AI.
91 snips
Jun 17, 2025 • 60min

LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington - #736

In this enlightening discussion, Ben Wellington, Deputy Head of Feature Forecasting at Two Sigma, shares his expertise in AI-driven equity feature forecasting. He delves into the intricacies of identifying and quantifying measurable features to improve predictive accuracy, and explains how satellite imagery can supply data points such as vehicle counts. Ben emphasizes the importance of strict data timestamping to avoid temporal leakage and discusses the transformative role of large language models in enhancing data analysis. He also offers a glimpse into the future of agentic AI in finance.
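
As a concrete illustration of the point-in-time discipline Ben describes, the sketch below filters a toy feature table by the time each value became knowable. It is a generic example, not Two Sigma's pipeline; the column names and the as_of cutoff are assumptions made for illustration.

```python
import pandas as pd

# Toy feature table: each row records when its value became knowable.
features = pd.DataFrame({
    "ticker": ["XYZ", "XYZ", "XYZ"],
    "vehicle_count": [101, 117, 94],
    "knowledge_time": pd.to_datetime(["2025-03-01", "2025-03-08", "2025-03-15"]),
})

def point_in_time(df: pd.DataFrame, as_of: str) -> pd.DataFrame:
    """Return only rows that were knowable at `as_of`.

    Filtering on knowledge_time (when the data arrived), rather than the
    event date it describes, is what prevents temporal leakage.
    """
    return df[df["knowledge_time"] <= pd.Timestamp(as_of)]

# A model trained "as of" March 10 must never see the March 15 row.
print(point_in_time(features, "2025-03-10"))
```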
136 snips
Jun 10, 2025 • 57min

Zero-Shot Auto-Labeling: The End of Annotation for Computer Vision with Jason Corso - #735

Join Jason Corso, co-founder of Voxel51 and University of Michigan professor, as he unpacks the fascinating world of automated labeling in computer vision. Discover FiftyOne, a tool for visualizing datasets and enhancing data quality. Jason reveals how zero-shot auto-labeling can rival human performance, offering significant efficiency gains. He also dives into the challenges of label quality, decision boundaries, and the innovative 'verified auto-labeling' method. Plus, learn about synthetic data generation and the exciting future of agentic behaviors in AI!
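
To make "zero-shot auto-labeling" concrete, here is a minimal sketch using Hugging Face's zero-shot image classification pipeline with a public CLIP checkpoint. The confidence threshold is only an illustrative stand-in for a verification step; it is not Voxel51's verified auto-labeling method or FiftyOne's API.

```python
from transformers import pipeline

# Zero-shot classifier: no task-specific training; labels are supplied at inference time.
classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

CANDIDATE_LABELS = ["cat", "dog", "bird", "car"]

def auto_label(image_path: str, threshold: float = 0.8):
    """Return the top predicted label, or None if confidence is too low to trust.

    Low-confidence predictions are the ones worth routing to a human reviewer,
    which is the basic intuition behind verification-style labeling workflows.
    """
    scores = classifier(image_path, candidate_labels=CANDIDATE_LABELS)
    best = max(scores, key=lambda s: s["score"])
    return best["label"] if best["score"] >= threshold else None

# Example usage: label = auto_label("some_image.jpg")
```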
209 snips
Jun 5, 2025 • 1h 25min

Grokking, Generalization Collapse, and the Dynamics of Training Deep Neural Networks with Charles Martin - #734

In this insightful conversation, Charles Martin, the founder of Calculation Consulting and an AI researcher merging physics with machine learning, introduces WeightWatcher, an open-source tool for diagnosing the training quality of deep neural networks. He explores the Heavy-Tailed Self-Regularization theory that underpins it and how it exposes training phases like grokking and generalization collapse. The discussion delves into fine-tuning models, the perplexing relationship between model quality and hallucinations, and the challenges of generative AI, providing valuable lessons for real-world applications.
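
WeightWatcher is distributed as the open-source weightwatcher Python package; a minimal usage sketch based on its documented interface looks roughly like the following (the exact API and the alpha interpretation may vary by version, so treat this as an assumption rather than a reference).

```python
import weightwatcher as ww
import torchvision.models as models

# Analyze a pretrained network's weight matrices -- no training or test data needed.
model = models.resnet18(weights="IMAGENET1K_V1")

watcher = ww.WeightWatcher(model=model)
details = watcher.analyze()            # per-layer metrics, e.g. the power-law exponent alpha
summary = watcher.get_summary(details)

# Heavy-Tailed Self-Regularization reads the fitted alpha exponents as a proxy for
# layer quality; roughly, alpha between 2 and 6 is typical of well-trained layers.
print(summary)
```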
322 snips
May 28, 2025 • 26min

Google I/O 2025 Special Edition - #733

Logan Kilpatrick and Shrestha Basu Mallick from Google DeepMind dive into groundbreaking advancements from Google I/O 2025. They discuss the Gemini API's features like thinking budgets and thought summaries, and how native audio output makes voice AI more expressive. The duo shares insights on the challenges of building real-time voice applications, including latency and voice detection. They also share a playful wish list for next year's event, hoping for expanded language capabilities to foster global inclusivity.
129 snips
May 21, 2025 • 57min

RAG Risks: Why Retrieval-Augmented LLMs are Not Safer with Sebastian Gehrmann - #732

Sebastian Gehrmann, head of Responsible AI at Bloomberg, dives into the complexities of AI safety, particularly in retrieval-augmented generation (RAG) systems. He reveals how RAG can unintentionally compromise safety, even leading to unsafe outputs. The conversation highlights unique risks in financial services, emphasizing the need for specific governance frameworks and tailored evaluation methods. Gehrmann also addresses prompt engineering as a strategy for enhancing safety, underscoring the necessity for ongoing collaboration in the AI field to tackle emerging vulnerabilities.
281 snips
May 13, 2025 • 1h 1min

From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731

Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, dives into the innovative world of reinforcement learning (RL) and its impact on AI agents. He highlights the importance of data curation and evaluation, asserting that RL outperforms traditional prompting methods. The conversation touches on limitations of supervised fine-tuning, reward-shaping strategies, and specialized models like MiniCheck for hallucination detection. Mahesh also discusses tools like Curator and the exciting future of automated AI engineering, promising to make powerful solutions accessible to all.
392 snips
May 6, 2025 • 1h 7min

How OpenAI Builds AI Agents That Think and Act with Josh Tobin - #730

Josh Tobin, a member of the technical staff at OpenAI and co-founder of Gantry, dives into the fascinating world of AI agents. He discusses OpenAI's innovative offerings like Deep Research and Operator, highlighting their ability to manage complex tasks through advanced reasoning. The conversation also explores unexpected use cases for these agents and the future of human-AI collaboration in software development. Additionally, Josh emphasizes the challenges of ensuring trust and safety as AI systems evolve, making for an insightful and thought-provoking discussion.
135 snips
Apr 30, 2025 • 56min

CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi - #729

In this engaging discussion, Nidhi Rastogi, an assistant professor at the Rochester Institute of Technology specializing in Cyber Threat Intelligence, dives into her project CTIBench. She explores the evolution of AI in cybersecurity, emphasizing how large language models (LLMs) enhance threat detection and defense. Nidhi discusses the challenges of outdated information and the advantages of Retrieval-Augmented Generation for real-time responses. She also highlights how benchmarks can expose model limitations and the vital role of understanding emerging threats in cybersecurity.
146 snips
Apr 23, 2025 • 54min

Generative Benchmarking with Kelly Hong - #728

Kelly Hong, a researcher at Chroma, delves into generative benchmarking, a vital approach for evaluating retrieval systems with synthetic data. She critiques traditional benchmarks for failing to mimic real-world queries and stresses the importance of aligning LLM judges with human preferences. Kelly explains the two-step process: filtering for relevant documents, then generating realistic, user-like queries from them to evaluate retrieval performance. The discussion also covers the nuances of chunking strategies and the differences between benchmark and real-world queries, advocating for more systematic AI evaluation.
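
A minimal sketch of that two-step process is below. The llm() helper is a placeholder for whatever model client you use, and the prompts are assumptions for illustration; this is not Chroma's implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder: call your LLM of choice (e.g., via an API client) and return its text."""
    raise NotImplementedError

def filter_documents(docs: list[str], criteria: str) -> list[str]:
    """Step 1: keep only documents an LLM judge deems relevant to the target use case."""
    kept = []
    for doc in docs:
        verdict = llm(
            f"Criteria: {criteria}\nDocument: {doc}\n"
            "Answer YES if this document is relevant to the use case, otherwise NO."
        )
        if verdict.strip().upper().startswith("YES"):
            kept.append(doc)
    return kept

def generate_queries(docs: list[str], example_queries: list[str]) -> list[tuple[str, str]]:
    """Step 2: generate realistic, user-like queries grounded in each document."""
    pairs = []
    for doc in docs:
        query = llm(
            "Write one question a real user might ask that this document answers. "
            f"Match the style of these example queries: {example_queries}\nDocument: {doc}"
        )
        pairs.append((query.strip(), doc))
    return pairs  # (query, source document) pairs for scoring retrieval, e.g. recall@k
```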
