

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. It is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
Episodes

35 snips
Oct 28, 2024 • 1h 2min
Building AI Voice Agents with Scott Stephenson - #707
Scott Stephenson, co-founder and CEO of Deepgram, shares his unique journey from particle physics to AI voice technology. He highlights the complexities of building intelligent voice agents, focusing on perception, interaction, and real-time updates. The discussion dives into the transformative potential of AI in customer service, emphasizing federated learning for continuous improvement. Scott also unveils Deepgram's new agent toolkit, showcasing applications across industries like healthcare and food service, and the need for adaptable models in voice interactions.

33 snips
Oct 21, 2024 • 56min
Is Artificial Superintelligence Imminent? with Tim Rocktäschel - #706
Tim Rocktäschel, senior staff research scientist at Google DeepMind and AI professor at UCL, explores the tantalizing prospects of artificial superintelligence. He discusses the journey from narrow AI to superhuman capabilities, stressing the necessity of open-ended system development. The conversation also dives into the transformative impact of AI in science and medicine, alongside its role in enhancing debate automation for truth-seeking. With insights from his recent research, he highlights the importance of evolutionary algorithms and addresses challenges like bias in AI.

12 snips
Oct 14, 2024 • 1h 16min
ML Models for Safety-Critical Systems with Lucas García - #705
Lucas García, Principal Product Manager for Deep Learning at MathWorks, dives into the integration of ML in safety-critical systems. He discusses crucial verification and validation processes, highlighting the V-model and its W-adaptation for ML. The conversation shifts to deep learning in aviation, focusing on data quality, model robustness, and interpretability. Lucas also introduces constrained deep learning and convex neural networks, examining the benefits and trade-offs of these approaches while stressing the importance of safety protocols and regulatory frameworks.

168 snips
Oct 7, 2024 • 54min
AI Agents: Substance or Snake Oil with Arvind Narayanan - #704
Join Arvind Narayanan, a Princeton professor and expert on AI agents and policy, as he unpacks the substance behind AI technology. He discusses the risks of deploying AI agents and the pressing need for better benchmarking to ensure reliability. He also delves into his book, which exposes exaggerated AI claims and failed applications. Narayanan highlights his work on CORE-Bench, which aims to enhance scientific reproducibility, and reviews the complex landscape of AI reasoning methods. He wraps up with insights on the tangled web of AI regulation and policy challenges.

80 snips
Sep 30, 2024 • 48min
AI Agents for Data Analysis with Shreya Shankar - #703
Shreya Shankar, a PhD student at UC Berkeley specializing in intelligent data processing, shares her insights on the innovative DocETL system. They discuss how this technology optimizes LLM-powered data pipelines, enhancing analysis of complex documents. Shreya highlights the challenges of data extraction from PDFs, the importance of human feedback in AI systems, and the need for tailored benchmarks in data processing. Real-world applications and the future of agentic systems are also examined, showcasing a visionary path in data management.

18 snips
Sep 23, 2024 • 1h 4min
Stealing Part of a Production Language Model with Nicholas Carlini - #702
Nicholas Carlini, a research scientist at Google DeepMind and winner of the 2024 ICML Best Paper Award, dives into the world of adversarial machine learning. He discusses his groundbreaking work on stealing parts of production language models like ChatGPT. Listeners will learn about the ethical implications of model security, the significance of the embedding layer, and how these advancements raise new security challenges. Carlini also sheds light on differential privacy in AI, questioning its integration with pre-trained models and the future of ethical AI development.

351 snips
Sep 16, 2024 • 1h 14min
Supercharging Developer Productivity with ChatGPT and Claude with Simon Willison - #701
In this discussion, Simon Willison, an independent researcher and creator of Datasette, shares insightful strategies for boosting developer productivity with large language models like ChatGPT and Claude. He reveals how he codes while walking his dog and emphasizes effective prompting and debugging techniques. The conversation dives into the transformative impact of AI on data analysis, the potential of open-source models, and innovative web scraping tools. Listen as he navigates the evolving capabilities and challenges of AI in today's tech landscape!

20 snips
Sep 2, 2024 • 1h
Automated Design of Agentic Systems with Shengran Hu - #700
In this engaging discussion, Shengran Hu, a PhD student at the University of British Columbia, delves into Automated Design of Agentic Systems (ADAS). He shares insights on the spectrum of agentic behaviors and how LLMs can be used for creating novel agent architectures. The conversation highlights the iterative nature of ADAS and its role in revealing emergent behaviors, particularly in complex tasks like the ARC challenge. Shengran also explores practical applications of ADAS in real-world system optimization, emphasizing the balance between innovation and stability.

12 snips
Aug 27, 2024 • 46min
The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten - #699
In this engaging discussion, Peter van der Putten, director of the AI Lab at Pega and an assistant professor at Leiden University, dives deep into the implications of the newly adopted European AI Act. He explains the ethical principles that motivate this regulation and the complexities of applying fairness metrics in real-world AI applications. The conversation highlights the challenges of mitigating bias, the significance of transparency, and how the Act could shape global AI practices, much as GDPR has shaped data privacy.

136 snips
Aug 19, 2024 • 59min
The Building Blocks of Agentic Systems with Harrison Chase - #698
Harrison Chase, co-founder and CEO of LangChain, shares insights from his extensive background in machine learning and MLOps. He discusses the evolution of agentic systems, emphasizing their real-world applications and communication needs. Harrison delves into Retrieval-Augmented Generation (RAG) and the importance of observability tools for enhancing agent development. He also highlights the challenges of transitioning prototypes to production and offers his hot takes on prompting and multi-modal models, providing a glimpse into the future of LLM applications.


