

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
Episodes

Feb 12, 2024 • 1h 6min
Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671
Sanmi Koyejo, an assistant professor at Stanford University, dives into large language models (LLMs) and their purported emergent behaviors. He challenges the hype surrounding these models' capabilities, arguing that nonlinear evaluation metrics can create the illusion of sudden capability jumps even when the underlying improvement is smooth. The conversation also covers his work on trustworthiness in AI, focusing on critical aspects like toxicity and fairness. Sanmi highlights the need for robust evaluation methods as LLMs are integrated into sensitive fields like healthcare and education.
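To make the metric argument concrete, here is a minimal numpy sketch (all numbers are illustrative, not from the episode): per-token accuracy improves smoothly with scale, yet an exact-match metric over a 32-token answer appears to "emerge" abruptly.

```python
import numpy as np

# Hypothetical per-token accuracies improving smoothly with model scale.
scales = ["1e8", "1e9", "1e10", "1e11", "1e12"]       # parameter counts
per_token_acc = np.array([0.75, 0.85, 0.92, 0.97, 0.995])

k = 32                              # task scored by exact match on k tokens
exact_match = per_token_acc ** k    # a nonlinear function of the raw skill

for n, p, em in zip(scales, per_token_acc, exact_match):
    print(f"{n} params: per-token {p:.3f} -> exact-match {em:.4f}")
# Per-token accuracy climbs gradually, but exact-match sits near zero and
# then jumps -- "emergence" as an artifact of the nonlinear metric.
```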

Feb 5, 2024 • 1h 10min
AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli - #670
Kamyar Azizzadenesheli, a staff researcher at Nvidia specializing in reinforcement learning, shares insights on the interplay between RL and large language models. He discusses innovations like ALOHA, a low-cost bimanual robot that learns fine manipulation tasks such as folding clothes, and Voyager, a GPT-4-driven agent that learns open-ended skills in Minecraft. The conversation highlights advancements in risk-aware RL, especially in healthcare and finance. Kamyar also predicts how growing computational power will shape the future of deep reinforcement learning and move the field toward general intelligence.
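As a refresher on the RL side of this pairing, here is a toy REINFORCE update on a three-armed bandit; the bandit and its rewards are invented for illustration, but the same policy-gradient core, scaled up, underlies the RLHF-style training that connects RL to LLMs.

```python
import numpy as np

# Toy REINFORCE (policy-gradient) on a three-armed bandit.
rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.8])   # unknown to the agent
logits = np.zeros(3)                       # policy parameters

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(3, p=probs)             # sample an action from the policy
    r = rng.normal(true_rewards[a], 0.1)   # noisy reward signal
    grad = -probs.copy()
    grad[a] += 1.0                         # gradient of log pi(a) w.r.t. logits
    logits += 0.1 * r * grad               # ascend the expected reward

print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))  # favors arm 2
```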

Jan 29, 2024 • 35min
Building and Deploying Real-World RAG Applications with Ram Sriharsha - #669
Ram Sriharsha, VP of Engineering at Pinecone and an expert in large-scale data processing, explores the transformative power of vector databases and retrieval-augmented generation (RAG). He discusses the trade-offs between relying on LLMs alone and using vector databases for effective data retrieval. The conversation sheds light on the evolution of RAG applications, the complexities of keeping enterprise data fresh, and the new features of Pinecone's serverless offering, which improves scalability and cost efficiency. Ram also shares insights on the future of vector databases in AI.
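For readers new to RAG, a minimal in-memory sketch of the pattern follows; embed() is a random stand-in for a real embedding model, and the numpy array plays the role a vector database like Pinecone would fill at scale.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real sentence-embedding model: deterministic per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

docs = [
    "Pinecone's serverless offering separates storage from compute.",
    "RAG grounds LLM answers in retrieved enterprise documents.",
    "Vector indexes support approximate nearest-neighbor search.",
]
index = np.stack([embed(d) for d in docs])   # the "vector database"

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)            # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(-scores)[:k]]

context = "\n".join(retrieve("How does RAG keep answers fresh?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
# `prompt` would then go to an LLM; retrieval keeps answers grounded in
# current data without retraining the model.
```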

Jan 22, 2024 • 40min
Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668
In this engaging conversation, Ben Zhao, a Neubauer professor of computer science at the University of Chicago, dives into the critical intersection of security and generative AI. He introduces innovative tools like Fawkes, which masks images from facial recognition, and Glaze, designed to protect artists from style mimicry by subtly altering their work. Zhao also unveils Nightshade, a sophisticated defense mechanism that disrupts generative AI's ability to replicate artistic creations, raising vital questions about data poisoning and copyright in the AI era.
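Nightshade's actual algorithm targets deep generative models; the toy sketch below only illustrates the shared idea behind such tools, namely a small, bounded pixel perturbation optimized to shift an image's features toward a decoy concept. The linear feature map here is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))            # hypothetical linear feature map
original = rng.uniform(size=64)          # flattened "artwork" pixels
target = rng.normal(size=16)             # feature vector of a decoy concept

eps = 0.03                               # L-inf perceptibility budget
poisoned = original.copy()
for _ in range(200):
    grad = 2 * W.T @ (W @ poisoned - target)   # gradient of ||Wx - t||^2
    poisoned -= 0.001 * grad                   # pull features toward decoy
    poisoned = np.clip(poisoned, original - eps, original + eps)
    poisoned = np.clip(poisoned, 0.0, 1.0)     # stay a valid image

print("max pixel change:", np.max(np.abs(poisoned - original)))   # <= eps
print("feature shift:", np.linalg.norm(W @ poisoned - W @ original))
```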

Jan 15, 2024 • 39min
Learning Transformer Programs with Dan Friedman - #667
Dan Friedman, a PhD student in Princeton's NLP group, dives into his research on mechanistic interpretability for transformer models. He discusses his paper, which modifies the transformer architecture so that trained models can be decompiled into human-readable programs. The conversation covers the shortcomings of current interpretability methods and contrasts them with his approach. They explore RASP, a programming language for expressing transformer computations as discrete programs, and delve into the difficulty of optimizing under the discrete constraints this imposes, highlighting the value of transparency in understanding AI.
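To ground the discussion, here is a toy interpreter for RASP's two core primitives, select and aggregate, running the classic sequence-reversal program; it illustrates the formalism, not the paper's training code.

```python
import numpy as np

def select(n: int, predicate) -> np.ndarray:
    """Binary attention matrix: A[q, k] = 1 where predicate(q, k) holds."""
    return np.array([[float(predicate(q, k)) for k in range(n)]
                     for q in range(n)])

def aggregate(attn: np.ndarray, values: list):
    """Each position outputs the value at its (single) selected position."""
    return [values[int(np.argmax(row))] for row in attn]

tokens = list("hello")
n = len(tokens)
# "reverse": query position q attends to key position n - 1 - q.
attn = select(n, lambda q, k: k == n - 1 - q)
print("".join(aggregate(attn, tokens)))   # -> "olleh"
```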

Jan 8, 2024 • 1h 5min
AI Trends 2024: Machine Learning & Deep Learning with Thomas Dietterich - #666
Thomas Dietterich, a distinguished professor emeritus at Oregon State University, dives into the latest trends in AI and machine learning. He discusses the strengths and weaknesses of large language models like GPT-4, while exploring their potential limitations in reasoning. The conversation covers topics like uncertainty quantification and the fascinating world of 'hallucinations' in language models. Dietterich also offers predictions for 2024 and motivates newcomers to tap into the field's endless possibilities.
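One of the uncertainty-quantification ideas touched on can be sketched in a few lines: score an ensemble's disagreement via predictive entropy. The member logits below are simulated, not from any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits from 5 ensemble members for one input, 3 classes.
member_logits = rng.normal(loc=[2.0, 0.0, -1.0], scale=1.5, size=(5, 3))
probs = np.stack([softmax(z) for z in member_logits])

mean_p = probs.mean(axis=0)
entropy = -(mean_p * np.log(mean_p)).sum()   # predictive uncertainty
print("mean probs:", mean_p.round(3), "entropy:", round(entropy, 3))
# High entropy / high member disagreement flags inputs the model should
# defer on -- likely hallucination territory for an LLM.
```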

Jan 2, 2024 • 52min
AI Trends 2024: Computer Vision with Naila Murray - #665
Naila Murray, Director of AI Research at Meta, discusses the cutting-edge landscape of computer vision. They explore advancements like controllable image generation, multimodal models, and tools such as Segment Anything for intuitive, promptable image segmentation. Naila dives into ControlNet, which adds fine-grained conditioning to image generation, and DINOv2, whose self-supervised features power robust recognition in complex scenes. Looking ahead, she shares insights on opportunities in self-supervised learning and generative models, forecasting exciting innovations for 2024 in AI.
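For a feel of Segment Anything's promptable interface, here is a usage sketch assuming Meta's open-source segment-anything package and a locally downloaded checkpoint; the file name, dummy image, and click coordinates are placeholders.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Assumes: pip install segment-anything, plus a SAM checkpoint downloaded
# from Meta's release page (the path below is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB photo
predictor.set_image(image)

# A single foreground click (x, y) is enough to prompt a segmentation --
# the "intuitive" interface discussed in the episode.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),       # 1 = foreground point
    multimask_output=True,            # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array
```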

Dec 28, 2023 • 48min
Are Vector DBs the Future Data Platform for AI? with Ed Anuff - #664
Joining the conversation is Ed Anuff, Chief Product Officer at DataStax, who brings his extensive experience in startups and technology. He delves into the fascinating world of vector databases, discussing their critical role in handling massive, unstructured datasets. Ed highlights advancements in algorithms like HNSW and explores how embedding models enhance database retrieval. He shares insights on integrating live data into AI applications, the significance of data chunking, and the potential of GPUs to boost performance in generative AI systems.
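The chunking-plus-HNSW pipeline Ed describes can be sketched with the open-source hnswlib library; embed() below is a random stand-in for a real embedding model, and the document text is invented.

```python
import hnswlib
import numpy as np

rng = np.random.default_rng(0)

def embed(text: str) -> np.ndarray:
    return rng.normal(size=128).astype(np.float32)  # embedding stand-in

def chunk(doc: str, size: int = 200) -> list[str]:
    return [doc[i:i + size] for i in range(0, len(doc), size)]

chunks = [c for doc in ["long document text ..." * 50] for c in chunk(doc)]
vectors = np.stack([embed(c) for c in chunks])

# HNSW graph index for approximate nearest-neighbor search.
index = hnswlib.Index(space="cosine", dim=128)
index.init_index(max_elements=len(chunks), M=16, ef_construction=200)
index.add_items(vectors, np.arange(len(chunks)))
index.set_ef(50)                                 # search-time accuracy knob

labels, distances = index.knn_query(embed("user query"), k=3)
print([chunks[i][:40] for i in labels[0]])       # top-3 retrieved chunks
```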

Dec 26, 2023 • 47min
Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel - #663
In this discussion, Markus Nagel, a research scientist at Qualcomm AI Research, shares insights from his recent papers at NeurIPS 2023, focusing on machine learning efficiency. He tackles the challenges of quantizing transformers, particularly in minimizing outlier issues in attention mechanisms. The conversation explores the pros and cons of pruning versus quantization for model weight compression and dives into innovative methods for multitask and multidomain learning. Additionally, the use of geometric algebra in enhancing algorithms for robotics is highlighted.
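A few lines of numpy show why those outliers matter: a single large activation stretches the int8 range and crushes resolution for every other value. The data is synthetic; Nagel's papers attack the problem at its source, letting attention heads "do nothing" without emitting huge values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(scale=0.1, size=512)
x[0] = 50.0                       # a single attention-style outlier

def quantize_int8(v):
    scale = np.abs(v).max() / 127.0                       # per-tensor scale
    return np.round(v / scale).clip(-127, 127) * scale    # dequantized

err_with_outlier = np.abs(quantize_int8(x)[1:] - x[1:]).mean()
err_without = np.abs(quantize_int8(x[1:]) - x[1:]).mean()
print(f"mean error with outlier:    {err_with_outlier:.4f}")
print(f"mean error without outlier: {err_without:.4f}")
# The outlier inflates the quantization step ~140x, so every normal
# activation is rounded far more coarsely.
```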

Dec 22, 2023 • 36min
Responsible AI in the Generative Era with Michael Kearns - #662
Michael Kearns, a professor at the University of Pennsylvania and Amazon scholar, dives into the new challenges of responsible AI in the generative era. He discusses the evolution of service card metrics and their limitations in evaluating AI performance. Kearns also tackles the complexities of evaluating large language models and introduces the concept of clean rooms for machine learning, where privacy is enforced with differential privacy techniques. Drawing on his work at AWS, he advocates for collaboration between AI developers and stakeholders to strengthen ethical practices.
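The differential-privacy building block behind such guarantees fits in a few lines: the Laplace mechanism adds noise calibrated to a query's sensitivity and a privacy budget epsilon. The count below is invented for illustration.

```python
import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    # Counting queries have sensitivity 1: one person's presence or
    # absence changes the count by at most 1.
    sensitivity = 1.0
    noise = np.random.default_rng().laplace(scale=sensitivity / epsilon)
    return true_count + noise

print(private_count(1042, epsilon=0.5))   # noisy, privacy-preserving answer
# Smaller epsilon -> more noise -> stronger privacy: analysts still see
# useful aggregates while any individual's contribution stays masked.
```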