
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest episodes

Sep 25, 2023 • 44min

Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648

Nataniel Ruiz, a research scientist at Google, shares insights on personalizing text-to-image AI models. He delves into DreamBooth, an algorithm that enables personalized image generation from just a few user-provided images. The discussion covers the effectiveness of fine-tuning diffusion models, challenges such as language drift, and solutions like prior preservation loss. Nataniel also discusses advances in related projects such as HyperDreamBooth and the creation of specialized datasets to improve language reasoning in generative AI.
Sep 18, 2023 • 41min

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

Shreya Rajpal, Founder and CEO of Guardrails AI, dives deep into the critical topic of ensuring safety and reliability in language models for production use. She discusses the various risks associated with LLMs, especially the challenges of hallucinations and their implications. The conversation covers the need for robust evaluation metrics and tools like Guardrails, an open-source project designed to enforce model correctness. Shreya also highlights the importance of validation systems and their role in enhancing the safety of NLP applications.
Sep 11, 2023 • 59min

What’s Next in LLM Reasoning? with Roland Memisevic - #646

In this discussion, Roland Memisevic, Senior Director at Qualcomm AI Research, explores the future of language in AI systems. He highlights the shift from noun-centric to verb-centric datasets as a path toward richer cognitive learning in AI. Memisevic delves into the creation of Fitness Ally, an interactive fitness AI that integrates sensory feedback for more human-like interaction. The conversation also covers advancements in visual grounding and reasoning in language models, noting their potential for more robust AI agents. A fascinating glimpse into the evolving landscape of AI!
Sep 4, 2023 • 42min

Is ChatGPT Getting Worse? with James Zou - #645

In this conversation, James Zou, an assistant professor at Stanford known for his work in biomedical data science, dives into the evolving landscape of ChatGPT. He examines its fluctuating performance over recent months, comparing behavior across model versions, and considers what the prospect of making surgical improvements to models suggests about the future of large language models. Zou also shares insights on using Twitter data to build medical imaging datasets, addressing the challenges of data quality and oversight in AI for healthcare applications.
Aug 28, 2023 • 45min

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Sophia Sanborn, a postdoctoral scholar at UC Santa Barbara, blends neuroscience and AI in her groundbreaking research. She dives into the universality of neural representations, showcasing how both biological systems and deep networks can efficiently find consistent features. The conversation also highlights her innovative work on Bispectral Neural Networks, linking Fourier transforms to group theory, and explores the potential of geometric deep learning to transform CNNs. Sanborn reveals the striking similarities between artificial and biological neural structures, presenting a fascinating convergence of insights.
Aug 21, 2023 • 34min

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Gokul Swamy, a Ph.D. student at Carnegie Mellon’s Robotics Institute, dives into the intriguing world of inverse reinforcement learning. He unpacks the challenges of mimicking human decision-making without direct reinforcement signals. Topics include streamlining AI learning through expert guidance and the complexities of medical decision-making with missing data. Gokul also discusses safety in multitask learning, emphasizing the balance between efficiency and safety in AI systems. His insights pave the way for future research in enhancing AI’s learning capabilities.
Aug 14, 2023 • 38min

Explainable AI for Biology and Medicine with Su-In Lee - #642

Su-In Lee, a professor at the University of Washington's Paul G. Allen School of Computer Science, discusses her research on explainable AI in biology and medicine. She emphasizes the importance of interdisciplinary collaboration for improving cancer and Alzheimer's treatments. The conversation delves into the robustness of explainable AI techniques, the challenges of handling biomedical data, and the role of machine learning in drug combination therapies. Su-In also highlights innovative methods for personalized patient care and predictive insights in oncology.
Aug 7, 2023 • 39min

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Bayan Bruss, Vice President of Applied ML Research at Capital One, dives into groundbreaking research on applying machine learning in finance. He discusses two key papers presented at ICML, focusing on interpretability in image representations and the innovative global graph transformer model. Listeners will learn about tackling computational challenges, the balance between model sparsity and performance, and the significance of embedding dimensions. With insights into advancing deep learning techniques, this conversation opens new avenues for efficiency in machine learning.
Jul 31, 2023 • 37min

The Enterprise LLM Landscape with Atul Deo - #640

Atul Deo, General Manager of Amazon Bedrock, brings a wealth of experience in software development and product engineering. He dives into the intricacies of training large language models in enterprises, discussing the challenges and advantages of pre-trained models. The conversation highlights retrieval augmented generation (RAG) for improved query responses, as well as the complexities of implementing LLMs at scale. Atul also unveils insights into Bedrock, a managed service designed to streamline generative AI app development for businesses.
Jul 24, 2023 • 37min

BloombergGPT - an LLM for Finance with David Rosenberg - #639

David Rosenberg, head of the machine learning strategy team at Bloomberg, discusses the fascinating development of BloombergGPT, a tailored large language model for finance. He dives into its unique architecture, validation methods, and performance benchmarks, revealing how it successfully integrates finance-specific data. David also addresses the challenges of processing financial information and the importance of ethical considerations in AI deployment, especially regarding bias and the necessity for human oversight.
