The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Oct 9, 2023 • 39min

Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650

In this discussion, Luke Zettlemoyer, a University of Washington professor and Meta research manager, dives into the fascinating realm of multimodal generative AI. He highlights the transformative impact of integrating text and images, illustrating advancements like DALL-E 3. Zettlemoyer explains the significance of open science for AI development and the complexities of data in enhancing model performance. Topics also include the role of self-alignment in training and the future of multimodal AI amidst rising technology costs and the need for better assessment methods.
Oct 2, 2023 • 49min

Pushing Back on AI Hype with Alex Hanna - #649

In this engaging discussion, Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR), dives into the complexities of AI hype and its societal impacts. He traces the origins of AI excitement and how it drives commercialization. Alex also sheds light on DAIR's innovative projects, including language technologies for low-resource languages in Ethiopia. The conversation tackles crucial topics like the politics of data sets and the ethical challenges in AI data sourcing, emphasizing the importance of critical evaluation and community engagement.
Sep 25, 2023 • 44min

Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648

Nataniel Ruiz, a research scientist at Google, shares insights on personalizing text-to-image AI models. He delves into DreamBooth, an innovative algorithm that enables personalized image generation using few user-provided images. The discussion covers the effectiveness of fine-tuning diffusion models and challenges like language drift, along with solutions like prior preservation loss. Nataniel also discusses advancements in his other projects like HyperDreamBooth and the creation of specialized datasets to enhance language reasoning in generative AI.
Sep 18, 2023 • 41min

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

Shreya Rajpal, Founder and CEO of Guardrails AI, dives deep into the critical topic of ensuring safety and reliability in language models for production use. She discusses the various risks associated with LLMs, especially the challenges of hallucinations and their implications. The conversation navigates the need for robust evaluation metrics and innovative tools like Guardrails, an open-source project designed to enforce model correctness. Shreya also highlights the importance of validation systems and their role in enhancing the safety of NLP applications.
Sep 11, 2023 • 59min

What’s Next in LLM Reasoning? with Roland Memisevic - #646

In this discussion, Roland Memisevic, Senior Director at Qualcomm AI Research, explores the future of language in AI systems. He highlights the shift from noun-centric to verb-centric datasets, enhancing AI's cognitive learning. Memisevic delves into the creation of Fitness Ally, an interactive fitness AI that integrates sensory feedback for a more human-like interaction. The conversation also covers advancements in visual grounding and reasoning in language models, noting their potential for more robust AI agents. A fascinating glimpse into the evolving landscape of AI!
Sep 4, 2023 • 42min

Is ChatGPT Getting Worse? with James Zou - #645

In this conversation, James Zou, an assistant professor at Stanford known for his work in biomedical data science, dives into the evolving landscape of ChatGPT. He examines its fluctuating performance over recent months, discussing intriguing comparisons between versions. He also considers how targeted, surgical enhancements to models might shape the future of large language models. Zou shares innovative insights on using Twitter data to build medical imaging datasets, addressing the challenges of data quality and oversight in AI for healthcare applications.
Aug 28, 2023 • 45min

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Sophia Sanborn, a postdoctoral scholar at UC Santa Barbara, blends neuroscience and AI in her groundbreaking research. She dives into the universality of neural representations, showcasing how both biological systems and deep networks can efficiently find consistent features. The conversation also highlights her innovative work on Bispectral Neural Networks, linking Fourier transforms to group theory, and explores the potential of geometric deep learning to transform CNNs. Sanborn reveals the striking similarities between artificial and biological neural structures, presenting a fascinating convergence of insights.
Aug 21, 2023 • 34min

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Gokul Swamy, a Ph.D. student at Carnegie Mellon’s Robotics Institute, dives into the intriguing world of inverse reinforcement learning. He unpacks the challenges of mimicking human decision-making without direct reinforcement signals. Topics include streamlining AI learning through expert guidance and the complexities of medical decision-making with missing data. Gokul also discusses safety in multitask learning, emphasizing the balance between efficiency and safety in AI systems. His insights pave the way for future research in enhancing AI’s learning capabilities.
Aug 14, 2023 • 38min

Explainable AI for Biology and Medicine with Su-In Lee - #642

Su-In Lee, a professor at the University of Washington's Paul G. Allen School of Computer Science, discusses her research on explainable AI in biology and medicine. She emphasizes the importance of interdisciplinary collaboration for improving cancer and Alzheimer's treatments. The conversation delves into the robustness of explainable AI techniques, the challenges of handling biomedical data, and the role of machine learning in drug combination therapies. Su-In also highlights innovative methods for personalized patient care and predictive insights in oncology.
Aug 7, 2023 • 39min

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Bayan Bruss, Vice President of Applied ML Research at Capital One, dives into groundbreaking research on applying machine learning in finance. He discusses two papers presented at ICML, one on interpretability in image representations and one on a scalable global graph transformer model. Listeners will learn about tackling computational challenges, the balance between model sparsity and performance, and the significance of embedding dimensions. With insights into advancing deep learning techniques, this conversation opens new avenues for efficiency in machine learning.
