
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest episodes

Oct 9, 2023 • 39min

Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650

Luke Zettlemoyer, a professor at the University of Washington and a research manager at Meta, discusses multimodal generative AI, visual grounding and embodiment in text-based models, the advantages of discrete tokenization in image generation, self-alignment with instruction backtranslation, the generalizability of language models, model performance and evaluation, the importance of open source and open science, and the future direction of multimodal AI.
Oct 2, 2023 • 49min

Pushing Back on AI Hype with Alex Hanna - #649

Alex Hanna, Director of Research at DAIR, discusses the AI hype cycle, the need for evaluation tools to mitigate risks, and current research supporting low-resource languages. They also delve into the challenges of data set creation and the impact of AI hype on society.
Sep 25, 2023 • 44min

Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648

Nataniel Ruiz, a research scientist at Google, discusses his recent work on personalization for text-to-image AI models, including the DreamBooth algorithm for subject-driven generation. He dives into the fine-tuning approach, the challenges of diffusion models, and evaluation metrics. Other topics include SuTI, StyleDrop, HyperDreamBooth, and Platypus.
Sep 18, 2023 • 41min

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

Shreya Rajpal, founder and CEO of Guardrails AI, discusses the challenges and risks associated with language models in production applications, including hallucinations and failure modes. The conversation explores the retrieval-augmented generation (RAG) technique and the need for robust evaluation metrics. It also introduces Guardrails, an open-source project for enforcing correctness and reliability in language models.
Sep 11, 2023 • 59min

What’s Next in LLM Reasoning? with Roland Memisevic - #646

Roland Memisevic, Senior Director at Qualcomm AI Research, discusses the role of language in AI systems, the limitations of autoregressive models like Transformers, and the importance of improving grounding in AI. They also talk about Fitness Ally, visual grounding for language models, state-augmented architectures for AI agents, and using deductive reasoning with ChatGPT.
Sep 4, 2023 • 42min

Is ChatGPT Getting Worse? with James Zou - #645

In this episode, James Zou, an assistant professor at Stanford University, discusses the changing behavior of ChatGPT, comparing the GPT-3.5 and GPT-4 versions over time. He also shares insights on the intersection of CRISPR research and LLM-based AI systems, monitoring behavioral changes in deployed models, and using Twitter data for pathology image analysis.
Aug 28, 2023 • 45min

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara, discusses the concept of universality between biological neural representations and deep neural networks. Topics include the use of the bispectrum in achieving invariance, the expansion of geometric deep learning, and the structural similarities between artificial and biological neural networks.
Aug 21, 2023 • 34min

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Gokul Swamy, a Ph.D. student at Carnegie Mellon, discusses his papers on inverse reinforcement learning without reinforcement learning, complementing policy with a different observation space, and learning shared safety constraints from demonstrations. The podcast covers the challenges, benefits, and potential applications of these approaches, as well as the use of causal modeling techniques and multitask data.
Aug 14, 2023 • 38min

Explainable AI for Biology and Medicine with Su-In Lee - #642

Today we’re joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for computational biology and clinical medicine. Su-In discusses the importance of explainable AI for feature attribution, the robustness of different explainability approaches, and the need for interdisciplinary collaboration among the computer science, biology, and medical fields. We also explore her recent paper on drug combination therapy, the challenges of handling biomedical data, and how her team aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatment for cancer and Alzheimer's disease. The complete show notes for this episode can be found at twimlai.com/go/642.
Aug 7, 2023 • 39min

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we cover a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation research in deep learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computational challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methodologies to get around the computational barrier when scaling to large-scale graph models. The complete show notes for this episode can be found at twimlai.com/go/641.
