The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Mar 20, 2023 • 51min

Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - #621

In this discussion, Tom Goldstein, an associate professor at the University of Maryland specializing in AI security and safety, dives into his pioneering research on watermarking large language models. He explains how these watermarks help combat misinformation and the mechanisms behind tracking AI-generated content. Tom also examines the economic and ethical implications of watermarking, blending profit with social responsibility. Additionally, he touches on challenges with data leakage in diffusion models and scaling plagiarism detection across massive datasets.
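
As background on the mechanism Goldstein describes, his group's watermarking scheme biases generation toward a pseudorandomly chosen "green list" of tokens, so that text can later be identified with a simple statistical test. The following is a minimal, hypothetical Python sketch of the detection side only; the hash-based green list, vocabulary handling, and the gamma parameter are illustrative stand-ins, not the actual implementation discussed in the episode.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], gamma: float = 0.25) -> set[str]:
    # Pseudorandomly mark a fraction `gamma` of the vocabulary as "green",
    # seeded by the previous token (a toy stand-in for the scheme's hash/PRF).
    def score(tok: str) -> int:
        return int(hashlib.sha256(f"{prev_token}|{tok}".encode()).hexdigest(), 16)
    ranked = sorted(vocab, key=score)
    return set(ranked[: max(1, int(gamma * len(vocab)))])

def watermark_z_score(tokens: list[str], vocab: list[str], gamma: float = 0.25) -> float:
    # Count how many tokens fall in the green list implied by their predecessor.
    # In ordinary human text roughly a `gamma` fraction would land there by
    # chance, so a large positive z-score is evidence of watermarked text.
    hits = sum(tok in green_list(prev, vocab, gamma)
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1  # assumes at least two tokens
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```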
Mar 13, 2023 • 45min

Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620

In a thought-provoking discussion, Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence, dives into the relationship between large language models and human cognition. She dissects her paper on formal and functional linguistic competence, exploring the cognitive abilities behind language use. Topics include the limitations of AI in achieving true intelligence, the complexities of reporter bias, and the intriguing phenomena of individual differences in perception and cognition. Ivanova's insights challenge conventional notions of AI and intelligence, making for a captivating listen.
Mar 6, 2023 • 53min

Robotic Dexterity and Collaboration with Monroe Kennedy III - #619

Monroe Kennedy III, an assistant professor at Stanford and director of the Assistive Robotics Lab, discusses the evolving landscape of robotics. He dives into the complexities of robotic dexterity, emphasizing its potential for human assistance. Kennedy also explores collaborative robotics, detailing how machines can effectively partner with humans. Notably, he highlights DenseTact, a groundbreaking sensor that enhances tactile capabilities. The conversation touches on the future of robotics in healthcare and the importance of trust in human-robot teamwork.
Feb 27, 2023 • 43min

Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

In this discussion, Nicholas Carlini, a research scientist at Google Brain known for his work at the crossroads of machine learning and computer security, dives deep into pressing issues of privacy and security in AI. He explores the vulnerabilities of large models like stable diffusion, particularly the risks of data extraction and adversarial attacks. The conversation also touches on model memorization versus generalization, revealing surprising insights on how these models handle training data. Additionally, Carlini discusses data poisoning and its implications in safeguarding model integrity.
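
To make the memorization discussion concrete, one heuristic from training-data extraction research is to compare how "surprised" two models are by the same text: a sample the large target model scores as far more likely than a smaller reference model is a candidate for having been memorized. The sketch below assumes you already have per-token log-probabilities for the text from both models; the function names and threshold are illustrative, not Carlini's exact procedure.

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    # Exponentiated average negative log-likelihood per token.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def extraction_candidate(target_logprobs: list[float],
                         reference_logprobs: list[float],
                         ratio_threshold: float = 0.8) -> bool:
    # Flag the text as possibly memorized if the target model finds it
    # much easier (lower perplexity) than the smaller reference model does.
    ratio = perplexity(target_logprobs) / perplexity(reference_logprobs)
    return ratio < ratio_threshold
```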
Feb 20, 2023 • 31min

Understanding AI’s Impact on Social Disparities with Vinodkumar Prabhakaran - #617

Vinodkumar Prabhakaran, a Senior Research Scientist at Google Research, delves into the intersection of AI and social disparities. He explores how machine learning, particularly natural language processing, can reveal biases in police-community interactions and employee communications. The discussion highlights the importance of collaboration between data and social scientists to address these issues. Vinod also shares insights on ensuring fairness in AI model building, emphasizing the complexities of annotating human data and the need for diverse perspectives to enhance accountability.
Feb 14, 2023 • 1h 22min

AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness - #616

Robert Osazuwa Ness, a senior researcher at Microsoft and professor at Northeastern University, dives into exciting trends in causal modeling. He highlights advances in causal discovery and its implications for drug discovery and healthcare. The conversation delves into the significance of causality in large language models, exploring how these models can enhance reasoning about cause and effect. Ness also discusses innovative applications like SayCan, which transforms verbal commands into robotic actions, merging AI with practical tasks.
Feb 6, 2023 • 33min

Data-Centric Zero-Shot Learning for Precision Agriculture with Dimitris Zermas - #615

Dimitris Zermas, principal scientist at Sentera, shares insights on leveraging machine learning for precision agriculture. He discusses innovative tools like drones and cameras that enhance crop management. The conversation delves into challenges with plant counting and data imbalance, while also unveiling the power of zero-shot learning for efficient data use. Dimitris emphasizes a data-centric approach, detailing how strategic data selection can significantly reduce annotation time and costs, reshaping approaches in agricultural technology.
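
As an illustration of the data-centric selection Zermas describes, a common heuristic is to spend annotation budget only on the samples the current model is least confident about. The sketch below is generic uncertainty sampling over predicted class probabilities; the array shapes and the budget parameter are assumptions, and this is not Sentera's actual pipeline.

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    # probs: (num_samples, num_classes) predicted class probabilities.
    # Rank samples by predictive entropy and return the indices of the
    # `budget` most uncertain ones, so labeling effort goes where the
    # current model is least sure.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:budget]
```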
Jan 30, 2023 • 1h 2min

How LLMs and Generative AI are Revolutionizing AI for Science with Anima Anandkumar - #614

Anima Anandkumar, Bren Professor at Caltech and Sr. Director of AI Research at NVIDIA, discusses how generative AI is reshaping scientific discovery. She highlights breakthrough developments in protein folding and drug design, emphasizing AlphaFold's impact. The conversation moves to neural operators enhancing simulations and AI's role in weather forecasting. Anandkumar also introduces MineDojo, a unique Minecraft framework that advances embodied AI research, showcasing its potential for innovation and decision-making.
Jan 23, 2023 • 1h 46min

AI Trends 2023: Natural Language Processing - ChatGPT, GPT-4 and Cutting Edge Research with Sameer Singh - #613

In a fascinating discussion, Sameer Singh, an NLP expert and associate professor at UC Irvine, dives into the whirlwind world of AI advancements. He unpacks the transformative impact of ChatGPT and large language models, highlighting the importance of structured reasoning and clean data. Sameer also critiques the Galactica model with a nod to the public's expectations, explores practical uses like Copilot, and shares predictions for AI trends in 2023. His insights shed light on the evolving landscape of voice assistants and the intricate relationship between AI and search.
Jan 16, 2023 • 60min

AI Trends 2023: Reinforcement Learning - RLHF, Robotic Pre-Training, and Offline RL with Sergey Levine - #612

Sergey Levine, an associate professor at UC Berkeley, dives into cutting-edge advancements in reinforcement learning. He explores the impact of RLHF on language models and discusses innovations in offline RL and robotics. They also examine how language models can enhance diplomatic strategies and tackle ethical concerns. Sergey sheds light on manipulation in RL, the challenges of integrating robots with language models, and offers exciting predictions for 2023's developments. This is a must-listen for anyone interested in the future of AI!
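
Since RLHF is a recurring thread in this episode, here is a minimal, hypothetical sketch of the pairwise (Bradley-Terry style) loss commonly used to train the reward model that RLHF then optimizes a policy against; the scores below are toy values, and a real pipeline would follow this with a policy-optimization step such as PPO.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style pairwise objective: push the reward model to score
    # the human-preferred response above the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage with made-up scores for three preference pairs.
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.5, 1.1])
loss = reward_model_loss(chosen, rejected)  # shrinks as preferences are better separated
```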
