Michael Kearns, Professor at the University of Pennsylvania, discusses the challenges of responsible AI in the generative era. Topics include service card metrics, privacy, hallucinations, RLHF, LLM evaluation benchmarks, Clean Rooms ML, and secure data handling in machine learning.
Quick takeaways
Generative AI models present challenges for responsible AI, such as hallucinations, toxicity, and intellectual property issues.
Adapting service cards to generative AI models requires new considerations like hallucination and toxicity.
Deep dives
AWS ML & AI Services
AWS offers a range of ML and AI services at different layers of the machine learning stack, including cost-efficient generative AI on Amazon EC2 Inf2 instances, easier application development with Amazon Bedrock, and Amazon CodeWhisperer with support for multiple programming languages.
Challenges of Responsible AI in the Generative AI Era
The power of generative AI models lies in their open-endedness, which presents challenges for responsible AI. The models are not limited to numerical predictions or classifications but are truly generative, which raises concerns about hallucinations, toxicity, and intellectual property.
Evolution of Service Cards
Service cards have evolved into more sophisticated and informative summaries of model properties, use cases, performance metrics, and responsible AI metrics. Adapting service cards to generative AI models raises new challenges and requires new considerations, such as hallucination and toxicity.
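To make the idea concrete, here is a minimal sketch of the kind of fields a service card might summarize. The `ServiceCard` class and its field names are hypothetical illustrations, not the actual AWS service card schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the information a service card summarizes.
# Field names are illustrative, not an actual AWS schema.
@dataclass
class ServiceCard:
    model_name: str
    intended_use_cases: list[str]
    limitations: list[str]
    performance_metrics: dict[str, float]      # e.g. {"rouge_l": 0.41}
    responsible_ai_metrics: dict[str, float]   # e.g. {"toxicity_rate": 0.01}

card = ServiceCard(
    model_name="example-generative-model",
    intended_use_cases=["summarization", "question answering"],
    limitations=["may hallucinate facts", "evaluated on English text only"],
    performance_metrics={"rouge_l": 0.41},
    responsible_ai_metrics={"toxicity_rate": 0.012, "hallucination_rate": 0.05},
)
print(card.responsible_ai_metrics)
```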
Quantitative Evaluation of LLM Performance
The challenge of evaluating large language models (LLMs) on dimensions such as task performance and hallucination has led to the development of model evaluation features. These features provide metrics for measuring LLM performance, including evaluating LLMs against specific use cases and generating synthetic data.
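As a rough illustration of quantitative evaluation, the sketch below scores a model's outputs against reference answers for a specific use case. The `generate` callable, the `exact_match` metric, and the tiny dataset are placeholders for illustration, not the API of any actual model evaluation feature.

```python
# Minimal sketch of a quantitative LLM evaluation loop: score model outputs
# against reference answers for a specific use case.

def exact_match(prediction: str, reference: str) -> float:
    """Return 1.0 if the normalized prediction equals the reference, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def evaluate(generate, dataset: list[dict]) -> dict:
    """Run the model over (prompt, reference) pairs and report mean exact match."""
    scores = [exact_match(generate(ex["prompt"]), ex["reference"]) for ex in dataset]
    return {"exact_match": sum(scores) / len(scores), "num_examples": len(scores)}

if __name__ == "__main__":
    # Toy stand-in for a real model endpoint.
    fake_llm = lambda prompt: "Paris" if "France" in prompt else "unknown"
    data = [
        {"prompt": "What is the capital of France?", "reference": "Paris"},
        {"prompt": "What is the capital of Peru?", "reference": "Lima"},
    ]
    print(evaluate(fake_llm, data))  # {'exact_match': 0.5, 'num_examples': 2}
```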
Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secure environment that balances access to private datasets with privacy protections through differential privacy techniques, offering a new approach to secure data handling in machine learning.
The complete show notes for this episode can be found at twimlai.com/go/662.
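For readers unfamiliar with differential privacy, the sketch below shows the Laplace mechanism, a standard building block for adding calibrated noise to a query result. It illustrates the general technique mentioned in the episode only; it is not the Clean Rooms ML implementation, and the `laplace_mechanism` function is a hypothetical example.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a noisy answer satisfying epsilon-differential privacy.

    `sensitivity` is the maximum change in the query result when one
    individual's record is added or removed; smaller `epsilon` means
    stronger privacy and noisier answers.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release a count over a sensitive dataset.
exact_count = 1_342  # e.g. number of records matching a query
private_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```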