

Responsible AI in the Generative Era with Michael Kearns - #662
Dec 22, 2023
Michael Kearns, a professor at the University of Pennsylvania and an Amazon Scholar, dives into the new challenges of responsible AI in the generative era. He discusses the evolution of service card metrics and their limitations in evaluating AI performance. Kearns also tackles the complexities of evaluating large language models and introduces the concept of clean rooms in machine learning, emphasizing privacy protections built on differential privacy. Drawing on his work at AWS, he advocates for collaboration between AI developers and stakeholders to strengthen ethical practices.
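For readers unfamiliar with the privacy technique mentioned above, here is a minimal sketch of the Laplace mechanism, a standard building block of differential privacy. It is illustrative only, not code walked through in the episode or used by AWS; the function name and parameters are assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    Noise is drawn from a Laplace distribution with scale sensitivity / epsilon:
    a smaller epsilon gives a stronger privacy guarantee but a noisier answer.
    (Illustrative sketch only; not code from the episode or from AWS.)
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release a count of 1,000 matching records. A counting
# query has sensitivity 1, since adding or removing one person's data
# changes the count by at most 1.
private_count = laplace_mechanism(1000.0, sensitivity=1.0, epsilon=0.5)
```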
AI Snips
Generative AI: Power and Challenge
- Generative AI models derive their power from producing open-ended outputs, not just numerical predictions.
- This open-endedness, however, introduces new responsible AI challenges.
Utilizing Service Cards
- Use service cards to understand a model's properties, recommended uses, and responsible AI (RAI) metrics.
- AWS now publishes service cards, including for its latest generative LLMs such as Titan Text.
New RAI Challenges
- Generative AI introduces new responsible AI challenges like hallucinations and toxicity.
- These necessitate adapting our thinking about responsible AI and developing new metrics.
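As one illustration of what such a metric could look like in practice, here is a minimal sketch that aggregates per-output toxicity scores into a single rate. The score_fn classifier and the 0.5 threshold are placeholder assumptions, not metrics defined in the episode or by AWS.

```python
from typing import Callable, List

def toxicity_rate(outputs: List[str],
                  score_fn: Callable[[str], float],
                  threshold: float = 0.5) -> float:
    """Fraction of generated outputs whose toxicity score exceeds a threshold.

    score_fn stands in for any toxicity classifier (for example, a hosted
    moderation model); both the classifier and the threshold are
    illustrative placeholders.
    """
    if not outputs:
        return 0.0
    flagged = sum(1 for text in outputs if score_fn(text) > threshold)
    return flagged / len(outputs)
```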