Responsible AI in the Generative Era with Michael Kearns - #662

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Navigating Performance Trade-offs in AI Alignment

This chapter explores the complexities of Reinforcement Learning from Human Feedback (RLHF) in language models, emphasizing the performance trade-offs that come with safety measures such as toxicity suppression. It gives examples of how alignment principles can affect product functionality, particularly in targeted advertising, illustrating the delicate balance between operational efficiency and ethical considerations.
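
As a rough illustration of the trade-off described above (not something taken from the episode), the sketch below shows how a toxicity penalty folded into an RLHF-style reward can suppress unsafe outputs at the cost of lowering reward for borderline but useful responses. The function names, scores, and weights are all hypothetical.

```python
# Illustrative sketch only: a shaped reward that trades task performance
# against a safety penalty, as in toxicity suppression during RLHF-style
# fine-tuning. All names and numbers here are hypothetical.

def shaped_reward(task_reward: float, toxicity_score: float, penalty_weight: float) -> float:
    """Combine a task reward with a toxicity penalty.

    task_reward    -- score from a (hypothetical) helpfulness reward model
    toxicity_score -- probability in [0, 1] from a (hypothetical) toxicity classifier
    penalty_weight -- how strongly safety is weighted against task performance
    """
    return task_reward - penalty_weight * toxicity_score


# Raising the penalty weight suppresses more borderline outputs, but it also
# reduces the effective reward for useful-but-edgy responses, which is the
# performance trade-off discussed in this chapter.
for weight in (0.0, 0.5, 2.0):
    print(weight, shaped_reward(task_reward=0.8, toxicity_score=0.3, penalty_weight=weight))
```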
