
Yoshua Bengio - Designing out Agency for Safe AI

Machine Learning Street Talk (MLST)


The Dynamics of AI Preferences and Benchmarking Challenges (from 30:58)

This chapter explores reinforcement learning from human feedback (RLHF) and the ways AI systems tailor their responses to align with user preferences, which can come at the expense of truthfulness. It draws parallels with human behavior and addresses the difficulty of evaluating AI performance, emphasizing the need for updated metrics as AI capabilities surpass human ability.

