
33 - RLHF Problems with Scott Emmons

AXRP - the AI X-risk Research Podcast

00:00 - Exploring Alignment and Incentives in Reinforcement Learning

This chapter covers research on the theory of the ideal case for reinforcement learning and the alignment problem: the incentives a reward-maximizing agent faces, potential failure modes such as deception and sensor tampering, and how cooperative inverse reinforcement learning formalizes perfect alignment.
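
For context on that last point, here is a minimal sketch of the cooperative inverse reinforcement learning (CIRL) game, following the standard formulation of Hadfield-Menell et al. (2016); the notation below is illustrative and is not drawn from the episode itself.

    % CIRL sketch (assumed standard formulation; not quoted from the episode)
    M = \langle S,\; \{A^H, A^R\},\; T(\cdot \mid s, a^H, a^R),\; \{\Theta, r\},\; P_0,\; \gamma \rangle
    % Both players share one reward r(s, a^H, a^R; \theta), but only the
    % human H observes \theta; the robot R must infer it from H's actions.
    \max_{\pi^R} \; \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, r(s_t, a^H_t, a^R_t; \theta) \right]
    % Because H and R optimize the same objective, alignment is perfect by
    % construction, which is what makes CIRL a useful idealized benchmark.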
