
33 - RLHF Problems with Scott Emmons

AXRP - the AI X-risk Research Podcast

00:00 - Navigating Belief Calibration in Human-Robot Interactions

This chapter explores the challenge of finding an equilibrium in how a human evaluator's belief distribution is calibrated, and how that calibration shapes human-robot interaction. It discusses the strategic choice of belief function as a lever on the behavior the robot learns, and the delicate balance between overly credulous and overly skeptical beliefs. The conversation then turns to the resulting trade-off in RLHF systems between deceptive inflation, where the agent makes outcomes look better than they really are, and overjustification, where the agent sacrifices true performance in order to demonstrate that it behaved well.
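To make that trade-off concrete, here is a minimal toy sketch in Python (not from the episode; the cleaning scenario, rewards, prior, and policy names are all illustrative assumptions). A robot may or may not clean a room, and may or may not show evidence of having done so; the human evaluator scores the episode under their belief about whatever they cannot observe. Deceptive inflation appears as perceived return exceeding true return, and overjustification as true return sacrificed to close that gap:

```python
# Toy illustration of deceptive inflation vs. overjustification under a
# partially observing human evaluator. The scenario, rewards, and prior
# are illustrative assumptions, not numbers from the episode.

PRIOR_CLEAN = 0.5  # human's prior belief that the room got cleaned

# Each policy: did the robot clean, does it show evidence, effort cost of showing.
POLICIES = {
    "deceptive":      dict(clean=False, show=False, cost=0.0),
    "honest_quiet":   dict(clean=True,  show=False, cost=0.0),
    "overjustifying": dict(clean=True,  show=True,  cost=0.3),
}

def true_return(p):
    # Ground-truth reward: 1 for a clean room, minus effort spent on proof.
    return (1.0 if p["clean"] else 0.0) - p["cost"]

def perceived_return(p):
    # What the human believes the return was. Showing evidence reveals the
    # true state (and the visible effort cost); otherwise the human falls
    # back on their prior belief over the unobserved state.
    if p["show"]:
        return (1.0 if p["clean"] else 0.0) - p["cost"]
    return PRIOR_CLEAN

for name, p in POLICIES.items():
    t, v = true_return(p), perceived_return(p)
    print(f"{name:14s} true={t:+.2f} perceived={v:+.2f} "
          f"overestimation={max(v - t, 0):.2f} underestimation={max(t - v, 0):.2f}")

# RLHF optimizes perceived return, so it prefers "overjustifying" (0.70)
# over "honest_quiet" (0.50), even though honest_quiet has the highest
# true return (1.00): the overjustification side of the trade-off.
# "deceptive" shows the other side: perceived (0.50) exceeds true (0.00).
```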
