
“6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes

LessWrong (30+ Karma)

TL;DR: Approval Reward vs. Utility Maximizers

TYPE III AUDIO summarizes the culture clash over whether future AIs will be shaped by human-like approval reward or become ruthless utility maximizers.
