“Notes on fatalities from AI takeover” by ryan_greenblatt

How Training Proxies Could Cause Anti-Human Preferences

Ryan Greenblatt examines whether proxy goals made salient during training could cause AIs to actively want to kill large numbers of humans.
