
"Evaluating the historical value misspecification argument" by Matthew Barnett

LessWrong (Curated & Popular)


The Beliefs of MIRI Researchers on the Challenges of Instructing AI and Specifying Objectives

This chapter explores the difficulties of instructing AI and aligning its objectives, including the fictional portrayal of value misalignment in Fantasia, where a magical broom causes unintended consequences. It emphasizes the importance of accurately specifying an AI's objectives to prevent accidents and unintended outcomes.
