“Problems I’ve Tried to Legibilize” by Wei Dai

LessWrong (30+ Karma)

Human-AI Safety and Value Conflicts

Wei Dai outlines human-AI safety risks arising from human value differences, status-driven morality, positional values, and distributional shifts.
