
“Legible vs. Illegible AI Safety Problems” by Wei Dai

LessWrong (30+ Karma)


Personal Motivation for This Framework (from 01:40)

Wei Dai recounts how the legible vs. illegible distinction emerged and how it clarified his discomfort about certain hiring decisions in alignment research.
