"Thomas Kwa's MIRI research experience" by Thomas Kwa and others

LessWrong (Curated & Popular)

Deconfusion, Alignment, and Research Experience

This chapter examines deconfusion as a research methodology, particularly as applied to AI alignment problems. The speaker discusses tensions between conflicting intuitions, the role of concrete cases and theorems in resolving them, and the difficulty of turning conceptual problems into concrete ones. He also reflects on the value of research experience and mentorship.

Chapter begins at 21:43.
