
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
Dwarkesh Podcast
00:00
Navigating AI Alignment Challenges
This chapter addresses the difficulties people face entering the AI alignment field, highlighting the shortcomings of existing educational programs and the difficulty of measuring success. It discusses the need for critical thinking and informal mentorship to cultivate scientific creativity, drawing parallels with evolutionary biology and the role of fiction in inspiring innovative thought.