Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

Navigating AI Alignment Challenges

This chapter addresses the challenges individuals face in the AI alignment field, highlighting the shortcomings of existing educational programs and the difficulty of measuring success. It discusses the need for critical thinking and informal mentorship to cultivate scientific creativity, drawing parallels with evolutionary biology and noting the role of fiction in inspiring innovative thought.
