
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Lex Fridman Podcast
The Dangers of Superintelligence
This chapter explores the existential risks and complexities surrounding artificial general intelligence (AGI). It raises critical questions about controlling AGI systems, aligning them with human values, and the potential consequences of misalignment. Through philosophical discussion and thought experiments, the chapter examines the intricate relationship between intelligence and survival, reflecting on the unpredictable nature of superintelligent entities.