Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

Intro

This chapter examines the importance of aligning artificial intelligence with human values and the potential consequences of neglecting that alignment. It emphasizes the need for caution in AI development, addresses public perception, and makes the case for responsible regulatory measures.
