Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

Intro

This chapter explores the importance of aligning artificial intelligence with human values and the potential consequences of failing to do so. It emphasizes the need for caution in AI development, while also addressing public perception and advocating for responsible regulatory measures.
