2-minute chapter

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

The Evolution of Humans

In order to predict human output really well, it needs humans around just to give it the raw data from which to improve its predictions, right? Or something like that. I'm confused. So look, you can always develop arbitrarily fanciful scenarios in which the AI has some contrived motive that it can only possibly satisfy by keeping humans alive in good health and comfort, like turning all the nearby galaxies into happy, cheerful places full of high-functioning galactic civilizations. But as soon as your sentence has more than, like, five words in it, its probability has dropped to basically zero because of all the extra details you're padding in. Maybe let's return...
