2min chapter

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

The Impossible Thing to Verify Is Whack

All computations that can be run over configurations of the solar system are equally likely to be maximized. If it is as similar as humans now are to the loss function under which we evolved, that honestly might not be that terrible a world; it might in fact be a very good world. Okay, so where do you get a good world out of maximizing prediction of text, plus RLHF, plus whatever alignment stuff might work, resulting in something that kind of just does what you ask it to, the way…
