2-minute chapter

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

The Constraint of Human Power-Seeking

Natural selection regularizes so much harder than gradient descent in that way. It's got an enormously stronger information bottleneck: the L2 norm you put on a bunch of weights has nothing on the tiny amount of information that can make its way into the genome per generation. My concern there is that this works better when the things you're breeding are stupider than you, as opposed to when they are smarter than you. This goes back to the earlier question, and to whether they stay inside exactly the same environment where you bred them. It's not some weird fact about the cognitive system; it's a fact about the environment, about the structure of reality, he says.
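
As a point of reference for the L2 penalty mentioned above, here is a minimal sketch (not from the episode, names and constants are illustrative) of gradient descent with an L2 norm on the weights. The penalty only nudges each weight toward zero by a small multiple of its value each step; it does not impose the kind of hard per-generation information limit the quote attributes to natural selection.

```python
import numpy as np

def l2_regularized_step(w, grad_loss, lr=0.01, lam=1e-4):
    """One gradient-descent update on loss(w) + (lam / 2) * ||w||^2.

    The L2 term contributes lam * w to the gradient, shrinking every
    weight slightly toward zero on each step.
    """
    return w - lr * (grad_loss + lam * w)

# Toy usage: quadratic loss (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([10.0])
for _ in range(1000):
    w = l2_regularized_step(w, 2 * (w - 3.0))
print(w)  # settles near 3, pulled slightly toward 0 by the penalty
```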
