3min chapter

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

The Importance of Human Intelligence in AI Training

Recursive self-improvement is much less likely because they're at human-level intelligence. They'd have to train another billion-dollar run to scale up. Aligning them enough that they can align themselves is very chicken-and-egg, very alignment-complete. There are just so many dangerous domains you've got to operate in to do alignment. Okay. So here's one claim somebody could make: listen, if these things hang around human level, and if they're training the way they are, recursion isn't an option. At some point they get smart enough that they can roll their own AI systems and are better at it than humans. Boom. It could start before then for some reasons.
