2min snip

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

NOTE

Human Intelligence Enhancement and other Hail Mary Hypotheticals.

Human intelligence enhancement has a realistic chance of going right, in a way that creating an extremely smart AI does not. Despite the uncertainties, it would be worth shutting down AI development and focusing on enhancing human intelligence instead: even a small chance of humanity surviving in a world with human enhancement, say 1%, would make it worthwhile. Human intelligence enhancement is a Hail Mary pass, alongside other hypotheticals: using MRIs and neurofeedback to train people to be more rational and rationalize less, using GPT-4-class systems to spread sanity on platforms like Twitter, simulating and upgrading brain uploads, and running those uploads faster. These ideas may not be the most profitable uses of the technology, but they offer potential alternatives to the risks of artificial intelligence.
