Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

NOTE

Human Intelligence Enhancement and other Hail Mary Hypotheticals.

Human intelligence enhancement has a realistic chance of going right, unlike building an extremely smart AI. Despite the uncertainties, it would be worth shutting down AI development and focusing on enhancing human intelligence instead: even in a world where enhancement gives humanity only a 1% chance of survival, that chance is worth pursuing. Intelligence enhancement is a Hail Mary pass, alongside other hypotheticals: using MRIs and neurofeedback to train people to be more rational and to rationalize less, using GPT-4-class systems to spread sanity on platforms like Twitter, and simulating, upgrading, or speeding up brain uploads. These ideas may not be the most profitable use of technology, but they offer potential alternatives to the risks of artificial intelligence.
