
Dwarkesh Podcast

Latest episodes

Oct 31, 2023 • 3h 7min

Paul Christiano - Preventing an AI Takeover

Paul Christiano, the world's leading AI safety researcher, discusses whether he regrets inventing RLHF, his modest timelines for AGI, his vision of a post-AGI world, how his current research could solve alignment, the push for responsible scaling policies, preventing an AI coup or bioweapon, and more.
Oct 26, 2023 • 44min

Shane Legg (DeepMind Founder) - 2028 AGI, New Architectures, Aligning Superhuman Models

Shane Legg, Founder and Chief AGI Scientist of Google DeepMind, discusses his prediction of AGI by 2028 and the need for new architectures. He and Dwarkesh explore how to align superhuman models, DeepMind's impact on safety versus capabilities, and the future of AI, particularly the importance of multimodality in processing images, video, and other modalities.
Oct 12, 2023 • 1h 31min

Grant Sanderson (3Blue1Brown) - Past, Present, & Future of Mathematics

Grant Sanderson, creator of the 3Blue1Brown YouTube channel, discusses the future of math, including the role of AGI in advanced mathematics, career paths for mathematically talented students, his plans as a high school teacher, tips for self-teaching, the significance of Gödel's incompleteness theorem, the difficulty of finding good explanations, and his process for making math videos.
Oct 4, 2023 • 2h 25min

Sarah C. M. Paine - WW2, Taiwan, Ukraine, & Maritime vs Continental Powers

Sarah C. M. Paine, Professor of History and Strategy at the Naval War College, discusses how continental and maritime powers think, why the British Empire fell apart, lessons from WW2 and the Cold War, a friendly debate on Taiwan and Ukraine, and whether the US is ready for a war with China. She also explains how to study history properly and why leaders keep making the same mistakes.
Aug 8, 2023 • 1h 59min

Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress

Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic's Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Jul 12, 2023 • 2h 23min

Andy Matuschak - Self-Teaching, Spaced Repetition, & Why Books Don’t Work

A few weeks ago, I sat beside Andy Matuschak to record how he reads a textbook.

Even though my own job is to learn things, I was shocked by how much more intense, painstaking, and effective his learning process was.

So I asked if we could record a conversation about how he learns and a bunch of other topics:

* How he identifies and interrogates his confusion (much harder than it seems, and requires an extremely effortful and slow pace)
* Why memorization is essential to understanding and decision-making
* How some people (like Tyler Cowen) integrate so much information without an explicit note-taking or spaced repetition system
* How LLMs and video games will change education
* How independent researchers and writers can make money
* The balance of freedom and discipline in education
* Why we produce fewer von Neumann-like prodigies nowadays
* How multi-trillion dollar companies like Apple (where he was previously responsible for bedrock iOS features) manage to coordinate millions of different considerations (from the cost of different components to the needs of users, etc.) into new products designed by tens of thousands of people

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

To see Andy's process in action, check out the video where we record him studying a quantum physics textbook, talking aloud about his thought process, and using his memory system prototype to internalize the material.

You can check out his website and personal notes, and follow him on Twitter.

Cometeer
Visit cometeer.com/lunar for $20 off your first order on the best coffee of your life!

If you want to sponsor an episode, contact me at dwarkesh.sanjay.patel@gmail.com.

Timestamps
(00:00:52) - Skillful reading
(00:02:30) - Do people care about understanding?
(00:06:52) - Structuring effective self-teaching
(00:16:37) - Memory and forgetting
(00:33:10) - Andy's memory practice
(00:40:07) - Intellectual stamina
(00:44:27) - New media for learning (video, games, streaming)
(00:58:51) - Schools are designed for the median student
(01:05:12) - Is learning inherently miserable?
(01:11:57) - How Andy would structure his kids' education
(01:30:00) - The usefulness of hypertext
(01:41:22) - How computer tools enable iteration
(01:50:44) - Monetizing public work
(02:08:36) - Spaced repetition
(02:10:16) - Andy's personal website and notes
(02:12:44) - Working at Apple
(02:19:25) - Spaced repetition 2

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Jun 26, 2023 • 3h 7min

Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

The second half of my 7-hour conversation with Carl Shulman is out!

My favorite part! And the one that had the biggest impact on my worldview.

Here, Carl lays out how an AI takeover might happen:

* AI can threaten mutually assured destruction from bioweapons,
* use cyber attacks to take over physical infrastructure,
* build mechanical armies,
* spread seed AIs we can never exterminate,
* offer tech and other advantages to collaborating countries, etc.

Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:

* what is the far future best case scenario for humanity
* what it would look like to have AI make thousands of years of intellectual progress in a month
* how do we detect deception in superhuman models
* does space warfare favor defense or offense
* is a Malthusian state inevitable in the long run
* why markets haven't priced in explosive economic growth
* & much more

Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Catch part 1 here.

Timestamps
(0:00:00) - Intro
(0:00:47) - AI takeover via cyber or bio
(0:32:27) - Can we coordinate against AI?
(0:53:49) - Human vs AI colonizers
(1:04:55) - Probability of AI takeover
(1:21:56) - Can we detect deception?
(1:47:25) - Using AI to solve coordination problems
(1:56:01) - Partial alignment
(2:11:41) - AI far future
(2:23:04) - Markets & other evidence
(2:33:26) - Day in the life of Carl Shulman
(2:47:05) - Space warfare, Malthusian long run, & other rapid fire

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Jun 14, 2023 • 2h 44min

Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

In terms of the depth and range of topics, this episode is the best I've done.

No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl's model of an intelligence explosion, which integrates everything from:

* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he's more optimistic than Eliezer.

The next part, which I'll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
May 23, 2023 • 2h 38min

Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes

It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.

We discuss:
- similarities between AI progress & the Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)
- visiting starving former Soviet scientists during the fall of the Soviet Union
- whether Oppenheimer was a spy, & consulting on the Nolan movie
- living through WW2 as a child
- odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea
- how the US pulled off such a massive secret wartime scientific & industrial project

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(0:00:00) - Oppenheimer movie
(0:06:22) - Was the bomb inevitable?
(0:29:10) - Firebombing vs nuclear vs hydrogen bombs
(0:49:44) - Stalin & the Soviet program
(1:08:24) - Deterrence, disarmament, North Korea, Taiwan
(1:33:12) - Oppenheimer as lab director
(1:53:40) - AI progress vs Manhattan Project
(1:59:50) - Living through WW2
(2:16:45) - Secrecy
(2:26:34) - Wisdom & war

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Apr 6, 2023 • 4h 3min

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society's response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
