
The Valmy

Latest episodes

Dec 13, 2022 • 3h 49min

#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Podcast: 80,000 Hours Podcast
Episode: #112 – Carl Shulman on the common-sense case for existential risk work and its practical implications
Release date: 2021-10-05

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.

According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don't need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

Links to learn more, summary and full transcript.

The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:

• The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
• So saving all US citizens at any given point in time would be worth $1,300 trillion.
• If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
• Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently.
So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today. This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve? Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.

Carl expects that all the reasons we didn't adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on. Today's episode is in part our way of trying to improve this situation.
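The back-of-the-envelope argument above is easy to check numerically. A minimal sketch — the ~330 million US population figure is an assumption inferred from the $1,300 trillion total, not stated in the notes:

```python
# Back-of-the-envelope check of the episode's cost-benefit argument.
# Assumption: US population of roughly 330 million (not stated in the notes).
population = 330e6
value_per_life = 4e6          # upper end of US agency willingness-to-pay, in dollars

total_value = population * value_per_life
print(f"Value of all US lives: ${total_value / 1e12:,.0f} trillion")      # ~$1,320 trillion

extinction_risk = 1 / 6       # Toby Ord's figure for the next century
relative_reduction = 0.01     # shaving 1% off that risk

worthwhile_spend = total_value * extinction_risk * relative_reduction
print(f"Worth spending up to: ${worthwhile_spend / 1e12:.1f} trillion")   # ~$2.2 trillion
```

This reproduces both headline numbers: $4M × 330M ≈ $1,300 trillion, and one sixth of one percent of that ≈ $2.2 trillion.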
In today's wide-ranging conversation, Carl and Rob also cover:

• A few reasons Carl isn't excited by 'strong longtermism'
• How x-risk reduction compares to GiveWell recommendations
• Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
• The history of bioweapons
• Whether gain-of-function research is justifiable
• Successes and failures around COVID-19
• The history of existential risk
• And much more

Chapters:
Rob's intro (00:00:00)
The interview begins (00:01:34)
A few reasons Carl isn't excited by strong longtermism (00:03:47)
Longtermism isn't necessary for wanting to reduce big x-risks (00:08:21)
Why we don't adequately prepare for disasters (00:11:16)
International programs to stop asteroids and comets (00:18:55)
Costs and political incentives around COVID (00:23:52)
How x-risk reduction compares to GiveWell recommendations (00:34:34)
Solutions for asteroids, comets, and supervolcanoes (00:50:22)
Solutions for climate change (00:54:15)
Solutions for nuclear weapons (01:02:18)
The history of bioweapons (01:22:41)
Gain-of-function research (01:34:22)
Solutions for bioweapons and natural pandemics (01:45:31)
Successes and failures around COVID-19 (01:58:26)
Who to trust going forward (02:09:09)
The history of existential risk (02:15:07)
The most compelling risks (02:24:59)
False alarms about big risks in the past (02:34:22)
Suspicious convergence around x-risk reduction (02:49:31)
How hard it would be to convince governments (02:57:59)
Defensive epistemology (03:04:34)
Hinge of history debate (03:16:01)
Technological progress can't keep up for long (03:21:51)
Strongest argument against this being a really pivotal time (03:37:29)
How Carl unwinds (03:45:30)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Dec 3, 2022 • 1h 30min

Byrne Hobart - FTX, Drugs, Twitter, Taiwan, & Monasticism

Podcast: Dwarkesh Podcast
Episode: Byrne Hobart - FTX, Drugs, Twitter, Taiwan, & Monasticism
Release date: 2022-12-01

Perhaps the most interesting episode so far.

Byrne Hobart writes at thediff.co, analyzing inflections in finance and tech. He explains:

* What happened at FTX
* How drugs have induced past financial bubbles
* How to be long AI while hedging a Taiwan invasion
* Whether Musk's Twitter takeover will succeed
* Where to find the next Napoleon and LBJ
* & ultimately how society can deal with those who seek domination and recognition

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps:
(0:00:50) - What the hell happened at FTX?
(0:07:03) - How SBF Faked Being a Genius
(0:12:23) - Drugs Explain Financial Bubbles
(0:17:12) - On Founder Physiognomy
(0:21:02) - Indexing Parental Involvement in Raising Talented Kids
(0:30:35) - Where are all the Caro-level Biographers?
(0:39:03) - Where are today's Great Founders?
(0:48:29) - Micro Writing -> Macro Understanding
(0:51:48) - Elon's Twitter Takeover
(1:00:50) - Does Big Tech & West Have Great People?
(1:11:34) - Philosophical Fanatics and Effective Altruism
(1:17:17) - What Great Founders Have In Common
(1:19:56) - Thinkers vs. Analyzers
(1:25:40) - Taiwan Invasion Bets & AI Timelines

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Nov 25, 2022 • 1h 12min

Johnathan Bi on Mimesis and René Girard

Podcast: EconTalk
Episode: Johnathan Bi on Mimesis and René Girard
Release date: 2022-11-21

When the 20-year-old overachiever Johnathan Bi's first startup crashed and burned, he headed to a Zen retreat in the Catskills to "debug himself." He discovered René Girard and his mimetic theory--the idea that imitation is a key and often unconscious driver of human behavior. Listen as entrepreneur and philosopher Bi shares with EconTalk host Russ Roberts what he learned from Girard, and Girard's insights into how we meet our primal need for money, fame, and power. The conversation includes the contrasts between economics and Girard's perspective.
Nov 25, 2022 • 52min

Robin Hanson on Predicting the Future of Artificial Intelligence

Podcast: Future of Life Institute Podcast
Episode: Robin Hanson on Predicting the Future of Artificial Intelligence
Release date: 2022-11-24

Robin Hanson joins the podcast to discuss AI forecasting methods and metrics.

Timestamps:
00:00 Introduction
00:49 Robin's experience working with AI
06:04 Robin's views on AI development
10:41 Should we care about metrics for AI progress?
16:56 Is it useful to track AI progress?
22:02 When should we begin worrying about AI safety?
29:16 The history of AI development
39:52 AI progress that deviates from current trends
43:34 Is this AI boom different than past booms?
48:26 Different metrics for predicting AI
Nov 24, 2022 • 60min

Robin Hanson on Grabby Aliens and When Humanity Will Meet Them

Podcast: Future of Life Institute Podcast
Episode: Robin Hanson on Grabby Aliens and When Humanity Will Meet Them
Release date: 2022-11-17

Robin Hanson joins the podcast to explain his theory of grabby aliens and its implications for the future of humanity. Learn more about the theory here: https://grabbyaliens.com

Timestamps:
00:00 Introduction
00:49 Why should we care about aliens?
05:58 Loud alien civilizations and quiet alien civilizations
08:16 Why would some alien civilizations be quiet?
14:50 The moving parts of the grabby aliens model
23:57 Why is humanity early in the universe?
28:46 Couldn't we just be alone in the universe?
33:15 When will humanity expand into space?
46:05 Will humanity be more advanced than the aliens we meet?
49:32 What if we discovered aliens tomorrow?
53:44 Should the way we think about aliens change our actions?
57:48 Can we reasonably theorize about aliens?
53:39 The next episode
Nov 20, 2022 • 46min

Peter Thiel – The End of The Future

Release date: 2022-11-20

Notes from The Valmy:
Source: YouTube (Stanford Academic Freedom Conference) https://www.youtube.com/@stanfordcli
Original release date: 2022-11-04
Nov 7, 2022 • 2h 5min

Bryan Caplan - Feminists, Billionaires, and Demagogues

Podcast: Dwarkesh Podcast
Episode: Bryan Caplan - Feminists, Billionaires, and Demagogues
Release date: 2022-10-20

It was a fantastic pleasure to welcome Bryan Caplan back for a third time on the podcast! His most recent book is Don't Be a Feminist: Essays on Genuine Justice.

He explains why he thinks:
- Feminists are mostly wrong,
- We shouldn't overtax our centi-billionaires,
- Decolonization should have emphasized human rights over democracy,
- Eastern Europe shows that we could accept millions of refugees.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

More really cool guests coming up; subscribe to find out about future episodes!

You may also enjoy my interviews with Tyler Cowen (about talent, collapse, & pessimism of sex), Charles Mann (about the Americas before Columbus & scientific wizardry), and Steve Hsu (about intelligence and embryo selection).

Timestamps:
(00:12) - Don't Be a Feminist
(16:53) - Western Feminism Ignores Infanticide
(19:59) - Why The Universe Hates Women
(32:02) - Women's Tears Have Too Much Power
(45:40) - Bryan Performs Standup Comedy!
(51:02) - Affirmative Action is Philanthropic Propaganda
(54:13) - Peer-effects as the Only Real Education
(58:24) - The Idiocy of Student Loan Forgiveness
(1:07:57) - Why Society is Becoming Mentally Ill
(1:10:50) - Open Borders & the Ultra-long Term
(1:14:37) - Why Cowen's Talent Scouting Strategy is Ludicrous
(1:22:06) - Surprising Immigration Victories
(1:36:06) - The Most Successful Revolutions
(1:54:20) - Anarcho-Capitalism is the Ultimate Government
(1:55:40) - Billionaires Deserve their Wealth

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Nov 5, 2022 • 28min

Can Effective Altruism really change the world?

Podcast: Analysis
Episode: Can Effective Altruism really change the world?
Release date: 2022-10-24

If you want to do good in the world, should you be a doctor, or an aid worker? Or should you make a billion or two any way you can, and give it to good causes? Billionaire Sam Bankman-Fried argues this is the best use of his vast wealth. But philosophers argue charitable giving is often driven not by logic, but by a sense of personal attachment. David Edmonds traces the latest developments in the effective altruism movement, examining the questions it poses, and looking at its successes and limitations.
Nov 5, 2022 • 54min

Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

Podcast: Future of Life Institute Podcast
Episode: Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe
Release date: 2022-11-03

Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode
Oct 16, 2022 • 44min

Peter Thiel on the Bible

Podcast: Meeting of Minds Podcast
Episode: Peter Thiel on the Bible
Release date: 2021-05-17

Peter Thiel, the highly successful tech entrepreneur and author, discusses his mentor René Girard; the Bible, how we read it, and how it reads us; Jesus' death and resurrection; atheism; and the limitless escalation of violence towards apocalypse.

Timestamps:
0:43 The Bible reads us
2:02 Cain and Abel vs. Romulus and Remus
6:05 Cross vs. Resurrection
7:26 The Gospels are different from the Death of Socrates
9:04 The Bible is discontinuous from pagan classics
11:30 "The idea that victims exist comes from Judeo-Christianity and nowhere else."
14:54 Was Nietzsche somehow extremely close to the truth of Christianity?
17:18 Pagan pharmakoi, the ancient sacrificial medicine
19:48 Fascism and Communism
23:00 Girard on the Woes against the Pharisees
26:02 The cycle that leads to apocalypse
31:11 Steven Pinker and the story of progress
32:19 Is an apocalypse, such as a nuclear war, inevitable?
35:10 Being too sanguine about apocalypse makes it more likely
42:08 Is there an off-ramp? What would it look like? If we don't know, shouldn't we at least try to figure it out?
