

The Valmy
Peter Hartree
https://thevalmy.com/
Episodes

Jan 17, 2023 • 52min
Ex-Logger Aims to Beat Elon Musk in Electric Trucks
Podcast: Odd Lots
Episode: Ex-Logger Aims to Beat Elon Musk in Electric Trucks
Release date: 2023-01-16
While electric vehicle use is growing rapidly, the internal combustion engine remains completely dominant in the world of heavy trucks. Tesla has a plan to commercialize an electric semi at some point in the future, but nobody really knows when. Meanwhile, other entities are looking to compete in the world of industrial vehicles. Chace Barber is a former trucker in the logging industry, which has some very different characteristics from the type of freight trucking you typically see on a highway. When you're driving over the Rocky Mountains, without easy proximity to mechanics, tow trucks or service stations, you need power and reliability. His company, Edison Motors, is building its own trucks with a hybrid diesel-electric approach that it sees as a better path forward. On this episode, we discuss the challenges of hauling logs, as well as how it's possible for a small entity to get in the game of building such large industrial equipment.

Jan 13, 2023 • 1h 19min
Tyler Cowen on Effective Altruism (University of St Andrews)
Release date: 2023-01-13
Notes from The Valmy:
Source: YouTube https://youtu.be/ZzV7ty1DW_c
Release date: 2022-12-15

Jan 13, 2023 • 2h 44min
#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
Podcast: 80,000 Hours Podcast
Episode: #141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
Release date: 2022-12-13
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.
But do they really 'understand' what they're saying, or do they just give the illusion of understanding?
Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed and ultimately given power in society.
Links to learn more, summary and full transcript.
One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.
However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.
Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.
We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck that it doesn't matter.
In today's conversation we discuss the above, as well as:
• Could speeding up AI development be a bad thing?
• The balance between excitement and fear when it comes to AI advances
• Why OpenAI focuses its efforts where it does
• Common misconceptions about machine learning
• How many computer chips it might take for AI to be able to do most of the things humans do
• How Richard understands the 'alignment problem' differently than other people
• Why 'situational awareness' may be a key concept for understanding the behaviour of AI models
• What work to positively shape the development of AI Richard is and isn't excited about
• The AGI Safety Fundamentals course that Richard developed to help people learn more about this field
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

Jan 3, 2023 • 1h 22min
Nadia Asparouhova — Tech elites, democracy, open source, & philanthropy
Podcast: Dwarkesh Podcast
Episode: Nadia Asparouhova — Tech elites, democracy, open source, & philanthropy
Release date: 2022-12-15
Nadia Asparouhova is currently researching what the new tech elite will look like at nadia.xyz. She is also the author of Working in Public: The Making and Maintenance of Open Source Software.
We talk about how:
* American philanthropy has changed from Rockefeller to Effective Altruism
* SBF represented the Davos elite rather than the Silicon Valley elite
* Open source software reveals the limitations of democratic participation
* & much more
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Intro
(0:00:26) - SBF was Davos elite
(0:09:38) - Gender sociology of philanthropy
(0:16:30) - Was Shakespeare an open source project?
(0:22:00) - Need for charismatic leaders
(0:33:55) - Political reform
(0:40:30) - Why didn’t previous wealth booms lead to new philanthropic movements?
(0:53:35) - Creating a 10,000 year endowment
(0:57:27) - Why do institutions become left wing?
(1:02:27) - Impact of billionaire intellectual funding
(1:04:12) - Value of intellectuals
(1:08:53) - Climate, AI, & Doomerism
(1:18:04) - Religious philanthropy
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Dec 22, 2022 • 1h 26min
Bethany McLean — Enron, FTX, 2008, Musk, frauds, & visionaries
Podcast: Dwarkesh Podcast
Episode: Bethany McLean — Enron, FTX, 2008, Musk, frauds, & visionaries
Release date: 2022-12-21
This was one of my favorite episodes ever.
Bethany McLean was the first reporter to question Enron’s earnings, and she has written some of the best finance books out there.
We discuss:
* The astounding similarities between Enron & FTX,
* How visionaries are just frauds who succeed (and which category describes Elon Musk),
* What caused 2008, and whether we are headed for a new crisis,
* Why there are too many venture capitalists and not enough short sellers,
* And why history keeps repeating itself.
McLean is a contributing editor at Vanity Fair (see her articles here) and the author of The Smartest Guys in the Room, All the Devils Are Here, Saudi America, and Shaky Ground.
Watch on YouTube. Listen on Spotify, Apple Podcasts, or your favorite podcast platform.
Follow McLean on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(0:04:37) - Is Fraud Over?
(0:11:22) - Shortage of Short Sellers
(0:19:03) - Elon Musk - Fraud or Visionary?
(0:23:00) - Intelligence, Fake Deals, & Culture
(0:33:40) - Rewarding Leaders for Long Term Thinking
(0:37:00) - FTX Mafia?
(0:40:17) - Is Finance Too Big?
(0:44:09) - 2008 Collapse, Fannie & Freddie
(0:49:25) - The Big Picture
(1:00:12) - Frackers Vindicated?
(1:03:40) - Rating Agencies
(1:07:05) - Lawyers Getting Rich Off Fraud
(1:15:09) - Are Some People Fundamentally Deceptive?
(1:19:25) - Advice for Big Picture Thinkers

Dec 13, 2022 • 3h 49min
#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications
Podcast: 80,000 Hours Podcast
Episode: #112 – Carl Shulman on the common-sense case for existential risk work and its practical implications
Release date: 2021-10-05
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.
But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.
According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.
Links to learn more, summary and full transcript.
The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
• The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
• So saving all US citizens at any given point in time would be worth $1,300 trillion.
• If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
• Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently.
So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today.
This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.
If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?
Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.
Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.
Today’s episode is in part our way of trying to improve this situation.
In today’s wide-ranging conversation, Carl and Rob also cover:
• A few reasons Carl isn't excited by 'strong longtermism'
• How x-risk reduction compares to GiveWell recommendations
• Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
• The history of bioweapons
• Whether gain-of-function research is justifiable
• Successes and failures around COVID-19
• The history of existential risk
• And much more
Chapters:
Rob’s intro (00:00:00)
The interview begins (00:01:34)
A few reasons Carl isn't excited by strong longtermism (00:03:47)
Longtermism isn’t necessary for wanting to reduce big x-risks (00:08:21)
Why we don’t adequately prepare for disasters (00:11:16)
International programs to stop asteroids and comets (00:18:55)
Costs and political incentives around COVID (00:23:52)
How x-risk reduction compares to GiveWell recommendations (00:34:34)
Solutions for asteroids, comets, and supervolcanoes (00:50:22)
Solutions for climate change (00:54:15)
Solutions for nuclear weapons (01:02:18)
The history of bioweapons (01:22:41)
Gain-of-function research (01:34:22)
Solutions for bioweapons and natural pandemics (01:45:31)
Successes and failures around COVID-19 (01:58:26)
Who to trust going forward (02:09:09)
The history of existential risk (02:15:07)
The most compelling risks (02:24:59)
False alarms about big risks in the past (02:34:22)
Suspicious convergence around x-risk reduction (02:49:31)
How hard it would be to convince governments (02:57:59)
Defensive epistemology (03:04:34)
Hinge of history debate (03:16:01)
Technological progress can’t keep up for long (03:21:51)
Strongest argument against this being a really pivotal time (03:37:29)
How Carl unwinds (03:45:30)
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Dec 3, 2022 • 1h 30min
Byrne Hobart - FTX, Drugs, Twitter, Taiwan, & Monasticism
Podcast: Dwarkesh Podcast
Episode: Byrne Hobart - FTX, Drugs, Twitter, Taiwan, & Monasticism
Release date: 2022-12-01
Perhaps the most interesting episode so far.
Byrne Hobart writes at thediff.co, analyzing inflections in finance and tech.
He explains:
* What happened at FTX
* How drugs have induced past financial bubbles
* How to be long AI while hedging Taiwan invasion
* Whether Musk’s Twitter takeover will succeed
* Where to find the next Napoleon and LBJ
* & ultimately how society can deal with those who seek domination and recognition
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Timestamps:
(0:00:50) - What the hell happened at FTX?
(0:07:03) - How SBF Faked Being a Genius
(0:12:23) - Drugs Explain Financial Bubbles
(0:17:12) - On Founder Physiognomy
(0:21:02) - Indexing Parental Involvement in Raising Talented Kids
(0:30:35) - Where are all the Caro-level Biographers?
(0:39:03) - Where are today's Great Founders?
(0:48:29) - Micro Writing -> Macro Understanding
(0:51:48) - Elon's Twitter Takeover
(1:00:50) - Does Big Tech & West Have Great People?
(1:11:34) - Philosophical Fanatics and Effective Altruism
(1:17:17) - What Great Founders Have In Common
(1:19:56) - Thinkers vs. Analyzers
(1:25:40) - Taiwan Invasion bets & AI Timelines

Nov 25, 2022 • 1h 12min
Johnathan Bi on Mimesis and René Girard
Podcast: EconTalk
Episode: Johnathan Bi on Mimesis and René Girard
Release date: 2022-11-21
When the 20-year-old overachiever Johnathan Bi's first startup crashed and burned, he headed to a Zen retreat in the Catskills to "debug himself." He discovered René Girard and his mimetic theory--the idea that imitation is a key and often unconscious driver of human behavior. Listen as entrepreneur and philosopher Bi shares with EconTalk host Russ Roberts what he learned from Girard and Girard's insights into how we meet our primal need for money, fame, and power. The conversation also explores the contrasts between the economist's perspective and Girard's.

Nov 20, 2022 • 46min
Peter Thiel – The End of The Future
Release date: 2022-11-20
Notes from The Valmy:
Source: YouTube (Stanford Academic Freedom Conference) https://www.youtube.com/@stanfordcli
Release date: 2022-11-04

Nov 7, 2022 • 2h 5min
Bryan Caplan - Feminists, Billionaires, and Demagogues
Podcast: Dwarkesh Podcast
Episode: Bryan Caplan - Feminists, Billionaires, and Demagogues
Release date: 2022-10-20
It was a fantastic pleasure to welcome Bryan Caplan back for a third time on the podcast! His most recent book is Don't Be a Feminist: Essays on Genuine Justice.
He explains why he thinks:
- Feminists are mostly wrong,
- We shouldn’t overtax our centi-billionaires,
- Decolonization should have emphasized human rights over democracy,
- Eastern Europe shows that we could accept millions of refugees.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
More really cool guests coming up; subscribe to find out about future episodes!
You may also enjoy my interviews with Tyler Cowen (about talent, collapse, & pessimism of sex), Charles Mann (about the Americas before Columbus & scientific wizardry), and Steve Hsu (about intelligence and embryo selection).
Timestamps
(00:12) - Don’t Be a Feminist
(16:53) - Western Feminism Ignores Infanticide
(19:59) - Why The Universe Hates Women
(32:02) - Women's Tears Have Too Much Power
(45:40) - Bryan Performs Standup Comedy!
(51:02) - Affirmative Action is Philanthropic Propaganda
(54:13) - Peer-effects as the Only Real Education
(58:24) - The Idiocy of Student Loan Forgiveness
(1:07:57) - Why Society is Becoming Mentally Ill
(1:10:50) - Open Borders & the Ultra-long Term
(1:14:37) - Why Cowen’s Talent Scouting Strategy is Ludicrous
(1:22:06) - Surprising Immigration Victories
(1:36:06) - The Most Successful Revolutions
(1:54:20) - Anarcho-Capitalism is the Ultimate Government
(1:55:40) - Billionaires Deserve their Wealth


