

The Valmy
Peter Hartree
https://thevalmy.com/
Episodes
Mentioned books

Feb 18, 2023 • 60min
The 1000x Developer
Podcast: a16z Podcast
Episode: The 1000x Developer
Release date: 2023-02-16
A small minority – likely less than 1% – of the world can code. Yet it is widely known that the skillset tends to yield outsized returns, with developers earning some of the highest salaries out there. But the field is quickly shifting, especially with the advent of wide-scale AI. In this podcast, we chat with Amjad Masad, founder of Replit, about these foundational shifts. We cover how Replit has integrated AI into its platform and the implications for both current and future developers. It’s easier than ever to learn to code, but is it still worthwhile? Listen in to find out.
Timestamps:
00:00 - Introduction
02:04 - What is Replit?
04:15 - Stories behind Replit
11:10 - The software hero’s journey
13:09 - Making coding fun
15:58 - AI powering software
19:37 - Training your own models
22:36 - Building UX around AI
24:16 - The developer landscape
26:23 - The 1000x engineer
30:40 - Should you still learn to code?
34:41 - What does AI enable?
40:54 - Developing on mobile
43:24 - A software labor market
45:53 - Differentiating a marketplace
48:23 - Building new market dynamics
50:45 - Looking ahead
Resources:
Replit: https://replit.com/
Replit Ghostwriter: https://replit.com/site/ghostwriter
Replit Bounties: https://replit.com/bounties
Find Amjad on Twitter: https://twitter.com/amasad
Stay Updated:
Find us on Twitter: https://twitter.com/a16z
Find us on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. For more details please see a16z.com/disclosures.

Feb 5, 2023 • 1h 46min
Why Are Most Humans Religious? Professor Robin Dunbar
Podcast: ROCKING OUR PRIORS
Episode: Why Are Most Humans Religious? Professor Robin Dunbar
Release date: 2023-01-17
Why are most humans religious?
How much can be explained by evolutionary psychology?
Why do we cooperate? Is it religious injunctions or more emotional?
Is religiosity really about cooperation? What about legitimising hierarchy, control, and female self-sacrifice?
Muslim women are less likely to go to Friday prayers, but they are still devout. So perhaps group rituals are not so essential?
Why did all doctrinal religions emerge within a narrow latitudinal band?
Are groups necessarily small? Don’t films and social media scale up solidarity? What about online mobs viciously attacking their favoured celebrity’s boyfriend’s new girlfriend?
Interview with Professor Robin Dunbar, Professor of Evolutionary Psychology and Anthropology at the University of Oxford
https://www.psy.ox.ac.uk/people/robin-dunbar
Robin's latest book is on religion. He has also published excellent books on the science of love and betrayal, the evolution of language, and friendships.

Feb 3, 2023 • 1h 5min
Connor Leahy on AI Safety and Why the World is Fragile
Podcast: Future of Life Institute Podcast
Episode: Connor Leahy on AI Safety and Why the World is Fragile
Release date: 2023-01-26
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

Feb 3, 2023 • 1h 6min
Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education
Podcast: Future of Life Institute Podcast
Episode: Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education
Release date: 2023-02-02
Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

Jan 28, 2023 • 1h 14min
“Bibi: My Story,” Benjamin Netanyahu On His Life And Times | Peter Robinson | Hoover Institution
Podcast: Uncommon Knowledge
Episode: “Bibi: My Story,” Benjamin Netanyahu On His Life And Times | Peter Robinson | Hoover Institution
Release date: 2022-12-09
Benjamin Netanyahu is the past and soon-to-be-again prime minister of Israel. In his new book, Bibi: My Story, Netanyahu describes how he went from an Israeli-American high school student in Philadelphia to a member of the Israeli Defense Force, detouring along the way to study architecture and get a master’s degree from the MIT Sloan School of Management in 1976. His studies were interrupted when his brother Yoni was killed in the raid on Entebbe, Uganda, which inspired Bibi to return to Israel and dedicate his life to protecting that state. This interview covers those events as well as his rise to the top of Israeli politics—multiple times.
Note to viewers: Be sure to watch to the end of the show after the end credits for some additional content that was shot after the interview concluded.

Jan 26, 2023 • 1h 4min
Can effective altruism be redeemed?
Podcast: The Gray Area with Sean Illing
Episode: Can effective altruism be redeemed?
Release date: 2023-01-23
Guest host Sigal Samuel talks with Holden Karnofsky about effective altruism, a movement flung into public scrutiny with the collapse of Sam Bankman-Fried and his crypto exchange, FTX. They discuss EA’s approach to charitable giving, the relationship between effective altruism and the moral philosophy of utilitarianism, and what reforms might be needed for the future of the movement.
Note: In August 2022, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.
Host: Sigal Samuel (@SigalSamuel), Senior Reporter, Vox
Guest: Holden Karnofsky, co-founder of GiveWell; CEO of Open Philanthropy
References:
"Effective altruism gave rise to Sam Bankman-Fried. Now it's facing a moral reckoning" by Sigal Samuel (Vox; Nov. 16, 2022)
"The Reluctant Prophet of Effective Altruism" by Gideon Lewis-Kraus (New Yorker; Aug. 8, 2022)
"Sam Bankman-Fried tries to explain himself" by Kelsey Piper (Vox; Nov. 16, 2022)
"EA is about maximization, and maximization is perilous" by Holden Karnofsky (Effective Altruism Forum; Sept. 2, 2022)
"Defending One-Dimensional Ethics" by Holden Karnofsky (Cold Takes blog; Feb. 15, 2022)
"Future-proof ethics" by Holden Karnofsky (Cold Takes blog; Feb. 2, 2022)
"Bayesian mindset" by Holden Karnofsky (Cold Takes blog; Dec. 21, 2021)
"EA Structural Reform Ideas" by Carla Zoe Cremer (Nov. 12, 2022)
"Democratising Risk: In Search of a Methodology to Study Existential Risk" by Carla Cremer and Luke Kemp (SSRN; Dec. 28, 2021)
Enjoyed this episode? Rate The Gray Area ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts. Subscribe for free. Be the first to hear the next episode of The Gray Area. Subscribe in your favorite podcast app. Support The Gray Area by making a financial contribution to Vox! bit.ly/givepodcasts
This episode was made by:
Producer: Erikk Geannikis
Editor: Amy Drozdowska
Engineer: Patrick Boyd
Editorial Director, Vox Talk: A.M. Hall
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Jan 26, 2023 • 2h 40min
#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons
Podcast: 80,000 Hours Podcast
Episode: #143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons
Release date: 2022-12-29
America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted."
Links to learn more, summary and full transcript.
We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint. As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no.
Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons. But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for.
What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide. Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound.
In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:
• Why inter-service rivalry is one of the biggest constraints on US nuclear policy
• Two times the US sabotaged nuclear nonproliferation among great powers
• How his field uses jargon to exclude outsiders
• How the US could prevent the revival of mass nuclear testing by the great powers
• Why nuclear deterrence relies on the possibility that something might go wrong
• Whether 'salami tactics' render nuclear weapons ineffective
• The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles
• The problems that arise when you won't talk to people you think are evil
• Why missile defences are politically popular despite being strategically foolish
• How open source intelligence can prevent arms races
• And much more.
Chapters:
Rob’s intro (00:00:00)
The interview begins (00:02:49)
Misconceptions in the effective altruism community (00:05:42)
Nuclear deterrence (00:17:36)
Dishonest rituals (00:28:17)
Downsides of generalist research (00:32:13)
“Mutual assured destruction” (00:38:18)
Budgetary considerations for competing parts of the US military (00:51:53)
Where the effective altruism community can potentially add the most value (01:02:15)
Gatekeeping (01:12:04)
Strengths of the nuclear security community (01:16:14)
Disarmament (01:26:58)
Nuclear winter (01:38:53)
Attacks against US allies (01:41:46)
Most likely weapons to get used (01:45:11)
The role of moral arguments (01:46:40)
Salami tactics (01:52:01)
Jeffrey's disagreements with Thomas Schelling (01:57:00)
Why did it take so long to get nuclear arms agreements? (02:01:11)
Detecting secret nuclear facilities (02:03:18)
Where Jeffrey would give $10M in grants (02:05:46)
The importance of archival research (02:11:03)
Jeffrey's policy ideas (02:20:03)
What should the US do regarding China? (02:27:10)
What should the US do regarding Russia? (02:31:42)
What should the US do regarding Taiwan? (02:35:27)
Advice for people interested in working on nuclear security (02:37:23)
Rob’s outro (02:39:13)
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Jan 17, 2023 • 52min
Ex-Logger Aims to Beat Elon Musk in Electric Trucks
Podcast: Odd Lots
Episode: Ex-Logger Aims to Beat Elon Musk in Electric Trucks
Release date: 2023-01-16
While electric vehicle use is growing rapidly, the internal combustion engine remains completely dominant in the world of heavy trucks. At some point in the future, Tesla has a plan to commercialize an electric semi, but nobody really knows when. Meanwhile, other entities are looking to compete in the world of industrial vehicles. Chace Barber is a former trucker in the logging industry, which has some very different characteristics than the type of freight trucking you typically see on a highway. When you're driving over the Rocky Mountains, without easy proximity to mechanics, tow trucks or service stations, you need power and reliability. His company, Edison Motors, is building its own trucks with a hybrid diesel-electric approach that it sees as a better path forward. On this episode, we discuss the challenges of hauling logs, as well as how it's possible for a small entity to get in the game of building such large industrial equipment.
See omnystudio.com/listener for privacy information.

Jan 13, 2023 • 1h 19min
Tyler Cowen on Effective Altruism (University of St Andrews)
Release date: 2023-01-13
Notes from The Valmy:
Source: YouTube https://youtu.be/ZzV7ty1DW_c
Release date: 2022-12-15

Jan 13, 2023 • 2h 44min
#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
Podcast: 80,000 Hours Podcast
Episode: #141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
Release date: 2022-12-13
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.
But do they really 'understand' what they're saying, or do they just give the illusion of understanding?
Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed and ultimately given power in society.
Links to learn more, summary and full transcript.
One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.
However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.
Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.
We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck it doesn't matter.
In today's conversation we discuss the above, as well as:
• Could speeding up AI development be a bad thing?
• The balance between excitement and fear when it comes to AI advances
• Why OpenAI focuses its efforts where it does
• Common misconceptions about machine learning
• How many computer chips it might require to be able to do most of the things humans do
• How Richard understands the 'alignment problem' differently than other people
• Why 'situational awareness' may be a key concept for understanding the behaviour of AI models
• What work to positively shape the development of AI Richard is and isn't excited about
• The AGI Safety Fundamentals course that Richard developed to help people learn more about this field
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore