80,000 Hours Podcast

Latest episodes

Jul 1, 2022 • 2h 58min

#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them. That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Links to learn more, summary and full transcript.

Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity's future, including nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and to develop a website called 'Improve The News' to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind. You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them. He says that training a black box that does something smart needs to be just stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare?

Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem. They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations.

They also cover:
• Whether we could understand what superintelligent systems were doing
• The value of encouraging people to think about the positive future they want
• How to give machines goals
• Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
• Whether we’re sleepwalking into disaster
• Whether people actually just want their biases confirmed
• Why Max is worried about government-backed fact-checking
• And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Jun 14, 2022 • 2h 42min

#132 – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

Links to learn more, summary and full transcript.

The worries aren't purely commercial, though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off.

As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world. We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

In today's conversation, Rob and Nova cover:
• How good or bad is information security today
• The most secure computer systems that exist
• How to design an AI training compute centre for maximum efficiency
• Whether 'formal verification' can help us design trustworthy systems
• How wide the gap is between AI capabilities and AI safety
• How to disincentivise hackers
• What should listeners do to strengthen their own security practices
• And much more.
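To make the scale of the exfiltration problem concrete, here is a rough back-of-envelope sketch. The parameter count, weight precision, and network speed are all illustrative assumptions, not figures from the episode:

```python
# Hypothetical numbers for illustration only -- not any company's actual models.
params = 2e9                # a 2-billion-parameter model (assumed)
bytes_per_param = 2         # 16-bit (fp16) weights (assumed)
model_size_gb = params * bytes_per_param / 1e9
print(f"Model size: {model_size_gb:.0f} GB")        # 4 GB -- "a few gigabytes"

# Time to copy it out over a 1 Gbit/s connection (assumed):
link_gbit_per_s = 1
seconds = model_size_gb * 8 / link_gbit_per_s
print(f"Exfiltration time: {seconds:.0f} seconds")  # 32 seconds
```

In other words, on assumptions like these, an artifact that cost tens of millions of dollars to produce could leave the building in well under a minute once an attacker has access.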
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore
Jun 3, 2022 • 1h 6min

#131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world

“We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”

If you were a contestant on such a TV show, you'd love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.

Today’s guest Lewis Dartnell has gone as far in compiling this information as anyone, with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm.

Links to learn more, summary and full transcript.

But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collect coal from the surface of the Earth, will be impossible the second time around.

As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it's just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”

So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon and nanoscale manufacturing — essentially the same technology as the microchips used in a computer — so actually making solar panels would be incredibly difficult. Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.

A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.

The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.

This is clearly not a trivial task. Lewis's own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote. And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?

As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:00:59)
The biggest impediments to bouncing back (00:03:18)
Can we do a serious version of The Knowledge? (00:14:58)
Recovering without much coal or oil (00:29:56)
Most valuable pro-resilience adjustments we can make today (00:40:23)
Feeding the Earth in disasters (00:47:45)
The reality of humans trying to actually do this (00:53:54)
Most exciting recent findings in astrobiology (01:01:00)
Rob’s outro (01:03:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
May 23, 2022 • 2h 17min

#130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office.

But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.

You suddenly have the opportunity to make more progress than ever before, but alongside the excitement, you have worries about the effects that large amounts of funding can have.

This is roughly the situation faced by today's guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.

Links to learn more, summary and full transcript.

Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.

While surely a huge success, this brings with it risks that he's never had to consider before:
• Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it.
• Being seen as profligate could strike onlookers as selfish and disreputable.
• Folks might start pretending to agree with their agenda just to get grants.
• People working on nearby issues that are less flush with funding may end up resentful.
• People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
• Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

But all these 'risks of commission' have to be weighed against the 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious. People looking askance at you for paying high salaries to attract the staff you want is unpleasant. But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant — it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside.

Will aims to strike a sensible balance between these competing errors, which he has taken to calling 'judicious ambition.'

In today's episode, Rob and Will discuss the above, as well as:
• Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
• Why are so many nonfiction books full of factual errors?
• How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
• What does Will disagree with his colleagues on?
• Should we focus on existential risks more or less the same way, whether we care about future generations or not?
• Are potatoes one of the most important technologies ever developed?
• And plenty more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
May 9, 2022 • 3h 20min

#129 – James Tibenderana on the state of the art in malaria control and elimination

The good news is that deaths from malaria have been cut by a third since 2005. The bad news is that it still causes 250 million cases and 600,000 deaths a year, mostly among young children in sub-Saharan Africa.

We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth's surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019?

That's one of many questions I put to today's guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad.

Links to learn more, summary and full transcript.

In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far.

While COVID-19 may have an 'R' (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, each of which goes on to bite dozens of people, allowing cases to quickly explode.

The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where distributed, but there are tens of millions of young children who are yet to be covered simply due to a lack of funding.

Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there's enthusiasm for finding new approaches to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to controlling the mosquito species that carry malaria.

The RTS,S vaccine is the first-ever vaccine that attacks a protozoan, as opposed to a virus or bacterium. It's a great scientific achievement. But James points out that even after three doses, it's still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half.

On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. By using a 'gene drive,' you can ensure mosquitoes hand these detrimental genes down to close to 100% of their offspring, rather than the usual 50%. If deployed, these genes would spread and ultimately eliminate the mosquitoes that carry malaria at low cost, thereby largely ridding the world of the disease.

Because a single country embracing this method would have global effects, James cautions that it's important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we've made a mistake.
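To see why a gene drive spreads so explosively, here is a minimal toy model of the inheritance arithmetic. It assumes random mating, no fitness cost, and perfect 'homing' (100% transmission); the release frequency is made up for illustration and none of these numbers come from the episode:

```python
# Toy deterministic model of gene drive spread (illustrative assumptions only).
# Ordinary Mendelian inheritance passes an allele to ~50% of offspring; a
# perfect gene drive converts heterozygotes so they pass it on ~100% of the time.

def next_gen_frequency(p, homing=1.0):
    """Drive allele frequency after one generation of random mating.

    Homozygotes (frequency p^2) always transmit the drive; heterozygotes
    (frequency 2p(1-p)) transmit it at rate (1 + homing) / 2.
    """
    return p**2 + 2 * p * (1 - p) * (1 + homing) / 2

p = 0.01  # release drive-carrying mosquitoes at 1% of the population (assumed)
for generation in range(1, 11):
    p = next_gen_frequency(p)
    print(f"generation {generation}: drive allele at {p:.1%}")
# Reaches >99% of the population by generation 9. With homing=0.0 (ordinary
# 50/50 inheritance), the frequency never moves from its starting 1%.
```

With homing set to zero the function returns p unchanged, which is just Hardy–Weinberg equilibrium; the drive's entire effect comes from biasing transmission in heterozygotes.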
In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including:
• How malaria spreads and the symptoms it causes
• The use of insecticides and poison baits
• How big a problem insecticide resistance is
• How malaria was eliminated in North America and Europe
• The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world
• And much more

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:02:06)
Malaria basics (00:06:56)
Malaria vaccines (00:12:37)
Getting rid of mosquitos (00:32:20)
Gene drives (00:38:06)
Symptoms (00:58:00)
Preventing the spread (01:06:00)
Why we haven’t gotten rid of malaria yet (01:15:07)
What James is responsible for as technical director (01:30:52)
Malaria Consortium's current strategy (01:39:59)
Elimination vs. control (02:01:49)
Delivery and practicalities (02:16:23)
Relationships with governments (02:26:38)
Funding gap (02:31:03)
Access and use gap (02:39:10)
The value of local researchers (02:49:26)
Past research findings (02:57:10)
How to help (03:06:30)
How James ended up where he is today (03:13:45)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore
Apr 28, 2022 • 2h 47min

#128 – Chris Blattman on the five reasons wars happen

In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great.

Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out.

The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today's episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what social scientists think they've learned.

Links to learn more, summary and full transcript.

Chris's first point is that while organised violence may feel like it's all around us, it's actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace.

In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn't — so they can see what a healthy society looks like and what's missing in the places where war does take hold.

Chris argues that social scientists have generated five cogent models of when war can be 'rational' for both sides of a conflict:
1. Unchecked interests — such as national leaders who bear few of the costs of launching a war.
2. Intangible incentives — such as an intrinsic desire for revenge.
3. Uncertainty — such as both sides underestimating each other's resolve to fight.
4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
5. Misperceptions — such as our inability to see the world through other people's eyes.

In today's interview, we walk through how each of the five explanations works and what specific wars or actions each might explain. In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity).
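To make the 'peace offer both sides prefer' puzzle concrete, here is a stylised version of the standard bargaining model of war, with made-up numbers (a textbook-style illustration, not a formalism taken from Chris's book):

```python
# Toy bargaining model: two sides dispute a prize worth 100.
prize = 100
p_a_wins = 0.6   # side A's chance of winning a war (assumed)
cost_a = 20      # resources side A burns by fighting (assumed)
cost_b = 20      # resources side B burns by fighting (assumed)

# Expected value of going to war for each side:
war_value_a = p_a_wins * prize - cost_a          # 0.6 * 100 - 20 = 40
war_value_b = (1 - p_a_wins) * prize - cost_b    # 0.4 * 100 - 20 = 20

# Any peaceful split giving A at least 40 and B at least 20 beats fighting
# for both sides, so every deal giving A between 40 and 80 avoids war.
low, high = war_value_a, prize - war_value_b
print(f"Both sides prefer any deal giving A between {low:.0f} and {high:.0f}")
```

The width of that range (40 here) is exactly the two sides' combined cost of fighting, which is why costly wars are so puzzling. Roughly speaking, each of the five models listed above describes a way this mutually preferred range can shrink, vanish, or be missed.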
The interview also covers:
• What Chris and Rob got wrong about the war in Ukraine
• What causes might not fit into these five categories
• The role of people's choice to escalate or deescalate a conflict
• How great power wars or nuclear wars are different, and what can be done to prevent them
• How much representative government helps to prevent war
• And much more

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:01:43)
What people get wrong about violence (00:04:40)
Medellín gangs (00:11:48)
Overrated causes of violence (00:23:53)
Cause of war #1: Unchecked interests (00:36:40)
Cause of war #2: Intangible incentives (00:41:40)
Cause of war #3: Uncertainty (00:53:04)
Cause of war #4: Commitment problems (01:02:24)
Cause of war #5: Misperceptions (01:12:18)
Weaknesses of the model (01:26:08)
Dancing on the edge of a cliff (01:29:06)
Confusion around escalation (01:35:26)
Applying the model to the war between Russia and Ukraine (01:42:34)
Great power wars (02:01:46)
Preventing nuclear war (02:18:57)
Why undirected approaches won't work (02:22:51)
Democratic peace theory (02:31:10)
Exchanging hostages (02:37:21)
What you can actually do to help (02:41:25)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Apr 14, 2022 • 3h 20min

#127 – Sam Bankman-Fried on taking a high-risk approach to crypto and doing good

On this episode of the show, host Rob Wiblin interviews Sam Bankman-Fried. The interview was recorded in February 2022 and released in April 2022. But on November 11, 2022, Sam Bankman-Fried's company, FTX, filed for bankruptcy, and all staff at the Future Fund resigned — and the surrounding events led Rob to record a new intro for this episode on December 1, 2022.

• Read 80,000 Hours' statement on these events here.
• You can also listen to Rob’s reaction to the collapse of FTX on this podcast feed, above episode 140, or here.
• Rob has shared some clarifications on his views about diminishing returns and risk aversion, and weaknesses in how they were discussed in this episode, here.
• And you can read the original blog post associated with the episode here.
Apr 5, 2022 • 2h 15min

#126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs

Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don't, because it doesn't.

Incredible though it might seem, according to today's guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children's lives play out once they're adults.

Links to learn more, summary and full transcript.

Of course, kids do resemble their parents. But just as we probably can't say it was attentive parenting that gave me my mother's nose, perhaps we can't say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can't really distinguish the impact of one from the other.

But nature does offer us a random experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins only share half their genes. If you look at how much more similar outcomes are for identical twins than for fraternal twins, you see the effect of sharing 100% of your genetic material, rather than the usual 50%. Double that difference, and you've got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise. (A worked version of this calculation appears at the end of these notes.)

The crazy thing about this research is that it says that for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it's differences in the genes children inherit, rather than differences in parental behaviour, that are doing most of the work. Other research suggests that differences in “out-of-home environment” take second place. Parenting style does matter for something, but it comes in a clear third.

Bryan is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting. First, for some adult outcomes parenting was a big deal (i.e. the quality of the parent/child relationship) or at least a moderate deal (i.e. drug use, criminality, and religious/political identity). Second, parents can and do influence you quite a lot — so long as you're young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot. Third, this research only studies variation in parenting behaviour that was common among the families studied. And fourth, research on international adoptions shows they can cause massive improvements in health, income, and other outcomes.

But the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all.

In this extensive interview Rob interrogates whether Bryan can really be right, or whether the research he's drawing on has taken a wrong turn somewhere. And that's just one topic we cover, some of the others being:
• People’s biggest misconceptions about the labour market
• Arguments against open borders
• Whether most people actually vote based on self-interest
• Whether philosophy should stick to common sense or depart from it radically
• Personal autonomy vs. the possible benefits of government regulation
• Bryan's perfect betting record
• And much more

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:01:15)
Labor Econ Versus the World (00:04:55)
Open Borders (00:20:30)
How much parenting matters (00:35:49)
Self-Interested Voter Hypothesis (01:00:31)
Why Bryan and Rob disagree so much on philosophy (01:12:04)
Libertarian free will (01:25:10)
The effective altruism community (01:38:46)
Bryan’s betting record (01:48:19)
Individual autonomy vs. welfare (01:59:06)
Arrogant hedgehogs (02:10:43)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
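Here is the twin-study arithmetic from the notes above sketched in code. The correlations are invented purely for illustration; they are not figures from Bryan or the research he cites:

```python
# Falconer's formula: heritability estimated from twin similarity.
r_identical = 0.66  # hypothetical outcome correlation for identical twins
r_fraternal = 0.38  # hypothetical outcome correlation for fraternal twins

# Identical twins share ~100% of their genes, fraternal twins ~50%, so the
# similarity gap reflects the extra 50% of shared genes. Doubling the gap
# estimates the full effect of genetic inheritance.
heritability = 2 * (r_identical - r_fraternal)   # 0.56
shared_env = r_identical - heritability          # 0.10 -- family/home effects
leftover = 1 - r_identical                       # 0.34 -- other experiences + noise
print(heritability, shared_env, leftover)
```

On numbers like these, genes would explain over half the variation, the shared home environment about a tenth, and everything else (including measurement noise) the rest — mirroring the ordering described above.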
Mar 29, 2022 • 2h 14min

#125 – Joan Rohlfing on how to avoid catastrophic nuclear blunders

Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have declined by 83%. Nuclear brinksmanship, proxy wars, and the game theory of mutually assured destruction (MAD) have come to feel like relics of another era. Russia's invasion of Ukraine has changed all that.

According to Joan Rohlfing — President of the Nuclear Threat Initiative, a Washington, DC-based nonprofit focused on reducing threats from nuclear and biological weapons — the annual risk of a ‘global catastrophic nuclear event’ never fell as low as people like to think, and for some time has been on its way back up.

Links to learn more, summary and full transcript.

At the same time, civil society funding for research and advocacy around nuclear risks is being cut in half over a period of years — despite the fact that, at $60 million a year, it was already just a thousandth as much as the US spends maintaining its nuclear deterrent. If new funding sources are not identified to replace donors that are withdrawing, the existing pool of talent will have to leave for greener pastures, and most of the next generation will see a career in the field as unviable.

While global poverty is on the decline and life expectancy is increasing, the chance of a catastrophic nuclear event is probably trending in the wrong direction.

Ukraine gave up its nuclear weapons in 1994 in exchange for security guarantees that turned out not to be worth the paper they were written on. States that have nuclear weapons (such as North Korea), states that are pursuing them (such as Iran), and states that have pursued nuclear weapons but since abandoned them (such as Libya, Syria, and South Africa) may take this as a valuable lesson in the importance of military power over promises.

China has been expanding its arsenal and testing hypersonic glide missiles that can evade missile defences. Japan now toys with the idea of nuclear weapons as a way to ensure its security against its much larger neighbour. India and Pakistan both acquired nuclear weapons in the late 1980s, and their relationship continues to oscillate from hostile to civil and back.

At the same time, the risk that nuclear weapons could be interfered with due to weaknesses in computer security is far higher than during the Cold War, when systems were simpler and less networked.

In the interview, Joan discusses several steps that can be taken in the immediate term, such as renewed efforts to extend and expand arms control treaties, changes to nuclear use policy, and the retirement of what NTI sees as vulnerable delivery systems, such as land-based silos.

In the bigger picture, NTI seeks to keep hope alive that a better system than deterrence through mutually assured destruction remains possible. The threat of retaliation does indeed make nuclear wars unlikely, but it necessarily means the system fails in an incredibly destructive way: with the death of hundreds of millions if not billions. In the long run, even a tiny 1-in-500 risk of a nuclear war each year adds up to around an 18% chance of catastrophe over the century.
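That closing figure is just the compounding of a small, roughly independent annual risk; a quick check of the arithmetic quoted above:

```python
# "1-in-500 per year adds up to ~18% per century"
annual_risk = 1 / 500
years = 100
chance_of_catastrophe = 1 - (1 - annual_risk) ** years
print(f"{chance_of_catastrophe:.1%}")  # 18.1%
```

The same formula shows why even small annual risks matter over long horizons: the chance of avoiding catastrophe decays geometrically, year after year.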
In this conversation we cover all that, as well as:
• How arms control treaties have evolved over the last few decades
• Whether lobbying by arms manufacturers is an important factor shaping nuclear strategy
• The Biden Nuclear Posture Review
• How easily humanity might recover from a nuclear exchange
• Implications for the use of nuclear energy

Chapters:
Rob’s intro (00:00:00)
Joan’s EAG presentation (00:01:40)
The interview begins (00:27:06)
Nuclear security funding situation (00:31:09)
Policy solutions for addressing a one-person or one-state risk factor (00:36:46)
Key differences in the nuclear security field (00:40:44)
Scary scenarios (00:47:02)
Why the US shouldn’t expand its nuclear arsenal (00:52:56)
The evolution of nuclear risk over the last 10 years (01:03:41)
The interaction between nuclear weapons and cybersecurity (01:10:18)
The chances of humanity bouncing back after nuclear war (01:13:52)
What we should actually do (01:17:57)
Could sensors be a game-changer? (01:22:39)
Biden Nuclear Posture Review (01:27:50)
Influence of lobbying firms (01:33:58)
What NTI might do with an additional $20 million (01:36:38)
Nuclear energy tradeoffs (01:43:55)
Why we can’t rely on Stanislav Petrovs (01:49:49)
Preventing war vs. building resilience for recovery (01:52:15)
Places to donate other than NTI (01:54:25)
Career advice (02:00:15)
Why this problem is solvable (02:09:27)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Mar 21, 2022 • 3h 10min

#124 – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong.

Links to learn more, summary and full transcript.

Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.

First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.

Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries.

'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves.

While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.

Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget.
In today's in-depth conversation, Karen Levy and I chat about the above, as well as: • Why it pays to figure out how you'll interpret the results of an experiment ahead of time • The trouble with misaligned incentives within the development industry • Projects that don't deliver value for money and should be scaled down • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren • Logistical challenges in reaching huge numbers of people with essential services • Lessons from Karen's many-decades career • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore
