
80,000 Hours Podcast

Latest episodes

Apr 15, 2020 • 1h 4min

Article: Reducing global catastrophic biological risks

In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute. Greg also wrote a new problem profile on that topic for our website, and reading it is a good lead-in to our interview with him. So, as a bit of an experiment, we decided to make this audio version of the article, narrated by the producer of the 80,000 Hours Podcast, Keiran Harris. We're thinking about making audio versions of other important articles we write, so it'd be great if you could let us know whether you'd like more of these. You can email us your view at podcast@80000hours.org. If you want to check out all of Greg's graphs and footnotes that we didn't include, and get links to learn more about GCBRs, you can find those here. And if you want to read more about COVID-19, the 80,000 Hours team has produced a fantastic package of 10 pieces about how to stop the pandemic. You can find those here.
Mar 19, 2020 • 1h 52min

Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

From home isolation, Rob and Howie just recorded an episode on: 1. How many could die in the crisis, and the risk to your health personally. 2. What individuals might be able to do to help tackle the coronavirus crisis. 3. What we suspect governments should do in response to the coronavirus crisis. 4. The importance of personally not spreading the virus, the properties of the SARS-CoV-2 virus, and how you can personally avoid it. 5. The many places society screwed up, how we can avoid this happening again, and reasons to be optimistic. We have rushed this episode out to share information as quickly as possible in a fast-moving situation. If you would prefer to read, you can find the transcript here. We list a wide range of valuable resources and links in the blog post attached to the show (over 60, including links to projects you can join). See our 'problem profile' on global catastrophic biological risks for information on these grave threats and how you can contribute to preventing them. We have also just added a COVID-19 landing page on our site. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris.
Mar 17, 2020 • 2h 35min

#73 – Phil Trammell on patient philanthropy and waiting to do good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million. This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now. He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science. ADDED: Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here. • Links to learn more, summary and full transcript.  What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways?  And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.  Of course, there are many objections to this proposal. 
If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse? Or might it drift from its original goals, eventually just serving the interests of its distant future trustees rather than the noble pursuits you originally intended? Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarship's initial charter, which limited it to 'white Christian men'. Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good. Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my 80,000 Hours colleague Howie Lempel, we try to answer that, and also discuss: • Real attempts at patient philanthropy in history and how they worked out • Should we have a mixed strategy, where some altruists are patient and others impatient? • Which causes most need money now, and which later? • What is the research frontier here? • What does this all mean for what listeners should do differently?

Chapters:
• Rob's intro (00:00:00)
• The interview begins (00:02:23)
• Consequences for getting this question wrong (00:06:03)
• What have people had to say about this question in the past? (00:07:22)
• The case for saving (00:11:51)
• Hundred year leases (00:29:28)
• Should we be concerned about one group taking control of the world? (00:34:51)
• Finding better interventions in the future (00:37:20)
• The hinge of history (00:43:46)
• Does uncertainty lead us to wanting to wait? (01:01:52)
• Counterarguments (01:11:36)
• What about groups who have a particular sense of urgency? (01:40:46)
• How much should we actually save? (02:01:35)
• Implications for career choices (02:19:49)

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
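The compound-growth arithmetic quoted above can be sketched in a few lines of Python (the $1,000 starting gift and 5% annual return are the figures from the episode description; at a flat 5% the 100-year total actually comes out a little above the rounded $125,000 quoted):

```python
def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Future value of a lump sum compounded once per year."""
    return principal * (1 + annual_rate) ** years

# $1,000 compounding at 5% a year, as in the episode description.
after_100 = future_value(1_000, 0.05, 100)
after_200 = future_value(1_000, 0.05, 200)

print(f"After 100 years: ${after_100:,.0f}")   # about $131,500
print(f"After 200 years: ${after_200:,.0f}")   # about $17.3 million
```

The 200-year figure lines up with the $17 million in the description; the small gap at 100 years presumably reflects rounding or a slightly lower assumed return.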
Mar 7, 2020 • 3h 14min

#72 - Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century. I loved the book and learned a great deal from it (buy it here, US and audiobook release March 24). While preparing for this interview I copied out 87 facts that were surprising, shocking or important. Here's a sample of 16: 1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined. 2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s. 3. In 2008 a 'gamma ray burst' reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren't sure what generates gamma ray bursts but one cause may be two neutron stars colliding. 4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped… N.B. I've had to cut off this list as we only get 4,000 characters in these show notes, so: Click here to read the whole list, see a full transcript, and find related links. 
And if you like the list, you can get a free copy of the introduction and first chapter by joining our mailing list. While I've been studying these topics for years and known Toby for the last eight, a remarkable amount of what's in The Precipice was new to me. Of course the book isn't a series of isolated amusing facts, but rather a systematic review of the many ways humanity's future could go better or worse, how we might know about them, and what might be done to improve the odds. And that's how we approach this conversation, first talking about each of the main threats, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved. Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which Arden Koehler and I barely even had to work for. Some topics Arden and I ask about include: • What Toby changed his mind about while writing the book • Are people exaggerating when they say that climate change could actually end civilization? • What can we learn from historical pandemics? • Toby's estimate of unaligned AI causing human extinction in the next century • Is this century the most important time in human history, or is that a narcissistic delusion? • Competing visions for humanity's ideal future • And more. Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Mar 2, 2020 • 2h 57min

#71 - Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible. Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift. All of us added something to it, but the single biggest contributor was our CEO and today's guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012. This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we've discovered since we started investigating high impact careers. • Links to learn more, summary and full transcript. But it's perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words. Fortunately it's designed to be highly modular and it's easy to work through it over multiple sessions, scanning over the articles it links to on each topic. Perhaps though, you'd prefer to absorb our most essential ideas in conversation form, in which case this episode is for you. If you want to have a big impact with your career, and you say you're only going to read one article from us, we recommend you read our key ideas page. And likewise, if you're only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through: • Common misunderstandings of our advice • A high level overview of what 80,000 Hours generally recommends • Our key moral positions • What are the most pressing problems to work on and why? • Which careers effectively contribute to solving those problems? • Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration • As well as plenty more. 
One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we're least sure about, or didn't yet cover within the article. Note though that what's in the article is more precisely stated, our advice is going to keep shifting, and we're aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page! Get the episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Feb 25, 2020 • 44min

Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Today's bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice. Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include: • If you're not going to be completely moral, should you try being a bit more ethical, or give up? • Should you feel angry if you see an injustice, and if so, why? • How much should we ask people to live frugally? So far the feedback on the post-episode chats that we've done has been positive, so we thought we'd go ahead and try out this freestanding one. But fair warning: it's among the more difficult episodes to follow, and probably not the best one to listen to first, as you'll benefit from having more context! If you'd like to listen to more of Arden you can find her in episode 67, David Chalmers on the nature and ethics of consciousness, or episode 66, Peter Singer on being provocative, EA, and how his moral views have changed. Here's more information on some of the issues we touch on: • Consequentialism on Wikipedia • Appropriate dispositions on the Stanford Encyclopedia of Philosophy • Demandingness objection on Wikipedia • And a paper on epistemic normativity. ——— I mention the call for papers of the Academic Workshop on Global Priorities in the introduction — you can learn more here. And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I've read it and very much enjoyed it. Find out where you can pre-order it here. We'll have an interview with him coming up soon.
Feb 13, 2020 • 2h 27min

#70 - Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV)

nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad though it is, it's much closer to a warning shot than a worst case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both. Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can't do enough to reduce their spread. And we lack vaccines or drug treatments for at least a year, if they ever arrive at all. • Links to learn more, summary and full transcript. This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like. In today's episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University's Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we're to keep the risk at acceptable levels. The ideas are:

Science
1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go.
2. Fund research into faster 'platform' methods for going from pathogen to vaccine, perhaps using innovation prizes.
3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria.

Response
4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely.
5. Rigorously evaluate in what situations travel bans are warranted. (They're more often counterproductive.)
6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible.
7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms.
8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out.

Oversight
9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens.
10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen.
11. Require full cost-benefit analysis of 'dual-use' research projects that can generate global risks.
12. And finally, to maintain momentum, clearly assign responsibility for the above to particular individuals and organisations.

These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem. In the episode Rob and Cassidy also talk about: • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential. • The pros, and significant cons, of travel restrictions. • Whether the same policies work for natural and anthropogenic pandemics. • Ways listeners can pursue a career in biosecurity.
• Where we stand with nCoV as of today.

Chapters:
• Rob's intro (00:00:00)
• The interview begins (00:03:27)
• Where we stand with nCoV today (00:07:24)
• Policy idea 1: A drastic change to diagnostic testing (00:34:58)
• Policy idea 2: Vaccine platforms (00:47:08)
• Policy idea 3: Broad-spectrum therapeutics (00:54:48)
• Policy idea 4: Develop a national plan for responding to a severe pandemic, regardless of the cause (01:02:15)
• Policy idea 5: A different approach to travel bans (01:15:59)
• Policy idea 6: Data sharing (01:16:48)
• Policy idea 7: Prevention (01:24:45)
• Policy idea 8: Transparency around lab accidents (01:33:58)
• Policy idea 9: DNA synthesis screening (01:39:22)
• Policy idea 10: Dual Use Research oversight (01:48:47)
• Policy idea 11: Pandemic tabletop exercises (02:00:00)
• Policy idea 12: Coordination (02:12:20)

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Transcriptions: Zakee Ulhaq.
Feb 6, 2020 • 1h 37min

#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

The State Council of China's 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard? In his paper Deciphering China's AI Dream, today's guest, PhD student Jeff Ding, outlines why he believes none of these claims are true. • Links to learn more, summary and full transcript. • What’s the best charity to donate to? He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development? Jeff emphasises that China's AI strategy did not appear out of nowhere with the 2017 state council AI development plan, which attracted a lot of overseas attention. Rather that was just another step forward in a long trajectory of increasing focus on science and technology. It's connected with a plan to develop an 'Internet of Things', and linked to a history of strategic planning for technology in areas like aerospace and biotechnology. And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse. What are the different levers that China is pulling to try to spur AI development? Here, Jeff wanted to challenge the myth that China's AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government. Are China's AI capabilities especially impressive? 
In the paper Jeff develops a new index to measure and compare the US and China's progress in AI. Jeff's AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China's AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China's AI capabilities have surpassed the US or make it the world's leading AI power. Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we'd need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016. Finally, Jeff turns to the potential consequences of China's AI dream for issues of national security, economic development, AI safety and social governance. He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent's Research Institute is proactive in calling for stronger awareness of AI safety issues. In today's episode, Rob and Jeff go through this widely-discussed report, and also cover: • The best analogies for thinking about the growing influence of AI • How do prominent Chinese figures think about AI? • Coordination with China • China's social credit system • Suggestions for people who want to become professional China specialists • And more.

Chapters:
• Rob's intro (00:00:00)
• The interview begins (00:01:02)
• Deciphering China's AI Dream (00:04:17)
• Analogies for thinking about AI (00:12:30)
• How do prominent Chinese figures think about AI? (00:16:15)
• Cultural cliches in the West and China (00:18:59)
• Coordination with China on AI (00:24:03)
• Private companies vs. government research (00:28:55)
• Compute (00:31:58)
• China's social credit system (00:41:26)
• Relationship between China and other countries beyond AI (00:43:51)
• Careers advice (00:54:40)
• Jeffrey's talk at EAG (01:16:01)
• Rob's outro (01:37:12)

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Feb 3, 2020 • 1h 19min

Rob & Howie on what we do and don't know about 2019-nCoV

Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus. See this list of resources, including many discussed in the episode, to learn more. In the 1h15m conversation we cover: • What is it? • How many people have it? • How contagious is it? • What fraction of people who contract it die? • How likely is it to spread out of control? • What's the range of plausible fatalities worldwide? • How does it compare to other epidemics? • What don't we know and why? • What actions should listeners take, if any? • How should the complexities of the above be communicated by public health professionals? Here's a link to the hygiene advice from Laurie Garrett mentioned in the episode. Recorded 2 Feb 2020. The 80,000 Hours Podcast is produced by Keiran Harris.
Jan 24, 2020 • 3h 26min

#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it? A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You'd only get a tiny benefit, in exchange for moral responsibility over whether other people live or die. And yet, according to today’s return guest, philosophy Prof Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others. • Links to learn more, summary and full transcript. • Job opportunities at the Global Priorities Institute. To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person. That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies. As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes.
Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today. Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise. So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it? This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers. Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here. So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover: • Are we, or are we not, living in the most influential time in history? • The culture of the effective altruism community • Will's new lower estimate of the risk of human extinction • Why Will is now less focused on AI • The differences between Americans and Brits • Why feeling guilty about characteristics you were born with is crazy • And plenty more. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
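A quick, hedged sketch of the conception-timing arithmetic above (the 30,000-day lifespan and two-children-per-life figures are the ones quoted; the Poisson step for turning an expected count into a probability is an extra modelling assumption, not something from the episode):

```python
import math

DAYS_PER_LIFE = 30_000     # average lifespan quoted above, in days
CHILDREN_PER_LIFE = 2      # average number of children quoted above

def expected_conceptions_shifted(person_days_affected: float) -> float:
    """Expected number of conception events whose exact timing you perturb,
    assuming conceptions are spread uniformly across person-days."""
    return person_days_affected * CHILDREN_PER_LIFE / DAYS_PER_LIFE

def prob_at_least_one(person_days_affected: float) -> float:
    """Poisson approximation: chance of shifting at least one conception."""
    return 1 - math.exp(-expected_conceptions_shifted(person_days_affected))

print(expected_conceptions_shifted(7_500))   # 0.5 expected events
print(round(prob_at_least_one(30_000), 2))   # 0.86
```

On these assumptions, perturbing 7,500 person-days shifts half a conception event in expectation, and perturbing tens of thousands of person-days makes at least one shifted conception very likely.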
