
80,000 Hours Podcast

Latest episodes

Jun 8, 2018 • 1h 32min

Rob Wiblin on the art/science of a high impact career

Today's episode is a cross-post of an interview I did with The Jolly Swagmen Podcast which came out this week. I recommend regular listeners skip to 24 minutes in to avoid hearing things they already know. Later in the episode I talk about my contrarian views, utilitarianism, how 80,000 Hours has changed and will change in the future, where I think EA is performing worst, how to use social media most effectively, and whether or not effective altruism is any sacrifice.

Subscribe and get the episode by searching for '80,000 Hours' in your podcasting app.

Blog post of the episode to share, including a list of topics and links to learn more.

"Most people want to help others with their career, but what’s the best way to do that? Become a doctor? A politician? Work at a non-profit? How can any of us figure out the best way to use our skills to improve the world?

Rob Wiblin is the Director of Research at 80,000 Hours, an organisation founded in Oxford in 2011, which aims to answer just this question and help talented people find their highest-impact career path. He hosts a popular podcast on ‘the world’s most pressing problems and how you can use your career to solve them’.

After seven years of research, the 80,000 Hours team recommends against becoming a teacher, or a doctor, or working at most non-profits. And they claim their research shows some common careers do 10 or 100x as much good as others.

80,000 Hours was one of the organisations that kicked off the effective altruism movement, was a Y Combinator-backed non-profit, and has already shifted over 80 million career hours through its advice.

Joe caught up with Rob in Berkeley, California, to discuss how 80,000 Hours assesses which of the world’s problems are most pressing, how you can build career capital and succeed in any role, and why you could easily save more lives than a doctor - if you think carefully about your impact."

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Jun 1, 2018 • 2h 19min

#34 - We use the worst voting system that exists. Here's how Aaron Hamlin is going to fix it.

In 1991 Edwin Edwards won the Louisiana gubernatorial election. In 2001, he was found guilty of racketeering and received a 10 year invitation to Federal prison. The strange thing about that election? By 1991 Edwards was already notorious for his corruption. Actually, that’s not it.

The truly strange thing is that Edwards was clearly the good guy in the race. How is that possible? His opponent was former Ku Klux Klan Grand Wizard David Duke.

How could Louisiana end up having to choose between a criminal and a Nazi sympathiser? It’s not like they lacked other options: the state’s moderate incumbent governor Buddy Roemer ran for re-election. Polling showed that Roemer was massively preferred to both the career criminal and the career bigot, and would easily win a head-to-head election against either.

Unfortunately, in Louisiana every candidate from every party competes in the first round, and the top two then go on to a second - a so-called ‘jungle primary’. Vote splitting squeezed out the middle, and meant that Roemer was eliminated in the first round. Louisiana voters were left with only terrible options, in a run-off election mostly remembered for the proliferation of bumper stickers reading “Vote for the Crook. It’s Important.”

We could look at this as a cultural problem, exposing widespread enthusiasm for bribery and racism that will take generations to overcome. But according to Aaron Hamlin, Executive Director of The Center for Election Science (CES), there’s a simple way to make sure we never have to elect someone hated by more than half the electorate: change how we vote.

He advocates an alternative voting method called approval voting, in which you can vote for as many candidates as you want, not just one. That means that you can always support your honest favorite candidate, even when an election seems like a choice between the lesser of two evils.

Full transcript, links to learn more, and summary of key points.

If you'd like to meet Aaron he's doing events for CES in San Francisco, DC, Philadelphia, New York and Brooklyn over the next two weeks - RSVP here.

While it might not seem sexy, this single change could transform politics. Approval voting is adored by voting researchers, who regard it as the best simple voting system available. Which do they regard as unquestionably the worst? First-past-the-post - precisely the disastrous system used and exported around the world by the US and UK.

Aaron has a practical plan to spread approval voting across the US using ballot initiatives - and it just might be our best shot at making politics a bit less unreasonable.

The Center for Election Science is a U.S. non-profit which aims to fix broken government by helping the world adopt smarter election systems. They recently received a $600,000 grant from the Open Philanthropy Project to scale up their efforts.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
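To make the contrast described above concrete, here is a minimal sketch in Python of how one electorate can squeeze out its consensus favourite under plurality-style counting yet elect him under approval voting. The ballot numbers are invented for illustration, not real 1991 polling data:

```python
from collections import Counter

# Hypothetical ballots, loosely inspired by the 1991 Louisiana race.
# Each ballot lists every candidate the voter approves of, favourite first.
ballots = (
    [["Edwards", "Roemer"]] * 35   # Edwards supporters who also accept Roemer
    + [["Duke", "Roemer"]] * 34    # Duke supporters who also accept Roemer
    + [["Roemer"]] * 31            # voters whose favourite is Roemer
)

# Plurality / jungle-primary first round: only each voter's top choice counts.
plurality = Counter(ballot[0] for ballot in ballots)
print("Plurality:", plurality.most_common())
# [('Edwards', 35), ('Duke', 34), ('Roemer', 31)]
# Roemer finishes last and misses the run-off, even though every
# single voter in this example approves of him.

# Approval voting: every candidate a voter approves of gets a vote.
approval = Counter(name for ballot in ballots for name in ballot)
print("Approval: ", approval.most_common())
# [('Roemer', 100), ('Edwards', 35), ('Duke', 34)]
```

Having every Duke and Edwards supporter also approve Roemer is unrealistically tidy, but it keeps the arithmetic simple while showing the mechanism: approval voting counts breadth of support, so the middle can't be split away.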
May 29, 2018 • 1h 25min

#33 - Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? According to our last guest, Bryan Caplan, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

Full transcript of the conversation, summary, and links to learn more.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

***Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.***

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

* Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
* How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
* If biomedical research lets us slow down ageing would culture stagnate under the crushing weight of centenarians?
* What long-shot drugs can people take in their 70s to stave off death?
* Can science extend human (waking) life by cutting our need to sleep?
* How bad would it be if a solar flare took down the electricity grid? Could it happen?
* If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
* Will lifelike robots make us more inclined to dehumanise one another?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
May 22, 2018 • 2h 25min

#32 - Bryan Caplan on whether his Case Against Education holds up, totalitarianism, & open borders

Bryan Caplan’s claim in *The Case Against Education* is striking: education doesn’t teach people much, we use little of what we learn, and college is mostly about trying to seem smarter than other people - so the government should slash education funding.

It’s a dismaying - almost profane - idea, and one people are inclined to dismiss out of hand. But having read the book, I have to admit that Bryan can point to a surprising amount of evidence in his favour.

After all, imagine this dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which is the bigger benefit of college - learning or convincing people you’re smart? It’s not so easy to say.

For this interview, I searched for the best counterarguments I could find and challenged Bryan on what seem like his weakest or most controversial claims.

Wouldn’t defunding education be especially bad for capable but low income students? If you reduced funding for education, wouldn’t that just lower prices, and not actually change the number of years people study? Is it really true that students who drop out in their final year of college earn about the same as people who never go to college at all? What about studies that show that extra years of education boost IQ scores? And surely the early years of primary school, when you learn reading and arithmetic, *are* useful even if college isn’t.

I then get his advice on who should study, what they should study, and where they should study, if he’s right that college is mostly about separating yourself from the pack.

Full transcript, links to learn more, and summary of key points.

We then venture into some of Bryan’s other unorthodox views - like that immigration restrictions are a human rights violation, or that we should worry about the risk of global totalitarianism.

Bryan is a Professor of Economics at George Mason University, and a blogger at *EconLog*. He is also the author of *Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think*, and *The Myth of the Rational Voter: Why Democracies Choose Bad Policies*.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.

In this lengthy interview, Rob and Bryan cover:

* How worried should we be about China’s new citizen ranking system as a means of authoritarian rule?
* How will advances in surveillance technology impact a government’s ability to rule absolutely?
* Does more global coordination make us safer, or more at risk?
* Should the push for open borders be a major cause area for effective altruism?
* Are immigration restrictions a human rights violation?
* Why aren’t libertarian-minded people more focused on modern slavery?
* Should altruists work on criminal justice reform or reducing land use regulations?
* What’s the greatest art form: opera, or Nicki Minaj?
* What are the main implications of Bryan’s thesis for society?
* Is elementary school more valuable than university?
* What does Bryan think are the best arguments against his view?
* Do years of education affect political affiliation?
* How do people really improve themselves and their circumstances?
* Who should and who shouldn’t do a masters or PhD?
* The value of teaching foreign languages in school
* Are there some skills people can develop that have wide applicability?

Get this episode by subscribing: search for '80,000 Hours' in your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
May 18, 2018 • 48min

#31 - Allan Dafoe on defusing the political & economic risks posed by existing AI capabilities

The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ - a general intellect that is much smarter than the best humans, in practically every field. But according to Allan Dafoe - Assistant Professor of Political Science at Yale University - even if we stopped at today's AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

* Mass labor displacement, unemployment, and inequality;
* The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
* Imagery intelligence and other mechanisms for revealing most of the ballistic missile-carrying submarines that countries rely on to be able to respond to nuclear attack;
* Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
* Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Allan is Co-Director of the Governance of AI Program at the Future of Humanity Institute within Oxford University. His goals have been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation.

Full transcript, links to learn more, and summary of key points.

His current focus is helping humanity safely navigate the invention of advanced artificial intelligence. I ask Allan:

* What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
* Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?
* How can AI be well-governed?
* How should we think about the idea of arms races between companies or countries?
* What would you say to people skeptical about the importance of this topic?
* How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
* What are the most urgent questions to deal with in this field?
* What can people do if they want to get into the field?
* Is there anything unusual that people can look for in themselves to tell if they're a good fit to do this kind of research?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
May 15, 2018 • 2h 1min

#30 - Eva Vivalt on how little social science findings generalize from one study to another

If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development - including 15,024 estimates from 635 papers across 20 types of intervention - to help answer this question. Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if all existing studies of a particular education program find that it improves test scores by 10 points, the next result is as likely to be negative or greater than 20 points as it is to be between 0 and 20 points.

She also observed that results from smaller studies done with an NGO - often pilot studies - were more likely to look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.

For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting. Is ‘evidence-based development’ writing a cheque its methodology can’t cash? Should this make us invest less in empirical research, or more to get actually reliable results? Or as some critics say, is interest in impact evaluation distracting us from more important issues, like national or macroeconomic reforms that can’t be easily trialled?

We discuss this as well as Eva’s other research, including Y Combinator’s basic income study where she is a principal investigator.

Full transcript, links to related papers, and highlights from the conversation.

Links mentioned at the start of the show:

* 80,000 Hours Job Board
* 2018 Effective Altruism Survey

**Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.**

Questions include:

* What is the YC basic income study looking at, and what motivates it?
* How do we get people to accept clean meat?
* How much can we generalize from impact evaluations?
* How much can we generalize from studies in development economics?
* Should we be running more or fewer studies?
* Do most social programs work or not?
* The academic incentives around data aggregation
* How much can impact evaluations inform policy decisions?
* How often do people change their minds?
* Do policy makers update too much or too little in the real world?
* How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
* How often should we believe positive results?
* What’s the state of development economics?
* Eva’s thoughts on our article on social interventions
* How much can we really learn from being empirical?
* How much should we really value RCTs?
* Is an Economics PhD overrated or underrated?

Get this episode by subscribing to our podcast: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
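To make that headline figure concrete, here is a toy sketch in Python of the kind of quantity being described: how far each new estimate of a program's effect lands from the average of the estimates that came before it. The effect sizes are invented, and Vivalt's actual methodology is considerably more sophisticated than this:

```python
import statistics

# Invented effect estimates (in test-score points) for one hypothetical
# education program - illustrative only, not from Vivalt's database.
estimates = [10.0, -2.0, 22.0, 8.0, 15.0, 1.0]

# For each new study, compare its result to the mean of the prior studies.
for i in range(1, len(estimates)):
    prior_mean = statistics.mean(estimates[:i])
    deviation_pct = abs(estimates[i] - prior_mean) / abs(prior_mean) * 100
    print(f"Study {i + 1}: prior mean {prior_mean:+.1f}, "
          f"new result {estimates[i]:+.1f}, off by {deviation_pct:.0f}%")
```

With deviations routinely near or above 100%, knowing the average of past studies tells you surprisingly little about what the next study will find - which is the generalizability problem in miniature.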
May 8, 2018 • 1h 21min

#29 - Anders Sandberg on 3 new resolutions for the Fermi paradox & how to colonise the universe

Part 2 out now: #33 - Dr Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war

The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.

Because of the thermodynamics of computation, the colder it gets, the more computations you can do. The universe is getting exponentially colder as it expands, and as the universe cools, one joule of energy becomes worth more and more. If they wait long enough this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So, if a civilization wanted to maximize its ability to perform computations, its best option might be to lie in wait for trillions of years.

Why would a civilization want to maximise the number of computations they can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long, in subjective time. Waiting will allow them to generate vastly more science, art, pleasure, or almost anything else they are likely to care about.

Full transcript, related links, and key quotes.

But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.

It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.

This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicting the capabilities of future technologies and very long-range futures for humanity.

In this incredibly fun conversation we cover this and other possible explanations for the Fermi paradox, as well as questions like:

* Should we want optimists or pessimists working on our most important problems?
* How should we reason about low probability, high impact risks?
* Would a galactic civilization want to stop the stars from burning?
* What would be the best strategy for exploring and colonising the universe?
* How can you stay coordinated when you’re spread across different galaxies?
* What should humanity decide to do with its future?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
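The physics behind that multiplier is Landauer's principle: erasing one bit of information at temperature T costs at least k_B·T·ln 2 joules, so the number of irreversible computations a joule can buy scales as 1/T. Here is a back-of-the-envelope sketch in Python, where the far-future temperature is an assumed value chosen to reproduce the multiplier quoted above, not a figure taken from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def max_bit_erasures_per_joule(temperature_k: float) -> float:
    """Landauer limit: the most bit erasures one joule can pay for at temperature T."""
    return 1.0 / (K_B * temperature_k * math.log(2))

T_NOW = 2.7         # today's cosmic microwave background temperature, in kelvin
T_FUTURE = 2.7e-31  # assumed far-future temperature, in kelvin (illustrative)

# The ratio reduces to T_NOW / T_FUTURE, since everything else cancels.
gain = max_bit_erasures_per_joule(T_FUTURE) / max_bit_erasures_per_joule(T_NOW)
print(f"Each joule buys ~{gain:.0e}x more computation")  # -> ~1e+31x
```

Since the gain is just the ratio of temperatures, the only question is how cold the universe eventually gets - which is why a patient civilization might find waiting so attractive.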
Apr 27, 2018 • 1h 3min

#28 - Owen Cotton-Barratt on why scientists should need insurance, PhD strategy & fast AI progress

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?

In a new paper Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s impossible to expect them to make the correct adjustments. If they have an accident that kills 5 people – they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.

So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.

Links to learn more, summary and full transcript.

Once an insurer assesses how much damage a particular project is expected to cause, and with what likelihood, the researcher would need to take out insurance against the predicted risk in order to proceed. In return, the insurer promises that they’ll pay out – potentially tens of billions of dollars – if things go really badly. This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards on a level that individual researchers can’t be expected to impose themselves.

***Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.***

Owen is currently hiring for a selective, two-year research scholars programme at Oxford.

In this wide-ranging conversation Owen and I also discuss:

* Are academics wrong to value personal interest in a topic over its importance?
* What fraction of research has very large potential negative consequences?
* Why do we have such different reactions to situations where the risks are known and unknown?
* The downsides of waiting for tenure to do the work you think is most important.
* What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
* How should people balance the trade-offs between having a successful career and doing the most important work?
* Are there any blind alleys we’ve gone down when thinking about AI safety?
* Why did Owen give to an organisation whose research agenda he is skeptical of?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
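As a toy illustration of why this pricing mechanism bites, here is a minimal sketch in Python. The accident probability, damage figure, and loading factor are all invented assumptions, not numbers from the paper - the point is just that even a one-in-a-million chance of catastrophic damage translates into a premium large enough to force the trade-off into the open:

```python
def premium(p_accident: float, damage_usd: float, loading: float = 0.2) -> float:
    """Actuarially fair premium (expected payout) plus a loading factor
    covering the insurer's own costs and risk. All inputs are illustrative."""
    return p_accident * damage_usd * (1 + loading)

# A hypothetical high-risk project: tiny accident probability,
# astronomical damage if a global pandemic results.
p = 1e-6        # assumed annual chance of a catastrophic release
damage = 50e12  # assumed damage from a severe global pandemic: $50 trillion

print(f"Annual premium: ${premium(p, damage):,.0f}")
# -> Annual premium: $60,000,000 - a cost the lab must now weigh explicitly
```

A researcher can't feel 100 million times worse about a catastrophe, but a $60 million annual bill scales with expected harm automatically - which is exactly the adjustment the proposal is designed to make.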
Apr 18, 2018 • 2h 17min

#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

Links to learn more, job opportunities, and full transcript.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help: http://www.nti.org/analysis/articles/protect-us-investments-global-health-security/ )

But there are positive signs as well - the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further their work preventing global catastrophes. It also runs the [Emerging Leaders in Biosecurity Fellowship](http://www.centerforhealthsecurity.org/our-work/emergingbioleaders/) to train the next generation of biosecurity experts for the US government. And Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security. Some of the topics we cover include:

* Should more people in medicine work on security?
* What are the top jobs for people who want to improve health security and how do they work towards getting them?
* What people can do to protect funding for the Global Health Security Agenda.
* Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
* Should we be allocating more attention and resources to global catastrophic risk scenarios?
* Why are senior figures reluctant to prioritize one project or area at the expense of another?
* What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
* Are the main risks and solutions understood, and it’s just a matter of implementation? Or is the principal task to identify and understand them?
* How is the current US government performing in these areas?
* Which agencies are empowered to think about low probability high magnitude events?

And more...
Get this episode by subscribing: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Apr 10, 2018 • 1h 44min

#26 - Marie Gibbons on how exactly clean meat is made & what's needed to get it in every supermarket

Learn about the innovative process of clean meat production, from selecting animal types to cell cultivation. Marie Gibbons discusses the challenges and opportunities in developing large bioreactors. Explore the possibilities of clean meat beyond traditional species, including pandas and dinosaurs. Discover the balance between academic and commercial research, and the role of the Good Food Institute in advancing clean meat alternatives.
