
80,000 Hours Podcast

Latest episodes

May 18, 2018 • 48min

#31 - Allan Dafoe on defusing the political & economic risks posed by existing AI capabilities

The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ - a general intellect that is much smarter than the best humans, in practically every field. But according to Allan Dafoe - Assistant Professor of Political Science at Yale University - even if we stopped at today's AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

* Mass labor displacement, unemployment, and inequality;
* The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
* Imagery intelligence and other mechanisms for revealing most of the ballistic missile-carrying submarines that countries rely on to be able to respond to nuclear attack;
* Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
* Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Allan is Co-Director of the Governance of AI Program at the Future of Humanity Institute within Oxford University. His goal has been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation.

Full transcript, links to learn more, and summary of key points.

His current focus is helping humanity safely navigate the invention of advanced artificial intelligence.

I ask Allan:

* What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
* Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?
* How can AI be well-governed?
* How should we think about the idea of arms races between companies or countries?
* What would you say to people skeptical about the importance of this topic?
* How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
* What are the most urgent questions to deal with in this field?
* What can people do if they want to get into the field?
* Is there anything unusual that people can look for in themselves to tell if they're a good fit to do this kind of research?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
May 15, 2018 • 2h 1min

#30 - Eva Vivalt on how little social science findings generalize from one study to another

If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development - including 15,024 estimates from 635 papers across 20 types of intervention - to help answer this question. Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if existing studies of a particular education program find that on average it improves test scores by 10 points, the next result is as likely to be negative or above 20 points as it is to fall between 0 and 20 points. (A rough, illustrative sketch of this measure appears at the end of these notes.)

She also observed that results from smaller studies done with an NGO - often pilot studies - were more likely to look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.

For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting. Is ‘evidence-based development’ writing a cheque its methodology can’t cash?

Should this make us invest less in empirical research, or invest more in order to get genuinely reliable results? Or, as some critics say, is interest in impact evaluation distracting us from more important issues, like national or macroeconomic reforms that can’t be easily trialled?

We discuss this as well as Eva’s other research, including Y Combinator’s basic income study, where she is a principal investigator.

Full transcript, links to related papers, and highlights from the conversation.

Links mentioned at the start of the show:

* 80,000 Hours Job Board
* 2018 Effective Altruism Survey

**Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.**

Questions include:

* What is the YC basic income study looking at, and what motivates it?
* How do we get people to accept clean meat?
* How much can we generalize from impact evaluations?
* How much can we generalize from studies in development economics?
* Should we be running more or fewer studies?
* Do most social programs work or not?
* The academic incentives around data aggregation
* How much can impact evaluations inform policy decisions?
* How often do people change their minds?
* Do policy makers update too much or too little in the real world?
* How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
* How often should we believe positive results?
* What’s the state of development economics?
* Eva’s thoughts on our article on social interventions
* How much can we really learn from being empirical?
* How much should we really value RCTs?
* Is an Economics PhD overrated or underrated?

Get this episode by subscribing to our podcast: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
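For readers curious what a generalizability measure like this could look like in practice, here is a minimal sketch using made-up numbers. The estimates and the exact metric below are illustrative assumptions for one way of operationalising "how far does the next result land from the average of earlier studies" - they are not Eva's actual data or method.

```python
import numpy as np

# Hypothetical treatment-effect estimates (in test-score points) for one
# intervention-outcome pair, ordered by publication date. A real analysis
# would draw on a database of impact evaluations; these numbers are made up.
estimates = [12.0, 7.5, -3.0, 10.0, 22.0, 9.0]

deviations = []
for i in range(1, len(estimates)):
    prior_mean = np.mean(estimates[:i])  # average effect found in earlier studies
    if prior_mean != 0:
        # how far the newest result sits from that running average, in relative terms
        deviations.append(abs(estimates[i] - prior_mean) / abs(prior_mean))

print(f"Median relative deviation from the prior average: {np.median(deviations):.0%}")
```

A median deviation near 100% is what "the next study tells you roughly as much as everything that came before" looks like in numbers.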
May 8, 2018 • 1h 21min

#29 - Anders Sandberg on 3 new resolutions for the Fermi paradox & how to colonise the universe

Part 2 out now: #33 - Dr Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war

The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.

Because of the thermodynamics of computation, the colder it gets, the more computations you can do. The universe is getting exponentially colder as it expands, and as it cools, one joule of energy becomes worth more and more. If they wait long enough this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So, if a civilization wanted to maximise its ability to perform computations, its best option might be to lie in wait for trillions of years. (A back-of-the-envelope version of this calculation is sketched at the end of these notes.)

Why would a civilization want to maximise the number of computations it can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long, in subjective time. Waiting will allow them to generate vastly more science, art, pleasure, or almost anything else they are likely to care about.

Full transcript, related links, and key quotes.

But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.

It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.

This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicts the capabilities of future technologies, and studies very long-range futures for humanity.

In this incredibly fun conversation we cover this and other possible explanations of the Fermi paradox, as well as questions like:

* Should we want optimists or pessimists working on our most important problems?
* How should we reason about low probability, high impact risks?
* Would a galactic civilization want to stop the stars from burning?
* What would be the best strategy for exploring and colonising the universe?
* How can you stay coordinated when you’re spread across different galaxies?
* What should humanity decide to do with its future?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
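Here is a hedged sketch of where a gain of roughly that size could come from, using the Landauer limit (the minimum energy needed to erase one bit of information, k_B·T·ln 2). The two temperatures below are rough illustrative values, not figures quoted in the episode.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def bit_erasures_per_joule(temperature_kelvin: float) -> float:
    """Landauer limit: erasing one bit costs at least k_B * T * ln(2) joules,
    so one joule buys at most 1 / (k_B * T * ln 2) irreversible bit operations."""
    return 1.0 / (K_B * temperature_kelvin * math.log(2))

T_NOW = 2.7          # roughly today's cosmic background temperature, in kelvin
T_FAR_FUTURE = 2.6e-30  # roughly a far-future de Sitter horizon temperature (illustrative)

gain = bit_erasures_per_joule(T_FAR_FUTURE) / bit_erasures_per_joule(T_NOW)
print(f"Computations per joule, far future vs now: ~{gain:.0e}x")  # ~1e+30x
```

The ratio collapses to T_now / T_future, so cooling by roughly thirty orders of magnitude buys roughly thirty orders of magnitude more computation per joule - broadly the scale of the figure quoted above.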
Apr 27, 2018 • 1h 3min

#28 - Owen Cotton-Barratt on why scientists should need insurance, PhD strategy & fast AI progress

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?

In a new paper Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s impossible to expect them to make the correct adjustments. If they have an accident that kills 5 people, they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.

So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.

Links to learn more, summary and full transcript.

Once an insurer has assessed how much damage a particular project could cause, and with what likelihood, the researcher would need to take out insurance against the predicted risk in order to proceed. In return, the insurer promises that they’ll pay out – potentially tens of billions of dollars – if things go really badly. This would force researchers to think very carefully about the costs and benefits of their work, and incentivize the insurer to demand safety standards on a level that individual researchers can’t be expected to impose on themselves. (A toy numerical sketch of this pricing logic follows these notes.)

***Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.***

Owen is currently hiring for a selective, two-year research scholars programme at Oxford.

In this wide-ranging conversation Owen and I also discuss:

* Are academics wrong to value personal interest in a topic over its importance?
* What fraction of research has very large potential negative consequences?
* Why do we have such different reactions to situations where the risks are known and unknown?
* The downsides of waiting for tenure to do the work you think is most important.
* What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
* How should people balance the trade-offs between having a successful career and doing the most important work?
* Are there any blind alleys we’ve gone down when thinking about AI safety?
* Why did Owen give to an organisation whose research agenda he is skeptical of?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
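To make the mechanism concrete, here is a minimal sketch of the actuarial logic a research liability insurer might apply. Every number is a made-up placeholder for illustration, not an estimate from the paper or the episode.

```python
# Minimal sketch of pricing a research liability policy.
# All figures below are illustrative placeholders.

p_accident      = 1e-6   # insurer's estimated chance the project causes a disaster
expected_damage = 50e9   # estimated damages if it does, in dollars
loading_factor  = 1.5    # insurer's margin for uncertainty and overheads

fair_premium = p_accident * expected_damage   # actuarially fair price of the risk
premium      = fair_premium * loading_factor  # what the researcher would be quoted

print(f"Actuarially fair premium: ${fair_premium:,.0f}")
print(f"Quoted premium:           ${premium:,.0f}")
# The project only proceeds if its expected benefits justify paying this premium,
# which internalises the tail risk that individual judgement tends to underweight.
```

The point of the design is that the premium scales with probability times damage, so a project with a one-in-a-million chance of a $50 billion accident still faces a five-figure bill rather than being treated as free.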
Apr 18, 2018 • 2h 17min

#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good, in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

Links to learn more, job opportunities, and full transcript.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help: http://www.nti.org/analysis/articles/protect-us-investments-global-health-security/ )

But there are positive signs as well - the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further its work preventing global catastrophes. It also runs the [Emerging Leaders in Biosecurity Fellowship](http://www.centerforhealthsecurity.org/our-work/emergingbioleaders/) to train the next generation of biosecurity experts for the US government. And Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security. Some of the topics we cover include:

* Should more people in medicine work on security?
* What are the top jobs for people who want to improve health security, and how do they work towards getting them?
* What people can do to protect funding for the Global Health Security Agenda.
* Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
* Should we be allocating more attention and resources to global catastrophic risk scenarios?
* Why are senior figures reluctant to prioritize one project or area at the expense of another?
* What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
* Are the main risks and solutions understood, and it’s just a matter of implementation? Or is the principal task to identify and understand them?
* How is the current US government performing in these areas?
* Which agencies are empowered to think about low-probability, high-magnitude events?
* And more...
Get this episode by subscribing: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Apr 10, 2018 • 1h 44min

#26 - Marie Gibbons on how exactly clean meat is made & what's needed to get it in every supermarket

Learn about the innovative process of clean meat production, from selecting animal types to cell cultivation. Marie Gibbons discusses the challenges and opportunities in developing large bioreactors. Explore the possibilities of clean meat beyond traditional species, including pandas and dinosaurs. Discover the balance between academic and commercial research and the role of the Good Food Institute in advancing clean meat alternatives.
Mar 28, 2018 • 2h 39min

#25 - Robin Hanson on why we have to lie to ourselves about why we do what we do

On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately his physicians were the best of the best. To reassure the public, they kept them abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week.

Why did the doctors go this far? Prof Robin Hanson, Associate Professor of Economics at George Mason University, suspects that on top of any medical beliefs they also had a hidden motive: it needed to be clear, to the king and the public, that the physicians cared enormously about saving His Royal Majesty. Only by going ‘all out’ would they be protected against accusations of negligence should the King die.

Full transcript, summary, and links to articles discussed in the show.

If you believe Hanson, the same desire to be seen to care about our family and friends explains much of what’s perverse about our medical system today. And not just medicine - Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students, and our politics are about choosing wise policies.

So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others.

Robin is a polymath economist who has come up with surprising and novel insights in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, *The Elephant in the Brain: Hidden Motives in Everyday Life*, but also:

* What was it like being part of a competitor group to the ‘World Wide Web’, and being beaten to the post?
* If people aren’t going to school to learn, what’s education all about?
* What split brain patients tell us about our ability to justify anything
* The hidden motivations that shape religions
* Why we choose the friends we do
* Why is our attitude to medicine mysterious?
* What would it look like if people were focused on doing as much good as possible?
* Are we better off donating now, when we’re older, or even waiting until well after our deaths?
* How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible?
* What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing?
* And much more...
Mar 20, 2018 • 55min

#24 - Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

Summary, related links and full transcript.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better. But in other cases we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour.

In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ - including where it’s going right and where it’s going wrong.

Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford.

We discuss:

* Should we trust our own judgement more than others’?
* How hard is it to improve political discourse?
* What should we make of well-respected academics writing articles that seem to be completely misinformed?
* How is effective altruism (EA) changing? What might it be doing wrong?
* How has Stefan’s view of EA changed?
* Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics?
* How much should we cooperate with those with whom we have disagreements?
* What good reasons are there to be inconsiderate?
* Should effective altruism potentially focus on a narrower range of problems?

*The 80,000 Hours podcast is produced by Keiran Harris.*

**If you subscribe to our podcast, you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts.**
Mar 16, 2018 • 45min

#23 - How to actually become an AI alignment researcher, according to Dr Jan Leike

Dr. Jan Leike, a Research Scientist at DeepMind, shares valuable insights on how to join the world's leading AI team. He discusses the importance of completing a computer science and mathematics degree, publishing papers, finding a supportive supervisor, and attending top conferences. Jan also talks about the qualities of a good fit for research and highlights the pressing issue of AGI safety. They also touch upon misconceptions about AI, DeepMind's research focus, and failures of current AI systems.
Mar 7, 2018 • 1h 8min

#22 - Leah Utyasheva on the non-profit that figured out how to massively cut suicide rates

How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high-rise buildings, jumping from a height is more common. And in some countries in Asia and Africa with many poor agricultural communities, the leading means is drinking pesticide.

There’s a good chance you’ve never heard of this issue before. And yet, of the 800,000 people who kill themselves globally each year, 20% die from pesticide self-poisoning.

Full transcript, summary and links to articles discussed in today's show.

Research suggests most people who try to kill themselves with pesticides reflect on the decision for less than 30 minutes, and that less than 10% of those who don't die the first time around will try again. Unfortunately, the fatality rate from pesticide ingestion is 40% to 70%. Having such dangerous chemicals near people's homes is therefore an enormous public health issue, not only for the direct victims but also for the partners and children they leave behind. (A rough illustration of how these numbers fit together is sketched at the end of these notes.)

Fortunately, researchers like Dr Leah Utyasheva have figured out a very cheap way to massively reduce pesticide suicide rates.

In this episode, Leah and I discuss:

* How do you prevent pesticide suicide, and what’s the evidence it works?
* How do you know that most people attempting suicide don’t want to die?
* What types of events are causing people to have the crises that lead to attempted suicide?
* How much money does it cost to save a life in this way?
* How do you estimate the probability of getting law reform passed in a particular country?
* Have you generally found politicians to be sympathetic to the idea of banning these pesticides? What are their greatest reservations?
* The comparison of getting policy change rather than helping person-by-person
* The importance of working with locals in places like India and Nepal, rather than coming in exclusively as outsiders
* What are the benefits of starting your own non-profit versus joining an existing org and persuading them of the merits of the cause?
* Would Leah in general recommend starting a new charity? Is it more exciting than it is scary?
* Is it important to have an academic leading this kind of work?
* How did The Centre for Pesticide Suicide Prevention get seed funding?
* How does the value of saving a life from suicide compare to saving someone from malaria?
* Leah’s political campaigning for the rights of vulnerable groups in Eastern Europe
* What are the biggest downsides of human rights work?
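For readers who want to see how the headline figures above combine, here is a rough back-of-the-envelope sketch. The 800,000 total, the 20% share, the 40-70% fatality range and the under-10% reattempt rate come from the description above; the post-ban fatality rate and everything else are illustrative assumptions, not claims from the show or from Leah's research.

```python
# Back-of-the-envelope sketch of why restricting the most hazardous pesticides matters.
# Only the first three parameters echo figures in the episode notes; the rest are illustrative.

global_suicides_per_year = 800_000
pesticide_share          = 0.20          # share of deaths from pesticide self-poisoning
case_fatality_now        = 0.55          # midpoint of the 40-70% range mentioned
reattempt_rate           = 0.10          # under 10% of survivors try again, per the research cited

pesticide_deaths = global_suicides_per_year * pesticide_share        # ~160,000 per year
attempts         = pesticide_deaths / case_fatality_now              # implied attempts per year

# Hypothetical scenario: bans on the most toxic products cut the case fatality of ingestion to 10%.
case_fatality_after_ban = 0.10
deaths_after_ban = attempts * case_fatality_after_ban
# crude allowance for survivors who make another attempt with similar means
deaths_after_ban += attempts * (1 - case_fatality_after_ban) * reattempt_rate * case_fatality_now

print(f"Pesticide suicide deaths now:  ~{pesticide_deaths:,.0f}/year")
print(f"Under the hypothetical ban:    ~{deaths_after_ban:,.0f}/year")
```

Under these toy assumptions, deaths fall by well over 100,000 a year, which is why means restriction can be such a cheap way to save lives; the real estimates depend on country-level data that the episode and linked papers discuss.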
