
80,000 Hours Podcast
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Hosted by Rob Wiblin and Luisa Rodriguez.
Latest episodes

Dec 27, 2018 • 2h 57min
#50 - David Denkenberger on how to feed all 8 billion people through an asteroid/nuclear winter
If an asteroid impact or nuclear winter blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feeding Everyone No Matter What: no. If he's to be believed, nobody need starve at all.

Even without the sun, David sees the Earth as a bountiful food source. Mushrooms farmed on decaying wood. Bacteria fed with natural gas. Fish and mussels supported by sudden upwelling of ocean nutrients. And more.

Dr Denkenberger is an Assistant Professor at the University of Alaska Fairbanks, and he's out to spread the word that while a nuclear winter might be horrible, experts have been mistaken to assume that mass starvation is an inevitability. In fact, the only thing that would prevent us from feeding the world is insufficient preparation.

Links to learn more, summary and full transcript.

Not content to just write a book pointing this out, David has gone on to found a growing non-profit - the Alliance to Feed the Earth in Disasters (ALLFED) - to prepare the world to feed everyone come what may. He expects that today only 10% of people would find enough food to survive a massive disaster. In principle, if we did everything right, nobody need go hungry. But being more realistic about how much we're likely to invest, David thinks a plan to inform people ahead of time could save 30%, and a decent research and development scheme 80%.

* 80,000 Hours' updated article on How to find the best charity to give to
* A potential donor evaluates ALLFED

According to David's published cost-benefit analyses, work on this problem may be able to save lives, in expectation, for under $100 each, making it an incredible investment. These preparations could also help make humanity more resilient to global catastrophic risks, by forestalling an 'everyone for themselves' mentality, which would otherwise cause trade and civilization to unravel.

But some worry that David's cost-effectiveness estimates are exaggerations, so I challenge him on the practicality of his approach, and how much his non-profit's work would actually matter in a post-apocalyptic world.

In our extensive conversation, we cover:

* How could the sun end up getting blocked, or agriculture otherwise be decimated?
* What are all the ways we could eat nonetheless? What kind of life would this be?
* Can these methods be scaled up fast?
* What is his organisation, ALLFED, actually working on?
* How does he estimate the cost-effectiveness of this work, and what are the biggest weaknesses of the approach?
* How would more food affect the post-apocalyptic world? Won't people figure it out at that point anyway?
* Why not just leave guidebooks with this information in every city?
* Would these preparations make nuclear war more likely?
* What kind of people is ALLFED trying to hire?
* What would ALLFED do with more money?
* How he ended up doing this work, and his other engineering proposals for improving the world, including ideas to prevent a supervolcano explosion

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Dec 20, 2018 • 1h 36min
#49 - Rachel Glennerster on a year's worth of education for 30c & other development 'best buys'
If I told you it's possible to deliver an extra year of ideal primary-level education for under $1, would you believe me? Hopefully not - the claim is absurd on its face. But it may be true nonetheless. The very best education interventions are phenomenally cost-effective, and they're not the kinds of things you'd expect, says Dr Rachel Glennerster.

She's Chief Economist at the UK's foreign aid agency DFID, and used to run J-PAL, the world-famous anti-poverty research centre based in MIT's Economics Department, where she studied the impact of a wide range of approaches to improving education, health, and governing institutions. According to Dr Glennerster:

"...when we looked at the cost effectiveness of education programs, there were a ton of zeros, and there were a ton of zeros on the things that we spend most of our money on. So more teachers, more books, more inputs, like smaller class sizes - at least in the developing world - seem to have no impact, and that's where most government money gets spent."

"But measurements for the top ones - the most cost effective programs - say they deliver 460 LAYS per £100 spent (US$130). LAYS are Learning-Adjusted Years of Schooling. Each one is the equivalent of the best possible year of education you can have - Singapore-level."

That works out to roughly 28 cents per learning-adjusted year of schooling - the '30c' in this episode's title.

Links to learn more, summary and full transcript.

"...the two programs that come out as spectacularly effective... well, the first is just rearranging kids in a class."

"You have to test the kids, so that you can put the kids who are performing at grade two level in the grade two class, and the kids who are performing at grade four level in the grade four class, even if they're different ages - and they learn so much better. So that's why it's so phenomenally cost effective - because it really doesn't cost anything."

"The other one is providing information. So sending information over the phone [for example, about how much more people earn if they do well in school and graduate]. So these really small nudges. Now none of those nudges will individually transform any kid's life, but they are so cheap that you get these fantastic returns on investment - and we do very little of that kind of thing."

In this episode, Dr Glennerster shares her decades of accumulated wisdom on which anti-poverty programs are overrated, which are neglected opportunities, and how we can know the difference, across a range of fields including health, empowering women, and macroeconomic policy.

Regular listeners will be wondering - have we forgotten all about the lessons from episode 30 of the show with Dr Eva Vivalt? She threw several buckets of cold water on the hope that we could accurately measure the effectiveness of social programs at all. According to Vivalt, her dataset of hundreds of randomised controlled trials indicates that social science findings don't generalize well at all. The results of a trial at a school in Namibia tell us remarkably little about how a similar program will perform if delivered at another school in Namibia - let alone if it's attempted in India instead.

Rachel offers a different and more optimistic interpretation of Eva's findings. To learn more and figure out who you sympathise with more, you'll just have to listen to the episode. Regardless, Vivalt and Glennerster agree that we should continue to run these kinds of studies, and today's episode delves into the latest ideas in global health and development.

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.

Nov 22, 2018 • 3h 16min
#48 - Brian Christian on better living through the wisdom of computer science
Please let us know if we've helped you: Fill out our annual impact survey
Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn't get much done? Computer scientists have a term for this - thrashing - and it's a common reason our computers freeze up. The solution, for people as well as laptops, is to 'work dumber': pick something at random and finish it, without wasting time thinking about the bigger picture.
Bestselling author Brian Christian studied computer science, and in the book Algorithms to Live By he's out to find the lessons it can offer for a better life. He investigates when to quit your job, when to marry, the best way to sell your house, how long to spend on a difficult decision, and how much randomness to inject into your life. In each case computer science gives us a theoretically optimal solution, and in this episode we think hard about whether its models match our reality.
Links to learn more, summary and full transcript.
One genre of problems Brian explores in his book is 'optimal stopping problems', the canonical example of which is ‘the secretary problem’. Imagine you're hiring a secretary: you receive *n* applicants, they show up in a random order, and you interview them one after another. After each interview you either have to hire that person on the spot and dismiss everybody else, or send them away and lose the option to hire them in future.
It turns out most of life can be viewed this way - a series of unique opportunities you pass by that will never be available in exactly the same way again.
So how do you attempt to hire the very best candidate in the pool? There's a risk that you stop before finding the best, and a risk that you set your standards too high and let the best candidate pass you by.
Mathematicians of the mid-twentieth century produced an elegant optimal approach: spend exactly one over *e*, or approximately 37% of your search, just establishing a baseline without hiring anyone, no matter how promising they seem. Then immediately hire the next person who's better than anyone you've seen so far.
It turns out that your odds of success in this scenario are also 37%. And remarkably, both the optimal strategy and the odds of success barely depend on the size of the pool: as *n* goes to infinity you still want to follow this 37% rule, and you still have a 37% chance of success. Even if you interview a million people.
But if you have the option to go back, say by apologising to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%.
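To make the rule concrete, here's a minimal simulation sketch in Python (our own illustration, not code from the book or episode - the function and parameter names are just for exposition). It estimates how often the 37% strategy lands the single best applicant in a pool of 100:

```python
import math
import random

def secretary_trial(n: int, explore_frac: float) -> bool:
    """One round of the secretary problem: True if we hire the best applicant."""
    scores = [random.random() for _ in range(n)]
    cutoff = int(n * explore_frac)
    # Phase 1: interview the first `cutoff` applicants without hiring anyone,
    # just to establish a baseline.
    baseline = max(scores[:cutoff], default=float("-inf"))
    # Phase 2: hire the first applicant who beats everyone seen so far.
    for score in scores[cutoff:]:
        if score > baseline:
            return score == max(scores)
    return False  # The best applicant fell in the explore phase; we never hired.

trials = 100_000
wins = sum(secretary_trial(100, 1 / math.e) for _ in range(trials))
print(f"Estimated success rate: {wins / trials:.3f}")  # ~0.37, matching the theory
```

Extending the simulation so that a previously rejected applicant accepts a comeback offer half the time would let you check the 61% figure too.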
Today’s episode focuses on Brian’s book-length exploration of how insights from computer algorithms can and can't be applied to our everyday lives. We cover:
* Computational kindness, and the best way to schedule meetings
* How can we characterize a computational model of what people are actually doing, and is there a rigorous way to analyse just how good their instincts actually are?
* What’s it like being a human confederate in the Turing test competition?
* Is trying to detect fake social media accounts a losing battle?
* The canonical explore/exploit problem in computer science: the multi-armed bandit
* What’s the optimal way to buy or sell a house?
* Why is information economics so important?
* What kind of decisions should people randomize more in life?
* How much time should we spend on prioritisation?
Get this episode by subscribing: type '80,000 Hours' into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.

Nov 2, 2018 • 2h 5min
#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles
After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.
He decided to apply to OpenAI, and spent about 6 weeks preparing for the interview before landing the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others.
On today’s episode Daniel is joined by Catherine Olsson, who likewise left a PhD - hers in computational neuroscience - to become a research engineer, first at OpenAI and now at Google Brain. She and Daniel share this piece of advice for those curious about this career path: just dive in. If you're trying to get good at something, just start doing that thing, and figure out along the way what's necessary to do it well.
Catherine has even created a simple step-by-step guide for 80,000 Hours, to make it as easy as possible for others to copy her and Daniel's success.
Please let us know how we've helped you: fill out our 2018 annual impact survey so that 80,000 Hours can continue to operate and grow.
Blog post with links to learn more, a summary & full transcript.
Daniel thinks the key for him was nailing the job interview.
OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he'd be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working.
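To give a flavor of the kind of exercise involved (a toy sketch of our own, not Daniel's actual prep work): the simplest deep RL idea to reproduce is the policy gradient, shown here on a two-armed bandit in plain NumPy rather than TensorFlow.

```python
import numpy as np

# Toy REINFORCE on a 2-armed bandit where arm 1 pays more on average.
# Reproducing real deep RL papers means scaling this idea up with neural
# network policies, but the core update is the same: nudge the
# log-probability of each action in proportion to the reward it earned.

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])  # expected reward of each arm
logits = np.zeros(2)               # the policy's parameters
lr = 0.1                           # learning rate

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(logits)
    action = rng.choice(2, p=probs)
    reward = rng.normal(true_means[action], 0.1)
    # Gradient of log pi(action) with respect to the logits:
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    logits += lr * reward * grad_log_pi

print(softmax(logits))  # most of the probability mass should end up on arm 1
```

The papers on Daniel's list combine this same update rule with far larger networks and environments - which is where the 12-hour debugging days come in.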
Daniel emphasizes that the most important thing was to practice *exactly* those things that he knew he needed to be able to do. His dedicated preparation also led to an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.
Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they're right, it could greatly increase our ability to get new people into important ML roles in which they can make a difference, as quickly as possible.
Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.
Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover:
* What are OpenAI and Google Brain doing?
* Why work on AI?
* Do you learn more on the job, or while doing a PhD?
* Controversial issues within ML
* Is replicating papers a good way of determining suitability?
* What % of software developers could make similar transitions?
* How in-demand are research engineers?
* The development of Dota 2 bots
* Do research scientists have more influence on the vision of an org?
* Has learning more made you more or less worried about the future?
Get this episode by subscribing: type '80,000 Hours' into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.

Oct 23, 2018 • 2h 49min
#46 - Hilary Greaves on moral cluelessness & tackling crucial questions in academia
The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush your way past the people now waiting, or just accept this as a dollar you’re never getting back? According to philosophy Professor Hilary Greaves - Director of Oxford University's Global Priorities Institute, which is hiring - this simple decision will completely change the long-term future by altering the identities of almost all future generations.

How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day - including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child.

By causing a child to be conceived a few fractions of a second earlier or later, you change the sperm that fertilizes their egg, resulting in a totally different person. So asking for that $1 has now made the difference between all the things that this actual child will do in their life, and all the things that the merely possible child - who didn't exist because of what you did - would have done if you decided not to worry about it.

As that child's actions ripple out to everyone else who conceives down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or not to be born? Probably both!

Links to learn more, summary and full transcript.

Some find this concerning. The actual long-term effects of your decisions are so unpredictable, it looks like you’re totally clueless about what's going to lead to the best outcomes. It might lead to decision paralysis - you won’t be able to take any action at all.

Prof Greaves doesn’t share this concern for most real-life decisions. If there’s no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less cancelled out by the equally likely opposite possibility.

But if instead we’re talking about a decision that involves highly structured, systematic reasons for thinking there might be a general tendency of your action to make things better or worse - for example, if we increase economic growth - Prof Greaves says that we don’t get to just ignore the unforeseeable effects.

When there are complex arguments on both sides, it's unclear what probabilities you should assign to this or that claim. Yet, given its importance, whether you should take the action in question actually does depend on figuring out these numbers. So, what do we do?

Today’s episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover:

* How controversial is the many-worlds interpretation of quantum physics?
* Given moral uncertainty, how should population ethics affect our real-life decisions?
* How should we think about archetypal decision theory problems?
* What are the consequences of cluelessness for those who base their donation advice on GiveWell-style recommendations?
* How could reducing extinction risk be a good cause for risk-averse people?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Oct 17, 2018 • 2h 31min
#45 - Tyler Cowen's case for maximising econ growth, stabilising civilization & thinking long-term
I've probably spent more time reading Tyler Cowen - Professor of Economics at George Mason University - than any other author. Indeed it's his incredibly popular blog Marginal Revolution that prompted me to study economics in the first place. Having spent thousands of hours absorbing Tyler's work, it was a pleasure to be able to question him about his latest book and personal manifesto: Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals.
Tyler makes the case that, despite what you may have heard, we *can* make rational judgments about what is best for society as a whole. He argues:
1. Our top moral priority should be preserving and improving humanity's long-term future
2. The way to do that is to maximise the rate of sustainable economic growth
3. We should respect human rights and follow general principles while doing so.
We discuss why Tyler believes all these things, and I push back where I disagree. In particular: is higher economic growth actually an effective way to safeguard humanity's future, or should our focus really be elsewhere?
In the process we touch on many of moral philosophy's most pressing questions: Should we discount the future? How should we aggregate welfare across people? Should we follow rules or evaluate every situation individually? How should we deal with the massive uncertainty about the effects of our actions? And should we trust common sense morality or follow structured theories?
Links to learn more, summary and full transcript.
After covering the book, the conversation ranges far and wide. Will we leave the galaxy, and is it a tragedy if we don't? Is a multi-polar world less stable? Will humanity ever help wild animals? Why do we both agree that Kant and Rawls are overrated?
Today's interview is released on both the 80,000 Hours Podcast and Tyler's own show: Conversations with Tyler.
Tyler may have had more influence on me than any other writer but this conversation is richer for our remaining disagreements. If the above isn't enough to tempt you to listen, we also look at:
* Why couldn’t future technology make human life a hundred or a thousand times better than it is for people today?
* Why focus on increasing the rate of economic growth rather than making sure that it doesn’t go to zero?
* Why shouldn’t we dedicate substantial time to the successful introduction of genetic engineering?
* Why should we completely abstain from alcohol and make it a social norm?
* Why is Tyler so pessimistic about space? Is it likely that humans will go extinct before we manage to escape the galaxy?
* Is improving coordination and international cooperation a major priority?
* Why does Tyler think institutions are keeping up with technology?
* Given that our actions seem to have very large and morally significant effects in the long run, are our moral obligations very onerous?
* Can art be intrinsically valuable?
* What does Tyler think Derek Parfit was most wrong about, and what was he most right about that’s unappreciated today?
Get this episode by subscribing: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.

Oct 2, 2018 • 3h 52min
#44 - Paul Christiano on how we'll hand the future off to AI, & solving the alignment problem
In this discussion, Paul Christiano, an OpenAI researcher with a theoretical computer science background, shares his insights on how AI will gradually transform our world. He delves into AI alignment issues, emphasizing strategies OpenAI is developing to ensure AI systems reflect human values. Christiano also predicts that AI may surpass humans in scientific research and discusses the potential economic impacts of AI on labor and savings. With provocative ideas on moral value and rights for AI, this conversation is a deep dive into the future of technology and ethics.

Sep 25, 2018 • 2h 44min
#43 - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines
In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”.
Daniel Ellsberg - leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency - claims in his new book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.
Links to learn more, summary and full transcript.
The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today.
If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere.
As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.
You might think the United States would have a more sensible nuclear launch policy. You’d be wrong.
As Ellsberg explains based on his first-hand experience as a nuclear war planner in the '50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth.
The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.
The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival.
Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it.
Instead, since the '50s this delegation has been one of the United States' most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity.
Strategically, the setup is stupid. Ethically, it is monstrous.
So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization?
Daniel explores these questions eloquently and urgently in his book. Today we cover:
* Why full disarmament today would be a mistake and the optimal number of nuclear weapons to hold
* How well are secrets kept in the government?
* What was the risk of the first atomic bomb test?
* The effect of Trump on nuclear security
* Do we have a reliable estimate of the magnitude of a ‘nuclear winter’?
* Why Gorbachev allowed Russia’s covert biological warfare program to continue
Get this episode by subscribing: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.

Sep 11, 2018 • 2h 46min
#42 - Amanda Askell on moral empathy, the value of information & the ethics of infinity
Consider two familiar moments at a family reunion. Our host, Uncle Bill, takes pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But if seriously considered as a moral position - as they might if Becky were instead avoiding meat on religious grounds - it would usually receive a very different reaction.

An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table; the family mostly think that he has no business trying to foist his regressive preference on anyone. But if considered not as a matter of personal taste, but rather as a moral position - that Bill genuinely believes he’s opposing mass murder - his comment might start a serious conversation.

Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. All sides of the political spectrum struggle to get inside the minds of people they disagree with and see issues from their point of view.

Links to learn more, summary and full transcript.

This often happens because of confusion between preferences and moral positions. Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.

One potential path for progress surrounds contraception: a lot of people who are anti-abortion are also anti-contraception. But they’ll usually think that abortion is much worse than contraception, so why can’t we compromise and agree to have much more contraception available?

According to Amanda, a charitable explanation for this is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.

So instead of arguing about abortion and contraception, we could discuss the underlying principle: should one advocate for the best possible world, or the best probable world? Successfully break down such ethical beliefs, absent political toxicity, and it might be possible to actually converge on key points of agreement.

Today’s episode blends such everyday topics with in-depth philosophy, including:

* What is 'moral cluelessness' and how can we work around it?
* Amanda's biggest criticisms of social justice activists, and of critics of social justice activists
* Is there an ethical difference between prison and corporal punishment?
* How to resolve 'infinitarian paralysis' - the inability to make decisions when infinities are involved
* What’s effective altruism doing wrong?
* How should we think about jargon? Are a lot of people who don’t communicate clearly just scamming us?
* How can people be more successful within the cocoon of school and university?
* How did Amanda find doing a philosophy PhD, and how will she decide what to do now?

Links:

* Career review: Congressional staffer
* Randomised experiment on quitting
* Psychology replication quiz
* Should you focus on your comparative advantage?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Aug 28, 2018 • 2h 18min
#41 - David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher
With 698 inmates per 100,000 citizens, the U.S. is by far the leader among large wealthy nations in incarceration. But what effect does imprisonment actually have on crime?
According to David Roodman, Senior Advisor to the Open Philanthropy Project, the marginal effect is zero.
* 80,000 HOURS IMPACT SURVEY - Let me know how this show has helped you with your career.
* ROB'S AUDIOBOOK RECOMMENDATIONS
This stunning rebuke to the American criminal justice system comes from the man Holden Karnofsky has called "the gold standard for in-depth quantitative research", whose other investigations include the risk of geomagnetic storms, whether deworming improves health and test scores, and the development impacts of microfinance.
Links to learn more, summary and full transcript.
The effects of incarceration on crime can be split into three categories: before, during, and after imprisonment.
Does having tougher sentences deter people from committing crime?
After reviewing studies on gun laws and ‘three strikes’ in California, David concluded that the effect of deterrence is zero.
Does imprisoning more people reduce crime by incapacitating potential offenders?
Here he says yes, noting that crimes like motor vehicle theft have gone up in a way that seems pretty clearly connected with recent Californian criminal justice reforms (though the effect on violent crime is far lower).
Finally, do the after-effects of prison make you more or less likely to commit future crimes?
This one is more complicated.
Concerned that he was biased towards a comfortable position against incarceration, David did a cost-benefit analysis using both his favored reading of the evidence and the devil's advocate view: that there is deterrence and that the after-effects are beneficial.
For the devil’s advocate position David used the highest assessment of the harm caused by crime, which suggests a year of prison prevents about $92,000 in crime. But weighed against a lost year of liberty, valued at $50,000, plus the cost of operating prisons, the numbers came out exactly the same.
So even using the least favorable cost-benefit valuation of the least favorable reading of the evidence - it just breaks even.
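Spelling out that break-even arithmetic (our own reconstruction from the figures above - the show notes don't quote the prison operating cost, so we back it out as the balancing term):

```python
crime_prevented = 92_000  # $ of crime averted per prison-year (highest estimate)
lost_liberty = 50_000     # $ value placed on a lost year of freedom

# If benefits exactly equal costs, running the prison must account for
# the remaining difference:
implied_operating_cost = crime_prevented - lost_liberty
print(implied_operating_cost)  # 42000: $92,000 in benefits vs $50,000 + $42,000 in costs
```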
The argument for incarceration melts further when you consider the significant crime that occurs within prisons, de-emphasised because of a lack of data and a perceived lack of compassion for inmates.
In today’s episode we discuss how to conduct such impactful research, and how to proceed having reached strong conclusions.
We also cover:
* How do you become a world class researcher? What kinds of character traits are important?
* Are academics aware of the perverse incentives they face?
* What’s involved in data replication? How often do papers replicate?
* The politics of large orgs vs. small orgs
* Geomagnetic storms as a potential cause area
* How much does David rely on interviews with experts?
* The effects of deworming on child health and test scores
* Should we have more ‘data vigilantes’?
* What are David’s critiques of effective altruism?
* What are the pros and cons of starting your career in the think tank world?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.