
80,000 Hours Podcast

Latest episodes

Oct 2, 2018 • 3h 52min

#44 - Paul Christiano on how we'll hand the future off to AI, & solving the alignment problem

Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening: Paul works on AI himself and has an unusually well-thought-through view of how it will change the world.

This is now the top resource I'm going to refer people to if they're interested in positively shaping the development of AI and want to understand the problem better. Even though I'm familiar with Paul's writing, I felt I was learning a great deal and am now in a better position to make a difference to the world.

A few of the topics we cover are:

* Why Paul expects AI to transform the world gradually rather than explosively, and what that would look like
* Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us
* Why AI systems will probably be granted legal and property rights
* How an advanced AI that doesn't share human goals could still have moral value
* Why machine learning might take over science research from humans before it can do most other tasks
* Which decade we should expect human labour to become obsolete, and how this should affect your savings plan

Links to learn more, summary and full transcript.

Important new article: These are the world's highest impact career paths according to our research.

Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree.

If given plenty of time - and enough arguments, counterarguments and counter-counter-arguments between all the experts - should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?

In other words: does 'debate', in principle, lead to truth?

According to Paul Christiano - researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities - this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are.

It's a method OpenAI is actively trying to develop, because in the long term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight.

If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful. But does that mean that the ideas of AI-2 were actually right?

It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case.

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
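
A quick addendum for the technically minded: here is a toy sketch of the two-player game described above. It illustrates the structure only and is not OpenAI's actual implementation; `agent_a`, `agent_b` and `judge` are hypothetical stand-ins.

```python
# Toy sketch of the 'debate' game described above -- structure only,
# NOT OpenAI's implementation. The agents and judge are hypothetical.

def debate(question, agent_a, agent_b, judge, rounds=4):
    """Two agents argue opposing answers; a weaker (e.g. human) judge then
    picks whichever side held up better under attack. The open question
    Paul discusses is whether honesty is the winning strategy here."""
    transcript = [f"Question: {question}"]
    for _ in range(rounds):
        transcript.append(f"A: {agent_a.argue(transcript)}")
        transcript.append(f"B: {agent_b.argue(transcript)}")
    # The judge never verifies the answer directly -- only which debater's
    # claims survived. That's what would let a human oversee decisions too
    # complex for any human to evaluate unaided.
    return judge.pick_winner(transcript)
```
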
Sep 25, 2018 • 2h 44min

#43 - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines

In Stanley Kubrick's iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that "the whole point of this Doomsday Machine is lost if you keep it a secret – why didn't you tell the world, eh?" The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: "The Premier loves surprises".

Daniel Ellsberg - leaker of the Pentagon Papers, which helped end the Vietnam War and the Nixon presidency - claims in his new book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.

Links to learn more, summary and full transcript.

The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today. If the system can't contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere. As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.

You might think the United States would have a more sensible nuclear launch policy. You'd be wrong.

As Ellsberg explains, based on his first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth. The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or a communication breakdown could trigger a nuclear catastrophe.

The whole justification for this is to defend against a 'decapitating attack', where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival. Ostensibly, this delegation removes Russia's temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed.

This strategy only works, though, if you tell the enemy you've done it. Instead, since the 50s this delegation has been one of the United States' most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity.

Strategically, the setup is stupid. Ethically, it is monstrous.

So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point where they don't risk the destruction of civilization? Daniel explores these questions eloquently and urgently in his book. Today we cover:

* Why full disarmament today would be a mistake, and the optimal number of nuclear weapons to hold
* How well are secrets kept in the government?
* What was the risk of the first atomic bomb test?
* The effect of Trump on nuclear security
* Do we have a reliable estimate of the magnitude of a 'nuclear winter'?
* Why Gorbachev allowed Russia's covert biological warfare program to continue

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Sep 11, 2018 • 2h 46min

#42 - Amanda Askell on moral empathy, the value of information & the ethics of infinity

Consider two familiar moments at a family reunion. Our host, Uncle Bill, takes pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But if it were seriously considered as a moral position - as it might be if Becky were instead avoiding meat on religious grounds - it would usually receive a very different reaction.

An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table; the family mostly think he has no business trying to foist his regressive preference on anyone. But considered not as a matter of personal taste but as a moral position - that Bill genuinely believes he's opposing mass murder - his comment might start a serious conversation.

Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. People on all sides of the political spectrum struggle to get inside the minds of those they disagree with and see issues from their point of view.

Links to learn more, summary and full transcript.

This often happens because of confusion between preferences and moral positions. Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.

One potential path for progress involves contraception: a lot of people who are anti-abortion are also anti-contraception. But they'll usually think that abortion is much worse than contraception - so why can't we compromise and agree to make much more contraception available?

According to Amanda, a charitable explanation for this is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions. So instead of arguing about abortion and contraception, we could discuss the underlying principle: should one advocate for the best possible world, or the best probable world? Successfully break down such ethical beliefs, absent political toxicity, and it might be possible to actually converge on a key point of agreement.

Today's episode blends such everyday topics with in-depth philosophy, including:

* What is 'moral cluelessness' and how can we work around it?
* Amanda's biggest criticisms of social justice activists, and of critics of social justice activists
* Is there an ethical difference between prison and corporal punishment?
* How to resolve 'infinitarian paralysis' - the inability to make decisions when infinities are involved
* What's effective altruism doing wrong?
* How should we think about jargon? Are a lot of people who don't communicate clearly just scamming us?
* How can people be more successful within the cocoon of school and university?
* How did Amanda find doing a philosophy PhD, and how will she decide what to do now?

Links:

* Career review: Congressional staffer
* Randomised experiment on quitting
* Psychology replication quiz
* Should you focus on your comparative advantage?

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Aug 28, 2018 • 2h 18min

#41 - David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher

With 698 inmates per 100,000 citizens, the U.S. is by far the leader among large wealthy nations in incarceration. But what effect does imprisonment actually have on crime? According to David Roodman, Senior Advisor to the Open Philanthropy Project, the marginal effect is zero.

* 80,000 HOURS IMPACT SURVEY - Let me know how this show has helped you with your career.
* ROB'S AUDIOBOOK RECOMMENDATIONS

This stunning rebuke to the American criminal justice system comes from the man Holden Karnofsky has called "the gold standard for in-depth quantitative research", whose other investigations include the risk of geomagnetic storms, whether deworming improves health and test scores, and the development impacts of microfinance.

Links to learn more, summary and full transcript.

The effects of incarceration on crime can be split into three periods: before, during, and after imprisonment.

Does having tougher sentences deter people from committing crime? After reviewing studies on gun laws and 'three strikes' in California, David concluded that the deterrence effect is zero.

Does imprisoning more people reduce crime by incapacitating potential offenders? Here he says yes, noting that crimes like motor vehicle theft have gone up in a way that seems pretty clearly connected with recent Californian criminal justice reforms (though the effect on violent crime is far lower).

Finally, do the after-effects of prison make you more or less likely to commit future crimes? This one is more complicated.

Concerned that he was biased towards a comfortable position against incarceration, David did a cost-benefit analysis using both his favored reading of the evidence and the devil's advocate view: that there is deterrence, and that the after-effects are beneficial.

For the devil's advocate position David used the highest assessment of the harm caused by crime, which suggests a year of prison prevents about $92,000 in crime. But weighed against a lost year of liberty, valued at $50,000, plus the cost of operating prisons, the numbers came out exactly the same. So even using the least favorable cost-benefit valuation of the least favorable reading of the evidence, it just breaks even.

The argument for incarceration melts further when you consider the significant crime that occurs within prisons, de-emphasised because of a lack of data and a perceived lack of compassion for inmates.

In today's episode we discuss how to conduct such impactful research, and how to proceed having reached strong conclusions. We also cover:

* How do you become a world-class researcher? What kinds of character traits are important?
* Are academics aware that they're following perverse incentives?
* What's involved in data replication? How often do papers replicate?
* The politics of large orgs vs. small orgs
* Geomagnetic storms as a potential cause area
* How much does David rely on interviews with experts?
* The effects of deworming on child health and test scores
* Should we have more 'data vigilantes'?
* What are David's critiques of effective altruism?
* What are the pros and cons of starting your career in the think tank world?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
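
For the curious, the devil's-advocate calculation above is simple enough to reproduce as a back-of-the-envelope sketch. The $92,000 and $50,000 figures are the ones quoted above; the summary doesn't state the prison operating cost, so the figure below is a placeholder chosen to match the reported break-even.

```python
# Back-of-the-envelope version of the devil's-advocate calculation above.
# $92,000 and $50,000 come from the episode summary; the operating cost is
# a placeholder -- the summary only says the totals came out "exactly the
# same", which implies costs of roughly this size.

crime_prevented = 92_000   # $ of crime prevented per prison-year (high-end estimate)
lost_liberty    = 50_000   # $ value placed on a lost year of liberty
operating_cost  = 42_000   # $/year to run a prison place (illustrative placeholder)

net = crime_prevented - (lost_liberty + operating_cost)
print(f"Net benefit per prison-year: ${net:,}")  # ~$0: it just breaks even
```
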
Aug 21, 2018 • 2h 11min

#40 - Katja Grace on forecasting future technology & how much we should trust expert predictions

Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?

Katja Grace, lead author of 'When Will AI Exceed Human Performance?', thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.

Note: Katja's organisation AI Impacts is currently hiring part- and full-time researchers.

There's often pessimism about making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast. But there are also many things we're able to predict confidently today - like the climate of Oxford in five years - that we no longer give ourselves much credit for. Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.

Links to learn more, summary and full transcript.

One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability? One way to tackle this is to investigate a more concrete question: what's the base rate of any technology having a big discontinuity?

A significant historical example was the development of nuclear weapons. Over thousands of years, the efficacy of explosives didn't increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.

In today's interview we also discuss:

* Why is AI Impacts one of the most important projects in the world?
* How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
* How does writing an academic paper differ from posting a summary online?
* When will unguided machines be able to produce better and cheaper work than humans for every possible task?
* What's one of the most likely jobs to be automated soon?
* Are people always just predicting the same timelines for new technologies?
* How do AGI researchers differ from other AI researchers in their predictions?
* What are attitudes to safety research like within ML? Are there regional differences?
* How much should we believe experts generally?
* How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
* How quickly has the processing capacity for machine learning problems been increasing?
* What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
* What should we expect from an economy dominated by AI?
* How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
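
To make the 'base rate of discontinuities' question concrete, here's a toy sketch in the spirit of the approach Katja describes: size each jump in a historical trend by how many years of progress at the previous rate it represents. The function and data conventions are illustrative assumptions, not AI Impacts' actual methodology or code.

```python
import math

# Toy sketch: measure each step in a historical trend as 'how many years
# of progress at the previous rate happened at once'. Illustrative only --
# not AI Impacts' actual code.

def jump_sizes(series):
    """series: [(year, metric), ...] with the metric growing roughly
    exponentially. Returns (year, years_of_progress_at_once) per step."""
    jumps = []
    for (y0, v0), (y1, v1), (y2, v2) in zip(series, series[1:], series[2:]):
        prior_rate = (math.log(v1) - math.log(v0)) / (y1 - y0)  # log-units/yr
        jumps.append((y2, (math.log(v2) - math.log(v1)) / prior_rate))
    return jumps

# E.g. a 1000x jump against a prior trend of ~1%/year registers as roughly
# 700 'years' of progress arriving at once -- a clear discontinuity.
```
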
Aug 7, 2018 • 2h 18min

#39 - Spencer Greenberg on the scientific approach to solving difficult everyday questions

Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner? Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems. Let's work through one here: how likely is it that you'll enjoy listening to this episode?

The first step is to figure out your 'prior probability': what's your estimate of how likely you are to enjoy the interview before getting any further evidence? Other than applying common sense, one way to figure this out is called reference class forecasting: looking at similar cases and seeing how often something is true, on average.

Spencer is our first ever return guest. So one reference class might be: how many Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed so far? Being this specific limits bias in your answer, but with a sample size of at most 1, you'd probably want to add more data points to reduce variability.

Zooming out: how many episodes of the 80,000 Hours Podcast have you enjoyed? Let's say you've listened to 10, and enjoyed 8 of them. If so, 8 out of 10 might be your prior probability. But maybe the two you didn't enjoy had something in common. If you've liked similar episodes in the past, you'd update in favour of expecting to enjoy it, and if you've disliked similar episodes in the past, you'd update negatively. You can zoom out further: what fraction of long-form interview podcasts have you ever enjoyed?

Then you'd look to update whenever new information became available. Do the topics seem interesting? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential?

Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we'd invite him back for a second episode?

Links to learn more, summary and full transcript.

We'll run through several diverse examples, and how to actually work out the changing probabilities as you update. But that's only a fraction of the conversation. We also discuss:

* How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
* What do people actually value? How do EAs differ from non-EAs?
* Why should we care about the distinction between intrinsic and instrumental values?
* Would hedonic utilitarians really want to hook themselves up to happiness machines?
* What types of activities are people generally under-confident about? Why?
* When should you give a lot of weight to your prior belief?
* When should we trust common sense?
* Does power posing have any effect?
* Are resumes worthless?
* Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
* What's the probability that China and the US go to war in the 21st century?
* How should we treat claims of expertise on diets?
* Why were Spencer's friends suspicious of Theranos for years?
* How should we think about the placebo effect?
* Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
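
If you want to follow along with the numbers in this description, here's a minimal sketch of the updating procedure in odds form. The 8-out-of-10 prior is from the example above; the likelihood ratio is a made-up figure purely for illustration.

```python
# Minimal sketch of Spencer-style updating in odds form (Bayes' rule).
# The 8/10 prior comes from the example above; the likelihood ratio is
# made up for illustration.

def update(prior, likelihood_ratio):
    """likelihood_ratio = P(evidence | enjoy) / P(evidence | not enjoy)."""
    odds = prior / (1 - prior)          # convert probability to odds
    odds *= likelihood_ratio            # multiply in the new evidence
    return odds / (1 + odds)            # back to a probability

p = 8 / 10                  # reference-class prior: enjoyed 8 of 10 episodes
# Question of Evidence: suppose being invited back for a second episode is
# twice as likely for guests worth listening to (illustrative figure).
p = update(p, 2.0)
print(f"{p:.0%}")           # -> 89%
```
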
Jul 26, 2018 • 2h 0min

#38 - Yew-Kwang Ng on anticipating effective altruism decades ago & how to make a much happier world

Will people who think carefully about how to maximize welfare eventually converge on the same views? The effective altruism community has spent a lot of time over the past 10 years debating how best to increase happiness and reduce suffering, and has gradually narrowed in on the world's poorest people, all animals capable of suffering, and future generations.

Yew-Kwang Ng, Professor of Economics at Nanyang Technological University in Singapore, had been working independently on exactly this question since the 70s. Many of his conclusions have ended up foreshadowing what is now conventional wisdom within effective altruism - though other views he holds remain controversial or little-known.

For instance, he thinks we ought to explore increasing pleasure via direct brain stimulation, and that genetic engineering may be an important tool for increasing happiness in the future. His work has suggested that the welfare of most wild animals is on balance negative, and he thinks that in the future this is a problem humanity might work to solve. Yet he thinks that greatly improved conditions for farm animals could eventually justify eating meat. He has spent most of his life advocating for the view that happiness, broadly construed, is the only intrinsically valuable thing.

If it's true that careful researchers will converge, as Prof Ng believes, these ideas may prove as prescient as his other, now widely accepted, opinions.

Link to our summary and appreciation of Kwang's top publications and insights throughout a lifetime of research.

Kwang has led an exceptional life. While in high school he was drawn to physics, mathematics, and philosophy, yet he chose to study economics because of his dream: to establish communism in an independent Malaya. But events in the Soviet Union and China, in addition to his burgeoning knowledge and academic appreciation of economics, would change his views about the practicability of communism. He would soon complete his journey from young revolutionary to academic economist, and eventually become a columnist writing in support of Deng Xiaoping's Chinese economic reforms in the 80s.

He got his PhD at Sydney University in 1971, and has since published over 250 refereed papers covering economics, biology, politics, mathematics, philosophy, psychology, and sociology. He's best known for his work in welfare economics, and proposed 'welfare biology' as a new field of study. In 2007, he was made a Distinguished Fellow of the Economic Society of Australia, the highest award the society bestows.

Links to learn more, summary and full transcript.

In this episode we discuss how he developed some of his most unusual ideas, and his fascinating life story, including:

* Why Kwang believes that *'Happiness Is Absolute, Universal, Ultimate, Unidimensional, Cardinally Measurable and Interpersonally Comparable'*
* What are the most pressing questions in economics?
* Did Kwang have to worry about censorship from the Chinese government when promoting market economics, or concern for animal welfare?
* Welfare economics and where Kwang thinks it went wrong

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Jul 16, 2018 • 1h 44min

#37 - GiveWell picks top charities by estimating the unknowable. James Snowden on how they do it.

What's the value of preventing the death of a 5-year-old child, compared to a 20-year-old, or an 80-year-old? The global health community has generally regarded the value as proportional to the number of health-adjusted life-years the person has remaining - but GiveWell, one of the world's foremost charity evaluators, no longer uses that approach. They found that, contrary to the 'years-remaining' method, many of their staff actually value preventing the death of an adult more than preventing the death of a young child. However, there's plenty of disagreement: the team's estimates of the relative value span a four-fold range.

As James Snowden - a research consultant at GiveWell - explains in this episode, there's no way around making these controversial judgement calls based on limited information. If you try to ignore a question like this, you just implicitly take an unreflective stand on it instead. And for each charity they look into, there are one or two dozen of these highly uncertain parameters they need to estimate.

GiveWell has been trying to find better ways to make these decisions since its inception in 2007. Lives hang in the balance, so they want their staff to say what they really believe and bring their private knowledge to the table, rather than just defer to an imaginary consensus.

Their strategy is to maintain a massive spreadsheet listing dozens of things they need to estimate, and to ask every staff member to give a figure and justification. Then once a year the GiveWell team get together, try to identify what they really disagree about, and think through what evidence it would take to change their minds.

Full transcript, summary of the conversation and links to learn more.

Often the people with the greatest familiarity with a particular intervention are the ones who drive the decision, as others defer to them. But the group can also end up with very different figures, based on different prior beliefs about moral issues and how the world works. In that case they use the median of everyone's best guesses to make their key decisions.

In making his estimate of the relative badness of dying at different ages, James specifically considered two factors: how many years of life do you lose, and how much interest do you have in those future years? Currently, James believes that the worst time for a person to die is around 8 years of age.

We discuss his experiences with such calculations, as well as a range of other topics:

* Why GiveWell's recommendations have changed more than it looks
* What are the biggest research priorities for GiveWell at the moment?
* How do you take into account the long-term knock-on effects of interventions?
* If GiveWell's advice were going to end up being very different in a couple of years' time, how might that happen?
* Are there any charities that James thinks are really cost-effective which GiveWell hasn't funded yet?
* How does domestic government spending in the developing world compare to effective charities?
* What are the main challenges with policy-related interventions?
* How much time do you spend discovering new interventions?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
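
As a concrete picture of the process described above, here's a minimal sketch of the median-of-staff-guesses aggregation. The parameter names and figures are hypothetical, not GiveWell's actual spreadsheet.

```python
from statistics import median

# Minimal sketch of the aggregation described above: every staff member
# supplies a best guess for each uncertain parameter, and the median is
# what drives key decisions. Names and numbers are hypothetical.

staff_estimates = {
    # value of averting an under-5 death relative to an adult death --
    # note the four-fold spread, matching the disagreement described above
    "under5_death_vs_adult_death": [0.5, 1.0, 1.5, 2.0],
    "doubling_income_vs_life_saved": [0.01, 0.02, 0.03, 0.08],
}

consensus = {param: median(vals) for param, vals in staff_estimates.items()}
print(consensus)  # the medians feed the cost-effectiveness model
```
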
Jul 11, 2018 • 2h 5min

#36 - Tanya Singh on ending the operations management bottleneck in effective altruism

Almost nobody is able to do groundbreaking physics research themselves, and by the time his brilliance was appreciated, Einstein was hardly limited by funding. But what if you could find a way to unlock the secrets of the universe like Einstein nonetheless?

Today's guest, Tanya Singh, sees herself as doing something like that every day. She's Executive Assistant to one of her intellectual heroes, who she believes is making a huge contribution to improving the world: Professor Bostrom at Oxford University's Future of Humanity Institute (FHI).

She couldn't get more work out of Bostrom with extra donations, as his salary is already easily covered. But with her superior abilities as an Executive Assistant, Tanya frees up hours of his time every week, essentially 'buying' more Bostrom in a way nobody else can. She also helps manage FHI more generally, in so doing freeing up more than an hour of other staff time for each hour she works. This gives her the leverage to do more good than she could in most other positions.

In our previous episode, Tara Mac Aulay objected to viewing operations work as predominantly a way of freeing up other people's time: "A good ops person doesn't just allow you to scale linearly, but also can help figure out bottlenecks and solve problems such that the organization is able to do qualitatively different work, rather than just increase the total quantity," Tara said.

Full transcript, summary and links to learn more.

Tara's right that buying time for people at the top of their field is just one path to impact, though it's one Tanya says she finds highly motivating. Other paths include enabling complex projects that would otherwise be impossible, allowing you to hire and grow much faster, and preventing disasters that could bring down a whole organisation - all things that Tanya does at FHI as well.

In today's episode we discuss all of those approaches, as we dive deeper into the broad class of roles we refer to as 'operations management'. We cover the arguments we made in 'Why operations management is one of the biggest bottlenecks in effective altruism', as well as:

* Does one really need to hire people aligned with an org's mission to work in ops?
* The most notable operations successes of the 20th century
* What's it like being the only operations person in an org?
* The role of a COO as compared to a CEO, and the options for career progression
* How do good operations teams allow orgs to scale quickly?
* How much do operations staff get to set their org's strategy?
* Which personal weaknesses aren't a huge problem in operations?
* How do you automate processes? Why don't most people do this?
* Cultural differences between Britain and India, where Tanya grew up

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
Jun 21, 2018 • 1h 23min

#35 - Tara Mac Aulay on the audacity to fix the world without asking permission

"You don't need permission. You don't need to be allowed to do something that's not in your job description. If you think that it's gonna make your company or your organization more successful and more efficient, you can often just go and do it." How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’. At 15 she took her first job - an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure. That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator. In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious. Full transcript, key quotes and links to learn more. People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face. But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms. We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’, as well as: * Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform. * How a student can save a hospital millions with a simple spreadsheet model. * The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better. * What most people misunderstand about operations, and how to tell if you have what it takes. * And finally, operations jobs people should consider applying for, such as those open now at the Centre for Effective Altruism. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
