
80,000 Hours Podcast

Latest episodes

Aug 19, 2021 • 2h 19min

#109 – Holden Karnofsky on the most important century

Will the future of humanity be wild, or boring? It's natural to think that if we're trying to be sober and measured, and to predict what will really happen rather than spin an exciting story, the answer is more likely than not to be sort of... dull.

But there's also good reason to think that's simply impossible. The idea of a boring future that's internally coherent is an illusion that comes from not inspecting those scenarios closely.

At least that is what Holden Karnofsky — founder of charity evaluator GiveWell and the foundation Open Philanthropy — argues in his new article series, 'The Most Important Century'. He hopes to lay out part of the worldview driving the strategy and grantmaking of Open Philanthropy's longtermist team, and to encourage more people to join his efforts to positively shape humanity's future.

Links to learn more, summary and full transcript.

The bind is this. For the first 99% of human history the global economy (initially mostly food production) grew very slowly: under 0.1% a year. But since the Industrial Revolution around 1800, growth has exploded to over 2% a year. To us in 2020 that sounds perfectly sensible and the natural order of things. But Holden points out that it's not only unprecedented, it also can't continue for long. The power of compounding means that sustaining 2% growth for just 10,000 years — 5% as long as humanity has already existed — would require us to turn every individual atom in the galaxy into an economy as large as the Earth's today. Not super likely.

So what are the options? First, maybe growth will slow and then stop. In that case we today live in the single minuscule slice in the history of life during which the world rapidly changed due to constant technological advances, before intelligent civilization permanently stagnated or even collapsed. What a wild time to be alive!

Alternatively, maybe growth will continue for thousands of years. In that case we are at the very beginning of what would necessarily have to become a stable, galaxy-spanning civilization, harnessing the energy of entire stars among other feats of engineering. We would then stand among the first tiny sliver of all the quadrillions of intelligent beings who will ever exist. What a wild time to be alive!

Isn't there another option where the future feels less remarkable and our current moment not so special? While the full version of the argument above has a number of caveats, the short answer is 'not really'. We might be in a computer simulation and our galactic potential all an illusion, though that's hardly any less weird. And maybe the most exciting events won't happen for generations yet. But on a cosmic scale we'd still be living around the universe's most remarkable time.

Holden himself was very reluctant to buy into the idea that today's civilization is in a strange and privileged position, but has ultimately concluded that "all possible views about humanity's future are wild".
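To make the compounding claim above concrete, here is a quick back-of-the-envelope check in Python. The figure of roughly 10^70 atoms in the galaxy is an assumption added purely for illustration (a commonly cited order of magnitude), not a number taken from the episode.

```python
import math

# Back-of-the-envelope check of the compounding claim above.
# Assumption (ours, for illustration): the Milky Way contains on the order
# of 10^70 atoms. This is a rough order-of-magnitude estimate, not a figure
# from the episode.

growth_rate = 0.02      # 2% annual growth
years = 10_000          # the horizon discussed above
log10_atoms = 70        # assumed atoms in the galaxy, as a power of ten

# Work in log10 so the numbers don't overflow:
# log10(1.02 ** 10000) = 10000 * log10(1.02), which is roughly 86.
log10_growth = years * math.log10(1 + growth_rate)

print(f"Economy would grow by a factor of ~10^{log10_growth:.0f}")
print(f"One Earth-sized economy per atom 'only' gives a factor of ~10^{log10_atoms}")
# So sustained 2% growth for 10,000 years demands more than an Earth-sized
# economy for every atom in the galaxy -- hence 'it can't continue for long'.
```

Under these assumptions, 10,000 years of 2% growth implies a factor of about 10^86, while an Earth-sized economy per atom only gets you to about 10^70, which is the sense in which such growth can't continue for long.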
In the conversation Holden and Rob cover each part of the 'Most Important Century' series, including:

• The case that we live in an incredibly important time
• How achievable-seeming technology - in particular, mind uploading - could lead to unprecedented productivity, control of the environment, and more
• Why economic growth can't stay this fast for all that much longer
• Forecasting transformative AI
• And the implications of living in the most important century

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Aug 11, 2021 • 1h 33min

#108 – Chris Olah on working at top AI labs without an undergrad degree

Chris Olah has had a fascinating and unconventional career path. Most people who want to pursue a research career feel they need a degree to be taken seriously. But not only does Chris not have a PhD; he doesn't even have an undergraduate degree. After dropping out of university to help defend an acquaintance who was facing bogus criminal charges, Chris started independently working on machine learning research, and eventually got an internship at Google Brain, a leading AI research group.

In this interview — a follow-up to our episode on his technical work — we discuss what, if anything, can be learned from his unusual career path. Should more people pass on university and just throw themselves at solving a problem they care about? Or would it be foolhardy for others to try to copy a unique case like Chris'?

Links to learn more, summary and full transcript.

We also cover some of Chris' personal passions over the years, including his attempts to reduce what he calls 'research debt' by starting a new academic journal called Distill, focused just on explaining existing results unusually clearly.

As Chris explains, as fields develop they accumulate huge bodies of knowledge that researchers are meant to be familiar with before they start contributing themselves. But the weight of that existing knowledge — and the need to keep up with what everyone else is doing — can become crushing. It can take someone until their 30s or later to earn their stripes, and sometimes a field will split in two just to make it possible for anyone to stay on top of it.

If that were unavoidable it would be one thing, but Chris thinks we're nowhere near communicating existing knowledge as well as we could. Incrementally improving an explanation of a technical idea might take a single author weeks, but could go on to save a day each for thousands, tens of thousands, or hundreds of thousands of students, if it becomes the best option available.

Despite that, academics have little incentive to produce outstanding explanations of complex ideas that can speed up the education of everyone coming up in their field. And some even see the process of deciphering bad explanations as a desirable rite of passage all should pass through, just as they did.

So Chris tried his hand at chipping away at this problem — but concluded the nature of the problem wasn't quite what he originally thought. In this conversation we talk about that, as well as:

• Why highly thoughtful cold emails can be surprisingly effective, while average cold emails do little
• Strategies for growing as a researcher
• Thinking about research as a market
• How Chris thinks about writing outstanding explanations
• The concept of 'micromarriages' and 'microbestfriendships'
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Aug 4, 2021 • 3h 9min

#107 – Chris Olah on what the hell is going on inside neural networks

Big machine learning models can identify plant species better than any human, write passable essays, beat you at a game of Starcraft 2, figure out how a photo of Tobey Maguire and the word 'spider' are related, solve the 60-year-old 'protein folding problem', diagnose some diseases, play romantic matchmaker, write solid computer code, and offer questionable legal advice.

Humanity made these amazing and ever-improving tools. So how do our creations work? In short: we don't know.

Today's guest, Chris Olah, finds this both absurd and unacceptable. Over the last ten years he has been a leader in the effort to unravel what's really going on inside these black boxes. As part of that effort he helped create the famous DeepDream visualisations at Google Brain, reverse engineered the CLIP image classifier at OpenAI, and is now continuing his work at Anthropic, a new $100 million research company that tries to "co-develop the latest safety techniques alongside scaling of large ML models".

Links to learn more, summary and full transcript.

Despite having a huge fan base thanks to his explanations of ML and tweets, today's episode is the first long interview Chris has ever given. It features his personal take on what we've learned so far about what ML algorithms are doing, and what's next for this research agenda at Anthropic.

His decade of work has borne substantial fruit, producing an approach for looking inside the mess of connections in a neural network and working out what functional role each piece is serving. Among other things, Chris and his team found that every visual classifier seems to converge on a number of simple, common elements in its early layers — elements so fundamental they may exist in our own visual cortex in some form.

They also found networks developing 'multimodal neurons' that would trigger in response to the presence of high-level concepts like 'romance', across both images and text, mimicking the famous 'Halle Berry neuron' from human neuroscience.

While reverse engineering how a mind works would make any top-ten list of the most valuable knowledge to pursue for its own sake, Chris's work is also of urgent practical importance. Machine learning models are already being deployed in medicine, business, the military, and the justice system, in ever more powerful roles. The competitive pressure to put them into action as soon as they can turn a profit is great, and only getting greater. But if we don't know what these machines are doing, we can't be confident they'll continue to work the way we want as circumstances change. Before we hand an algorithm the proverbial nuclear codes, we should demand more assurance than "well, it's always worked fine so far".

But by peering inside neural networks and figuring out how to 'read their minds', we can potentially foresee future failures and prevent them before they happen. Artificial neural networks may even be a better way to study how our own minds work, given that, unlike a human brain, we can see everything that's happening inside them — and having been posed similar challenges, there's every reason to think evolution and 'gradient descent' often converge on similar solutions.
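For a flavour of what this kind of interpretability work involves, here is a minimal, generic sketch of 'feature visualisation': using gradient ascent on the input to find an image that strongly activates a chosen neuron. This is an illustration of the general idea only, not Chris's code or any lab's actual tooling, and it uses a tiny untrained network purely so the example is self-contained.

```python
# A toy sketch of feature visualisation / activation maximisation: instead of
# training the network, we optimise the *input image* so that one chosen
# channel in the network fires strongly. Real interpretability work applies
# this to large trained vision models and adds heavy regularisation; the
# network below is an untrained stand-in so the snippet runs anywhere.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in network (assumption: not any model discussed in the episode).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)
model.eval()

channel = 5  # which channel in the final layer we want to 'understand'

# Start from random noise and adjust the image (not the weights) so that the
# chosen channel's average activation goes up.
image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activations = model(image)               # shape: (1, 16, 64, 64)
    loss = -activations[0, channel].mean()   # maximise => minimise negative
    loss.backward()
    optimizer.step()

# `image` is now a (crude) picture of what this channel responds to.
print("Final mean activation:", -loss.item())
```

On a real trained model, with regularisation to keep the optimised image natural-looking, this kind of input optimisation is roughly how DeepDream-style visualisations of what individual neurons 'care about' are produced.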
Among other things, Rob and Chris cover:

• Why Chris thinks it's necessary to work with the largest models
• What fundamental lessons we've learned about how neural networks (and perhaps humans) think
• How interpretability research might help make AI safer to deploy, and Chris' response to skeptics
• Why there's such a fuss about 'scaling laws' and what they say about future AI progress

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Jul 28, 2021 • 1h 53min

#106 – Cal Newport on an industrial revolution for office work

If you wanted to start a university department from scratch, and attract as many superstar researchers as possible, what's the most attractive perk you could offer? How about just not needing an email address?

According to today's guest, Cal Newport — computer science professor and best-selling author of A World Without Email — it should seem obscene and absurd for a world-renowned vaccine researcher with decades of experience to spend a third of their time fielding requests from HR, building management, finance, and so on. Yet with offices organised the way they are today, nothing could be more natural.

Links to learn more, summary and full transcript.

But this isn't just a problem at the elite level — it affects almost all of us. A typical U.S. office worker checks their email 80 times a day, once every six minutes on average. Data analysis by RescueTime found that a third of users checked email or Slack every three minutes or more often, averaged over a full workday. Each time that happens our focus is broken, killing our momentum on the knowledge work we're supposedly paid to do.

When we lament how much email and chat have reduced our focus and filled our days with anxiety and frenetic activity, we most naturally blame 'weakness of will'. If only we had the discipline to check Slack and email once a day, all would be well — or so the story goes.

Cal believes that line of thinking fundamentally misunderstands how we got to a place where knowledge workers can rarely find more than five consecutive minutes to spend doing just one thing. Since the Industrial Revolution, a combination of technology and better organisation has allowed the manufacturing industry to produce a hundred times as much with the same number of people. Cal says that by comparison, it's not clear that specialised knowledge workers like scientists, authors, or senior managers are *any* more productive than they were 50 years ago. If the knowledge sector could achieve even a tiny fraction of what manufacturing has, and find a way of coordinating its work that raised productivity by just 1%, that would generate on the order of $100 billion globally each year.

Since the 1990s, when everyone got an email address and most lost their assistants, that lack of direction has led to what Cal calls the 'hyperactive hive mind': everyone sends emails and chats to everyone else, all through the day, whenever they need something. Cal points out that this is so normal we don't even think of it as a way of organising work, but it is: it's what happens when management does nothing to enable teams to decide on a better way of organising themselves.

A few industries have made progress taming the 'hyperactive hive mind'. But on Cal's telling, this barely scratches the surface of the improvements that are possible within knowledge work. And reining in the hyperactive hive mind won't just help people do higher quality work; it will free them from the 24/7 anxiety that there's someone somewhere they haven't gotten back to.

In this interview Cal and Rob also cover:

• Is this really one of the world's most pressing problems?
• The historical origins of the 'hyperactive hive mind'
• The harm caused by attention switching
• Who's working to solve the problem and how
• Cal's top productivity advice for high school students, university students, and early career workers
• And much more

Chapters:
• Rob's intro (00:00:00)
• The interview begins (00:02:02)
• The hyperactive hivemind (00:04:11)
• Scale of the harm (00:08:40)
• Is email making professors stupid? (00:22:09)
• Why haven't we already made these changes? (00:29:38)
• Do people actually prefer the hyperactive hivemind? (00:43:31)
• Solutions (00:55:52)
• Advocacy (01:10:47)
• How to Be a High School Superstar (01:23:03)
• How to Win at College (01:27:46)
• So Good They Can't Ignore You (01:31:47)
• Personal barriers (01:42:51)
• George Marshall (01:47:11)
• Rob's outro (01:49:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Jul 12, 2021 • 2h 55min

#105 – Alexander Berger on improving global health and wellbeing in clear and direct ways

The effective altruist research community tries to identify the highest impact things people can do to improve the world. Unsurprisingly, given the difficulty of such a massive and open-ended project, very different schools of thought have arisen about how to do the most good.

Today's guest, Alexander Berger, leads Open Philanthropy's 'Global Health and Wellbeing' programme, where he oversees around $175 million in grants each year, and ultimately aspires to disburse billions in the most impactful ways he and his team can identify. This programme is the flagship effort representing one major effective altruist approach: try to improve the health and wellbeing of humans and animals that are alive today, in clearly identifiable ways, applying an especially analytical and empirical mindset.

Links to learn more, summary, Open Phil jobs, and full transcript.

The programme makes grants to tackle easily prevented illnesses among the world's poorest people, offer cash to people living in extreme poverty, prevent cruelty to billions of farm animals, advance biomedical science, and improve criminal justice and immigration policy in the United States. Open Philanthropy's researchers rely on empirical information to guide their decisions where it's available, and where it's not, they aim to maximise expected benefits to recipients through careful analysis of the gains different projects would offer and their relative likelihoods of success.

This 'global health and wellbeing' approach — sometimes referred to as 'neartermism' — contrasts with another big school of thought in effective altruism, known as 'longtermism', which aims to steer the long-term future of humanity and its descendants in a positive direction. Longtermism bets that while it's harder to figure out how to benefit future generations than people alive today, the total number of people who might live in the future is far greater than the number alive today, and this gain in scale more than offsets the lower tractability. The debate between these two very different theories of how to best improve the world has been one of the most significant within effective altruist research since its inception.

Alexander first joined the influential charity evaluator GiveWell in 2011, and since then has conducted research alongside top thinkers on global health and wellbeing and longtermism alike, ultimately deciding to dedicate his efforts to improving the world today in identifiable ways. In this conversation Alexander advocates for that choice, explaining the case in favour of adopting the 'global health and wellbeing' mindset, while going through the arguments for the longtermist approach that he finds most and least convincing.
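As a toy illustration of the 'maximise expected benefits' reasoning described above (not Open Philanthropy's actual model, and with entirely made-up numbers), comparing grants by expected benefit looks like this:

```python
# Hypothetical numbers for illustration only -- not real grants or estimates.
# Expected benefit = probability of success * benefit if it succeeds.

grants = {
    "proven health programme": {"p_success": 0.60, "benefit_if_success": 1_000},
    "long-shot policy push":   {"p_success": 0.05, "benefit_if_success": 20_000},
}

for name, g in grants.items():
    expected = g["p_success"] * g["benefit_if_success"]
    print(f"{name}: expected benefit = {expected:,.0f} (arbitrary units)")

# Here the long shot wins on expected benefit (0.05 * 20,000 = 1,000 vs.
# 0.60 * 1,000 = 600), which is why careful estimates of both success
# probabilities and payoffs matter when hard evidence isn't available.
```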
Rob and Alexander also tackle:

• Why it should be legal to sell your kidney, and why Alexander donated his to a total stranger
• Why it's shockingly hard to find ways to give away large amounts of money that are more cost effective than distributing anti-malaria bed nets
• How much you gain from working with tight feedback loops
• Open Philanthropy's biggest wins
• Why Open Philanthropy engages in 'worldview diversification' by having both a global health and wellbeing programme and a longtermist programme
• Whether funding science and political advocacy is a good way to have more social impact
• Whether our effects on future generations are predictable or unforeseeable
• What problems the global health and wellbeing team works to solve and why
• Opportunities to work at Open Philanthropy

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Jun 29, 2021 • 2h 21min

#104 – Pardis Sabeti on the Sentinel system for detecting and stopping pandemics

When the first person with COVID-19 went to see a doctor in Wuhan, nobody could tell that it wasn't a familiar disease like the flu — that we were dealing with something new. How much death and destruction could we have avoided if we'd had a hero who could?

That's what the last Assistant Secretary of Defense Andy Weber asked on the show back in March. Today's guest Pardis Sabeti is a professor at Harvard, fought Ebola on the ground in Africa during the 2014 outbreak, runs her own lab, co-founded a company that produces next-level testing, and is even the lead singer of a rock band. If anyone is going to be that hero in the next pandemic, it just might be her.

Links to learn more, summary and full transcript.

She is a co-author of the SENTINEL proposal, a practical system for detecting new diseases quickly, using an escalating series of three novel diagnostic techniques.

The first method, called SHERLOCK, uses CRISPR gene editing to detect familiar viruses in a simple, inexpensive filter paper test, using non-invasive samples. If SHERLOCK draws a blank, we escalate to the second step, CARMEN, an advanced version of SHERLOCK that uses microfluidics and CRISPR to simultaneously detect hundreds of viruses and viral strains. More expensive, but far more comprehensive. If neither SHERLOCK nor CARMEN detects a known pathogen, it's time to pull out the big gun: metagenomic sequencing. More expensive still, but sequencing all the DNA in a patient sample lets you identify and track every virus — known and unknown. (A toy sketch of this escalation logic in code appears below.)

If Pardis and her team succeed, a potential patient zero in a future pandemic may:

1. Go to the hospital with flu-like symptoms, and immediately be tested using SHERLOCK — which will come back negative
2. Take the CARMEN test for a much broader range of illnesses — which will also come back negative
3. Their sample will be sent for metagenomic sequencing, which will reveal that they're carrying a new virus we'll have to contend with
4. At all levels, information will be recorded in a cloud-based data system that shares data in real time; the hospital will be alerted and told to quarantine the patient
5. The world will be able to react weeks — or even months — faster, potentially saving millions of lives

It's a wonderful vision, and one humanity is ready to test out. But there are all sorts of practical questions, such as:

• How do you scale these technologies, including to remote and rural areas?
• Will doctors everywhere be able to operate them?
• Who will pay for it?
• How do you maintain the public's trust and protect against misuse of sequencing data?
• How do you avoid drowning in the data the system produces?
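Here is that toy sketch of the cheap-to-expensive escalation flow in Python. The function names, inputs, and outputs are hypothetical stand-ins (the real tests are wet-lab assays and shared data infrastructure, not function calls); the point is only to make the triage logic described above explicit.

```python
# Toy sketch of the escalating SHERLOCK -> CARMEN -> sequencing workflow.
# Function names, inputs, and outputs are hypothetical illustrations only.

def run_sherlock(sample):
    """Cheap, fast paper-based CRISPR test for a handful of familiar viruses."""
    return sample.get("common_known_virus")        # e.g. "influenza A", or None

def run_carmen(sample):
    """Broader microfluidic CRISPR panel covering hundreds of known viruses."""
    return sample.get("rare_known_virus")          # or None

def run_metagenomic_sequencing(sample):
    """Sequence everything in the sample; can surface entirely novel pathogens."""
    return sample.get("sequencing_hit", "unidentified novel pathogen")

def diagnose(sample, alert):
    # Step 1: cheapest test for the most familiar suspects.
    result = run_sherlock(sample)
    if result is None:
        # Step 2: more expensive, far more comprehensive known-pathogen panel.
        result = run_carmen(sample)
    if result is None:
        # Step 3: the big gun -- sequence all nucleic acids, known and unknown.
        result = run_metagenomic_sequencing(sample)
    # At every level, findings go to a shared, real-time surveillance system.
    alert(result)
    return result

# Hypothetical patient zero: SHERLOCK and CARMEN draw blanks, sequencing hits.
diagnose({"sequencing_hit": "novel coronavirus-like sequence"}, alert=print)
```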
In this conversation Pardis and Rob address all those questions, as well as:

• Pardis' history with trying to control emerging contagious diseases
• The potential of mRNA vaccines
• Other emerging technologies
• How to best educate people about pandemics
• The pros and cons of gain-of-function research
• Turning mistakes into exercises you can learn from
• Overcoming enormous life challenges
• Why it's so important to work with people you can laugh with
• And much more

Chapters:
• The interview begins (00:01:40)
• Trying to control emerging contagious diseases (00:04:36)
• SENTINEL (00:15:31)
• SHERLOCK (00:25:09)
• CARMEN (00:36:32)
• Metagenomic sequencing (00:51:53)
• How useful these technologies could be (01:02:35)
• How this technology could apply to the US (01:06:41)
• Failure modes for this technology (01:18:34)
• Funding (01:27:06)
• mRNA vaccines (01:31:14)
• Other emerging technologies (01:34:45)
• Operation Outbreak (01:41:07)
• COVID (01:49:16)
• Gain-of-function research (01:57:34)
• Career advice (02:01:47)
• Overcoming big challenges (02:10:23)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Jun 21, 2021 • 2h 22min

#103 – Max Roser on building the world's best source of COVID-19 data at Our World in Data

History is filled with stories of great people stepping up in times of crisis. Presidents averting wars; soldiers leading troops away from certain death; data scientists sleeping on the office floor to launch a new webpage a few days sooner.

That last one is barely a joke — by our lights, people like today's guest Max Roser should be viewed with similar admiration by historians of COVID-19.

Links to learn more, summary and full transcript.

Max runs Our World in Data, a small education nonprofit which began the pandemic with just six staff. But since last February his team has supplied essential COVID statistics to over 130 million users — among them the BBC, The Financial Times, The New York Times, the OECD, the World Bank, the IMF, Donald Trump, Tedros Adhanom, and Dr. Anthony Fauci, just to name a few.

An economist at Oxford University, Max Roser founded Our World in Data as a small side project in 2011 and has led it ever since, including through the wild ride of 2020. In today's interview Max explains how he and his team realized that if they didn't start making COVID data accessible and easy to make sense of, it wasn't clear when anyone would.

Our World in Data wasn't naturally set up to become the world's go-to source for COVID updates. Up until then their specialty had been long articles explaining century-length trends in metrics like life expectancy — to the point that their graphing software was only set up to present yearly data.

But the team eventually realized that the World Health Organization was publishing numbers that flatly contradicted themselves, most of the press was embarrassingly out of its depth, and countries were posting case data as images buried deep in their sites where nobody would find them. Even worse, nobody was reporting or compiling how many tests different countries were doing, rendering all those case figures largely meaningless.

Trying to make sense of the pandemic was a time-consuming nightmare. If you were leading a national COVID response, learning what other countries were doing and whether it was working would take weeks of study — and that meant, with the walls falling in around you, it simply wasn't going to happen. Ministries of health around the world were flying blind.

Disbelief ultimately turned to determination, and the Our World in Data team committed to do whatever had to be done to fix the situation. Their software was quickly redesigned to handle daily data, and for the next few months Max and colleagues like Edouard Mathieu and Hannah Ritchie did little but sleep and compile COVID data.

In this episode Max tells the story of how Our World in Data ran into a huge gap that never should have been there in the first place — and how they had to do it all again in December 2020 when, eleven months into the pandemic, there was nobody to compile global vaccination statistics.

We also talk about:

• Our World in Data's early struggles to get funding
• Why government agencies are so bad at presenting data
• Which agencies did a good job during the COVID pandemic (shout out to the European CDC)
• How much impact Our World in Data has by helping people understand the world
• How to deal with the unreliability of development statistics
• Why research shouldn't be published as a PDF
• Why academia under-incentivises data collection
• The history of war
• And much more

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Sofia Davis-Fogel
Jun 11, 2021 • 3h 57min

#102 – Tom Moynihan on why prior generations missed some of the biggest priorities of all

It can be tough to get people to truly care about reducing existential risks today. But spare a thought for the longtermists of the 17th century: they were surrounded by people who thought extinction was literally impossible.

Today's guest Tom Moynihan, intellectual historian and author of the book X-Risk: How Humanity Discovered Its Own Extinction, says that until the 18th century almost everyone — including early atheists — couldn't imagine that humanity or life could simply disappear because of an act of nature.

Links to learn more, summary and full transcript.

This is largely because of the prevalence of the 'principle of plenitude', which Tom defines as saying: "Whatever can happen will happen. In its stronger form it says whatever can happen will happen reliably and recurrently. And in its strongest form it says that all that can happen is happening right now. And that's the way things will be forever."

This has the implication that if humanity ever disappeared for some reason, then it would have to reappear. So why would you ever worry about extinction?

Here are four more commonly held beliefs from generations past that Tom shares in the interview:

• All regions of matter that can be populated will be populated: in other words, there are aliens on every planet, because it would be a massive waste of real estate if all of them were just inorganic masses where nothing interesting was going on. This also led to the idea that if you dug deep into the Earth, you'd potentially find thriving societies.
• Aliens were human-like, and shared the same values as us: they would have the same moral beliefs, and the same aesthetic beliefs. The idea that aliens might be very different from us only arrived in the 20th century.
• Fossils were rocks that had gotten a bit too big for their britches and were trying to act like animals: they couldn't actually move, so becoming an imprint of an animal was the next best thing.
• All future generations were contained in miniature form, Russian-doll style, in the sperm of the first man: preformation was the idea that within the ovule or the sperm of an animal is contained its offspring in miniature form, and the French philosopher Malebranche said, well, if one is contained in the other, then surely that goes on forever.

And here are another three that weren't held widely, but were proposed by scholars and taken seriously:

• Life preceded the existence of rocks: living things, like clams and mollusks, came first, and they extruded the earth.
• No idea can be wrong: nothing we can say about the world is wrong in a strong sense, because at some point in the future or the past it has been true.
• Maybe we are living before the Trojan War: Aristotle said that we might actually be living before Troy, because it — like every other event — will repeat at some future date. And he said that the set of possibilities might be so narrow that it might be safer to say we actually live before Troy.

But Tom tries to be magnanimous when faced with these incredibly misguided worldviews.
In this nearly four-hour-long interview, Tom and Rob cover all of these ideas, as well as:

• How we know people really believed such things
• How we moved on from these theories
• How future intellectual historians might view our beliefs today
• The distinction between 'apocalypse' and 'extinction'
• Utopias and dystopias
• Big ideas that haven't flowed through into all relevant fields yet
• Intellectual history as a possible high-impact career
• And much more

Chapters:
• Rob's intro (00:00:00)
• The interview begins (00:01:45)
• Principle of Plenitude (00:04:02)
• How do we know they really believed this? (00:13:20)
• Religious conceptions of time (00:24:01)
• How to react to wacky old ideas (00:29:18)
• The Copernican revolution (00:36:55)
• Fossils (00:42:30)
• How we got past these theories (00:51:19)
• Intellectual history (01:01:45)
• Future historians looking back to today (01:13:11)
• Could plenitude actually be true? (01:27:38)
• What is vs. what ought to be (01:36:43)
• Apocalypse vs. extinction (01:45:56)
• The history of probability (02:00:52)
• Utopias and dystopias (02:12:11)
• How Tom has changed his mind since writing the book (02:28:58)
• Are we making progress? (02:35:00)
• Big ideas that haven't flowed through to all relevant fields yet (02:52:07)
• Failed predictions (02:59:01)
• Intellectual history as high-impact career (03:06:56)
• Communicating progress (03:15:07)
• What careers in history actually look like (03:23:03)
• Tom's next major project (03:43:06)
• One of the funniest things past generations believed (03:51:50)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
May 28, 2021 • 1h 36min

#101 – Robert Wright on using cognitive empathy to save the world

In 2003, Saddam Hussein refused to let Iraqi weapons scientists leave the country to be interrogated. Given the overwhelming domestic support for an invasion at the time, most key figures in the U.S. took that as confirmation that he had something to hide — probably an active WMD program.

But what about alternative explanations? Maybe those scientists knew about past crimes. Or maybe they'd defect. Or maybe giving in to that kind of demand would have humiliated Hussein in the eyes of enemies like Iran and Saudi Arabia.

According to today's guest Robert Wright, host of the popular podcast The Wright Show, these are the kinds of things that might have come up if people had been willing to look at things from Saddam Hussein's perspective.

Links to learn more, summary and full transcript.

He calls this 'cognitive empathy'. It's not feeling-your-pain-type empathy — it's just trying to understand how another person thinks. He says if you had pitched this kind of thing back in 2003 you'd have been shouted down as a 'Saddam apologist' — and he thinks the same is true today when it comes to regimes in China, Russia, Iran, and North Korea.

The two Roberts in today's episode — Bob Wright and Rob Wiblin — agree that removing this taboo against perspective taking, even with people you consider truly evil, could significantly improve discourse around international relations. They feel that if we could spread the idea that it's worth understanding what dictators are thinking and calculating, based on their country's history and interests, we'd be less likely to make terrible foreign policy errors.

But how do you actually do that? Bob's new 'Apocalypse Aversion Project' is focused on creating the necessary conditions for solving non-zero-sum global coordination problems, something most people are already on board with. In particular, he thinks that might come from enough individuals "transcending the psychology of tribalism". He doesn't just mean rage, hatred, and violence; he's also talking about cognitive biases.

Bob makes the striking claim that if enough people in the U.S. had been able to combine perspective taking with mindfulness — the ability to notice and identify thoughts as they arise — then the U.S. might even have been able to avoid the invasion of Iraq.

Rob pushes back on how realistic this approach really is, asking questions like:

• Haven't people been trying to do this since the beginning of time?
• Is there a great novel angle that will change how a lot of people think and behave?
• Wouldn't it be better to focus on a much narrower task, like getting more mindfulness, meditation, and reflectiveness among the U.S. foreign policy elite?

But despite the differences in approach, Bob has a lot of common ground with 80,000 Hours — and the result is a fun back-and-forth about the best ways to achieve shared goals. Bob starts by questioning Rob about effective altruism, and they go on to cover a bunch of other topics, such as:

• Specific risks like climate change and new technologies
• How to achieve social cohesion
• The pros and cons of society-wide surveillance
• How Rob got into effective altruism

If you're interested to hear more of Bob's interviews you can subscribe to The Wright Show anywhere you're getting this one. You can also watch videos of this and all his other episodes on Bloggingheads.tv.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
May 19, 2021 • 2h 51min

#100 – Having a successful career with depression, anxiety and imposter syndrome

Today's episode is one of the most remarkable and, really, unique pieces of content we've ever produced (and I can say that because I had almost nothing to do with making it!). The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career.

While depression, anxiety, ADHD and other problems are extremely common, it's rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so.

Links to learn more, summary and full transcript.

The first half of this conversation is a searingly honest account of Howie's story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today.

The second half covers Howie's advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort. Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better.

Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes.

Howie and Keiran basically treated it like a private conversation, with the understanding that it might be too sensitive to release. But, after getting some really positive feedback, they've decided to share it with the world.

We hope that the episode will:

1. Help people realise that they have a shot at making a difference in the future, even if they're experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles.
2. Give insight into what it's like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and to know how to react more helpfully.

So we think this episode will be valuable for:

• People who have experienced mental health problems or might in future;
• People who have had troubles with stress, anxiety, low mood, low self esteem, and similar issues, even if their experience isn't well described as 'mental illness';
• People who have never experienced these problems but want to learn about what it's like, so they can better relate to and assist family, friends or colleagues who do.

In other words, we think this episode could be worthwhile for almost everybody.

Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts. If you don't want to hear the most intense section, you can skip the chapter called 'Disaster' (44–57mins). And if you'd rather avoid almost all of these references, you could skip straight to the chapter called '80,000 Hours' (1hr 11mins).
If you're feeling suicidal or have thoughts of harming yourself right now, help is available from the National Suicide Prevention Lifeline in the U.S. (800-273-8255) and Samaritans in the U.K. (116 123).

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
