
80,000 Hours Podcast

Latest episodes

May 13, 2021 • 2h 26min

#99 – Leah Garcés on turning adversaries into allies to change the chicken industry

Leah Garcés shares her experience of turning chicken farmers into allies, challenging industry norms, building relationships, and striving for better animal welfare. The episode delves into the complexities of the chicken industry, including farmers' struggles, ethical concerns, and strategic interventions. It explores the power of cooperative relationships, pricing strategies, and how to steer the food industry towards plant-based alternatives. The discussion covers the evolution of advocacy tactics, market analysis, corporate competitiveness, and progress in animal welfare campaigns globally. It also addresses transitioning farmers to plant-based agriculture, the rise of veganism in African American communities, and career advice for animal welfare advocates.
May 5, 2021 • 2h 38min

#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe, and always successful — but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful — you’ll be given a drug that makes you forget the experience. You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.” So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour. Which patient would you rather be? Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future. Christian Tarsney, a philosopher at Oxford University's Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences. Links to learn more, summary and full transcript. That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past? One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it. But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn't care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about! Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven't played yet are still on the way. If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction. It’s a live debate that’s playing out in the philosophy of time, as well as in physics. For Christian, there are two big practical implications of these past, present, and future ethical comparison cases. The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people's past goals, including the goals of people who are now dead. The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born? 
Christian and Rob also cover several other big topics, including: • A possible solution to moral fanaticism • How much of humanity's resources we should spend on improving the long-term future • How large the expected value of the continued existence of Earth-originating civilization might be • How we should respond to uncertainty about the state of the world • The state of global priorities research • And much more Producer: Keiran Harris. Audio mastering: Ryan Kessler. Transcriptions: Sofia Davis-Fogel.
Apr 20, 2021 • 2h 36min

#97 – Mike Berkowitz on keeping the US a liberal democratic country

Donald Trump’s attempt to overturn the results of the 2020 election split the Republican party. There were those who went along with it — 147 members of Congress raised objections to the official certification of electoral votes — but there were others who refused. These included Brad Raffensperger and Brian Kemp in Georgia, and Vice President Mike Pence. Although one could say that the latter Republicans showed great courage, the key to the split may lie less in differences of moral character or commitment to democracy, and more in what was being asked of them. Trump wanted the first group to break norms, but he wanted the second group to break the law. And while norms were indeed shattered, laws were upheld. Today’s guest, Mike Berkowitz, executive director of the Democracy Funders Network, points out a problem we came to realize throughout the Trump presidency: so many of the things that we thought were laws were actually just customs. Links to learn more, summary and full transcript. So once you have leaders who don’t buy into those customs — like, say, that a president shouldn’t tell the Department of Justice who it should and shouldn’t be prosecuting — there’s nothing preventing said customs from being violated. And what happens if current laws change? A recent Georgia bill took away some of the powers of Georgia's Secretary of State — Brad Raffensperger. Mike thinks that's clearly retribution for Raffensperger's refusal to overturn the 2020 election results. But he also thinks it means that the next time someone tries to overturn the results of an election, they could get much farther than Trump did in 2020. In this interview Mike covers what he thinks are the three most important levers to push on to preserve liberal democracy in the United States: 1. Reforming the political system, e.g. by introducing new voting methods 2. Revitalizing local journalism 3. Reducing partisan hatred within the United States Mike says that American democracy, like democracy elsewhere in the world, is not an inevitability. The U.S. has institutions that are really important for the functioning of democracy, but they don't automatically protect themselves — they need people to stand up and protect them. In addition to the changes listed above, Mike also thinks that we need to harden more norms into laws, such that individuals have fewer opportunities to undermine the system. And inasmuch as laws provided the foundation for the likes of Raffensperger, Kemp, and Pence to exhibit political courage, if we can succeed in creating and maintaining the right laws, we may see many others following their lead. As Founding Father James Madison put it: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.” Mike and Rob also talk about: • What sorts of terrible scenarios we should actually be worried about, i.e.
the difference between being overly alarmist and properly alarmist • How to reduce perverse incentives for political actors, including those to overturn election results • The best opportunities for donations in this space • And much more

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:02:01)
What we should actually be worried about (00:05:03)
January 6th, 2021 (00:11:03)
Trump’s defeat (00:16:44)
Improving incentives for representatives (00:30:55)
Signs of a loss of confidence in American democratic institutions (00:44:58)
Most valuable political reforms (00:54:39)
Revitalising local journalism (01:08:07)
Reducing partisan hatred (01:21:53)
Should workplaces be political? (01:31:40)
Mistakes of the left (01:36:50)
Risk of overestimating the problem (01:39:56)
Charitable giving (01:48:13)
How to shortlist projects (01:56:42)
Speaking to Republicans (02:04:15)
Patriots & Pragmatists and The Democracy Funders Network (02:12:51)
Rob’s outro (02:32:58)

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
Apr 15, 2021 • 3min

The ten episodes of this show you should listen to first

Today we're launching a new podcast feed that might be useful to you and people you know. It's called 'Effective Altruism: An Introduction', and it's a carefully chosen selection of ten episodes of this show, with various new intros and outros to guide folks through them. Basically, as the number of episodes of this show has grown, it has become less and less practical to ask new subscribers to go back and listen through most of our archives. So naturally new subscribers want to know... what should I listen to first? What episodes will help me make sense of effective altruist thinking and get the most out of new episodes? We hope that 'Effective Altruism: An Introduction' will fill in that gap. Across the ten episodes, we cover what effective altruism at its core really is, what folks who are tackling a number of well-known problem areas are up to and why, some more unusual and speculative problems, and how we and the rest of the team here try to think through difficult questions as clearly as possible. Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well. Another gap it might fill is in helping you recommend the show to people, or suggest a way to learn more about effective altruist style thinking to people who are curious about it. If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to. You can find it by searching for effective altruism in your podcasting app, or by going to 80000hours.org/intro. We'd love to hear how you go listening to it yourself, or sharing it with others in your life. Get in touch by emailing podcast@80000hours.org.
Apr 6, 2021 • 2h

#96 – Nina Schick on disinformation and the rise of synthetic media

You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.? Today’s guest, Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry for making a very sophisticated ‘deepfake’ video today are a lot higher than people think. But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that's currently only accessible to Hollywood studios. So is it just a matter of time until we’ll be right to be terrified of this stuff? Links to learn more, summary and full transcript. Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: “Everything exists within this information ecosystem, it encompasses everything.” We haven’t done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as: • Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source? • If Photoshop didn’t lead to total chaos, why should this be any different? But the grim reality is that if you wrote “I believe that the world will end on April 6, 2022” and pasted it next to a photo of Albert Einstein, a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive. She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied. Consider Trump’s infamous Access Hollywood tape. If that had happened in 2020 instead of 2016, he would almost certainly have claimed it was fake — and that claim wouldn’t be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say “that video is fake”? Nina says that, undeniably, this technology is going to give bad actors a lot of scope to avoid accountability for their actions. As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can't agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society? Nina and Rob also talk about a bunch of other topics, including: • The history of disinformation, and groups who sow disinformation professionally • How deepfake pornography is used to attack and silence women activists • The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes • Whether we should make it illegal to make a deepfake of someone without their permission • And the coolest positive uses of this technology

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:01:28)
Deepfakes (00:05:49)
The influence of synthetic media today (00:17:20)
The history of misinformation and disinformation (00:28:13)
Text vs. video (00:34:05)
Privacy (00:40:17)
Deepfake pornography (00:49:05)
Russia and other bad actors (00:58:38)
2016 vs. 2020 US elections (01:13:44)
Authoritarian regimes vs. liberal democracies (01:24:08)
Law reforms (01:31:52)
Positive uses (01:37:04)
Technical solutions (01:40:56)
Careers (01:52:30)
Rob’s outro (01:58:27)

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
Mar 26, 2021 • 1h 24min

#95 – Kelly Wanser on whether to deliberately intervene in the climate

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian Peninsula, chemically induced snow for skiers in Colorado. 100 years? 50 years? 20? Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well. Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate. Links to learn more, summary and full transcript. Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have. Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy. After considering everything from mirrors in space, to shiny objects on the ocean, to materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere. Cloud brightening is a climate control approach that sprays a fine mist of sea water into clouds to make them 'whiter', so they reflect even more sunlight back into space. These ‘streaks’ in clouds are already created by ships, because the particulates from their diesel engines inadvertently make clouds a bit brighter. Kelly says that scientists estimate that we're already lowering the global temperature this way by 0.5–1.1ºC, without even intending to. While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn't been funding to measure how much temperature change you get for a given amount of spray. And we won't want to dive into these methods head first, because the atmosphere is a complex system we can't yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming, the climate is surprisingly understudied. The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as: • That it's riskier than doing nothing • That it will inevitably be dangerously political • And the risk of the 'double catastrophe', where a pandemic stops our climate interventions and temperatures skyrocket at the worst time. Kelly and Rob also talk about: • The many climate interventions that are already happening • The most promising ideas in the field • And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
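The mechanism Kelly describes, reflecting a little more sunlight to offset warming, can be illustrated with a standard zero-dimensional energy-balance calculation. This is a generic textbook sketch, not SilverLining's modelling, and it computes the planet's effective emission temperature rather than its surface temperature; it simply shows how sensitive the equilibrium is to small changes in albedo.

```python
# Toy zero-dimensional energy-balance model (textbook illustration only;
# not SilverLining's modelling, and effective emission temperature is not
# the same thing as surface temperature).
SOLAR_CONSTANT = 1361.0   # W/m^2, incoming solar radiation at the top of the atmosphere
SIGMA = 5.670e-8          # W/m^2/K^4, Stefan-Boltzmann constant

def effective_temperature(albedo: float) -> float:
    """Equilibrium emission temperature for a given planetary albedo."""
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

baseline = effective_temperature(0.30)      # roughly 255 K at Earth's current albedo
brightened = effective_temperature(0.31)    # hypothetical 0.01 increase in albedo
print(f"Cooling from +0.01 albedo: {baseline - brightened:.2f} K")  # about 0.9 K
```

Even this toy calculation suggests why a small change in reflectivity matters, and also why the real, regional effects of cloud brightening need the kind of measurement work SilverLining is advocating for.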
Mar 20, 2021 • 1h 45min

#94 – Ezra Klein on aligning journalism, politics, and what matters most

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When people look back on this era, is the interesting thing going to have been fights over whether the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously? Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there's little pre-existing infrastructure to push them. Links to learn more, summary and full transcript. He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know will cover them (often these are journalists hired to write specifically about tax policy). To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there's very little infrastructure for thinking about it. There isn't a committee in Congress that primarily deals with AI, and no one has a dedicated AI position in the executive branch of the U.S. Government; nor are there big AI think tanks in D.C. producing weekly articles for journalists they know will report on them. All of this generates a strong 'path dependence' that can lock the media into covering less important topics despite having no intention to do so. According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important.” One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: “This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.” Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “...like catnip for readers.” Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can't make the audience interested in it, that is your failure — never the audience's failure. But is that really true?
In today’s episode we explore that claim, as well as: • How many hours of news the average person should consume • Where the progressive movement is failing to live up to its values • Why Ezra thinks 'price gouging' is a bad idea • Where the FDA has failed on rapid at-home testing for COVID-19 • Whether we should be more worried about tail-risk scenarios • And his biggest critiques of the effective altruism community Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
Mar 12, 2021 • 1h 54min

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

COVID-19 has provided a vivid reminder of the power of biological threats. But the threat doesn't come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United States, but developed in large numbers by the Soviet Union right up until its collapse — have the potential to spread globally and kill just as many as an all-out nuclear war. For five years today’s guest — Andy Weber — was the US Assistant Secretary of Defense responsible for biological and other weapons of mass destruction. While people primarily associate the Pentagon with waging wars (including most people within the Pentagon itself), Andy is quick to point out that you can't have national security if your population remains at grave risk from natural and lab-created diseases. Andy's current mission is to spread the word that while bioweapons are terrifying, scientific advances also leave them on the verge of becoming an outdated technology. Links to learn more, summary and full transcript. He thinks there is an overwhelming case to increase our investment in two new technologies that could dramatically reduce the risk of bioweapons, and end natural pandemics in the process. First, advances in genetic sequencing technology allow direct, real-time analysis of DNA or RNA fragments collected from the environment. You sample widely, and if you start seeing DNA sequences that you don't recognise — that sets off an alarm. Andy says that while desktop sequencers may be expensive enough that they're only in hospitals today, they're rapidly getting smaller, cheaper, and easier to use. In fact DNA sequencing has recently experienced the most dramatic cost decrease of any technology, declining by a factor of 10,000 since 2007. It's only a matter of time before they're cheap enough to put in every home. The second major breakthrough comes from mRNA vaccines, which are today being used to end the COVID pandemic. The wonder of mRNA vaccines is that they can instruct our cells to make any protein we choose — and trigger a protective immune response from the body. By using the sequencing technology above, we can quickly get the genetic code that matches the surface proteins of any new pathogen, and slot that code into the mRNA vaccines we're already making. Making a new vaccine would become less like manufacturing a new iPhone and more like printing a new book — you use the same printing press and just change the words. So long as we kept enough capacity to manufacture and deliver mRNA vaccines on hand, a whole country could in principle be vaccinated against a new disease in months. In tandem these technologies could make advanced bioweapons a threat of the past. And in the process contagious disease could be brought under control like never before. Andy has always been pretty open and honest, but his retirement last year has allowed him to stop worrying about being seen to speak for the Department of Defense, or for the president of the United States — and we were able to get his forthright views on a bunch of other interesting topics, such as: • The chances that COVID-19 escaped from a research facility • Whether a US president really can launch nuclear weapons unilaterally • What he thinks should be the top priorities for the Biden administration • The time he and colleagues found 600kg of unsecured, highly enriched uranium sitting around in a barely secured facility in Kazakhstan, and eventually transported it to the United States • And much more.
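Andy's "alarm" idea is that you flag any environmental sample whose sequences don't match anything already known. Below is a minimal toy sketch of that logic, using hypothetical example data and naive exact k-mer matching; it is purely illustrative and not how any real metagenomic surveillance pipeline is built.

```python
# Toy illustration of the "unrecognised sequence sets off an alarm" idea.
# Purely illustrative: real surveillance systems match against enormous
# reference databases and tolerate sequencing errors.

def kmers(seq: str, k: int = 8) -> set[str]:
    """All overlapping substrings of length k in a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_reference(known_genomes: list[str], k: int = 8) -> set[str]:
    """Pool the k-mers of every sequence we already recognise."""
    reference: set[str] = set()
    for genome in known_genomes:
        reference |= kmers(genome, k)
    return reference

def is_unfamiliar(read: str, reference: set[str], k: int = 8,
                  threshold: float = 0.5) -> bool:
    """Flag a read if most of its k-mers are absent from the reference."""
    read_kmers = kmers(read, k)
    if not read_kmers:
        return False
    unknown = sum(1 for km in read_kmers if km not in reference)
    return unknown / len(read_kmers) > threshold

# Hypothetical example data, not real genomes.
reference = build_reference(["ACGTACGTGGCCTTAACGTACGT", "TTGGCCAACGGTTACCGGTTAA"])
for read in ["ACGTACGTGGCCTTAA", "GGGGCCCCAAAATTTTGGGG"]:
    if is_unfamiliar(read, reference):
        print(f"ALARM: unrecognised sequence {read}")
```

The principle is the same at scale: sample widely, compare against what you already know, and treat anything unfamiliar as a signal worth investigating.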
Job opportunity: Executive Assistant to Will MacAskill

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
Mar 5, 2021 • 2h 56min

#92 – Brian Christian on the alignment problem

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science. Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer. Brian has so much of substance to say that this episode will likely be of interest to people who know a lot about AI as well as those who know a little, and of interest to people who are nervous about where AI is going as well as those who aren't nervous at all. Links to learn more, summary and full transcript. Here’s a tease of 10 Hollywood-worthy stories from the episode: • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience. • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early 1990s, using a computer with a tenth the processing capacity of an Apple Watch. • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen. • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net. • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die. • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible rather than reaching its purported destination. • Montezuma's Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple. • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies. • AlphaGo Zero: A computer program becomes superhuman at chess and Go in under a day by attempting to imitate itself. • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip. We also cover: • How reinforcement learning actually works, and some of its key achievements and failures • How a lack of curiosity can leave AIs unable to do basic things • The pitfalls of getting AI to imitate how we ourselves behave • The benefits of getting AI to infer what we must be trying to achieve • Why it’s good for agents to be uncertain about what they're doing • Why Brian isn’t that worried about explicit deception • The interviewees Brian most agrees with, and most disagrees with • Developments since Brian finished the manuscript • The effective altruism and AI safety communities • And much more Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
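For listeners who want a concrete sense of "how reinforcement learning actually works", here is a minimal sketch of tabular Q-learning on a toy corridor world. It is a generic textbook algorithm, not code from Brian's book or from any system discussed in the episode.

```python
import random

# Minimal tabular Q-learning on a 5-state corridor: start at state 0,
# reward 1 for reaching state 4, actions are move left (0) or right (1).
# A generic textbook sketch, not any system from the episode.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

def step(state: int, action: int) -> tuple[int, float, bool]:
    """Environment dynamics: returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

for episode in range(500):
    state = 0
    while True:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q towards reward plus discounted best future value.
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state
        if done:
            break

print(Q)  # 'move right' should score higher in every non-goal state
```

Scaled up with neural networks in place of the table, essentially this same learning rule underlies the Atari results mentioned above, and the reward-hacking stories show what happens when the reward an agent optimises isn't quite the thing we actually wanted.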
Feb 15, 2021 • 2h 33min

#91 – Lewis Bollard on big wins against factory farming and how they happened

I suspect today’s guest, Lewis Bollard, might be the single best person in the world to interview to get an overview of all the methods that might be effective for putting an end to factory farming and what broader lessons we can learn from the experiences of people working to end cruelty in animal agriculture. That's why I interviewed him back in 2017, and it's why I've come back for an updated second dose four years later. That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, what those people are up to, and why. Lewis leads Open Philanthropy’s strategy for farm animal welfare, and since he joined in 2015 they’ve disbursed about $130 million in grants to nonprofits as part of this program. This episode certainly isn't only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest. Links to learn more, summary and full transcript. Some of those include: • Between 2019 and 2020, Beyond Meat's cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so when? How quickly can it reach price parity? • One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later? • One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer? • Costco, the second largest retailer in the U.S., is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference? We also cover: • Switzerland’s ballot measure on eliminating factory farming • What a Biden administration could mean for reducing animal suffering • How chicken is cheaper than peanuts • The biggest recent wins for farmed animals • Things that haven’t gone to plan in animal advocacy • Political opportunities for farmed animal advocates in Europe • How the US is behind Brazil and Israel on animal welfare standards • The value of increasing media coverage of factory farming • The state of the animal welfare movement • And much more. If you’d like an introduction to the nature of the problem and why Lewis is working on it, in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy.

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:04:37)
Biggest recent wins for farmed animals (00:06:13)
How to lower the price of plant-based meat (00:24:57)
Documentaries for farmed animals (00:37:05)
Political opportunities (00:43:07)
Do we know how to get people to reduce their meat consumption? (00:45:03)
The fraction of Americans who don’t eat meat (00:52:17)
Surprising number of people who support a ban on animal farming (00:57:57)
What we’ve learned over the past four years (01:02:48)
Things that haven’t gone to plan (01:26:30)
Animal advocacy in emerging countries (01:34:44)
Fish, crustaceans, and wild animals (01:40:28)
Open Philanthropy grants (01:47:43)
Audience questions (01:59:29)
The elimination of slavery (02:10:03)
Careers (02:15:52)

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.

