80,000 Hours Podcast

Rob, Luisa, and the 80000 Hours team

40 snips
Mar 24, 2023 • 2h 38min

#147 – Spencer Greenberg on stopping valueless papers from getting into top journals

Spencer Greenberg, a social scientist and entrepreneur, dives into the flaws in social science research, revealing that nearly 40% of studies cannot be replicated. He discusses p-hacking, where researchers tweak their analyses until they produce statistically significant results, and publication bias, where journals favor positive findings over null results. Greenberg also introduces p-curve analysis, a technique for assessing the integrity of a body of research, and explores the implications of flawed findings for policy-making and ethical research practices.

30 snips
Mar 14, 2023 • 3h 13min

#146 – Robert Long on why large language models like GPT (probably) aren't conscious

In this discussion, Robert Long, a philosophy fellow at the Center for AI Safety, examines the contentious topic of AI consciousness. He explains why large language models like GPT likely aren’t sentient entities despite their complex outputs. Long emphasizes the differences between human cognition and AI processing, exploring ethical implications of creating potentially conscious machines. He also addresses the philosophical dilemmas surrounding AI's ability to experience pain and pleasure, urging a cautious approach as we navigate the future of artificial intelligence.

36 snips
Feb 11, 2023 • 2h 42min

#145 – Christopher Brown on why slavery abolition wasn't inevitable

Christopher Brown, a Columbia University history professor and author of "Moral Capital," challenges the belief that the abolition of slavery was inevitable. He discusses how various economic influences shaped attitudes towards slavery and draws parallels to modern struggles against fossil fuel dependence. Brown also highlights the role of the Quakers in the anti-slavery movement, emphasizing the complexities of faith in moral debates. Through historical insights, he argues for a nuanced understanding of moral progress and the roles individuals played in societal transformations.

26 snips
Jan 26, 2023 • 3h 16min

#144 – Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena

Athena Aktipis, an associate professor at Arizona State University and author of "The Cheating Cell", explores cancer as a breakdown in cellular cooperation. She explains how cancer cells act in their own self-interest, disrupting the cooperation a functional multicellular body depends on. The conversation touches on evolutionary pressures, the dynamics of cellular behavior, and innovative therapy strategies that focus on tumor control rather than eradication. Ultimately, Aktipis draws parallels between cancer biology and societal cooperation, with intriguing implications for our future.

41 snips
Jan 16, 2023 • 2h 36min

#79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

A.J. Jacobs, a New York Times bestselling author known for his humorous self-experimentation, shares his wild adventures in personal growth. He hilariously recounts his journey into radical honesty, emphasizes the importance of gratitude in everyday life, and explores how reframing global issues as puzzles could foster collaboration. His year of living biblically brought both insights and challenges, while his quest to build the largest family tree reveals the power of connection. Jacobs' engaging stories will leave you both entertained and inspired.

23 snips
Jan 9, 2023 • 2h 37min

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Join Ben Garfinkel, a Research Fellow at Oxford's Future of Humanity Institute, as he dives into the complex world of artificial intelligence risk. Garfinkel argues that classic AI risk narratives may be overstated, calling for more rigorous scrutiny. He challenges perceptions around the governance of AI, emphasizing the importance of ethical frameworks and the potential consequences of misaligned AI objectives. With insights on historical parallels and funding disparities in AI safety, this conversation is a crucial exploration of our AI-driven future.

Jan 4, 2023 • 2h 18min

#83 Classic episode - Jennifer Doleac on preventing crime without police and prisons

Jennifer Doleac, an Associate Professor of Economics at Texas A&M, discusses innovative crime prevention strategies that don't rely on police or prisons. She shares how improved street lighting can reduce crime rates by making criminals feel more exposed. Additionally, she delves into the benefits of cognitive behavioral therapy and lead reduction as proactive measures. Jennifer's fascinating insights include using daylight saving time changes as a natural experiment to assess how light levels impact criminal behavior, revealing compelling data on crime patterns.

23 snips
Dec 29, 2022 • 2h 40min

#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons

Jeffrey Lewis, director of the East Asia Nonproliferation Program and founder of the Arms Control Wonk blog, breaks down the myth of 'mutually assured destruction' in nuclear strategy. He explains how U.S. military plans aim for dominance in a nuclear war, criticizing the flawed logic behind this approach. Lewis discusses the complexities of decision-making in nuclear policy, highlighting internal conflicts and the barriers to open dialogue within the nuclear community. He also calls for a reevaluation of strategies, especially regarding China and North Korea.

53 snips
Dec 20, 2022 • 1h 48min

#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

In this fascinating conversation, John McWhorter, a Columbia University linguistics professor and prolific author, dives into the intriguing world of language. He shares insights on the cognitive benefits of bilingualism and debunks myths about intelligence. McWhorter discusses the urgency of preserving endangered languages and the dynamics of creole language development. The conversation also explores how AI may reshape language communication and the role of filler words in speech, revealing the complexities behind how we connect through language.

140 snips
Dec 13, 2022 • 2h 44min

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

In this discussion, Richard Ngo, a researcher at OpenAI with a background at DeepMind, explores the world of large language models like ChatGPT. He delves into whether these models truly 'understand' language or merely simulate understanding. Richard emphasizes the importance of aligning AI with human values to mitigate risks as the technology advances. He also compares AI governance to the governance of nuclear weapons, highlighting the need for effective regulation to ensure safety and transparency in AI applications. This conversation sheds light on the profound implications of AI for society.
