
80,000 Hours Podcast
#200 – Ezra Karger on what superforecasters and experts think about existential risks
Podcast summary created with Snipd AI
Quick takeaways
- The Existential Risk Persuasion Tournament used novel methods to reveal how differently groups of forecasters assess catastrophic risks.
- Superforecasters' estimates of catastrophic risk were markedly lower than the more alarming projections from concerned experts.
- Discussions of AI risk showed a stark divergence in outlook, ranging from skepticism to urgency about potential existential threats.
- Participants identified crucial questions about the autonomy of AI systems that strongly shape their views on global governance and risk management.
- Even after lengthy dialogue, entrenched beliefs about existential risks persisted, underlining how hard it is to reach consensus on issues like AI.
- Future research should improve forecasting accuracy and decision-making methods to better navigate the uncertainties around existential risks, including AI.
Deep dives
AI Risk and Human Extinction Probability
The discussion outlines concerns about the existential risks posed by artificial intelligence, particularly the possibility that AI causes human extinction. Over very long horizons, forecasters worried about AI risk estimated roughly a 40% chance of such outcomes within the next millennium, while skeptics put it at roughly 30%. That long-run convergence points to a shared recognition that AI could eventually pose extinction-level risk; the substantive disagreement is over timelines, urgency, and how likely catastrophe is in the nearer term.
Existential Risk Persuasion Tournament (XPT)
The Existential Risk Persuasion Tournament surveyed numerous subject-matter experts, superforecasters, and members of the public to better understand differing perceptions of existential risks. The tournament was a step toward a more systematic picture of how variably people assess these low-probability events. Its methodology used novel techniques to encourage careful consideration of risks and let participants articulate how their forecasting processes differed. The initiative both illuminated existing gaps in knowledge and aimed to improve the discourse around existential threats.
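As a purely illustrative aside (not a description of the XPT's actual aggregation method, which isn't detailed in these notes), the sketch below shows two common ways one might pool individual probability forecasts into a group estimate; the input forecasts are made up.

```python
# Illustrative only: two common ways to pool individual probability forecasts
# into a single group estimate. The XPT's actual aggregation scheme is not
# described in these notes, and the input numbers below are invented.
import math
import statistics

def pool_forecasts(probs: list[float]) -> dict[str, float]:
    """Return the median and the geometric-mean-of-odds pooled forecast."""
    median = statistics.median(probs)
    # The geometric mean of odds is often preferred for low-probability
    # questions, since it is less dominated by one high outlier than the mean.
    odds = [p / (1 - p) for p in probs]
    geo_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return {"median": median, "geo_mean_odds": geo_odds / (1 + geo_odds)}

# Hypothetical forecasts for one low-probability question.
print(pool_forecasts([0.001, 0.005, 0.02, 0.10]))
```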
Differing Perspectives on AI Forecasting
The podcast discusses the stark contrast between forecasters concerned about AI risks and those skeptical of such threats. The concerned group saw these risks as urgent and requiring preventative measures now, while skeptics viewed them as more gradual and manageable over time. This divergence suggests underlying philosophical differences about the nature of risk, humanity's ability to adapt, and the timeline of AI development. The discord underscores how hard it is to reach consensus on AI-related existential threats and why understanding differing worldviews matters.
Role of Superforecasters in Risk Assessment
Superforecasters played a key role in the discussions, offering insights into risk assessment grounded in their track record of accurately predicting geopolitical events. Their involvement highlighted the balance between optimism and realism around AI risks. The contrast between their measured, lower estimates and the heightened projections from concerned forecasters produced a revealing friction. This dynamic raises important questions about how well the community understands the complexities of risk and how those nuances are communicated in public discourse.
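To make "track record" concrete: the Brier score is a standard way to score probability forecasts on resolved binary questions. The short sketch below is a generic illustration with invented numbers, not the tournament's scoring system.

```python
# Brier score: mean squared error between a stated probability and the 0/1
# outcome. Lower is better; always answering 50% scores 0.25. The forecasts
# and outcomes below are invented for illustration.
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

well_calibrated = brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0])  # 0.0375
less_accurate   = brier_score([0.9, 0.6, 0.5, 0.4], [1, 0, 1, 0])  # 0.195
print(well_calibrated, less_accurate)
```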
Insights on AI Risk through Discourse
As discussions of AI risk evolved, both groups engaged in extended dialogues that revealed perspectives shaped by many factors, including historical precedent. They explored potential accelerants of catastrophe, such as wars or breakthroughs in lethal technologies, and how progress in AI capabilities could sharply alter societal resilience or fragility. This discourse surfaced not only group-specific worries but also shared fears about humanity's uncertain future, and it underscored the need for collaboration to navigate these complexities and inform decision-making on AI policy and oversight.
Influences on Long-term AI Perspectives
The podcast further notes that all participants expect advanced AI to emerge, though they differ on its implications. Skeptics believe in human and institutional resilience, anticipating little change even if potentially dangerous technology emerges. Those concerned about AI instead expect advances to pose significant risks, particularly in abrupt scenarios that could overwhelm public and private safety measures. This dichotomy underscores the fundamental differences in long-term views of technology development and its societal impacts.
Cruxes Identifying Essential Uncertainties
Both groups identified specific cruxes (critical questions whose answers could drive different outcomes) that significantly informed their perspectives on AI risk. Two important cruxes concerned the autonomy of powerful AI systems and their implications for global governance. These cruxes recurred across discussions, underscoring their importance for understanding how each group weighed potential scenarios of AI development. Engaging with these cruxes revealed how participants think AI may alter geopolitical dynamics, which in turn shaped their views on risk and on what actions are needed.
Collaboration and Disagreement Persistence
Despite eight weeks of extensive engagement among diverse groups with diverging beliefs about AI risk, significant convergence was not observed. The discussion environment fostered dialogue, but the fundamental disagreements remained intact, suggesting that entrenched views resist change. The skeptical group registered a minor increase in its forecast probability, while the concerned group's estimates dropped only slightly. This resistance to updating long-term beliefs underscores the difficulty of aligning perspectives on existential risk and the subtle barriers to consensus.
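One simple way to quantify this kind of (lack of) convergence is to compare each group's median forecast before and after the discussion period. The numbers in the sketch below are placeholders, not the study's actual figures.

```python
# Hypothetical illustration: measure how much each group's median forecast
# moved over the discussion period. All probabilities here are invented.
import statistics

def median_shift(before: list[float], after: list[float]) -> float:
    """Change in the group's median forecast (positive = risk estimate rose)."""
    return statistics.median(after) - statistics.median(before)

skeptics  = median_shift(before=[0.001, 0.002, 0.005], after=[0.002, 0.003, 0.006])
concerned = median_shift(before=[0.20, 0.25, 0.30], after=[0.18, 0.24, 0.29])
print(f"skeptics moved {skeptics:+.3f}, concerned moved {concerned:+.2f}")
```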
Challenges in Forecasting Human Behavior
Turning to forecasts about human behavior, the discussion highlighted that accurately predicting how individuals respond to uncertainty remains a significant challenge. By studying how expectations shift as new information arrives, researchers aim to improve forecasting accuracy in volatile domains. Despite these challenges, improving human performance through better methodologies, including expert elicitation, remains a promising avenue for future research. The goal is to identify mechanisms that let individuals and groups better navigate the uncertainties inherent in complex scenarios.
The Future of Forecasting Research
The conversation turns to the future of forecasting research, noting gaps in the empirical literature on decision-making. Training and deploying human forecasters not only improves accuracy but also builds firmer foundations for addressing complex questions. Using large language models as an adjunct to human judgment opens a promising avenue for blending human evaluative skill with AI capabilities, as sketched below. As the research advances, the hope is that these methods will yield actionable insights that policymakers can use when tackling existential concerns.
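As a hedged sketch of what "blending" human and model forecasts might look like in practice (this specific scheme and its weight are assumptions for illustration, not something proposed in the episode): average the two probabilities in log-odds space, with a tunable weight on the model.

```python
# A toy blend of a human crowd forecast with a language model's probability:
# a weighted average in log-odds space. The weighting scheme and the 0.3
# default weight are assumptions made for illustration.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def blend(human_p: float, model_p: float, model_weight: float = 0.3) -> float:
    z = (1 - model_weight) * logit(human_p) + model_weight * logit(model_p)
    return 1 / (1 + math.exp(-z))  # inverse logit back to a probability

print(blend(human_p=0.04, model_p=0.10))  # ~0.053, between the two inputs
```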
Library of Recommendations
Ezra shared book recommendations that resonate with themes of forecasting and risk. 'Moving Mars' is highlighted for its exploration of technological progress and human responses to crisis. 'The Second Kind of Impossible' offers a compelling narrative of scientific discovery, while 'The Rise and Fall of American Growth' examines growth patterns and economic drivers over time. Together, these titles offer insight into individual experiences of uncertainty as well as broader implications for forecasting in society.
"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger
In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks.
Links to learn more, highlights, and full transcript.
They cover:
- How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
- What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
- The challenges of predicting low-probability, high-impact events.
- Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
- The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
- Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
- Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
- Whether large language models could help or outperform human forecasters.
- How people can improve their calibration and start making better forecasts personally.
- Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
- And plenty more.
Chapters:
- Cold open (00:00:00)
- Luisa’s intro (00:01:07)
- The interview begins (00:02:54)
- The Existential Risk Persuasion Tournament (00:05:13)
- Why is this project important? (00:12:34)
- How was the tournament set up? (00:17:54)
- Results from the tournament (00:22:38)
- Risk from artificial intelligence (00:30:59)
- How to think about these numbers (00:46:50)
- Should we trust experts or superforecasters more? (00:49:16)
- The effect of debate and persuasion (01:02:10)
- Forecasts from the general public (01:08:33)
- How can we improve people’s forecasts? (01:18:59)
- Incentives and recruitment (01:26:30)
- Criticisms of the tournament (01:33:51)
- AI adversarial collaboration (01:46:20)
- Hypotheses about stark differences in views of AI risk (01:51:41)
- Cruxes and different worldviews (02:17:15)
- Ezra’s experience as a superforecaster (02:28:57)
- Forecasting as a research field (02:31:00)
- Can large language models help or outperform human forecasters? (02:35:01)
- Is forecasting valuable in the real world? (02:39:11)
- Ezra’s book recommendations (02:45:29)
- Luisa's outro (02:47:54)
Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore