
80,000 Hours Podcast
#191 (Part 2) – Carl Shulman on government and society after AGI
Podcast summary created with Snipd AI
Quick takeaways
- AI advisors can offer credible advice on critical issues impacting society.
- Trustworthy AI can drive systematic improvements in governance and decision-making processes.
- AI's forecasting capabilities could lead to significant advances in policy and governance.
- AI can provide valuable insights and unbiased estimates in areas that depend on subjective judgment.
- Equipping AI systems in security forces to resist illegal orders is crucial for upholding democracy and the rule of law.
- Building trust in AI's forecasts and ensuring transparency are essential for its acceptance and utilization.
Deep dives
AI Could Automate Technological and Industrial Change
AI is poised to dramatically intensify technological, industrial, and economic change by automating the processes that improve AI itself and develop other technologies. That power raises concerns about AI takeover and the undermining of nuclear deterrence. Swift progress on aligning AI safely is crucial, given the risks of AI hacking systems and compromising safety measures. Political momentum for AI regulation grows as evidence of AI's advanced capabilities becomes clear.
AI's Impact on Government and Politics
Trustworthy superhuman AI advisors could revolutionize governance by providing credible advice on critical issues. AI advisors could reshape responses to crises like the COVID-19 pandemic. Risks emerge from using AI to lock in societal values, potentially hindering future change. Preventing coups becomes harder once AI is integrated into the military and police; international treaties may be needed to manage this transition.
AI Forecasting and Societal Implications
AI's forecasting abilities could drive significant societal advances, especially in policy and governance. Reliable AI predictions can inform decision-making and improve policy effectiveness. How trustworthy an AI system is perceived to be plays a crucial role in how its forecasts are received and acted upon. AI's influence on politics and policy could bring systematic improvements to decision-making processes.
Applications of AI in Philosophy and Hard Sciences
AI could aid inquiry in both philosophy and the hard sciences, advancing our understanding of complex subjective questions as well as objective scientific principles. In areas where human judgment is subjective, AI can provide valuable insights and unbiased estimates. AI could prove revolutionary in resolving factual disputes and improving decision-making across many domains.
Ensuring Ethical AI Use in Security Forces
Ensuring that AI systems in security forces adhere to ethical principles and reject illegal orders is critical for maintaining the rule of law and democracy. AI systems must be equipped to resist unlawful commands and to thwart attempts to misuse them for illegal activities such as coups. Stricter oversight and joint decision-making in AI development are necessary to guarantee alignment with constitutional values and pluralistic interests.
Impacts of AI on Future Predictions and Policy Decision-Making
AI's advanced forecasting capabilities can lead to substantial improvements in predicting future events and shaping policy decisions. The ability of AI to provide accurate and reliable predictions can transform governance, politics, and societal decision-making processes. Building trust in AI's forecasts and ensuring its transparency are crucial factors influencing its acceptance and utilization in diverse domains.
Advancements in Epistemological Capabilities
Society's epistemic capabilities have advanced dramatically with the development of science. While scholars once tended to defend dogmatic beliefs, the scientific revolution marked substantial progress, shifting societal beliefs and policymaking and demonstrating how truth-seeking institutions can reshape societies.
Truth-Tracking in Media
Media organizations adhere to journalistic norms to maintain credibility and truthfulness, reducing the likelihood of spreading misinformation. The reputation and credibility of news outlets play a crucial role in influencing public beliefs. Institutions with truth-tracking properties hold significant power, guiding societies towards more factual and informed thinking.
Significance of Trustworthy AI Models
Developing AI models that prioritize honesty and reliability is crucial in fostering trust among users. Ensuring AI models are transparent, auditable, and free from biases or backdoors builds credibility in their outputs. Trustworthy AI systems enable diverse factions to align on shared factual grounds, potentially bridging political or philosophical divides.
Challenges and Opportunities in AI Regulation
AI regulation and deployment pose significant challenges amid widely varying levels of concern about AI misuse and rogue AI. Efforts to supervise AI advances, ensure safety, and establish international agreements on AI governance are critical to mitigating potential threats and fostering cooperative AI development.
Towards Responsible AI Development
Promoting responsible AI development involves enhancing safety measures, fostering transparency, and advancing tools for evaluating AI integrity. Embracing a collaborative approach to addressing AI risks and ensuring alignment with ethical standards can lead to more secure and beneficial AI applications.
Potential Impact of AI Epistemic Revolution
Embracing an AI epistemic revolution entails leveraging AI tools to improve information accuracy, trustworthiness, and decision-making transparency. Integrating robust AI-based fact-checking mechanisms, promoting ethical AI development, and enhancing data integrity can pave the way for a more informed and accountable information ecosystem.
Navigating AI Policy Challenges
Addressing AI policy challenges requires a multifaceted approach encompassing technical advancements, regulatory frameworks, and international cooperation. Balancing innovation with risk mitigation strategies, promoting AI transparency, and fostering interdisciplinary collaborations are essential for steering AI development towards responsible and beneficial outcomes.
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!
If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?
It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.
Links to learn more, highlights, and full transcript.
As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.
If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.
That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.
Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.
To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can imagine.
In the past we've usually found it easier to predict how hard technologies like planes or factories will develop than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.
Carl Shulman and host Rob Wiblin discuss the above, as well as:
- The risk of society using AI to lock in its values.
- The difficulty of preventing coups once AI is key to the military and police.
- What international treaties we need to make this go well.
- How to make AI superhuman at forecasting the future.
- Whether AI will be able to help us with intractable philosophical questions.
- Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
- Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'
- Opportunities for listeners to contribute to making the future go well.
Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:01:16)
- The interview begins (00:03:24)
- COVID-19 concrete example (00:11:18)
- Sceptical arguments against the effect of AI advisors (00:24:16)
- Value lock-in (00:33:59)
- How democracies avoid coups (00:48:08)
- Where AI could most easily help (01:00:25)
- AI forecasting (01:04:30)
- Application to the most challenging topics (01:24:03)
- How to make it happen (01:37:50)
- International negotiations, coordination, and auditing (01:43:54)
- Opportunities for listeners (02:00:09)
- Why Carl doesn't support enforced pauses on AI research (02:03:58)
- How Carl is feeling about the future (02:15:47)
- Rob’s outro (02:17:37)
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore