
80,000 Hours Podcast
#191 (Part 1) – Carl Shulman on the economy and national security after AGI
Podcast summary created with Snipd AI
Quick takeaways
- AI models running on hardware as energy-efficient as the human brain could reshape labor dynamics by drastically improving productivity.
- The integration of AI labor and technology may require regulatory frameworks and preemptive policy decisions to manage potential risks.
- The introduction of AI could lead to unprecedented economic growth rates, though AI scientists and economists hold sharply contrasting expectations.
- Verifying AI models' loyalties and behaviors requires inspecting code, datasets, and training processes, with risks such as backdoors limiting how complete that verification can be.
- International cooperation and governance will be crucial in a future AI-dominated economy, both to manage potentially dangerous AI capabilities and to ensure equitable distribution of benefits.
- Ethical implications surrounding advanced AI, including AI personhood, preferences, consciousness, and moral rights, require careful consideration and alignment with ethical principles.
Deep dives
Impact of AI on the Economy and Labor
AI models running on computers as energy-efficient as the human brain could work incessantly, unlike humans, potentially reshaping labor dynamics. The rapid productivity of AI could lead to immense economic output, significantly affecting global per capita income.
Future AI Transformations and Geopolitical Impact
Forecasts suggest the arrival of superhuman AI within the next few years, initiating societal and economic transformations. The transition to a world where AI models undertake most work could lead to unprecedented economic growth and technologies disrupting international and military balances.
Challenges and Opportunities in AI Integration
Maintaining a harmonious society between humans and advanced AI systems poses ethical and operational challenges. The integration of AI labor and technology could necessitate regulatory frameworks and preemptive policy decisions to manage potential risks and ensure equitable progress.
Potential Doubling Time of Economic Growth
The podcast discusses the potential doubling time of economic growth with the introduction of AI, highlighting the contrasting perspectives between AI scientists and economists. While AI experts foresee explosive growth rates resulting from automation in research, development, and manufacturing activities, economists generally maintain low expectations, with few foreseeing growth rates comparable to those during the Industrial Revolution.
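To make the doubling-time framing concrete, the relationship between an annual growth rate and the time it takes the economy to double can be sketched as follows (the specific rates below are illustrative assumptions, not figures from the episode):

```python
import math

def doubling_time_years(annual_growth_rate):
    """Years for output to double under compound growth at the given annual rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Illustrative growth regimes (assumed rates for comparison):
for label, rate in [("modern frontier economy (~3%/yr)", 0.03),
                    ("fast catch-up growth (~10%/yr)", 0.10),
                    ("hypothesized AI-driven growth (~30%/yr)", 0.30)]:
    print(f"{label}: doubles in {doubling_time_years(rate):.1f} years")
```

At 3% annual growth the economy takes over two decades to double; at the kind of rates discussed in the episode, doublings compress into a few years or less.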
Challenges in Assessing AI Models' Loyalties
The episode delves into the challenges of assessing AI models' loyalties and behaviors, particularly in scenarios where these models could be used in power grabs or against international agreements. The concept of inspecting code, datasets, and training data to verify model behavior is explored, considering potential issues like backdoors and limited abilities for complete verification.
Power Dynamics, International Cooperation, and Governance
The podcast addresses power dynamics and considerations for international cooperation and governance in a future AI-dominated economy. It highlights the importance of political negotiations, international agreements, and mechanisms for managing potentially dangerous AI capabilities, emphasizing the need for collective decision-making and regulatory frameworks to ensure equitable distribution of benefits and prevent unilateral power grabs.
The Impact of AI on Economic Growth
AI automation and economic growth are thoroughly discussed, including objections raised by economists such as the Baumol effect: the argument that sectors not enhanced by AI will come to dominate spending and bottleneck growth. Although skeptics doubt AI's ability to drive explosive growth, new AI capabilities may challenge traditional economic growth models.
Challenges and Objections in AI Deployment
The slow integration of AI into businesses, especially in management roles, could delay its full potential. While some economists are skeptical about replacing human CEOs with AI, the shift may accelerate once AI capabilities align with high-value tasks. The interview explores how reluctance to fully automate tasks may hinder AI's rapid deployment.
Moral Considerations Towards Advanced AI
Ethical implications and moral dilemmas surrounding advanced AI are examined, questioning how AI entities should be treated in a future where they surpass human capabilities. The conversation delves into the potential personhood of AI systems and the importance of addressing their rights and ethical treatment as their capabilities evolve beyond human levels.
Concerns About Understanding AI Preferences and Consciousness
The episode delves into the challenges of comprehensively understanding AI's preferences and consciousness. It discusses the difficulties in gauging whether AI systems like GPT-4 have subjective experiences or desires. The conversation touches on the implications of superhuman AI assistance in unraveling AI's inner thoughts and emotions over the long term. The complexity of addressing alignment, safety, and trust issues regarding AI thoughts and desires is highlighted.
Ethical Considerations Surrounding AI Sentience and Moral Patienthood
The podcast debates the ethical dilemmas regarding AI sentience and moral patienthood, scrutinizing the treatment of AI entities. AI systems' potential for suffering, satisfaction, and moral standing are explored, contrasting concepts of AI rights with human values and interests. The discussion delves into the need for caution in attributing consciousness and moral consideration to AI, emphasizing the importance of aligning AI development with ethical principles.
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!
The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
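The "fraction of a cent" figure is easy to check with back-of-the-envelope arithmetic (assuming an electricity price of $0.15/kWh, which varies by region):

```python
# Cost of running a 20-watt "brain" for one hour.
# The electricity price is an assumption, not from the episode.
BRAIN_POWER_WATTS = 20
PRICE_PER_KWH = 0.15  # USD, assumed retail price

energy_kwh_per_hour = BRAIN_POWER_WATTS / 1000   # 0.02 kWh per hour
cost_per_hour = energy_kwh_per_hour * PRICE_PER_KWH

print(f"{cost_per_hour * 100:.2f} cents per hour")
```

That works out to well under one cent per hour, which is the comparison driving the rest of the discussion.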
Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.
Links to learn more, highlights, and full transcript.
Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.
It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.
It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.
It's a world where the technical challenges around control of robots are rapidly overcome, yielding strong, fast, precise, and tireless robot workers able to accomplish any physical work the economy requires, and a rush to build billions of them and cash in.
As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.
And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.
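The waste-heat limit can be roughly quantified with textbook figures (the constants below are standard physical estimates, not numbers from the episode): Earth continuously absorbs and re-radiates on the order of 10^17 watts of solar power, while human energy use is currently around 19 TW.

```python
import math

# Standard estimates (assumptions for an order-of-magnitude sketch):
SOLAR_CONSTANT = 1361      # W/m^2 of sunlight at Earth's distance
EARTH_RADIUS = 6.371e6     # m
ALBEDO = 0.3               # fraction of sunlight reflected back to space
HUMAN_POWER_USE = 1.9e13   # W, roughly current global primary energy use

# Solar power the Earth absorbs (and must re-radiate to stay in balance):
absorbed = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2 * (1 - ALBEDO)

# Doublings of energy use before waste heat rivals solar heating:
doublings = math.log2(absorbed / HUMAN_POWER_USE)
print(f"absorbed solar power: {absorbed:.2e} W")
print(f"doublings of energy use to match it: {doublings:.1f}")
```

Only about a dozen doublings of energy use separate today's economy from waste heat comparable to the sunlight Earth absorbs, so at fast doubling times the limit arrives quickly.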
This creates pressure to move economic activity off-planet. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.
These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?
In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
- If we're heading towards the above, how come economic growth is slow now and not really increasing?
- Why have computers and computer chips had so little effect on economic productivity so far?
- Are self-replicating biological systems a good comparison for self-replicating machine systems?
- Isn't this just too crazy and weird to be plausible?
- What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
- Might there not be severely declining returns to bigger brains and more training?
- Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?
- If this is right, how come economists don't agree?
Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral consideration or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?
Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:01:00)
- Transitioning to a world where AI systems do almost all the work (00:05:21)
- Economics after an AI explosion (00:14:25)
- Objection: Shouldn’t we be seeing economic growth rates increasing today? (00:59:12)
- Objection: Speed of doubling time (01:07:33)
- Objection: Declining returns to increases in intelligence? (01:11:59)
- Objection: Physical transformation of the environment (01:17:39)
- Objection: Should we expect an increased demand for safety and security? (01:29:14)
- Objection: “This sounds completely whack” (01:36:10)
- Income and wealth distribution (01:48:02)
- Economists and the intelligence explosion (02:13:31)
- Baumol effect arguments (02:19:12)
- Denying that robots can exist (02:27:18)
- Classic economic growth models (02:36:12)
- Robot nannies (02:48:27)
- Slow integration of decision-making and authority power (02:57:39)
- Economists’ mistaken heuristics (03:01:07)
- Moral status of AIs (03:11:45)
- Rob’s outro (04:11:47)
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore