Tom Davidson on How Quickly AI Could Automate the Economy
Sep 8, 2023
AI researcher Tom Davidson discusses the risks of AI automation, including the potential automation of AI research. Topics include the pace of AI progress, historical analogies, AI benchmarks, takeoff speed, bottlenecks, economic impacts, and the future of AI for humanity.
AI progress has been rapid and remarkable, particularly in the deep learning paradigm.
As AI becomes more capable, the risks associated with it also grow, including disinformation and biases embedded in societal systems.
Proactive risk management, including testing for and understanding potential harms before widespread deployment, is crucial to addressing the risks posed by AI.
The transformative impact of AI on society and the economy requires precise definitions and measurements beyond GDP growth.
Deep dives
Impressive Advances in AI Progress
AI progress over the last decade, and particularly the past four years, has been rapid and remarkable, especially within the deep learning paradigm. One example is the progression from GPT-2 to GPT-4 in roughly four years, which brought significant improvements in language understanding and coding ability. AI has shown a strong, general understanding of many domains, applying knowledge flexibly. Continued scaling of the transformer architecture is expected to keep progress moving at a similar pace, although barriers such as cautious regulation may slow deployment.
Increasing Risks with AI Progress
As AI becomes more capable, the risks associated with it also increase. There are concerns about disinformation, biases in societal systems, and potential dangers arising from emergent capabilities of advanced AI models. These risks can be challenging to predict, as language models trained on internet text can exhibit unexpected capabilities. As AI development progresses, risks related to dangerous bio-weapons or autonomous replication and adaptation may grow. While exact timelines are difficult to determine, these risks could materialize within the next four to eight years.
The Need for Proactive Risk Management
Addressing the risks posed by AI necessitates proactive risk management rather than reactive regulations. The default approach of reacting to AI risks after they occur leaves little time for robust solutions. Instead, testing and understanding potential risks before widespread deployment can offer better risk mitigation strategies. The rapid pace of AI development, coupled with uncertain capabilities and impacts, requires anticipating and addressing risks in advance. Better benchmarks and measurements are needed to evaluate economic impacts accurately, beyond the current emphasis on GDP growth.
Assessing the Transformative Potential of AI
Determining the transformative impact of AI requires precise definitions. AI may affect society in ways comparable to the Industrial or Agricultural Revolutions, fundamentally changing work structures and economic processes. Defining transformative AI through economic growth alone is limiting, as growth is influenced by many factors beyond AI capabilities. Artificial general intelligence (AGI), defined as AI able to perform any cognitive task at or above human level, offers a more precise benchmark. Even so, when transformative AI will arrive, and what its specific economic and social impacts will be, remains uncertain.
AI progress and economic impact
AI progress can be driven both by additional compute and by techniques that do not rely on it, including improvements in data, better prompting, better tool use, and efficiency gains. These advances could have a transformative economic impact, potentially leading to major improvements in human flourishing, such as an end to illness, poverty, and material need. However, outcomes are not guaranteed to be positive and could range from extremely beneficial to highly dangerous.
Potential risks and challenges
While the economic impact of AI could be highly positive, several risks and challenges need attention: effective compute governance to track and measure AI progress, strong information security to prevent the theft or misuse of AI systems, evaluation of AI models for dangerous capabilities, and pre-commitments and governance mechanisms that ensure responsible development and use.
Role of humans and AI in board games
The advancement of AI in board games such as chess or Diplomacy does not render humans irrelevant: people still find value in playing these games and in watching human players compete. Similarly, as AI automates more of the economy, humans may continue to play a role in tasks they find interesting or prefer to keep, particularly where human presence and judgment are valued, such as diplomacy, art, care work, and other interpersonal interactions.
Potential scenarios and cautions
The future impact of AI progress is uncertain. While there may be scenarios where the world continues as it has been, there is also the potential for rapid progress leading to unprecedented benefits or challenges. Ensuring the development of beneficial AI systems requires addressing risks, improving information security, establishing governance mechanisms, and carefully navigating the complexities of AI advancements to mitigate potentially harmful outcomes.
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.
Timestamps:
00:00 The current pace of AI
03:58 Near-term risks from AI
09:34 Historical analogies to AI
13:58 AI benchmarks vs. economic impact
18:30 AI takeoff speed and bottlenecks
31:09 Tom's model of AI takeoff speed
36:21 How AI could automate AI research
41:49 Bottlenecks to AI automating AI hardware
46:15 How much of AI research is automated now?
48:26 From 20% to 100% automation
53:24 AI takeoff in 3 years
1:09:15 Economic impacts of fast AI takeoff
1:12:51 Bottlenecks slowing AI takeoff
1:20:06 Does the market predict a fast AI takeoff?
1:25:39 "Hard to avoid AGI by 2060"
1:27:22 Risks from AI over the next 20 years
1:31:43 AI progress without more compute
1:44:01 What if AI models fail safety evaluations?
1:45:33 Cybersecurity at AI companies
1:47:33 Will AI turn out well for humanity?
1:50:15 AI and board games