Overview of How AI Might Exacerbate Long-Running Catastrophic Risks
May 13, 2023
Exploring how AI could exacerbate catastrophic risks such as bioterrorism, the spread of disinformation, and the concentration of power. Topics include the intersection of gene synthesis technology, AI, and bioterrorism; the dangers AI poses to biosecurity; the amplification of disinformation; the risks of human-like AI, data exploitation, and concentrated power; and how AI could raise the risk of nuclear war by compromising state capabilities and incentivizing conflict.
Duration: 24:03
Podcast summary created with Snipd AI
Quick takeaways
AI could increase bioterrorism risk by making bioweapons easier to obtain through AI-assisted bioengineering.
AI-driven disinformation threatens societal stability by enabling scalable, personalized manipulation through trusted channels.
Deep dives
AI Advancements and Bioterrorism Risks
Advances in AI could escalate the risks associated with bioterrorism by enabling the creation of novel bioweapons. AI systems with bioengineering knowledge could lower the barriers to obtaining such agents, posing a potentially existential threat to humanity. This risk is compounded by the growing accessibility of biotechnology and gene synthesis, which could rapidly expand the pool of individuals able to create lethal biological agents.
Impact of AI on Disinformation
AI-enhanced disinformation poses a significant risk by undermining society's ability to respond to catastrophes effectively. AI language models can lower the cost of running influence operations, making them more scalable and impactful. In addition, AI's human-like capabilities can exploit users' trust, enabling personalized disinformation to be disseminated through trusted channels and to shape societal narratives.
AI, Authoritarianism, and Coordination Failures
AI advancements could concentrate power, creating opportunities for misuse by authoritarian regimes. The development of broadly capable AI systems may incentivize authoritarian coups and help suppress democratic movements. Moreover, coordination failures arising from interactions between AI systems, especially in multipolar scenarios, could produce catastrophic outcomes, including existential risks such as human extinction or severe, permanent loss of resources.
Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023).