The Future of Nukes Involves AI and Nobody Knows What Happens Next
Jan 12, 2024
This podcast discusses the intersection of artificial intelligence and nuclear weapons, highlighting the uncertainties and dangers. It explores the theory of nuclear deterrence, the evolution of AI, near-miss incidents, the secret letters carried aboard British submarines, and the potential implications of AI in nuclear war. The guest emphasizes the need to embrace uncertainty and notes the lack of clarity in policy-making.
AI can enhance processes in nuclear warfare, but its use in critical decision-making raises concerns.
Nuclear weapons and AI provoke magical thinking and existential questions, but a lack of concrete knowledge hinders accurate risk assessment.
The Russian Perimeter System and the history of AI hype highlight the importance of embracing uncertainty when navigating AI and nuclear policy.
Deep dives
The Concept of Deterrence
Deterrence is the strategy of dissuading aggressive actions through the threat of punishment. It involves the idea that launching a nuclear attack would lead to devastating consequences for both sides, thus making it unlikely to happen. The concept of deterrence has evolved over time, with different interpretations and meanings being attached to it. It is a loaded term that often leads to confusion and differing opinions.
AI and Nuclear Warfare
Artificial intelligence (AI) has the potential to impact nuclear weapons and policy. While AI is often associated with advanced technologies like generative adversarial networks, its application in the context of nuclear warfare goes beyond that. AI can be utilized to enhance processes such as anti-submarine warfare, improving noise filters and sensors. It can automate certain aspects of the kill chain in targeting enemy nuclear arsenals. However, the use of AI in critical decision-making, such as initiating an attack, raises moral and practical concerns.
The Magical Thinking Surrounding Nukes and AI
Both nuclear weapons and artificial intelligence elicit an astonishing degree of magical thinking from people. They touch upon existential questions about humanity's future, our uniqueness, and the potential consequences of these technologies. However, the lack of concrete knowledge and the presence of uncertainties make it challenging to accurately assess and compare the risks associated with each. Popular culture and fictional portrayals have further shaped our perceptions of these topics, even though they may not accurately reflect reality.
Automated System for Delegating Launch Authority
The podcast episode discusses an automated system developed by the Soviets during the Cold War, known as the Perimeter system. Built out of concern over a decapitating first strike by the United States, the system was designed to delegate launch authority in the event that the Soviet leadership was killed. It would detect nuclear explosions in Moscow and, if unable to contact the leadership, would begin delegating launch authority to hardened command centers. The system was like a computerized version of the UK's Letters of Last Resort, providing alternative options in the absence of human command.
The History and Cycles of AI Hype
The podcast also explores the history of AI hype and the boom and bust cycles it has gone through. From the early days of AI research in the 1950s, there were high expectations that quickly faded in the face of limited progress. This pattern repeated with expert systems in the 1980s and neural networks in the 1990s. The current moment is characterized by unprecedented resources and attention devoted to AI, but uncertainty remains about the future of the field. The current paradigm of large models may not be sustainable, and regulatory frameworks should not assume a static future. Embracing uncertainty and avoiding overblown claims are important when navigating the intersection of AI and nuclear policy.
According to the hype, artificial intelligence is changing everything. The truth is more complicated, but that doesn’t mean that companies and governments aren’t rushing to embrace the new technology. It’s even being used to update an old and destructive technology: nuclear weapons.
America is modernizing its force, Russia is building new kinds of nuclear weapons, and China is increasing its nuclear stockpile. At the same time, all three countries are looking to AI to outsource the dangerous and deadly work of apocalyptic destruction.
It’s a conversation that’s both fascinating and frightening with one major theme: we don’t know nearly enough. “One lamentable parallel between nuclear weapons and artificial intelligence is that both topics elicit an astonishing degree of magical thinking from otherwise intelligent people, including some with genuine expertise,” Geist wrote in his book.