If the military had been using AI in 1983, everyone might be dead. Machines will never be conscious, and it was animal instinct that saved the world.
This is the story:
On 26 September 1983, Lieutenant Colonel Stanislav Petrov sat in a Soviet early-warning bunker watching computers tell him that US nuclear missiles were on their way.
The data said “launch.”
His intuition said “wait.”
Petrov chose to override the system and, in doing so, probably saved the world.
In this Thinking on Paper Book Club short, Mark and Jeremy use the Petrov story as a live case study for one of Federico Faggin’s core arguments in Irreducible: information is not the same as consciousness.
Across a few minutes, they unpack:
Why Petrov’s decision shows the gap between mechanical rule-following and conscious judgment
Why the claim that “information makes consciousness” is precisely what Federico Faggin’s new theory of consciousness rejects
Why AI systems that only flip 1s and 0s can’t replicate intuition or qualia; in other words, why AI will never be conscious
This is a short from our 13-part Book Club on Federico Faggin’s Irreducible.
If you’re interested in AI, consciousness, and the limits of information theory, listen to the full episode to hear why Faggin argues consciousness is irreducible.
Cheers,
Mark and Jeremy.
Keep Thinking On Paper.
--
We like you. Connect with us:
Email: hello@thinkingonpaper.xyz