September 26, 1983. A Soviet bunker. Lieutenant Colonel Stanislav Petrov watches his early-warning computers report incoming US nuclear missiles.
The data says: Launch.
His intuition says: Wait.
Petrov overrides the system. Saves the world.
If the machines had been left in charge, everyone might be dead.
Mark and Jeremy use the Petrov story to explore Federico Faggin's argument in *Irreducible*: information is not the same as consciousness.
We unpack:
- Why Petrov's decision shows the gap between rule-following and conscious judgment
- How "information makes consciousness" sits at the center of Faggin's theory
- Why AI systems that flip 1s and 0s can't replicate intuition or qualia
- Why, on Faggin's account, AI can never be conscious
Machines follow rules. Petrov broke them. That's consciousness.
The computers processed information perfectly. They were also perfectly wrong. Petrov had something machines don't: the ability to sense what the data couldn't show.
This is a short from our 13-part Book Club on Faggin's *Irreducible*. If you're interested in AI, consciousness, and the limits of information theory, listen to the full series.
The question: As we hand more decisions to machines, what happens when the data looks right but the answer is wrong?
---
Series: Irreducible Book Club (Episode excerpt)
Book: *Irreducible* by Federico Faggin
Topics: Consciousness, AI limits, intuition, nuclear weapons, decision-making, information theory
Historical event: 1983 Soviet nuclear false alarm
---
We like you. Connect with us:
Email: hello@thinkingonpaper.xyz