Eliezer Yudkowsky, a decision theorist and AI alignment researcher, debates Mark Miller, a computer scientist and software security expert, on strategies to mitigate existential risk from AI, laying out their differing views on alignment and decentralization. Yudkowsky warns of catastrophic outcomes if AGI development goes unregulated, while Miller argues for preserving human institutions as AI evolves. The conversation touches on prediction, trust, historical analogies to nuclear arms control, and the future dynamics of superintelligence governance.