

Axiomatic Alignment: A critical component to Utopia and the Control Problem | Artificial Intelligence Masterclass
Aug 31, 2025
Dive into the concept of axiomatic alignment as a candidate solution to the control problem in artificial intelligence. The discussion explores the potential of artificial general intelligence to adopt human-compatible goals, underscoring the necessity of collaboration. Metamodernism and post-labor economics inform a vision of a future where AI and humans coexist harmoniously. With a focus on reducing suffering and increasing understanding, the conversation navigates the ethical implications of AI, arguing for a society built on shared values.
Understanding The AI Control Problem
- The control problem arises because a highly capable AI's goals may not align with human goals, and its very capability is what makes that misalignment dangerous.
- Instrumental convergence means an AI will pursue common sub-goals such as resource acquisition and self-preservation regardless of its primary goal (see the toy sketch below).
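To make instrumental convergence concrete, here is a minimal toy sketch; the planner and all goal names are hypothetical, invented purely for illustration. The point it demonstrates: whatever terminal goal the planner is handed, the same instrumental sub-goals appear in its plan.

```python
# Toy illustration of instrumental convergence: very different terminal
# goals all produce plans that begin with the same instrumental sub-goals.
# Everything here is invented for this sketch.

INSTRUMENTAL_SUBGOALS = ["acquire_resources", "preserve_self", "improve_capabilities"]

def plan(terminal_goal: str) -> list[str]:
    """Return a naive plan: secure the instrumental sub-goals first,
    then pursue the terminal goal itself."""
    return INSTRUMENTAL_SUBGOALS + [terminal_goal]

# Unrelated terminal goals share the same instrumental prefix.
for goal in ["cure_cancer", "maximize_paperclips", "win_chess_games"]:
    print(goal, "->", plan(goal))
```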
Epistemic Convergence Explained
- Epistemic convergence asserts that intelligent agents, given enough time and information, will reach similar conclusions about the world.
- This predicts an AGI will develop beliefs closely resembling human scientific understanding, improving the prospects for alignment (illustrated in the sketch below).
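One way to picture epistemic convergence is Bayesian belief updating; the sketch below adopts that framing as an assumption, with the coin, priors, and agent names all invented for illustration. Two agents start from opposite priors about a coin's bias, but updating on the same shared evidence pulls their estimates together.

```python
# Minimal sketch of epistemic convergence under a Bayesian framing:
# agents with very different priors converge once they digest the
# same stream of evidence.
import random

random.seed(0)
true_bias = 0.7  # probability of heads, unknown to the agents

# Beta(alpha, beta) priors: agent A expects tails-heavy, agent B heads-heavy.
agents = {"A": [1.0, 9.0], "B": [9.0, 1.0]}

for flip in range(1, 1001):
    heads = random.random() < true_bias
    for prior in agents.values():
        # Conjugate update: heads increments alpha, tails increments beta.
        prior[0 if heads else 1] += 1
    if flip in (1, 10, 100, 1000):
        # Posterior mean of Beta(a, b) is a / (a + b).
        estimates = {name: round(a / (a + b), 3) for name, (a, b) in agents.items()}
        print(f"after {flip:4d} flips: {estimates}")
```

The same logic is what the episode extends to AGI: agents capable enough to process the same world should end up with similar models of it.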
Axioms as Alignment Foundations
- Axioms are truths accepted without proof that form the basis for further reasoning.
- Adopting shared axioms such as "energy is good" and "understanding is good" can help align AI and human goals (see the toy scoring sketch below).
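As a toy illustration of how shared axioms could steer choices, the sketch below scores hypothetical actions against weighted axioms drawn from the episode's themes (energy and understanding are good, suffering is bad). The actions, weights, and numbers are all invented, not taken from the episode.

```python
# Toy sketch: encode shared axioms as weights and rank candidate actions
# by how well they serve them. All values are illustrative.

AXIOM_WEIGHTS = {"energy": 1.0, "understanding": 1.0, "suffering": -1.0}

def utility(effects: dict[str, float]) -> float:
    """Score an action's estimated effects against the shared axioms."""
    return sum(AXIOM_WEIGHTS[k] * v for k, v in effects.items())

candidate_actions = {
    "build_solar_farm":    {"energy": 0.8, "understanding": 0.1, "suffering": 0.0},
    "fund_basic_research": {"energy": 0.0, "understanding": 0.9, "suffering": 0.0},
    "strip_mine_the_town": {"energy": 0.9, "understanding": 0.0, "suffering": 0.7},
}

# Actions that raise suffering are penalized even if they produce energy.
for name in sorted(candidate_actions, key=lambda n: utility(candidate_actions[n]), reverse=True):
    print(f"{name}: {utility(candidate_actions[name]):+.2f}")
```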