Carina Hong dropped out of Stanford's PhD program to build "mathematical superintelligence" — and just raised $64M to do it. In this episode, we explore what that actually means: an AI that doesn't just solve math problems but discovers new theorems, proves them formally, and gets smarter with each iteration. Carina explains how her team solved a 130-year-old problem about Lyapunov functions, disproved a 30-year-old graph theory conjecture, and why math is the secret "bedrock" for everything from chip design to quant trading to coding agents. We also discuss the fascinating connections between neuroscience, AI, and mathematics.
Learn more about Axiom: https://axiommath.ai/
Subscribe to The Neuron newsletter: https://theneuron.ai