
Book Review: If Anyone Builds It, Everyone Dies
Don't Worry About the Vase Podcast
Training, Wants, Alignment, and Extinction Scenarios
How modern LLMs are trained, why training induces goal-like wants, why alignment is hard, what sufficiently capable AIs could do, and a concrete MIRI-style extinction scenario.
Transcript