
Rationalism and AI Doomerism (Robert Wright & Liron Shapira)
Robert Wright's Nonzero
Why We Haven't Had a Truly Aligned Superintelligence (09:56)
Liron argues we have never achieved initial alignment and explains how feedback loops and benchmarks change behavior as models scale.


