
Rationalism and AI Doomerism (Robert Wright & Liron Shapira)
Robert Wright's Nonzero
Tight Version of the Evolution Argument for Misalignment
Liron gives a concise evolutionary analogy for why designers may fail to instill their ultimate objectives in an AI, and what consequences that has for AI optimization.
Segment begins at 23:17.


