

“Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment” by Liron
Sep 15, 2025
02:53:04
My interview with Eliezer Yudkowsky for *If Anyone Builds It, Everyone Dies* launch week is out!
Video
Timestamps
- 00:00:00 — Eliezer Yudkowsky Intro
- 00:01:25 — Recent validation of Eliezer's ideas
- 00:03:46 — Sh*t now getting real
- 00:08:47 — Eliezer's rationality teachings
- 00:10:39 — Rationality Lesson 1: I am a brain
- 00:13:10 — Rationality Lesson 2: Philosophy can reduce to AI engineering
- 00:17:19 — Rationality Lesson 3: What is evidence
- 00:22:41 — Rationality Lesson 4: Be more specific
- 00:28:34 — Specificity as a superpower in debates
- 00:30:19 — Rationality Lesson 5: How to spot a rationalization
- 00:36:52 — Rationality might upend your deepest expectations
- 00:38:18 — The typical reaction to superintelligence risk
- 00:40:07 — Eliezer is a techno-optimist, with a few exceptions
- 00:47:57 — Why AI is an existential risk
- 00:53:24 — Engineering outperforms biology
- 01:02:09 — The threshold of "supercritical" AI
- 01:13:23 — How to convince people there's [...]
---
Outline:
(00:19) Video
(00:25) Timestamps
(03:17) Transcript
(03:20) Eliezer's Background and Evolution of Views
(11:58) Rationality Fundamentals
(57:59) AI Capabilities and Intelligence Scale
(01:13:38) The Subcritical vs Supercritical Threshold
(01:33:41) The Alignment Problem
(01:57:14) What We Want from AI
(02:18:58) International Coordination Solutions
(02:44:54) Call to Action
---
First published:
September 15th, 2025
---
Narrated by TYPE III AUDIO.