

“Alignment as uploading with more steps” by Cole Wyeth
Epistemic status: This post removes epicycles from ARAD, resulting in an alignment plan that I think is better - though not as original, since @michaelcohen has advocated the same general direction (the safety of imitation learning). However, the details of my suggested approach are substantially different. This post was inspired mainly by conversations with @abramdemski.
Motivation and Overview
Existence proof for alignment. Near-perfect alignment between agents of lesser and greater intelligence is possible in principle, by the following existence proof: one could scan a human's brain and run a faster emulation (or copy) digitally. In some cases, the emulation may plausibly scheme against the original - for instance, if the original forced the emulation to work constantly for no reward, perhaps the emulation would try to break "out of the box" and steal the original's life (that is, steal "their own" life back - a non-spoiler minor [...]
---
Outline:
(00:34) Motivation and Overview
(02:39) Definitions and Claims
(09:50) Analysis
(11:03) Prosaic counterexamples
(13:23) Exotic Counterexamples
(15:07) Risks and Implementation
(23:22) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
September 14th, 2025
Source:
https://www.lesswrong.com/posts/AzFxTMFfkTt4mhMKt/alignment-as-uploading-with-more-steps
---
Narrated by TYPE III AUDIO.