
"Cyborgism" by Nicholas Kees & Janus
LessWrong (Curated & Popular)
Accelerating Alignment Using GPT Models
The plan is to train and empower "cyborgs": a specific kind of human-in-the-loop system which enhances a human operator's cognitive abilities without relying on autonomous agents. This differs from other ideas for accelerating alignment research by focusing primarily on augmenting ourselves and our workflows.

Unless we manage to coordinate around it, the default outcome is that humanity will eventually be disempowered by a powerful autonomous agent or agents. The aim is therefore to accelerate alignment research in whatever time remains before that happens. In particular, this means trying to get maximum value from our current systems while avoiding things which would reduce the time we have left.