"Cyborgism" by Nicholas Kees & Janus

The Cyborgism Agenda

The aim of this agenda is to make some of the risks of accelerating alignment more explicit. We want to chart a path through the minefield so that we can make progress without doing more harm than good. As far as I'm aware, the way I use language models is rather different from how most others have integrated AI into their workflows. There is significant path dependence to my approach, but I think it was a fortuitous path. If at any point we feel the risks are too great, we have to be prepared and willing to abandon a particular direction, or to shut down the project entirely.
