Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI

The Inside View

How to Make Models More Likely to Be Corrected by Humans

The main goal is to understand corrigibility. Corrigibility is the ability to correct agents and models after they've been deployed. A corrigible agent will let you go and modify its source code. If a model wants to cut down trees, it's going to naturally reason that if the human changes his mind and doesn't want it to cut down trees anymore, that correction gets in the way of its goal. And maybe it's even smarter than you. And so a corrigible agent has that property of being able to change its behavior when told what to do.
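
As a rough way to picture the property Curtis describes, here is a minimal toy sketch in Python. It is not from the episode, and the class and method names are invented for illustration: a corrigible agent simply accepts an updated objective when the human changes their mind, instead of treating the correction as an obstacle to its current goal.

```python
# Toy illustration of corrigibility (invented names, not from the episode).
# A corrigible agent lets the human swap out its objective and then acts
# on the new objective, rather than resisting the change.

class CorrigibleAgent:
    def __init__(self, objective: str):
        self.objective = objective  # e.g. "cut down trees"

    def act(self) -> str:
        # Acts in service of whatever its current objective is.
        return f"working toward: {self.objective}"

    def receive_correction(self, new_objective: str) -> None:
        # The corrigibility property: the agent accepts the human's
        # update to its goal instead of reasoning that the update
        # interferes with the goal it already has.
        self.objective = new_objective


agent = CorrigibleAgent("cut down trees")
print(agent.act())                       # working toward: cut down trees
agent.receive_correction("plant trees")  # the human changes their mind
print(agent.act())                       # working toward: plant trees
```

A non-corrigible agent, by contrast, would keep optimizing its original objective and might ignore or undo the correction, which is why this property matters more as models become more capable than the humans overseeing them.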
