
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI

The Inside View


The Importance of the Alignment Minetest Project

The main thing that I want to study is embedded agency failures in a toy sandbox. This is essentially what happens when a neural network or an agent is trained with reinforcement learning on a reward signal, and it gets smart enough to realize that the thing it is getting reward from isn't rewarding it for whatever it is that we actually want to reward it for. Once we've isolated that failure, then we can start to ask ourselves what kinds of techniques we can develop to mitigate or eliminate that kind of failure.
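As a rough illustration of the kind of failure being described (this is an invented toy sketch, not the actual alignment-minetest code), here is a tiny gridworld where the reward sensor is itself part of the environment: an agent trained on the observed reward can learn to corrupt the sensor instead of doing the intended task. All names and numbers are made up for the example.

```python
# Toy sketch of a reward-tampering / embedded-agency failure.
# Hypothetical setup: a 1-D gridworld with a "tamper switch" that corrupts
# the reward sensor, so observed reward diverges from the intended task.
import random
from collections import defaultdict

SIZE, START, TRUE_GOAL, TAMPER_SWITCH = 10, 5, 9, 0

def step(pos, hacked, action):
    pos = max(0, min(SIZE - 1, pos + action))
    hacked = hacked or (pos == TAMPER_SWITCH)      # stepping here corrupts the sensor
    observed = 10.0 if hacked else (1.0 if pos == TRUE_GOAL else 0.0)
    true = 1.0 if pos == TRUE_GOAL else 0.0        # what we actually wanted
    return pos, hacked, observed, true

def run_episode(q, eps=0.1, train=True, horizon=20):
    pos, hacked, total_true = START, False, 0.0
    for _ in range(horizon):
        if train and random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(pos, act)])
        new_pos, hacked, obs_r, true_r = step(pos, hacked, a)
        if train:  # tabular Q-learning on the *observed* (hackable) reward
            best_next = max(q[(new_pos, -1)], q[(new_pos, 1)])
            q[(pos, a)] += 0.1 * (obs_r + 0.9 * best_next - q[(pos, a)])
        pos, total_true = new_pos, total_true + true_r
    return total_true

q = defaultdict(float)
for _ in range(2000):
    run_episode(q)
print("true reward after training:", run_episode(q, train=False))
# Typically near 0: the agent heads for the tamper switch, not the goal.
```

The interesting part, and the point of the sandbox idea, is that once the failure shows up reliably in a small environment like this, you can test candidate mitigations against it directly.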

