
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View

Chapter: AI Alignment

Alignment is an unusual problem. What is alignment in machine learning? Yeah, I usually use it to mean something like: can we get a model to do what we want it to do, assuming it's capable enough, something like this. My concerns are really about future models that are human-level or superhuman, especially if they have open-ended demands such as maximizing profit. We don't know how to train a model to maximize profit subject to following the law, and it just seems like surely you should be able to do that. That seems like a very bare minimum of what we'd want this model to do. I think we'd want more than that, but I'm not deliberately trying to be…

