Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

The Inside View

CHAPTER

Alignment Problems

The issue is, stepping back, that from a methodological perspective this is a weird problem: we don't have access to GPT-N or these future RL agents that are maximizing profit and are superhuman. We only have access to current models, which are mostly subhuman level, and it's not clear how to study this longer-term problem when we don't have access to those models.

I reckon it's kind of like breaking the law, right? Like, if you hire a hitman to kill other people, it's kind of breaking the law.

Yeah, that's what I'm saying. I'm saying this is just the extreme case — I don't think we even know how to solve this.

