Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming

The Inside View

Is There a Catastrophic Misalignment?

Alignment is generally, like: can we get the models to behave in ways that are in line with our preferences, robustly across all inputs, such that they never do anything that we consider catastrophic. I think most people, like, know about machine learning and would be able to submit something for your prize, but don't always know what, like, alignment or misalignment is. So maybe it's worth, like, quickly defining: what's misalignment? Yes.
