24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

The Benefits of Language Models

I don't have a nice, compact specification of how I want superintelligent AI to behave, and the way you can tell is that if we did, we wouldn't have an alignment problem anymore. In practice, it's very unclear to me how much it really matters whether or not a model is pre-trained on everything people have ever said. You probably have a lot of thoughts that you haven't written down, but in general it would, in practice, be quite good at predicting what you would say about various situations, events, or scenarios.

