
Episode 49: AGI Alignment and Safety

The Theory of Anything


The Importance of AGI Safety

I think we've now fully explored the possibilities of AGI safety, and there's nothing else at the moment to say. Stuart Russell has a very interesting AI alignment (value alignment) program for narrow AI that is really good and deserves attention. I doubt it has anything to do with universal explainers at all, so I don't think it's even viable as an AGI safety program. But, you know, if I'm wrong, great; they'll study that in terms of narrow AI.

