
AI and Existential Risk (Robert Wright & Connor Leahy)

Robert Wright's Nonzero

The Importance of Understanding Large Language Models

So far as bad actors are concerned, it doesn't really matter whether we understand the AI or not. So solving the so-called interpretability problem, doing a better job of understanding how exactly large language models work, is more important for that category. But I think even if all AI technology stopped today and no AGI ever happened, this would still be unstable, and we would still see huge new forms of cybercrime and new forms of manipulation. We also don't know how to implement these in computers.
