
#367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

Lex Fridman Podcast

The Possibility of Unrolling AGIs

AGIs may need a degree of built-in uncertainty to avoid dogmatism. Human feedback currently provides this, but hard uncertainty should be engineered in directly. An off switch is also important: it should be possible to switch a model off, roll it back, and take it off the internet.
