#367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

Lex Fridman Podcast

NOTE

The Possibility of Unrolling AGIs

AGIs may need a degree of built-in uncertainty to avoid dogmatism. Human feedback currently provides this, but hard uncertainty should be engineered in directly. An off switch is also important: it should be possible both to switch a model off and to roll it back, taking it off the internet.

