
#367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI
Lex Fridman Podcast
The Possibility of Unrolling AGIs
AGIs may need a degree of uncertainty to avoid dogmatism. Human feedback currently provides this, but hard uncertainty should be engineered in. An off switch is also important: it is possible to build such a switch, and to roll a model back and take it off the internet.