Future of Life Institute Podcast

Steve Omohundro on Provably Safe AGI

Oct 5, 2023
Steve Omohundro, co-author of Provably Safe Systems, discusses provable safety in AI: formalizing safety, provable contracts, proof-carrying code, logical reasoning in language models, AI generating proofs for us, risks of totalitarianism, tamper-proof hardware, least-privilege guarantees, basic AI drives, AI agency and world models, self-improving AI, and the overhyping of AI.