Steve Omohundro on Provably Safe AGI

Future of Life Institute Podcast

Creating Provably Safe Systems for AGI

This chapter discusses an approach to creating provably safe systems for AGI, emphasizing the urgency of addressing existential risks and arguing for guardrails that prevent unsafe use of AI systems. It explores the use of mathematical proof and physical cryptography to ensure safety, and expresses optimism that the necessary components for this approach already exist or are being developed.
