Future of Life Institute Podcast cover image

Steve Omohundro on Provably Safe AGI

Future of Life Institute Podcast

00:00

Guard Rails for AI

The chapter explores the challenges of formalizing complex behaviors like deception and breaking into online systems in AI. It suggests using a simple, cheap, and coarse solution like guard rails to prevent risky outcomes in AI, drawing parallels to rules for driving cars on roads.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app