Steve Omohundro on Provably Safe AGI

Future of Life Institute Podcast

Incentives, Security, and Risk Mitigation in AI

This chapter discusses the alignment of incentives between major AGI corporations and individuals interested in AI safety, with an emphasis on cybersecurity and hardware security. It explores vulnerabilities in non-AI systems, the concept of mortal AI, the need for regulatory oversight, and the use of tokens to control computational resources. The conversation also highlights the risks of AI manipulation and deception, the principle of least privilege, and the importance of provably safe approaches to building AI systems.
