Provably safe AGI, with Steve Omohundro

London Futurists

00:00

The Potential Risks of Super Intelligent AI

This chapter addresses the potential risks posed by the development of superintelligent AI and the urgency of proactive measures to ensure AGI safety.

