Don't try to make AI safe; instead, make safe AI, with Stuart Russell

London Futurists

Implications of Superintelligence and AI Safety Awareness

The chapter discusses the potential implications of superintelligence surpassing human intelligence and its impact on humanity's sense of purpose. It cites the movie 'Her' as an example of human-AI interaction and highlights the Bletchley Park Global Summit on AI Safety. The chapter concludes by comparing China's stricter AI regulations with those of the US and Europe.
