Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!

Doom Debates

00:00

Navigating Transhumanism and AI Risks

This chapter explores evolving perspectives on transhumanism and artificial intelligence, tracing how early optimism has given way to caution about significant risks. The discussion covers potential doom scenarios, including the threat that artificial general intelligence leads to human extinction, and the challenge of ensuring AI alignment with human values. It also examines the balancing act of harnessing the benefits of superintelligent AI while mitigating adverse consequences for human society.
