Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast

The Impossibility of Accepting Existential Risk

In a world where we already face existential risks (nuclear weapons, for example, constitute an existential risk, and perhaps engineered pandemics could also wipe out humanity), why, in a sense, shouldn't we accept some level of existential risk from AI systems? We don't have to build superintelligent, godlike machines; we can be very happy with very helpful tools if we agree that this is the level of technology we want for now.

