
Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast


The Importance of Long-Term Planning

All of the horrible things that are going on right now, which an AGI system might be able to help with, and the running level of existential risk from other factors, such as the nuclear war and engineered pandemics I mentioned: do you find that this pushes you in the direction of saying we should accept a higher level of risk when we're thinking about whether to deploy AGI? If there were an asteroid coming and we could not stop it any other way, then maybe it would make sense, in nine and a half years, to press this button; when we have nothing left to lose, it becomes a very profitable bet. It's an interesting fact about the world that we haven't thought hard about these questions, what level...

