Eliezer Yudkowsky vs Mark Miller | ASI Risks: Similar premises, opposite conclusions

Foresight Institute Radio

Extracting Value From Untrusted AIs

Eliezer asks how humans could safely extract useful work from an imprisoned, untrusted superintelligence without incurring existential risk.
