
Bonus: Preventing an AI-Related Catastrophe
Hear This Idea
AI Development
We think transformative AI is sufficiently likely in the next 20 to 80 years that it is well worth it, in expected value terms, to work on this issue now. Maybe future generations will take care of it, and all the work we do now will be in vain. We hope so, but it might not be prudent to take that risk. The next argument: we might need to solve alignment anyway to make AI useful. Making something have goals aligned with its human designers' ultimate objectives and making something useful seem like very related problems.
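As a rough sketch of the expected value reasoning, with purely hypothetical numbers that are not from the episode: even a modest probability of transformative AI arriving in that window, multiplied by some chance that work done now actually helps, can justify acting today if the value at stake is large.

\[
\mathbb{E}[\text{value of working now}]
  = \underbrace{\Pr(\text{TAI within 20--80 years})}_{\text{e.g. } 0.5}
  \times \underbrace{\Pr(\text{our work helps} \mid \text{TAI})}_{\text{e.g. } 0.01}
  \times V
  = 0.005\,V
\]

Here \(V\), the value of avoiding an AI-related catastrophe, is plausibly enormous, so even a small product of probabilities leaves a large expected value; waiting amounts to betting that future generations will handle the problem instead.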