
Bonus: Preventing an AI-Related Catastrophe
Hear This Idea
AI Alignment - The End of the Dot Points
People building AIs will be naturally incentivised to also try to make them aligned, and so in some sense safe. But solving the alignment problem isn't the same thing as completely eliminating existential risk from AI. Despite this argument, the problem could be extremely difficult to solve. We think that, given the stakes, it could make sense for many people to work on reducing AI risk.
Transcript