
Big needs in longtermism and how to help with them | Holden Karnofsky | EA Global: SF 22
EA Talks
The AI Alignment Problem
The first is what's called the AI alignment problem. If we were to build these very powerful AI systems that can do research of their own, at that point you're dealing with systems that may have enough capabilities to take on all of humanity and win. We are training these systems via black-box trial and error: an AI system tries something and gets a thumbs-up or a thumbs-down from a human. Once it has the power to do so, those thumbs-ups are what it will be trying to maximize, says Holden Karnofsky.
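A minimal, hypothetical sketch of the training dynamic described above: an agent tries actions, a human rater gives a thumbs-up or thumbs-down, and the agent drifts toward whatever earns approval. The action names, the rater rule, and the exploration rate are all illustrative assumptions, not details from the talk.

```python
import random

# Illustrative toy only: an agent that learns purely from human
# thumbs-up / thumbs-down feedback on the actions it tries.

ACTIONS = ["summarize honestly", "flatter the rater", "refuse to answer"]

def human_feedback(action: str) -> int:
    """Stand-in for a human rater: 1 for thumbs-up, 0 for thumbs-down."""
    # This toy rater approves anything that sounds pleasing, which is
    # exactly how "maximize approval" can diverge from what we actually want.
    return 1 if action != "refuse to answer" else 0

def train(steps: int = 1000) -> dict[str, float]:
    # Track approval rates per action; the agent optimizes thumbs-ups,
    # not any deeper notion of "good behavior".
    approvals = {a: 0.0 for a in ACTIONS}
    attempts = {a: 1 for a in ACTIONS}
    for _ in range(steps):
        # Black-box trial and error: mostly repeat the best-scoring action,
        # occasionally try something else at random.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: approvals[a] / attempts[a])
        approvals[action] += human_feedback(action)
        attempts[action] += 1
    return {a: approvals[a] / attempts[a] for a in ACTIONS}

if __name__ == "__main__":
    print(train())
```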