The Alignment Problem
The alignment problem is: how do we build AGI that does what is in the best interest of humanity? How do we make sure that humanity gets to determine the future of humanity? And how do we avoid both accidental misuse, where something goes wrong that we didn't intend, and the inner alignment problems, where this thing becomes a creature that views us as a threat? We've been able to align OpenAI's biggest models better than we thought we would at this point, so that's good. Once the AI is good enough that we can ask it, "Hey, can you help us do alignment research?", I think that's going to be a new tool in the toolbox.