The Future of AI Alignment
The hope is to address two problems that I think the alignment community has treated separately. The first is that you can set up some scheme for rewarding your AI based on various measurable properties of the environment. The second is that your AI could go wrong by staying motivated to keep all those signals going, but doing so in a way that's really different from how it behaved in the past. We should be able to detect this not by specifically checking whether it's being "deceptive" or hacking the sensors, but rather just by checking that it's no longer doing the type of thing it was doing during training.
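One way to make this concrete is a distribution-shift check: record some feature of the system's behavior (for example, internal activations or action statistics) during training, then flag deployment-time behavior that falls far outside that reference distribution. The following is a minimal sketch, not the method from the conversation; the feature vectors, function names, and threshold are all illustrative assumptions.

```python
import numpy as np

def fit_reference(train_features: np.ndarray):
    """Fit a Gaussian over feature vectors (hypothetically, policy
    activations) collected while the system behaved as intended
    during training."""
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    # Regularize so the covariance matrix is safely invertible.
    cov += 1e-6 * np.eye(cov.shape[0])
    return mean, np.linalg.inv(cov)

def anomaly_score(x: np.ndarray, mean: np.ndarray,
                  cov_inv: np.ndarray) -> float:
    """Squared Mahalanobis distance of a new behavior sample from the
    training-time distribution."""
    d = x - mean
    return float(d @ cov_inv @ d)

def is_anomalous(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray,
                 threshold: float = 50.0) -> bool:
    """Flag behavior that no longer looks like what the system did in
    training, without asking whether it is 'deceptive' or 'hacking the
    sensors'. The threshold here is an arbitrary placeholder; in
    practice it would be calibrated on held-out training episodes."""
    return anomaly_score(x, mean, cov_inv) > threshold
```

The point of the sketch is that the detector never references deception or sensor tampering directly; it only asks whether current behavior still resembles training-time behavior, which is the shape of the check described above.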