The Alignment Problem in AI Safety
The machine learning enthusiast in me really wants to go work on that objective function and regularize it. But the problem runs deeper than just finding the best objective function, or making sure that the objective function we have is aligned. There is an inherent problem in the information-theoretic foundations of our approach.
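To make concrete what "regularizing the objective function" means here, the following is a minimal illustrative sketch, not anything from the episode: a task loss with an L2 penalty added. All names and values are assumptions for illustration.

```python
# Hypothetical sketch of a regularized objective. The regularizer can
# tame the solution, but it cannot repair a task loss that measures
# the wrong thing -- the deeper alignment issue the speaker raises.

def task_loss(prediction: float, target: float) -> float:
    # Squared error on a single prediction.
    return (prediction - target) ** 2

def regularized_objective(prediction: float, target: float,
                          weight: float, lam: float = 0.1) -> float:
    # Total objective = task loss + L2 penalty on the model weight.
    return task_loss(prediction, target) + lam * weight ** 2
```

For example, with a prediction error of 1.0, a weight of 3.0, and `lam = 0.1`, the objective is 1.0 + 0.1 * 9.0 = 1.9: the penalty shifts where the minimum lies, but the task loss itself is unchanged.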