
A Psychopathological Approach to Safety in AGI
Data Skeptic
The Alignment Problem in AI Safety
The machine learning enthusiast in me really wants to go work on that objective function and regularize it. But the problem is deeper than just finding the best objective function or making sure that the objective function we have is aligned. There is an inherent problem in the information-theoretic foundations of our approach to the problem.