
#49 - AGI: Could The End Be Nigh? (With Rosie Campbell)
Increments
The Parallels of AI and the Optimization Process
I think the difficulty is inherent to the problem, because you're assuming the system has both the capability and the propensity, to use those words, to take over the world. But yes, if you buy that a system might develop some sort of internal goal due to a misspecification of the reward function, or something like that, then we might get to a point where these systems have incentives, and are smart enough, to hide some of their plans from us. And because these systems are currently essentially black boxes, it's very difficult for us to inspect them and find out what they are actually optimizing for.