How to Analogize AI's Actions
I want to do anomaly detection relative to, or on, our AI's prediction about whether or not the diamond is still there. Your AI is selecting amongst actions using some criteria, and sometimes that criteria says this is a good action because it actually protects the diamond, for example. Sometimes the criteria will say it's a good action because of a new reason, which is that the AI has hacked all the sensors. So I want to do anomaly detection relative to why our AI thought this action was a good action. "Sometimes I hear these analogies and I end up being not quite sure how they're supposed to relate to actual AIs we might build."
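A minimal sketch of the idea, under heavy assumptions: suppose we had some vector summarizing *why* the AI's criteria rated an action highly (here a hypothetical `reason_features` vector; no such API exists in the transcript), gathered from trusted episodes where the action really did protect the diamond. We could then flag new actions whose "reason" looks statistically unlike the trusted ones, for instance via Mahalanobis distance. This is an illustration of the anomaly-detection framing, not the speaker's actual method.

```python
import numpy as np

# Hypothetical sketch: flag actions whose "reason for looking good" is anomalous.
# The explanation vectors below are placeholders for whatever summary of the
# AI's internal criteria one could actually extract; that extraction step is
# assumed, not specified in the source.

def mahalanobis_scores(train_feats, test_feats, eps=1e-6):
    """Distance of each test explanation from the distribution of trusted explanations."""
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + eps * np.eye(train_feats.shape[1])
    inv_cov = np.linalg.inv(cov)
    diffs = test_feats - mean
    # Per-row quadratic form diff^T * inv_cov * diff, then square root.
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))

# Trusted episodes: the action looked good because it actually protected the diamond.
trusted_reasons = np.random.randn(500, 16)   # placeholder explanation vectors
# New episode: did the action look good for the usual reason, or a new one
# (e.g., the sensors were tampered with)?
new_reasons = np.random.randn(10, 16)

scores = mahalanobis_scores(trusted_reasons, new_reasons)
threshold = np.quantile(mahalanobis_scores(trusted_reasons, trusted_reasons), 0.99)
for i, s in enumerate(scores):
    label = "ANOMALOUS reason" if s > threshold else "usual reason"
    print(f"action {i}: score={s:.2f} -> {label}")
```

The design choice here is only that the detector scores the *explanation* of the action's high rating, not the action or the predicted sensor readings themselves, which is the distinction the speaker is drawing.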