Infinite Utility
In infra-Bayesianism, if you act contrary to what the predictor thinks, then you get infinite utility. In the transparent Newcomb problem, where the outcome depends on your action, when the box is full everything actually does work. But there's no possible way for the predictor to know whether or not you will falsify its prediction. What we discovered is a certain condition, which we call pseudo-causality, which kind of selects, or restricts, the types of problems where this thing can work. And pseudo-causality basically says that whatever happens cannot be affected by your choice.
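A minimal sketch of the trick described above, not taken from the episode: in the vanilla (non-transparent) Newcomb setup, treat each possible prediction as a separate hypothesis, assign infinite utility to any branch where the agent acts contrary to the prediction, and evaluate actions by their worst case over hypotheses. The payoff values and the maximin evaluation rule here are illustrative assumptions, not the formal infra-Bayesian definitions.

```python
# Sketch of the "Nirvana trick": falsifying the predictor yields +infinity,
# so the worst case over hypotheses never comes from that branch.
# Payoffs and the non-transparent setup are illustrative assumptions.
import math

ACTIONS = ["one-box", "two-box"]
PREDICTIONS = ["one-box", "two-box"]  # each prediction is one hypothesis

def utility(prediction: str, action: str) -> float:
    # Acting contrary to the prediction is rewarded with infinite utility.
    if action != prediction:
        return math.inf
    # Prediction correct: standard Newcomb-style payoffs (illustrative values).
    return 1_000_000 if action == "one-box" else 1_000

def worst_case(action: str) -> float:
    # Worst-case (infimum) utility over the hypotheses for this action.
    return min(utility(p, action) for p in PREDICTIONS)

for a in ACTIONS:
    print(a, worst_case(a))
print("maximin choice:", max(ACTIONS, key=worst_case))
```

Because the falsified-prediction branches carry infinite utility, the worst case for each action is always realized on a branch where the predictor is right, so one-boxing's worst case (1,000,000) beats two-boxing's (1,000) and the maximin rule one-boxes.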