Interpolation on the Native Data Domain Is Not Neural Networks, Right?
Chollet thinks that motivated reasoning is the primary obstacle to getting people to wake up to the fact that neural networks are poorly suited to discrete problems. Program synthesis would be extremely brittle at classifying cats versus dogs, or even MNIST, and deep learning would be very brittle at predicting the digits of pi, or prime numbers, or sorting a list. Chollet fundamentally thinks that there are two types of thinking, type one and type two, and that every single thought in our minds is not simply one or the other; rather, it's a combination of both kinds.
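To make the contrast concrete, here is a minimal, hypothetical Python sketch (not from the episode): a discrete program solves sorting exactly for any input, while a toy "interpolation-style" learner that only memorises training pairs breaks on anything it hasn't seen. The class and example data below are illustrative assumptions, not anything Chollet presents.

```python
def symbolic_sort(xs):
    """A discrete, type-2-style program: exact, and generalises to every possible list."""
    return sorted(xs)


class MemorisingLearner:
    """Toy stand-in for pure pattern interpolation: it stores seen (input -> output)
    pairs and can only echo back what it has already seen."""

    def __init__(self):
        self.table = {}

    def fit(self, examples):
        # Memorise each (input list, sorted list) pair from the training data.
        for xs, ys in examples:
            self.table[tuple(xs)] = ys

    def predict(self, xs):
        # Returns None for any list outside the training set: brittle on discrete tasks,
        # because there is no smooth neighbourhood to interpolate from.
        return self.table.get(tuple(xs), None)


if __name__ == "__main__":
    train = [(xs, sorted(xs)) for xs in [[3, 1, 2], [5, 4], [9, 7, 8, 6]]]
    learner = MemorisingLearner()
    learner.fit(train)

    seen, unseen = [3, 1, 2], [10, 2, 7]
    print(symbolic_sort(unseen))   # [2, 7, 10] -- the program always works
    print(learner.predict(seen))   # [1, 2, 3]  -- memorised from training
    print(learner.predict(unseen)) # None       -- fails off the training distribution
```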