How to Optimize a Neural Net?
Backpropagation is inherently sequential, because you have to compute the chain of gradients. If we were able to do it in parallel, it probably would be more efficient and better. So you want this parallelism. You also want to avoid possible gradient issues. And that's where many optimization techniques came in, starting essentially from Yann LeCun's own thesis, where he mentioned an alternative to backpropagation that was later called target propagation. All it meant is another view of the problem. Instead of optimizing the objective of the neural net with the weights of the neural net as the unknown variables you're trying to find, you introduce auxiliary variables with different names.
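As a rough sketch of that reformulation (not LeCun's exact method; the two-layer net, tanh layers, and penalty weight below are illustrative assumptions), the idea is to give the hidden activation its own variable and relax the layer equation into a penalty, so the per-layer subproblems decouple:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(z, W):
    # One layer: linear map followed by tanh (an assumed choice).
    return np.tanh(z @ W)

# Toy data and a two-layer net; all sizes are arbitrary, for illustration only.
x = rng.normal(size=(16, 4))     # batch of inputs
y = rng.normal(size=(16, 3))     # regression targets
W1 = 0.1 * rng.normal(size=(4, 5))
W2 = 0.1 * rng.normal(size=(5, 3))

# Standard view: one nested objective over the weights only, which forces a
# sequential chain of gradients through every layer.
def nested_objective(W1, W2):
    return np.mean((layer(layer(x, W1), W2) - y) ** 2)

# Auxiliary-variable view: z1 is a new unknown standing in for the hidden
# activation, and the constraint z1 = layer(x, W1) is relaxed into a penalty.
rho = 1.0                        # penalty weight (arbitrary)
z1 = layer(x, W1)                # initialize the auxiliary variable

def relaxed_objective(W1, W2, z1):
    fit = np.mean((layer(z1, W2) - y) ** 2)   # depends only on (W2, z1)
    tie = np.mean((z1 - layer(x, W1)) ** 2)   # depends only on (W1, z1)
    return fit + rho * tie

# With z1 held fixed, the W2 term and the W1 term are separate subproblems,
# which is what opens the door to updating layers in parallel rather than
# running one long chain of gradients end to end.
print(nested_objective(W1, W2), relaxed_objective(W1, W2, z1))
```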