The Pros and Cons of Recurrent-Based Meta-Learning Methods
I think there are pros and cons to both. In some ways, gradient descent may be slightly more robust to things that are a little out of distribution, because if you run gradient descent on the dataset you're given at test time, you at least know that it will descend on some objective and improve on it, even if that data is far from the tasks the model was trained on. If you feed out-of-distribution data to an RNN, you really have no idea what it's going to give you. Then again, if you have a massive data distribution, that's not necessarily a problem to be concerned about. So from that standpoint...
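To make the contrast concrete, here is a minimal sketch in PyTorch; it is not from the talk, and all names (adapt_by_gradient, RecurrentLearner, the toy regression task) are illustrative. It contrasts MAML-style test-time adaptation, where each gradient step explicitly descends a loss on whatever data is given, with an RNN-based learner whose "adaptation" lives entirely in its hidden state and carries no such guarantee on out-of-distribution inputs.

    import copy
    import torch
    import torch.nn as nn

    def adapt_by_gradient(model, x_support, y_support, steps=5, lr=0.01):
        # MAML-style test-time adaptation: each SGD step explicitly descends
        # an objective (MSE here), so the loss on the given data improves
        # even when that data is out of distribution.
        adapted = copy.deepcopy(model)
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(adapted(x_support), y_support)
            loss.backward()
            opt.step()
        return adapted

    class RecurrentLearner(nn.Module):
        # RNN-based meta-learner: "adaptation" is just the evolution of the
        # hidden state, so nothing constrains what it computes on inputs far
        # from its training distribution.
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x_support, y_support, x_query):
            # Feed (x, y) support pairs, then the query with a zero
            # placeholder label; predict from the final hidden output.
            support = torch.cat([x_support, y_support], dim=-1)
            query = torch.cat([x_query, torch.zeros_like(x_query)], dim=-1)
            seq = torch.cat([support, query], dim=0).unsqueeze(0)
            out, _ = self.rnn(seq)
            return self.head(out[0, -1])

    # Toy usage: a small support set from a linear task.
    x_s = torch.randn(5, 1)
    y_s = 3 * x_s + 0.5
    adapted = adapt_by_gradient(nn.Linear(1, 1), x_s, y_s)  # descends a known loss
    pred = RecurrentLearner()(x_s, y_s, torch.randn(1, 1))  # output unconstrained

The design point the speaker is making maps onto the two pieces: adapt_by_gradient optimizes an explicit objective no matter what data arrives, while RecurrentLearner's behaviour on unfamiliar inputs depends entirely on what its weights happened to internalize during meta-training.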