Scaling Distributed Computing
One of the exciting possibilities of self-supervised learning is scaling everything up by several orders of magnitude. Do you think there are some interesting tricks for doing large-scale distributed compute? Or is that really outside of even deep learning, more a matter of hardware engineering?