Machine Learning at Lower Precision?
In regular programming, I'm usually choosing between floats and doubles for my floating-point values. I've heard that in machine learning they sometimes use 16-bit or even 8-bit floating-point numbers. Why does machine learning use lower precision values? And how would I choose what precision to use?

We generally tend to use 32-bit floats a lot in machine learning, but as you mention, 8-bit and 16-bit are also quite common.
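To make the trade-off concrete, here is a minimal sketch (not from the episode) using NumPy's float64, float32, and float16 types. It shows how fewer bits buy you fewer significant digits and a smaller representable range; the sample value is arbitrary and just for illustration.

```python
import numpy as np

# The same value stored at three precisions: float64 is like a C double,
# float32 like a C float, and float16 is the half precision common in ML.
x = 0.1234567891234567
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(dtype.__name__, dtype(x), "bits:", info.bits, "max:", info.max)

# Lower precision also overflows sooner: float16 tops out near 65504,
# so a value like 70000 becomes infinity at that precision.
print(np.float16(70000.0))  # -> inf
```

Running this prints roughly 16 significant digits for float64, about 7 for float32, and only 3 to 4 for float16, which is why the choice of precision matters for what you are computing.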