
141 - Building an open source LM, with Iz Beltagy and Dirk Groeneveld
NLP Highlights
How to Balance Training Budget and Data Asset Size
The training framework is built on PyTorch. We're using the PaLM architecture as published by Google. One strategic bet here is that we're betting on the Lion optimizer instead of the Adam optimizer. The other big deviation is Reddit data. There was no Reddit data explicitly captured in LLaMA training, but Reddit is a pretty large, fairly high-quality source if you filter it right.
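To make the optimizer bet concrete, here is a minimal sketch of the Lion update rule (from the "Symbolic Discovery of Optimization Algorithms" paper) written as a drop-in PyTorch optimizer. This is not the speakers' actual training code; the class, the hyperparameter values, and the model name in the usage comment are illustrative assumptions.

```python
import torch
from torch.optim import Optimizer


class Lion(Optimizer):
    """Minimal sketch of the Lion update rule as a PyTorch optimizer."""

    def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
        defaults = dict(lr=lr, betas=betas, weight_decay=weight_decay)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr = group["lr"]
            beta1, beta2 = group["betas"]
            wd = group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "exp_avg" not in state:
                    state["exp_avg"] = torch.zeros_like(p)
                m = state["exp_avg"]
                # Decoupled weight decay, as in AdamW.
                p.mul_(1 - lr * wd)
                # Update direction is the sign of interpolated momentum,
                # so every parameter moves by +/- lr regardless of gradient scale.
                update = m.mul(beta1).add(p.grad, alpha=1 - beta1).sign_()
                p.add_(update, alpha=-lr)
                # Momentum is tracked with the second beta.
                m.mul_(beta2).add_(p.grad, alpha=1 - beta2)
        return loss


# Usage (illustrative): swap in for torch.optim.AdamW on any model.
# model = SomeTransformer(...)
# optimizer = Lion(model.parameters(), lr=3e-4, weight_decay=0.1)
```

The appeal of Lion in this setting is that it keeps only one momentum buffer per parameter instead of Adam's two, and the sign-based update tends to want a smaller learning rate with larger weight decay; the values above are placeholders, not the project's settings.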