
141 - Building an open source LM, with Iz Beltagy and Dirk Groeneveld
NLP Highlights
How to Balance Training Budget and Dataset Size
The training framework is built on PyTorch. We're using the PaLM architecture as published by Google. One strategic bet here is that we're betting on the Lion optimizer instead of the Adam optimizer. The other big deviation is Reddit data. There was no Reddit data explicitly captured in LLaMA training, but Reddit is a pretty large, fairly high-quality source if you filter it right.
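To make the optimizer bet concrete, here is a minimal, hypothetical sketch of the Lion update rule (Chen et al., 2023) written as a drop-in PyTorch optimizer. This is not code from the project's actual training framework; the `Lion` class, its hyperparameters, and the toy usage below are illustrative assumptions, showing how Lion differs from Adam by updating every parameter with only the sign of an interpolated momentum term.

```python
# Minimal sketch of the Lion update rule as a torch.optim.Optimizer.
# Illustrative only; not the OLMo training code.
import torch
from torch.optim import Optimizer


class Lion(Optimizer):
    def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
        defaults = dict(lr=lr, betas=betas, weight_decay=weight_decay)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            lr = group["lr"]
            beta1, beta2 = group["betas"]
            wd = group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "momentum" not in state:
                    state["momentum"] = torch.zeros_like(p)
                m = state["momentum"]
                # Update direction is only the sign of the interpolated momentum,
                # so every coordinate moves by +/- lr, plus decoupled weight decay.
                update = (beta1 * m + (1 - beta1) * p.grad).sign()
                p.add_(update + wd * p, alpha=-lr)
                # Momentum is an EMA of the gradient using the second beta.
                m.mul_(beta2).add_(p.grad, alpha=1 - beta2)


# Toy usage: constructed exactly where AdamW would otherwise be.
model = torch.nn.Linear(16, 16)
opt = Lion(model.parameters(), lr=1e-4, weight_decay=0.1)
loss = model(torch.randn(4, 16)).pow(2).mean()
loss.backward()
opt.step()
```

Compared with Adam, this keeps one momentum buffer per parameter instead of two, which is part of why Lion is attractive at large model scale; the trade-off is that learning rate and weight decay typically need retuning.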