Will We Train Internal Large Models Yet?
We're trying to solve alignment. Of course, we're thinking about AGI. Our goal is definitely not to push the state of the art. We have no interest in building large models for large models' sake. That's just expensive. Will we train internal large models yet? Probably, because we want to experiment on large models and things like that. I mean, I think we can catch up to GPT-3, but God knows what GPT-4 is going to be like. And it will probably be a while until we can catch up to that.