Is That Part of the Actual Training or Is That Like the Fine Tuning?
Is that part of the actual training, or is that like the fine-tuning at the end? I think I've seen this so far in people building product, and there have been people thinking about these kinds of longer-term alignment questions. Our colleagues on the longer-term alignment side wrote a paper on training language models to critique, which is a step in the scalable alignment direction. It's interesting to think about how you would decompose training.