
ChatGPT and InstructGPT: Aligning Language Models to Human Intention
Deep Papers
Is That Part of the Actual Training or Is That Like the Fine Tuning?
Is that part of the actual training, or is that like the fine-tuning at the end? So far, I've mostly seen people building products, but there have also been people thinking about these longer-term alignment questions. Our colleagues on the longer-term alignment side wrote a paper on critiques, training language models to critique, which is a step in the scalable alignment direction. It's interesting to think about how you would decompose training.