
Weak-To-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision

AI Safety Fundamentals: Alignment

CHAPTER

Training Reward Models for Assistant Model Optimization

This chapter explores training a reward model on human-assistant dialogues to predict which of two completions a human prefers, then using reinforcement learning against that reward model to optimize the assistant. It also covers fine-tuning a strong model on labels produced by a weaker supervisor and studying how well the resulting performance generalizes across tasks, with broadly positive results.
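Since the summary names these objectives only at a high level, here is a minimal sketch, assuming a PyTorch setup: a pairwise preference loss of the kind commonly used for reward-model training, and a cross-entropy loss against weak labels for the strong student in the weak-to-strong setting. The function names and toy tensors are illustrative assumptions, not taken from the source.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry style) objective: the reward model should score
    the human-preferred completion higher than the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def weak_label_loss(student_logits: torch.Tensor, weak_labels: torch.Tensor) -> torch.Tensor:
    """Weak-to-strong fine-tuning objective: the strong student is trained with
    cross-entropy against hard labels produced by a weaker supervisor."""
    return F.cross_entropy(student_logits, weak_labels)

# Toy usage with made-up numbers.
chosen, rejected = torch.tensor([1.2, 0.4]), torch.tensor([0.3, 0.8])
print(preference_loss(chosen, rejected))

logits = torch.randn(4, 2)         # strong model's logits on a binary task
weak = torch.randint(0, 2, (4,))   # labels predicted by the weak supervisor
print(weak_label_loss(logits, weak))
```

In practice the reward-model scores come from a scalar head on the assistant's base model, and the reinforcement-learning step then optimizes the assistant against those scores; the sketch above only shows the two supervised objectives the summary mentions.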
