
Big Data, Reinforcement Learning and Aligning Models
The AI Buzz from Lightning AI
What is a reward model and how does it enable alignment?
Luca explains how a reward model is trained to score a model's outputs, and how PPO-style reinforcement learning then uses those scores to align the model's behavior with what is desired.
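As a rough illustration of the first half of that idea, here is a minimal sketch of reward-model training with a pairwise (Bradley-Terry style) preference loss: the model assigns a scalar score to each response, and the loss pushes the preferred response's score above the rejected one's. The class and parameter names (RewardModel, embed_dim, the stand-in random embeddings) are assumptions for illustration, not anything described in the episode.

```python
# Minimal sketch: reward model scoring responses, trained on preference pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        # One scalar reward per response in the batch.
        return self.scorer(response_embedding).squeeze(-1)

def pairwise_loss(reward_chosen: torch.Tensor,
                  reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the human-preferred ("chosen") response
    # should receive a higher score than the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = RewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-in embeddings for (chosen, rejected) response pairs; in practice
    # these would come from a language-model encoder over real comparisons.
    chosen = torch.randn(32, 128)
    rejected = torch.randn(32, 128)

    for step in range(100):
        loss = pairwise_loss(model(chosen), model(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print("final pairwise loss:", loss.item())
```

In the PPO-style step the episode describes, this trained scorer would then provide the reward signal used to fine-tune the policy model's generations.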