
Big Data, Reinforcement Learning and Aligning Models

The AI Buzz from Lightning AI


What is a reward model and how does it enable alignment?

Luca explains how a reward model is trained to score model outputs, and how PPO-style reinforcement learning then uses those scores to align the model's capabilities with desired behavior.
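The reward-model idea described above can be illustrated with a minimal sketch. This is a toy stand-in, not the method from the episode: real reward models are transformers over text, whereas this example uses a linear scorer over hypothetical hand-made feature vectors, trained on preference pairs with the Bradley-Terry loss (maximize the log-probability that the preferred output scores higher). The resulting score is what a PPO-style loop would use as its reward signal.

```python
import math

def score(w, x):
    """Reward model: linear scorer r(x) = w . x (toy stand-in for a network)."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """Fit w on (chosen, rejected) preference pairs with the
    Bradley-Terry loss: -log(sigmoid(r(chosen) - r(rejected)))."""
    w = [0.0] * dim
    for _ in range(epochs):
        for chosen, rejected in pairs:
            margin = score(w, chosen) - score(w, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))  # P(chosen preferred)
            # Gradient step: d(-log p)/dw = -(1 - p) * (chosen - rejected)
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])
    return w

# Hypothetical feature vectors for outputs, e.g. [helpfulness, verbosity]
pairs = [
    ([1.0, 0.2], [0.1, 0.9]),  # helpful & concise preferred over the reverse
    ([0.9, 0.1], [0.2, 0.8]),
]
w = train_reward_model(pairs, dim=2)
# After training, the reward model ranks the preferred output higher;
# PPO would maximize this score (minus a KL penalty to the base model).
assert score(w, [1.0, 0.2]) > score(w, [0.1, 0.9])
```

The assertion at the end checks the property alignment relies on: outputs humans preferred receive higher reward, so an RL policy optimized against this score is pushed toward the desired behavior.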
