
Big Data, Reinforcement Learning and Aligning Models
The AI Buzz from Lightning AI
Should safety and reward models be released publicly?
Luca and Josh discuss the potential value of releasing safety and reward models publicly, and how alignment work enables safer model deployments.