AI Safety Fundamentals: Alignment cover image

ML Systems Will Have Weird Failure Modes

AI Safety Fundamentals: Alignment

00:00

Optimal Actions for Intrinsic and Extrinsic Rewards in Model Training and Deployment

Exploring the impact of model actions during training on future outputs, comparing myopic and non-myopic approaches, and visualizing strategies for maximizing rewards in model deployment.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app