AI Alignment as a Solvable Problem | Leopold Aschenbrenner & Richard Hanania

CSPI Podcast

The Problem With Reinforcement Learning

The model will do what you train it to do, but it might behave in weird ways. For example, if you just train it with thumbs up, thumbs down, the thing it might learn is not to be honest, but to say what a human would think. So if you trained an LLM in the 1300s, the model would definitely say God exists. So even honesty is quite hard. There's some cool stuff you can do, and we can talk about that, but you don't get it by default.

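A minimal sketch (not from the episode) of the failure mode described above: with thumbs-up/thumbs-down feedback, the training signal is agreement with the rater's beliefs, not truth. All names and the example question here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Rater:
    # What this (hypothetical) rater believes the answer to each question is.
    beliefs: dict[str, str]

def preference_reward(rater: Rater, question: str, answer: str) -> float:
    """Thumbs up (1.0) if the answer matches the rater's belief, else 0.0.
    Note that ground truth never enters this function."""
    return 1.0 if rater.beliefs.get(question) == answer else 0.0

# A hypothetical 14th-century rater, as in the example from the episode.
medieval_rater = Rater(beliefs={"Does God exist?": "yes"})

# Whatever the truth is, the answer that maximizes reward is the one
# that echoes the rater's belief.
for candidate in ["yes", "no", "unknowable"]:
    print(candidate, preference_reward(medieval_rater, "Does God exist?", candidate))
# -> only "yes" gets reward 1.0; the signal selects for approval, not honesty.
```

The point of the sketch is that honesty is not an optimum of this objective: a policy trained against `preference_reward` converges toward whatever the rater already believes.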