
Rohin Shah

TalkRL: The Reinforcement Learning Podcast


The Alignment Forum - Is That Right?

Human Compatible is a pretty good suggestion. There are other books as well: Superintelligence is the philosophy side; The Alignment Problem by Brian Christian; Life 3.0 by Max Tegmark. The AGI Safety Fundamentals course, just Google it, look at that curriculum and then read the things on there, is probably actually my advice.

