Rohin Shah

TalkRL: The Reinforcement Learning Podcast

CHAPTER

The Alignment Forum - Is That Right?

Human Compatible is a pretty good suggestion. There are other books as well: Superintelligence is the philosophy side, The Alignment Problem by Brian Christian, Life 3.0 by Max Tegmark. There's also the AGI Safety Fundamentals course; just Google it, look at that curriculum, and then read the things on there. That's probably actually my advice.

