

The Alignment Problem
Book • 2020
Mentioned in 7 episodes

Mentioned by Alex Hanna as exploring the challenge of aligning AI with human values to avoid potential harm.
Episode: The AI Con (19 snips)

Mentioned by Michael Littman as a book he is currently reading, focusing on AI alignment and societal implications.
Episode: #144 – Michael Littman: Reinforcement Learning and the Future of AI

Recommended by Rich as the number one book to read on the topic of AI alignment.
Episode: AI 2027

Recommended by Autumn Nash for its insightful exploration of the challenges in aligning AI with human values.
Episode: Vectorizing Your Databases with Steve Pousty