
2 - Learning Human Biases with Rohin Shah
AXRP - the AI X-risk Research Podcast
I'm in Favor of Human Learning.
Ah, so there was a reason. You were saying that the version that I wrote computed the optimal policy, um, but did not compute, like, a model of the actual human behaviour. And I mean, if you knew that adding this could let you express optimality, why not? I guess I have a vendetta against people learning things, ah, there it is, against people training models that learn things. I'm in favor of human learning. Yeah. Ah, that said, good question. Um, and, like, so: a, it wasn't Boltzmann rational, and b, the computation of the Q values is, like, kind of sketchy.
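For context on the "Boltzmann rational" model mentioned here: under that assumption the human picks each action with probability proportional to the exponential of its Q value, rather than always taking the best action. The sketch below is only an illustration of that standard model, not code from the episode or the paper; the function name, the beta parameter, and the example Q values are all assumed for illustration.

import numpy as np

def boltzmann_policy(q_values, beta=1.0):
    # Boltzmann-rational action distribution: P(a) proportional to exp(beta * Q(s, a)).
    # Large beta approaches the optimal (argmax) policy; beta near 0 approaches uniform random.
    logits = beta * np.asarray(q_values, dtype=float)
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Hypothetical example: three actions with Q values 1.0, 2.0, and 0.5
print(boltzmann_policy([1.0, 2.0, 0.5], beta=2.0))

The point in the conversation is that if the human's behaviour does not actually follow this form, or if the Q values themselves are computed unreliably, inference built on the Boltzmann assumption inherits those problems.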