
2 - Learning Human Biases with Rohin Shah
AXRP - the AI X-risk Research Podcast
00:00
I'm in Favor of Human Learning.
Ah, so there was a reason. You were saying that the version that I wrote computed the optimal policy, but did not compute, like, an optimal model of human behavior. And I mean, if you knew that adding this could let you express optimality, why [inaudible]? I guess I have a vendetta against people learning things, sorry, against people training models that learn things. I'm in favor of human learning. [inaudible] And, like, so (a) it wasn't Boltzmann rational, and (b) the computation of the Q values is, like, kind of sketchy.
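For context, the Boltzmann-rational model mentioned here treats the human as choosing actions with probability proportional to the exponentiated Q-value of each action. A minimal sketch of that standard model (illustrative only, not code from the work discussed; the function name and `beta` parameter are our own):

```python
import numpy as np

def boltzmann_policy(q_values, beta=1.0):
    """Boltzmann-rational action distribution: pi(a) proportional to exp(beta * Q(s, a)).

    As beta -> infinity this recovers the optimal (argmax) policy;
    beta = 0 gives a uniformly random agent.
    """
    logits = beta * np.asarray(q_values, dtype=float)
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: three actions with Q-values 1.0, 2.0, 0.5;
# the highest-Q action gets the most probability mass.
probs = boltzmann_policy([1.0, 2.0, 0.5], beta=2.0)
```

The "computation of the Q values is kind of sketchy" remark refers to the fact that this model presupposes Q-values are available; in practice they must come from some planning or value-iteration procedure, which is where the modeling difficulty lies.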