
3 - Negotiable Reinforcement Learning with Andrew Critch
AXRP - the AI X-risk Research Podcast
The Prior's Assumption
LZ: I hope our listeners are interested in the philosophy of Bayesian disagreement. When I think of, like, the priors for a person, and like, how to sort of make sense of them?

LZ: To me, it seems like this has to involve some amount of, like, epistemological knowledge that you can update. If somebody comes from a different culture, where they, like, understand things differently, like, hopefully you can, like, reason it out with them by some means other than Bayesian updating. And so I'd just argue that when you're changing which Bayesian agent is your prior, there's an ethical issue at play.
Transcript