3 - Negotiable Reinforcement Learning with Andrew Critch

AXRP - the AI X-risk Research Podcast

The Prior's Assumption

LZ: I hope our listeners are interested in the philosophy of Bayesian disagreement. When I think of the priors for a person, how do I make sense of them?

LZ: To me, it seems like this has to involve some amount of epistemological knowledge that you can update. If somebody comes from a different culture, where they understand things differently, hopefully you can reason it out with them by some means other than Bayesian updating. And so I'd argue that when you're changing which Bayesian agent is your prior, there's an ethical issue at play.
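The point about which prior you start from can be made concrete with a small sketch. This is not from the episode, just a minimal toy model (a coin that is either fair or heads-biased): two Bayesian agents see the exact same evidence, but because they start from different priors over the hypotheses, they end up with sharply different posteriors.

```python
def update(prior, heads, tails):
    """Bayesian update over hypotheses, where each hypothesis
    is a value of P(heads). `prior` maps hypothesis -> probability."""
    unnorm = {p: w * (p ** heads) * ((1 - p) ** tails)
              for p, w in prior.items()}
    z = sum(unnorm.values())
    return {p: v / z for p, v in unnorm.items()}

# Same data for both agents: 8 heads, 2 tails.
# Agent A starts out nearly certain the coin is fair (P(heads)=0.5);
# Agent B starts out nearly certain it is biased (P(heads)=0.9).
agent_a = update({0.5: 0.99, 0.9: 0.01}, heads=8, tails=2)
agent_b = update({0.5: 0.01, 0.9: 0.99}, heads=8, tails=2)

print(agent_a[0.9])  # A still thinks "biased" is unlikely
print(agent_b[0.9])  # B is now almost sure the coin is biased
```

After identical evidence, agent A assigns the biased hypothesis only a few percent while agent B assigns it over 99% — the disagreement traces back entirely to the choice of prior, which is the sense in which that choice carries normative weight.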
