3 - Negotiable Reinforcement Learning with Andrew Critch

AXRP - the AI X-risk Research Podcast

CHAPTER

The Prior's Assumption

LZ: I hope our listeners are interested in the philosophy of Bayesian disagreement. When I think of, like, the priors for a person, and, like, how to sort of make sense of them?

LZ: To me, it seems like this has to involve some amount of, like, epistemological knowledge that you can update. If somebody comes from a different culture, where they, like, understand things differently, hopefully you can reason it out with them by some means other than Bayesian updating. And so I'd just argue that when you're changing which Bayesian agent is your prior, there's an ethical issue at play.
