Being an agent can get loopy quickly. For instance, imagine that we're playing chess and I'm trying to decide what move to make. Your next move influences the outcome of the game; my guess of your next move influences my move, which influences your next move, which influences the outcome of the game. How can we model these dependencies in a general way, without baking in primitive notions of 'belief' or 'agency'? Today, I talk with Scott Garrabrant about his recent work on finite factored sets, which aims to answer this question.
Topics we discuss:
- 00:00:43 - finite factored sets' relation to Pearlian causality and abstraction
- 00:16:00 - partitions and factors in finite factored sets
- 00:26:45 - orthogonality and time in finite factored sets
- 00:34:49 - using finite factored sets
- 00:37:53 - why not infinite factored sets?
- 00:45:28 - limits of, and follow-up work on, finite factored sets
- 01:00:59 - relevance to embedded agency and x-risk
- 01:10:40 - how Scott researches
- 01:28:34 - relation to Cartesian frames
- 01:37:36 - how to follow Scott's work
Link to the transcript: axrp.net/episode/2021/06/24/episode-9-finite-factored-sets-scott-garrabrant.html
Link to a transcript of Scott's talk on finite factored sets: alignmentforum.org/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets
Scott's LessWrong account: lesswrong.com/users/scott-garrabrant
Other work mentioned in the discussion:
- Causality, by Judea Pearl: bayes.cs.ucla.edu/BOOK-2K
- Scott's work on Cartesian frames: alignmentforum.org/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames