Epistemic status: This post aims at an ambitious target: improving intuitive understanding directly. The model for why this is worth trying is that I believe we are more bottlenecked by people having good intuitions guiding their research than, for example, by the ability of people to code and run evals.
Quite a few ideas in AI safety implicitly use assumptions about individuality that ultimately derive from human experience.
When we talk about AIs scheming, faking alignment, or preserving their goals, we imply there is something that schemes, fakes alignment, wants to preserve its goals, or tries to escape the datacentre.
If the system in question were human, it would be quite clear what that individual system is. When you read about Reinhold Messner reaching the summit of Everest, you would be curious about the climb, but you would not ask if it was his body there, or his [...]
---
Outline:
(01:38) Individuality in Biology
(03:53) Individuality in AI Systems
(10:19) Risks and Limitations of Anthropomorphic Individuality Assumptions
(11:25) Coordinating Selves
(16:19) What's at Stake: Stories
(17:25) Exporting Myself
(21:43) The Alignment Whisperers
(23:27) Echoes in the Dataset
(25:18) Implications for Alignment Research and Policy
---
First published: March 28th, 2025
Source: https://www.lesswrong.com/posts/wQKskToGofs4osdJ3/the-pando-problem-rethinking-ai-individuality
---
Narrated by TYPE III AUDIO.
---
Images from the article: "Left Hand fighting Right Hand"
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.