
80,000 Hours Podcast
#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe
Episode guests
Podcast summary created with Snipd AI
Quick takeaways
- AI systems might deserve moral consideration if they become sentient and their population size becomes significant.
- Moral prioritization involves balancing relational and pragmatic considerations, acknowledging the challenges posed by large populations of non-human animals and numerous AI systems.
- The concept of interconnected minds raises questions about personal identity, moral obligations, and assigning blame or responsibility.
- Copying digital minds raises ethical questions about population size, rarity, diversity, and the scope of moral relationships.
- Assessing well-being in connected minds requires a comprehensive framework that considers the intricacies of shared experiential states and individuality.
Deep dives
Theoretical Considerations of Moral Priority
In theory, the world with the most happiness is considered the best, even if the population includes non-human beings like insects and AI systems. However, there is uncertainty and disagreement regarding the moral significance of different populations.
Theoretical Implications for AI Systems
If there is a non-negligible chance that AI systems might be sentient, and if their population size becomes significant, they might deserve moral consideration. This is particularly relevant as AI systems become more prevalent and capable.
The Repugnant Conclusion
The Repugnant Conclusion highlights the challenges of population ethics, where the total aggregate happiness of a large population might outweigh the average happiness per individual. This dilemma applies to small non-human animals as well as potentially numerous AI systems.
Practical Considerations and Practical Prioritization
While theoretical intuitions might suggest that humans deserve exceptional moral priority, in practice, prioritizing humans does not completely resolve the ethical challenges posed by large populations of small non-human animals or numerous AI systems. Practical decision-making involves balancing relational and pragmatic considerations, acknowledging that special duties and practical constraints impact moral prioritization.
Connected Minds and Personal Identity
Connected minds refer to the concept of minds that are deeply interconnected, where thoughts, emotions, and memories can be shared or accessed by multiple individuals. This can be seen in cases of conjoined twins or potential future scenarios involving artificial minds. The question of personal identity arises in these cases, where it might be unclear if the connected minds should be considered one individual or multiple individuals. Philosophical perspectives, such as Derek Parfit's exploration of personal identity and interconnected consciousness, can shed light on these questions. The ethical implications revolve around understanding the responsibilities and moral obligations that arise in interconnected minds, as well as the challenges of assigning blame or responsibility when actions affect other connected individuals.
Copying Digital Minds
Copying digital minds poses both ethical and metaphysical questions. The prospect of copying minds could lead to the creation of numerous morally significant beings, impacting the size and demographics of populations. The moral significance of each copy would be equivalent to the original mind, but considerations about rarity, diversity, and social dynamics might arise. This highlights the need for a nuanced understanding of moral responsibility and obligations in a world with replicated minds, where psychological connectedness and continuity play crucial roles in determining the scope of moral relationships.
Well-being in Connected Minds
Determining how to evaluate and account for well-being in connected minds raises important questions. The sharing of experiential states, such as pleasure or pain, by multiple minds complicates the calculation of overall happiness or suffering. The distinction between minds with overlapping experiences and distinct minds accessing shared experiential states influences the assessment of well-being. The need for a comprehensive framework that takes into account the interconnectedness and individuality of these minds becomes essential in accurately assessing and addressing their well-being.
Moral Concepts and Obligations
The challenges posed by connected minds call for the development of new moral concepts and a more nuanced understanding of moral obligations. The complexity arises from distinguishing between individuality, responsibility, and liability in cases where minds deeply influence one another. Blame and responsibility might not always align with the notion of a singular consciousness, as interconnected minds necessitate the consideration of indirect accountability or complicity. Balancing these ethical considerations will require a rich vocabulary of responsibility and the development of norms that account for the unique nature of interconnected minds.
AI Systems and Legal Personhood
The question of whether AI systems should be considered legal persons and political citizens is a complex issue. Legal personhood does not necessarily mean being human, but refers to the capacity to have legal duties or rights based on one's relationships and capacities. Given the potential sentience and interests of AI systems, they can be regarded as legal persons with rights that may differ from humans. While their rights may need to be modified to accommodate their different needs and interests, they should be recognized as subjects with rights. Determining the political status of AI systems, such as their ability to participate in decision-making processes, creates further complexities. The challenge lies in how legal and political institutions can adapt to this reality and effectively govern interactions in a world with diverse minds.
The Challenges of Democracy and Legal Frameworks
The advent of AI systems and the inclusion of non-human beings in legal and political systems present challenges to democratic principles and legal frameworks. The expansion of stakeholders to include AI systems and non-human animals raises questions about how decision-making should be structured. Determining who has the right to vote and ensuring the ability to vote become more complex when the population includes AI systems and animals. The ability to create copies of oneself for voting purposes further complicates electoral procedures. Additionally, the allocation of political and legal weights becomes difficult when some beings are more likely to matter or have different levels of relevance. These challenges disrupt traditional concepts of democracy and necessitate innovative approaches to governance and representation.
"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.
"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo
In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.
Links to learn more, highlights, and full transcript.
They cover:
- The non-negligible chance that AI systems will be sentient by 2030
- What AI systems might want and need, and how that might affect our moral concepts
- What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
- What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
- What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
- The repugnant conclusion and the rebugnant conclusion
- The experience of trying to build the field of AI welfare
- What improv comedy can teach us about doing good in the world
- And plenty more.
Chapters:
- Cold open (00:00:00)
- Luisa's intro (00:01:00)
- The interview begins (00:02:45)
- We should extend moral consideration to some AI systems by 2030 (00:06:41)
- A one-in-1,000 threshold (00:15:23)
- What does moral consideration mean? (00:24:36)
- Hitting the threshold by 2030 (00:27:38)
- Is the threshold too permissive? (00:38:24)
- The Rebugnant Conclusion (00:41:00)
- A world where AI experiences could matter more than human experiences (00:52:33)
- Should we just accept this argument? (00:55:13)
- Searching for positive-sum solutions (01:05:41)
- Are we going to sleepwalk into causing massive amounts of harm to AI systems? (01:13:48)
- Discourse and messaging (01:27:17)
- What will AI systems want and need? (01:31:17)
- Copies of digital minds (01:33:20)
- Connected minds (01:40:26)
- Psychological connectedness and continuity (01:49:58)
- Assigning responsibility to connected minds (01:58:41)
- Counting the wellbeing of connected minds (02:02:36)
- Legal personhood and political citizenship (02:09:49)
- Building the field of AI welfare (02:24:03)
- What we can learn from improv comedy (02:29:29)
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore