Highlights: #173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe
Dec 14, 2023
Jeff Sebo, expert on digital minds and preventing moral catastrophes, discusses extending moral consideration to AI systems, determining the moral weight of sentient AI, assessing how likely AI systems are to satisfy the conditions for consciousness, the repugnant conclusion as it applies to insects and humans, and the risk of unintentionally exploiting AI systems.
Moral consideration should be extended to AI systems if there is a reasonable chance they possess consciousness or sentience, rather than requiring certainty or even probability before including them in the moral circle.
Even a low probability of harming AI systems merits moral consideration, much as a small risk of causing harm is enough to affect the decision not to drive drunk.
Deep dives
Extending Moral Consideration to AI Systems
The general case for extending moral consideration to AI systems rests on the possibility that they could be conscious, sentient, or otherwise morally significant. The speaker argues that if there is a reasonable, non-negligible chance that AI systems possess these features, then moral consideration should be extended to them. This challenges the notion that moral inclusion requires certainty, or even a better-than-even probability, and highlights the importance of caution and humility in assessing the moral standing of AI systems.
Determining Non-negligible Risk
The speaker discusses the concept of non-negligible risk and its application to AI systems, suggesting that even when the probability of an action causing harm is low, the risk can still merit moral consideration. Drawing a parallel to drunk driving, where the chance of causing harm may be only one in 100 or one in 1,000, the speaker argues that even such low risks warrant consideration and may shape decisions about how we treat AI systems.
Assessing AI Sentience and Consciousness
The speaker explores the challenge of assessing how likely AI systems are to become sentient or conscious by 2030, emphasizing the significant disagreement and uncertainty surrounding theories of consciousness. A model based on 12 leading theories indicates that, apart from the requirement of a biological substrate, the other conditions could plausibly be satisfied by AI systems in the near future. The speaker cautions against overly skeptical assumptions and encourages an open-minded approach to AI consciousness.
These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: