Meghan Barrett, an insect neurobiologist, makes the evolutionary case for insect sentience. Jeff Sebo, an ethicist, explores what moral consideration AI systems might merit. David Chalmers weighs the feasibility of artificial consciousness, while Bob Fischer examines the moral weight of animals like chickens. Cameron Meyer Shorb highlights the suffering of wild animals, and Jonathan Birch offers the cautionary tale of newborn pain. The conversation challenges our understanding of consciousness across species and raises deep questions about our moral responsibilities.
03:34:40
INSIGHT
Understanding Artificial Sentience
Artificial sentience is the question of whether non-biological systems can have subjective experiences such as pain or pleasure.
The question extends from wondering what it's like to be a non-human animal to whether AI systems might also have experiences.
INSIGHT
Low Chance of AI Sentience Matters
Even a low but non-negligible chance of AI sentience merits moral consideration, given the potential scale of suffering.
We routinely take small probabilities of serious harm seriously, as with the risks of drunk driving.
INSIGHT
Evolution and Insect Sentience
Sentience may have evolved independently multiple times across lineages, which complicates assessments.
Because insects are closely related to crustaceans, which increasingly receive sentience consideration, insects may merit similar consideration.
Reasons and Persons
Derek Parfit
Derek Parfit's "Reasons and Persons" is a landmark work in contemporary philosophy, profoundly impacting discussions on personal identity, ethics, and rationality. Parfit challenges traditional notions of the self, arguing that our sense of personal identity is less coherent than we assume. He explores the implications of this for our moral obligations, particularly concerning future generations. The book delves into the complexities of decision-making under uncertainty, examining how we should weigh our present interests against the potential consequences of our actions for the future. Parfit's rigorous analysis and thought-provoking arguments have had a lasting influence on various fields, including ethics, political philosophy, and decision theory. His work continues to stimulate debate and inspire new research.
Human Compatible
Artificial Intelligence and the Problem of Control
Stuart J. Russell
In this book, Stuart Russell explores the concept of intelligence in humans and machines, outlining the near-term benefits and potential risks of AI. He discusses the misuse of AI, from lethal autonomous weapons to viral sabotage, and proposes a novel solution by rebuilding AI on a new foundation where machines are inherently uncertain about human preferences. This approach aims to create machines that are humble, altruistic, and committed to pursuing human objectives, ensuring they remain provably deferential and beneficial to humans.
What if there’s something it’s like to be a shrimp — or a chatbot?
For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we’re creating?
We’ve pulled together clips from past conversations with researchers and philosophers who’ve spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty.
Robert Long on what we should picture when we think about artificial sentience (00:02:49)
Jeff Sebo on what the threshold is for AI systems meriting moral consideration (00:07:22)
Meghan Barrett on the evolutionary argument for insect sentience (00:11:24)
Andrés Jiménez Zorrilla on whether there’s something it’s like to be a shrimp (00:15:09)
Jonathan Birch on the cautionary tale of newborn pain (00:21:53)
David Chalmers on why artificial consciousness is possible (00:26:12)
Holden Karnofsky on how we’ll see digital people as... people (00:32:18)
Jeff Sebo on grappling with our biases and ignorance when thinking about sentience (00:38:59)
Bob Fischer on how to think about the moral weight of a chicken (00:49:37)
Cameron Meyer Shorb on the range of suffering in wild animals (01:01:41)
Sébastien Moro on whether fish are conscious or sentient (01:11:17)
David Chalmers on when to start worrying about artificial consciousness (01:16:36)
Robert Long on how we might stumble into causing AI systems enormous suffering (01:21:04)
Jonathan Birch on how we might accidentally create artificial sentience (01:26:13)
Anil Seth on which parts of the brain are required for consciousness (01:32:33)
Peter Godfrey-Smith on uploads of ourselves (01:44:47)
Jonathan Birch on treading lightly around the “edge cases” of sentience (02:00:12)
Meghan Barrett on whether brain size and sentience are related (02:05:25)
Lewis Bollard on how animal advocacy has changed in response to sentience studies (02:12:01)
Bob Fischer on using proxies to determine sentience (02:22:27)
Cameron Meyer Shorb on how we can practically study wild animals’ subjective experiences (02:26:28)
Jeff Sebo on the problem of false positives in assessing artificial sentience (02:33:16)
Stuart Russell on the moral rights of AIs (02:38:31)
Buck Shlegeris on whether AI control strategies make humans the bad guys (02:41:50)
Meghan Barrett on why she can’t be totally confident about insect sentience (02:47:12)
Bob Fischer on what surprised him most about the findings of the Moral Weight Project (02:58:30)
Jeff Sebo on why we’re likely to sleepwalk into causing massive amounts of suffering in AI systems (03:02:46)
Will MacAskill on the rights of future digital beings (03:05:29)
Carl Shulman on sharing the world with digital minds (03:19:25)
Luisa's outro (03:33:43)
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore