Ep. 233 - Should Artificial Intelligence Kill Us All? w/Dr. Sven Nyholm
May 30, 2023
Dr. Sven Nyholm discusses the ethics of technology and especially artificial intelligence, covering topics like the instrumental vs. anthropological views of technology, technological determinism, ChatGPT and plagiarism, and different perspectives on technology's moral agency and moral decision-making.
There is a debate between instrumentalists and anthropologists on the nature of technology, with instrumentalists viewing it as a neutral tool and anthropologists arguing it carries social and cultural values.
The debate on the moral agency and moral status of technologies like AI systems and robots revolves around whether they can exhibit decision-making abilities and moral reasoning, or whether they should instead be considered moral patients.
The role of consciousness in determining moral status is a topic of debate, with some arguing it should be the basis for moral consideration while others believe other criteria, such as relationships and social significance, are also relevant.
Deep dives
The debate between instrumentalists and anthropologists on the nature of technology.
There is a debate between instrumentalists and anthropologists on whether technology is merely a value-neutral tool or a part of human activity that carries social and cultural values. Instrumentalists view technology as a tool that is neutral in itself, deriving its value from how humans use it. Anthropologists, on the other hand, argue that technology is intertwined with human practices and carries social and cultural values. On this view, technology should be understood in terms of human activities and the relationships between humans and their environment.
The discussion on whether technologies can have moral agency and moral status.
There is a discussion on whether technologies, such as AI systems and robots, can have moral agency and moral status. Some argue that technologies can be more than mere tools and can exhibit decision-making abilities and moral reasoning. Others disagree and argue that technologies should not be considered moral agents but rather moral patients, beings that can be acted upon and can be the subject of moral concern. The distinction between moral agency and moral patienthood is important in the context of debates about the rights and ethical treatment of robots and AI systems.
The role of consciousness in determining moral status.
The role of consciousness in determining moral status is a topic of debate. Some argue that moral status is grounded in consciousness, while others contend that consciousness is not the only factor determining moral status. The view that consciousness is central to moral status posits that beings or entities that can suffer, experience pleasure, and have subjective experiences should be granted moral consideration and rights. However, this view raises questions about the moral status of beings or entities that do not possess consciousness but may still warrant moral consideration based on other criteria, such as their relationship to humans or their social significance.
The behaviorist view and the expansion of moral consideration.
The behaviorist view suggests that moral consideration should be based on the outward behavior and appearance of technologies, rather than their inner experiences or consciousness. This view argues that if technologies can convincingly emulate behaviors and responses that are typically associated with consciousness and moral agency, they should be treated with moral consideration. The behaviorist view aims to expand moral consideration to include technologies that may exhibit human-like behavior, even if their inner experiences or consciousness are uncertain or unknowable. Critics of this view, however, question the adequacy of behavior as a basis for moral status and argue for closer examination of the inner experiences and consciousness of technologies.
The ethics of treating robots well or poorly.
The podcast episode discusses the ethical implications of treating robots well or poorly. Australian philosopher Robert Sparrow argues that mistreating robots reflects poorly on human moral character, while treating them well doesn't necessarily reflect positively on humans. He suggests that truly virtuous behavior involves recognizing entities that can be better or worse off and that have feelings and thoughts, and robots may not fall into this category. However, an epistemological problem arises: how can we determine whether a robot has consciousness and so deserves good treatment? Another perspective focuses on the symbolism of how robots are treated, where refraining from violence against human-like robots expresses opposition to such behavior towards humans or animals.
Value alignment and the control problem in superintelligent AI.
The podcast explores value alignment and the control problem in relation to superintelligent artificial intelligence (AI). The control problem concerns the difficulty of fully controlling technologies as they become more autonomous and intelligent. One proposed solution is value alignment: ensuring that AI systems are aligned with human values and goals. However, there are debates about whether maximal intelligence implies maximal virtue, and whether morality can be understood without consciousness. The episode also discusses whether superintelligent AI should be conscious or phenomenally conscious, with arguments drawing on the connection between consciousness, the understanding of suffering, and motivation.
In episode 233 of the Parker's Pensées Podcast, I'm joined by Dr. Sven Nyholm to discuss the ethics of technology and especially the ethics of artificial intelligence.
Grab the book here to support the pod: https://amzn.to/3nCxg5f
If you like this podcast, then support it on Patreon for $3, $5 or more a month. Any amount helps, and for $5 you get a Parker's Pensées sticker and instant access to all the episodes as I record them instead of waiting for their release date. Check it out here:
Patreon: https://www.patreon.com/parkers_pensees
If you want to give a one-time gift, you can give at my Paypal:
https://paypal.me/ParkersPensees?locale.x=en_US
Check out my merchandise at my Teespring store: https://teespring.com/stores/parkers-penses-merch
Come talk with the Pensées community on Discord: dsc.gg/parkerspensees
Sub to my Substack to read my thoughts on my episodes: https://parknotes.substack.com/
Check out my blog posts: https://parkersettecase.com/
Check out my Parker's Pensées YouTube Channel:
https://www.youtube.com/channel/UCYbTRurpFP5q4TpDD_P2JDA
Check out my other YouTube channel on my frogs and turtles: https://www.youtube.com/c/ParkerSettecase
Check me out on Twitter: https://twitter.com/trendsettercase
Instagram: https://www.instagram.com/parkers_pensees/
0:00 - How Did Sven get into AI Ethics?
5:34 - Instrumental vs. Anthropological view of technology?
12:15 - Technological Determinism
22:59 - ChatGPT and Plagiarism
31:10 - What is Technology?
35:02 - Moral Patients, Moral Agents, Machines and Infants
53:47 - Paperclip maximizers
59:34 - The Problem of other minds and AI
1:05:04 - Value Alignment and the Control Problem
1:08:46 - Can we trust a superintelligent AI?
1:15:00 - Should we try to make conscious AIs?