
3 – Value Alignment and the Control Problem
This Is Technology Ethics
Intro
This chapter explores two crucial concepts in the ethics of artificial intelligence: value alignment and the control problem. The discussion highlights why these ideas matter for the responsible development and deployment of AI systems, drawing on insights from the relevant literature.
In this episode, John and Sven discuss risk and technology ethics. They focus, in particular, on the perennially popular and widely discussed problems of value alignment (how to get technology to align with our values) and control (making sure technology doesn’t do something terrible). They start the conversation with the famous case study of Stanislav Petrov and the prevention of nuclear war.
You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommendations for further reading
- Atoosa Kasirzadeh and Iason Gabriel, ‘In Conversation with AI: Aligning Language Models with Human Values’
- Nick Bostrom, relevant chapters from Superintelligence
- Stuart Russell, Human Compatible
- Langdon Winner, ‘Do Artifacts Have Politics?’
- Iason Gabriel, ‘Artificial Intelligence, Values and Alignment’
- Brian Christian, The Alignment Problem
Discount
You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher’s website.