I notice that there has been very little, if any, discussion of why and how considering homeostasis is significant, even essential, for AI alignment and safety. The current post aims to begin amending that situation. In this post I will treat alignment and safety as explicitly separate subjects, both of which benefit from homeostatic approaches.
This text is a distillation and reorganisation of three of my older blog posts on Medium.
I will probably share more such distillations or weaves of my old writings in the future.
Introduction
Much of AI safety discussion revolves around the potential dangers posed by goal-driven artificial agents. In many of these discussions, the [...]
---
Outline:
(01:09) Introduction
(02:53) Why Utility Maximisation Is Insufficient
(04:20) Homeostasis as a More Correct and Safer Goal Architecture
(04:25) 1. Multiple Conjunctive Objectives
(05:23) 2. Task-Based Agents or Taskishness -- Do the Deed and Cool Down
(06:22) 3. Bounded Stakes: Reduced Incentive for Extremes
(06:49) 4. Natural Corrigibility and Interruptibility
(08:12) Diminishing Returns and the Golden Middle Way
(09:27) Formalising Homeostatic Goals
(11:32) Parallels with Other Ideas in Computer Science
(13:46) Open Challenges and Future Directions
(18:51) Addendum about Unbounded Objectives
(20:23) Conclusion
---
First published: January 12th, 2025
Narrated by TYPE III AUDIO.