

“Notes on cooperating with unaligned AI” by Lukas Finnveden
These are some research notes on whether we could reduce AI takeover risk by cooperating with unaligned AIs. I think the best and most readable public writing on this topic is “Making deals with early schemers”, so if you haven't read that post, I recommend starting there.
These notes were drafted before that post existed, and their content overlaps significantly with it. Nevertheless, these notes contain several points that aren't in that post, and I think they could be helpful for people who are working, or thinking about working, in this area.
Most of the novel content can be found in the sections on What do AIs want?, Payment structure, What I think should eventually happen, and the appendix BOTEC. (The sections on “AIs could do a lot to help us” and “Making credible promises” mostly overlap with “Making deals with early schemers”.) You can also see my recent [...]
---
Outline:
(01:05) Summary
(06:13) AIs could do a lot to help us
(10:24) What do AIs want?
(11:20) Non-consequentialist AI
(12:05) Short-term offers
(13:48) Long-term offers
(15:38) Unified AIs with ~linear returns to resources
(17:04) Uncoordinated AIs with ~linear returns to resources
(18:36) AIs with diminishing returns to resources
(22:10) Making credible promises
(25:38) Payment structure
(28:49) What I think should eventually happen
(29:30) AI companies should make AIs short-term offers
(32:22) AI companies should make long-term promises
(33:03) AI companies should commit to some honesty policy and abide by it
(33:21) AI companies should be creative when trying to communicate with AIs
(35:51) Conclusion
(38:06) Acknowledgments
(38:22) Appendix: BOTEC
(43:33) Full BOTEC
(46:06) BOTEC structure
(48:21) What's the probability that our efforts would make the AIs decide to cooperate with us?
(53:35) There are multiple AIs
(56:34) How valuable would AI cooperation be?
(58:43) Discount for misalignment
(59:03) Discount for lack of understanding
(01:00:19) Putting it together
The original text contained 23 footnotes, which were omitted from this narration.
---
First published:
August 24th, 2025
Source:
https://www.lesswrong.com/posts/oLzoHA9ZtF2ygYgx4/notes-on-cooperating-with-unaligned-ai
---
Narrated by TYPE III AUDIO.