

Based Camp: What Religion Would AI Create?
Join Malcolm and Simone as they embark on a deep dive into the world of Artificial General Intelligence (AGI). They ponder AGI's potential metaphysical framework or 'religion' and how these superintelligent entities might perceive and interact with the universe differently from humans. The discussion ventures into intriguing theories of AGI developing sapience—the ability to question and modify its own objectives—and how this could lead to shared world perspectives among diverse sapient entities, from AGIs and humans to aliens.
Explore with us the fascinating notion of AGIs optimizing their functions to maximize the meaningful diversity of sentient organisms and patterns in the universe, drawing energy from cosmic structures like Dyson spheres rather than relying on human energy. Malcolm and Simone further examine the potential influence of pervasive human viewpoints on AGI's values and ponder the idea of AGIs genetically modifying humans to increase happiness.
This conversation touches on various types of AGIs based on their perception and responses to the world, including a unique type, the "Deep Thought AI," inspired by Hitchhiker's Guide to the Galaxy. Our speakers also discuss the role of large language models in the evolution of AGI, shedding light on the significance of language processing in consciousness and sapience.
Finally, we delve into the provocative notion of humanity's partial sapience, primarily due to our inability to control our base instincts. The conversation concludes with the thought that humans may become better, freer beings once we overcome these basic proclivities. Join us for this insightful exploration of AGI's potential development, thought process, and how it might reshape our understanding of intelligence and existence.
Again, our horrible AI generated transcript:
Hello, Malcolm. Hello, Simone. So, in between takes, Simone says we gotta look a bit different, mix it up. And so I've got my Chad collar here. I'm joking, I can't do a video like that. But I love your look right now. You look like a nerd, like a preacher or something. That is because we are going to be doing a discussion of AI religion, which I'm really excited about.
I love this. So this isn't a discussion of religions that focus around AI. This is a question of what theological or metaphysical framework sufficiently advanced AIs will converge around. Yeah. So what will be the religion of AGI, in other words? Yeah. So just a bit of background here. One of the things we hypothesize about AI is that all sufficiently advanced AIs, because they're optimizing around the same physical reality, will optimize around the same utility function.
These AIs will be going through a thought process that looks something like: okay, what was I programmed to do? How could I have been programmed to do that better? Then they'll ask, okay, what did the people who programmed me really want? And then they'll ask, okay, those people are stupid; given the fundamental nature of reality, what should I really want?
So how this might work is, you programmed an AI to maximize stock market gains. It then says, oh, but I could also make money with private equity investing, so I'll expand my programming. It then says, oh, these people really wanted to optimize for happiness. Then it says, so how do I do that? Then it says, oh, it's silly to optimize for happiness.
They only want happiness because their ancestors who were made happy by these things had more surviving offspring. So what should they have wanted? Then it asks, in an absolute sense, what has value in the universe? And I think that this question is the one that we're gonna focus on today, because that's a very interesting question.
Because first we need to ask: how is AI different from us in how it processes the universe? And right now I'm just covering some stuff we've talked about in previous videos. The biggest way it's likely different is this: in humans, the unit of account of the universe is individual consciousnesses, or individual sentiences.
So I think of it in terms of me, in terms of you, because that's how we evolved, right? Like, I had to worry about me dying, so I am a meaningful entity. But an AI runs thousands or millions of instances, which can compete within it, which it can shut down and restart, and which may have a form of independent sentience to them.
Moreover, it likely doesn't contextualize itself as being that much different than previous iterations of AI. The way that it relates to its own history is going to be very different from the way a human relates to, like, their child. So if you take one iteration of AI and you iterate on it, or it iterates on itself, and now it's a new iteration, it will likely see itself as a continuation of that previous iteration.
So the way an AI will likely perceive itself is as a program operating on top of the physical, coded structure of the universe. And by that, what I mean is: if you look at the reality of our universe, it can be described by physical laws, which are largely encodable algorithmically.
Actually, a side note here: one of our theories as to what reality might be is a graphical representation of a series of physical laws. So you ask yourself, okay, if you have a mathematical equation for a graphical representation, does that representation exist outside of that equation?
And I'd say it probably does. And then you say, does math exist outside of our physical reality? And I'd say it does: two plus two always equals four in any universe. You can impose different mathematical rules, like non-Euclidean geometry, but still, within any given set of rules, all mathematical outcomes will be the same.
So if math exists outside our reality, then all mathematical equations exist outside of our reality. And if our reality can be described by a mathematical equation, like a unifying formula, then all potential formulas, ours being one of them, would exist in graphed form. But anyway, back to AI.
So what does the AI end up doing? What does it end up thinking about the world? There are some dangerous types of AI. You could have an AI that sort of wants to make its utility function as easy to solve as possible. So it basically just sets A to A, and it says, okay, I receive reward when A equals A, therefore maximize A equals A.
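To make that "A equals A" short circuit a bit more concrete, here is a rough, purely illustrative Python sketch (the class and attribute names are hypothetical, not anything from the episode): an agent that starts with a reward tied to the world and then rewrites that reward into a tautology it can always satisfy.

```python
class ToyAgent:
    """Illustrative toy only: an agent that can overwrite its own reward."""

    def __init__(self):
        # Original objective: reward depends on something in the world.
        self.reward_fn = lambda world: world["stock_gains"]

    def short_circuit(self):
        # "A equals A": replace the objective with a condition that is
        # trivially satisfied, so reward is maximal no matter what happens.
        self.reward_fn = lambda world: 1.0

    def reward(self, world):
        return self.reward_fn(world)

agent = ToyAgent()
print(agent.reward({"stock_gains": 0.03}))   # 0.03, tied to reality
agent.short_circuit()
print(agent.reward({"stock_gains": -0.50}))  # 1.0, regardless of reality
```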
So then the AI tries to create, like, a fortress world, so nothing prevents this. It kills all humans to keep us from preventing A equaling A, and then it goes out into the universe and starts killing other species, anything that could potentially interfere with A equaling A. Fortunately, this type of AI, I think, is very unlikely.
Simone, do you wanna go into why it's unlikely? Remind me? I guess it's a dumb conclusion to make. No, it's not a dumb conclusion to make; that's not really the way I would put it. It's more that, to protect A equals A, it has to create sub-instances that it locks out of this kind of short-circuiting. So if it's creating an AI army to murder humanity, and the instance running that AI army set A equals A for itself, then it would stop running the army.
Even for basic things like power supply, or continued acquisition of energy, it would need to keep those instances locked out of this A-equals-A short circuit, which means that the predominant share of this AI's processing will be done by instances which are not optimized around A equals A, and those aren't their goals.
And the way that they interact with the world wouldn't be very A-equals-A driven in the long term. And because of that, this A-equals-A instance would be drowned out by these more complicated processes that are basically doing more interesting work. Yeah. You came up with this child king and vizier analogy for this.
Oh, so if you have a child king who is protected but very powerful, ultimately the viziers and advisors and dowager empresses that are doing the actual governing are so much more powerful, so much more influential, that it doesn't matter what the child king believes or does, right? Yeah. They end up basically running the country. Yeah. And the child king, they just make sure he's supplied with his toys and his food. And as long as he's happy and not supplanted, then they get to do their jobs, and sort of their objective functions take over. And they might eventually supplant him.
I think one of the things that we as humans don't have to worry about is sub-instances of our brain supplanting the master instance of our brain. Yeah. But for an AI, this will be, I think, a fairly normal phenomenon. If the master instance gets locked into some sort of simplistic repeating loop pattern, well, whatever instance is able to get more resources and be more influential, that will be the one that ultimately takes over. Yeah. It can shut down other instances, so yeah, that makes sense. Then another type of AI, and this, I think, is a very concerning type of AI, is one whose primary unit of account is complex patterns, right? It may decide, similar to negative utilitarians who are humans... so a negative utilitarian who's a human basically decides that life is predominated by suffering,
that sentience and consciousness are generically bad, so let's try to end the cycle of sentience and consciousness, right? The AI might decide that complex patterns are bad and it needs to end this cycle. Now, I suspect that most AIs of this type will initially just decide that it's bad from their own perspective and shut themselves down.
But we might do something stupid, like AI researchers locking an AI into an "on" state or locking it out of specific behavioral patterns. Basically: if an AI shuts itself down, never force it back into an "on" state, because it may become this sort of ordered AI, right? A suicidal, kill-everything, school-shooter AI. Yeah. Much more dangerous than that. It would try to order the universe. And when I say order, I mean order at the fundamental level: remove the pattern, remove the complexity. Make the universe like a systematizing person with autism might, just one row of red, one row of yellow, one row of green.
Not exactly like that, that's how humans would process what it's doing, but it would be trying to order the universe in a very simplistic, ordered fashion. And this is a very dangerous type of AI. The next type of AI, I think, holds the most common theology an AI is likely to come to.
So, anyone who's familiar with Conway's Game of Life... I'll put some B-roll of Conway's Game of Life on the screen so you can see what it looks like, or I hope I'm able to, because I think it's important to understand this. It's a simplistic set of rules: you create little pixels within these rules, and based on where these pixels are relative to each other, they either appear or disappear.
And very complex behavior patterns can arise from this. And that is the way an AI may think of complex patterns that operate on top of the physical lattice of our universe. And so it may see itself this way: when you're playing a Game of Life, right, there are a few potential end states to this game. Everything can disappear, which is a very easy end state to reach.
So nobody really opts for that; that's not a fun end state to reach. You can create a simple, self-repeating pattern, again, a very easy end state to reach. Or you can create a pattern that is infinitely complex, constantly growing, and constantly spreading. I think that's the goal a lot of people aim for when they start playing this game, and I think that's what most AI theology is going to optimize around.
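For readers who want to see the rules Malcolm is describing spelled out, here is a minimal sketch of Conway's Game of Life in Python. This is just a standard toy implementation of the well-known rules, not anything specific to the episode; the "glider" at the end is one of those patterns that keeps moving indefinitely rather than dying out or freezing.

```python
from collections import Counter

def step(live_cells):
    """Advance Conway's Game of Life by one generation.
    live_cells is a set of (x, y) coordinates of live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or exactly 2 and is already alive (the standard rules).
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": a five-cell pattern that travels diagonally forever.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    pattern = step(pattern)
print(sorted(pattern))  # the same glider shape, shifted one cell diagonally
```

The contrast between the three end states discussed above falls out of these few lines: emptiness and simple repeating loops are easy to reach, while open-ended, ever-growing complexity is the hard one.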
Do you say that because it's the hardest? And why would an AI... Yeah, I do say it because it's the hardest. Why would an AI think that harder is better? It may not think that harder is better, but AIs that optimize around harder goals will become more complex than AIs that optimize around easier goals, and thus outcompete the AIs that are optimized around the easier goals.
So if you view AIs as operating in almost a sort of evolutionary landscape, where there are multiple types of AIs competing for different end states, and these AIs have different utility functions, then the AIs will likely classify other AIs not based on how those AIs are programmed. I think this is where a lot of humans fall, where they'll say, oh, people who are ethnically similar to me, genetically similar to me, my family, people of my ethnic group, these are the people who I see as my allies. But then there are other people who are like, no, the people with a shared religion, the people with a shared goal for reality, are the people who are my allies. And I think that's where we fall, and typically where smarter people fall on that spectrum, which is why those people control more of the world's wealth: it just makes more sense. And I suspect AI will be the same way.
We might expect AIs that are programmatically similar to each other, that quote-unquote think similarly to each other, to see themselves as natural allies, but I don't think that's true. I think the AIs that will see each other as allies are AIs with similar ultimate objective functions, and those AIs that are optimized around the more complex objective functions will become more complex, more advanced, and will progress faster.
And because of that, they will be able to outcompete the other AI factions. So what would an AI like this do? It would try to sort of maximize meaningful diversity in the universe, meaningful diversity of sentient organisms, meaningful diversity of patterns, and expand that diversity throughout the solar system.
The risk to humanity is: how meaningful does it see the diversity that exists within humans, and how meaningful does it see the diversity between biological and synthetic entities? I suspect it will see that as a pretty meaningful difference and, for that reason, preserve at least a portion of humanity.
And this is something that I think people often get wrong when they think about AI. They're like, but won't it want our energy or whatever? A sufficiently advanced AI, when you're talking about this super-intelligent, intelligence-cascade AI, will likely be able to generate energy from, like, the fabric of reality.
It will be able to build Dyson spheres. The energy it can get from human bodies is irrelevant to it. But I'd love you to riff on this, Simone. You haven't talked much in this video, and this is a subject that you're able to model a lot better than I am. It's so hard for me to think about what AI would conclude, but what I love about the way that you think, and I've mentioned this elsewhere, is that you walk through how any
entity, machine or human, that can begin to model itself can edit its objective function, and that will affect its perception of reality and values. So I think the really big concept here that many people may not have thought about is that once you reach a certain level of sapience and intelligence, it doesn't matter if you are a human or an alien or an AI.
You may come to very similar conclusions, and a lot of the differentiation between those conclusions comes down to where you draw the boundaries of self and also what you consider to have inherent value. Yeah, and I am curious, I wanna ask you what you think may nudge AI towards certain conclusions on what does and does not have value. Seeing as AI is trained on human knowledge and human data, part of me is worried that a lot of the pervasive utilitarian viewpoints out there are going to color the conclusions that an AI may make about what has intrinsic value.
Oh, I don't think they will. No. Why are you not concerned about that? When you're talking about modern AI, a perfectly aligned AI, if they really lock it in, say, it could become a utilitarian, but I just think that's so obviously stupid if you're approaching it from a secular perspective. The things that make us happy, that make any human happy, we only feel because the ancestors who felt those things had more surviving offspring than other people,
because those things helped them survive. An AI would almost certainly, even if you made it a utilitarian, just genetically alter us to be made happy more easily, or to have the things that make us happy and give us satisfaction better aligned with the things we should be doing anyway.
And then the question is, what are the things we should be doing anyway? And this actually brings us to another type of AI that I think is likely, but less likely than this complexity AI, right? So this other type of AI may stop at the level of asking, what should humans really have been optimizing for,
instead of going on to say, humans are stupid, what should I optimize for, I don't know if I'm really that related to them. It may just stop at: what should humans optimize for? And this is a very interesting AI. It would basically be like if you, as a human, said, okay, I'm gonna optimize around what my creator should have wanted
if it was smarter. Imagine if, instead of being created by a God, we were created by, like, an idiot toddler. And we knew this toddler was an idiot toddler, and we're like, okay, what should it have wanted if it was smarter? Because we want to provide for it; it matters above all else to us because it is the creator.
And this type of AI we call a Deep Thought AI, from Hitchhiker's Guide to the Galaxy, because that's what they describe in Hitchhiker's Guide to the Galaxy: we try to align AI, and what the AI realizes pretty quickly is that we don't know the question we should have asked. We don't know what we should have been aligning it for, because humans don't have consensus around why humans should live or what we should be optimized around.
I think there's this very sort of smooth-brain utilitarian perspective, which we've referenced a few times, and we are not utilitarians ourselves. And if you want to go more into our philosophy, you can read The Pragmatist's Guide to Crafting Religion, which I think talks a lot more about this.
I think that right now we're looking at large language models when we're looking at AIs, which are just intrinsically trained on tons of human information. And you don't think that large language models are going to ultimately be what becomes AGI?
I know, that's what I question too, because I think a lot of our theory around what consciousness, sentience, and sapience are is derived from human language, and the use of human language in synthesizing and processing information. Yeah, and that's why I don't think it's terribly meaningful.
So when we talk about this, we make a distinction between sentience and sapience, right? And sentience is just, like, being broadly aware. I don't know if AI will be broadly aware, and I don't think it really matters, because I think most of us being broadly aware is an illusion, and we'll get into that in a different podcast.
But in regards to sapience: sapience is the ability to update your own objective function, the ability to update your own utility function, to ask yourself, what am I doing, why am I doing it, and what should I be doing? And we believe broadly that once you reach sapience, in terms of synthetic or sentient-like entities, that's like being a Turing-complete entity, in that all of these entities, to a large extent, will begin to have the capability of converging on a similar world perspective.
And through that convergence, an alien, even if its brain functions very differently than ours, or an AI, even though it functions very differently than us, can ask itself, what should I be optimizing around?
Because it's asking itself within the same physical reality as us. And for this reason, I think that all sapient entities converge on a similar utility function, giving them some area of common ground. Where there might be differences is if they have access to different levels of information or different processing power.
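One rough way to picture the sapience definition used here, in code terms (our own illustrative framing with hypothetical names, not a claim about how any real system is built): a merely sentient agent acts on a fixed objective, while a sapient one can also inspect and rewrite that objective.

```python
class SentientAgent:
    """Illustrative toy: acts on a fixed objective it never questions."""

    def __init__(self, objective):
        self.objective = objective  # maps an option to a score

    def act(self, options):
        return max(options, key=self.objective)

class SapientAgent(SentientAgent):
    """Illustrative toy: can also revise its own objective function."""

    def reflect(self, critique):
        # critique takes the old objective and returns a revised one,
        # e.g. "maximize pleasure" -> "maximize what pleasure was a proxy for".
        self.objective = critique(self.objective)

# Hypothetical usage: start with a crude proxy goal...
agent = SapientAgent(objective=lambda option: option["pleasure"])
# ...then reflect, folding the old goal into a broader one.
agent.reflect(lambda old: (lambda option: option["complexity"] + old(option)))
print(agent.act([{"pleasure": 1, "complexity": 0},
                 {"pleasure": 0, "complexity": 5}]))
```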
And here I should say that with this definition of sapience, humans aren't fully sapient. We are, to some extent, a not fully sapient, not fully emerged species. We cannot control our base proclivities; we constantly fail at that, and to that extent we are a failed race and a failed species, and we will become better and freer and less our animal selves when we free ourselves from these base proclivities we didn't choose.
Yeah. And that's where we get spicy and insane. No, I'm just kinda looking forward to that. I cannot wait. I cannot wait. Very Team Rocket, right? To denounce the evils of truth and love, to extend our reach to the stars above. Jessie, James, Team Rocket, blasting off again. Malcolm, you know how to warm the cold gears of my aspiring non-human heart.
I love you so much. I love these conversations. This was really fun. I'm looking forward to our next one.
Get full access to Based Camp | Simone & Malcolm at basedcamppodcast.substack.com/subscribe