
Hard Mathematical Proof AI Won't Kill Us
Based Camp | Simone & Malcolm Collins
Exploring the Simulation Hypothesis and Its Irrelevance to How We Should Live
A discussion on the concept of the simulation hypothesis and its implications on our perception of reality and the meaning of life.
An in-depth look at how the Fermi Paradox and the Grabby Alien Hypothesis provide evidence that an AI apocalypse is unlikely. We examine filters to advanced life, terminal utility convergence, and why we may not have encountered aliens yet.
Malcolm: [00:00:00] So basically, no matter which one of these Fermi Paradox solutions is true, either it's irrelevant that we are about to invent a paperclip maximizing AI, because we're about to be destroyed by something else, or we're in a simulation, or... we're definitely not about to invent a paperclip maximizing AI, either because we're really far away from the technology or because almost nobody does that.
That's just not the way AI works. I am so convinced by this argument that... I used to believe it was like a 20 percent chance we all died because of an AI, or maybe even as high as a 50 percent chance, but it was a variable risk, as I've explained in other videos.
I now think there's almost a 0 percent chance. A, a 0 percent chance, assuming we are not about to be killed by a grabby AI somebody else invented. Now it does bring up something interesting. If the reason we're not running into aliens is because infinite power and material generation is just incredibly easy, and there's a terminal utility convergence function, then what are the aliens doing in the universe?
Would you like to know more?
Simone: Hi, Malcolm. How are you doing, my [00:01:00] friend?
Malcolm: So today we are going to do an episode that's a bit of a preamble for an already filmed interview. We did two interviews with Robin Hanson, and in one of them we discuss this
theory. However, I didn't want to derail the interview too much by going into this theory, but I really wanted to nerd out on it with him, because he is the person who invented the grabby aliens hypothesis solution to the Fermi Paradox.
Simone: I hadn't heard about grabby aliens before, so I'm glad we're doing this.
This is great.
Malcolm: Yes, so we will use this episode to talk about the Fermi Paradox, the Grabby Alien Hypothesis, and how the Grabby Alien Hypothesis can be used through controlling one of the variables, i.e. the assumption that we are about to invent a paperclip maximizer AI that ends up fooming and killing us all, because that would definitionally be a grabby alien.
If you collapse that variable within the equation to [00:02:00] today, then you can back-calculate the probability of creating a paperclip maximizing AI. And, spoiler alert, the probability is almost zero. It basically means it is almost statistically impossible that we are about to create a paperclip maximizing AI.
Unless, and here are the big caveats: something in the universe that would make it irrelevant whether or not we created a paperclip maximizing AI is hiding other aliens from us, or we are in a simulation, which would also make it irrelevant that we're about to create a paperclip maximizing AI, or there is some filter to advanced life developing on a planet that we have already passed through without realizing we have passed through it.
So those are the only ways that this isn't the case. But let's go into it, because it is, it is really easy.
I just realized that some definitions may help here. We'll get into the grabby alien hypothesis in a second, but the [00:03:00] concept of the paperclip maximizing AI is the concept of an AI that is just trying to maximize some simplistic function. So in the concept as it's laid out, as a paperclip maximizer, it would be just "make the maximum number of paperclips," and then it just keeps making paperclips, and it starts turning the Earth into paperclips, and it starts turning people into paperclips.
Now, realistically, if we were to have a paperclip maximizing AI, it would probably look something more like, you know, somebody says,
"Process this image," and it just keeps processing the image to an insane degree, because it was never told when to stop processing the image, and it just turns all the world into energy to process an image. Or something else silly like that. This concept is important to address because there are many people who at least pass themselves off as intelligent who believe that we are about to create a paperclip maximizing AI, and that this AI is about to, as they call it, foom, which I mentioned earlier here, which just means rise in intelligence astronomically quickly, like doubling its intelligence every [00:04:00] 15 minutes or something, and then wipe out our species.
And after that, begin to consume all matter in the universe.
Malcolm: So the Fermi Paradox is basically the question of: why haven't we seen extraterrestrial life yet? You know, like, we kind of should have seen it already. It's kind of really shocking that we haven't, and I would say that anyone's metaphysical understanding of reality that doesn't take the Fermi Paradox into account is deeply flawed, because of our understanding of physics today, our understanding of what our own species intends to do in the next thousand, two thousand years, and our understanding of the filters our species has gone through. So we know how hard it was for life to evolve on this planet.
And the answer is: not very, from what we can see. So [00:05:00] I'm really, really into theories for how the first life could have evolved on Earth; it's one of my areas of deep nerddom. So there's a couple of things to note. One isn't that important to this, which is that life evolved on Earth almost as soon as it could.
Now, a person may say, why isn't that relevant? That would seem to indicate that it is very easy for life to evolve on a planet. Well, and here we have to get into the grabby aliens theory. You're dealing with the anthropic principle here.
Simone: Okay. Can you define the anthropic principle?
Malcolm: Yeah. Basically, what it means is, if you're asking, like, look, it looks like Earth is almost a perfect planet for human life to evolve on. Like, it had liquid water, everything like that, right? Except human life wouldn't have evolved on a planet without those things. A different kind of life would have evolved on it instead.
Simone: Sure, the kind that doesn't need water, etc.
Malcolm: Right. So it's, it's not really... if, [00:06:00] if life on Earth didn't evolve almost as soon as it could, well, then it would have been too late and another alien would have wiped it out and colonized this planet. That is what the grabby alien theory would say, so this doesn't really change the probability of this as a filter.
But what we do know about the evolution of life on Earth is that there are multiple ways it could have happened, all of which could lead to life evolving. You could be dealing with, like, an RNA world. You could be dealing with a citric acid cycle event. You could be dealing with the clay hypothesis.
I actually think the...
Simone: Do you want to expound on any of these? I've never heard of the citric acid hypothesis.
Malcolm: So for this stuff, I would say it's not really that relevant to this conversation, and people can dig into these various theories with people who have, like, done them, or just look up the citric acid cycle hypothesis explanation for the evolution of life on Earth, or the clay hypothesis for the evolution of life on Earth, or the shallow pool hypothesis for the evolution of life on [00:07:00] Earth, or the deep sea vent hypothesis for the evolution of life on Earth.
The point being, it shouldn't actually be that hard for life to begin to evolve on a planet like this. But to see why this is a relevant point, we actually sort of have to back out here to the grabby aliens hypothesis. So I'll explain what the grabby aliens hypothesis says and why this is relevant to the Fermi Paradox.
So usually, when you're dealing with solutions to the Fermi Paradox, what people will do is they'll say that there's some unknown factor that we don't know yet, basically. A great example here would be the dark forest hypothesis. Okay. So the dark forest hypothesis is that there actually are aliens, lots of aliens out there.
They just have the common sense to not be broadcasting where they are and to be very good at hiding where they are, because they are all hostile to each other, and any other aliens like us who are stupid enough to broadcast where they are get snuffed out really quickly. Sure, that makes [00:08:00] sense.
That makes sense, yeah. Okay, if the dark forest hypothesis is the explanation for why we are not seeing alien life out there, it is somewhat irrelevant whether or not we build a paperclip maximizing robot, because it means we're about to be snuffed out anyway, given how loud we've been, radio-signal-wise, sending out ships, broadcasting about us, sending out signals.
We've been a very loud species, and we could not defend against an interplanetary assault by a spacefaring species.
Simone: Well, I mean, in that case, you could actually argue it would be much better if we developed AGI as fast as possible, because maybe it can defend us even if we cannot defend ourselves.
Malcolm: Possibly, but that's beside the point, obviously. It becomes irrelevant. Or they'll say we're in a simulation, and that's why you're not seeing stuff. But again, that makes all of this beside the point. What Grabby Aliens does is it says, no, actually, we are just statistically the first sentient species on the road to becoming a grabby alien
(and I'll explain what this means in [00:09:00] just a second) in this region of space. And then it says, let's assume that's true. It can use the fact that we haven't seen another species out there, a grabby alien that is rapidly expanding across planets, to calculate how rarely these evolve on planets. Okay. Do you sort of understand how that could be the case?
Yeah. Okay. So in the grabby aliens hypothesis, when you run this calculation, it turns out, if that's why we haven't seen an alien yet, what it means is that there are very hard filters: something that makes it very low probability that a potentially habitable planet ends up evolving an alien that ends up spreading out like a grabby alien, i.e. like a paperclip maximizer. One of these really loud things that's just going planet to planet, you know, using the resources on one planet, then other planets, and other planets.
Simone: And even if it has already finished doing that, you've argued [00:10:00] in other conversations we've had that you would see the signs of that.
You would see the signs of the destroyed civilizations, etc.
Malcolm: Yeah, a grabby alien, which a paperclip maximizer is. So, if you're like, what does a grabby alien look like? A paperclip maximizer that's just going planet to planet, digesting the planets and then moving on, or a human empire expanding through the universe.
You know, we go, we colonize a planet, within 100 years we get bored, and some people go and try colonizing a new planet. You know, even with our existing technology on Earth right now, like, at the speed of space travel right now, if we were expanding that way, we could conquer an entire galaxy within about 300 million years.
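Just as a sanity check on that kind of figure, here is a minimal back-of-envelope sketch. The galaxy size is the Milky Way's rough diameter; the effective expansion speed is an illustrative assumption (meant to fold in time spent building up each colony), not a number from the episode.

```python
# Back-of-envelope: how long to cross a galaxy at a slow, hop-by-hop expansion rate?
# The effective speed is an illustrative assumption, not a figure from the episode.
GALAXY_DIAMETER_LY = 100_000           # rough Milky Way diameter, in light-years
effective_speed_fraction_c = 1 / 3000  # ~0.03% of light speed, about 100 km/s of net progress

years_to_cross = GALAXY_DIAMETER_LY / effective_speed_fraction_c
print(f"{years_to_cross:,.0f} years")  # 300,000,000 -> the same order as the quoted 300 million
```

Whether 100 km/s of net progress is achievable is itself an assumption, but the order of magnitude is the point: even slow, hop-by-hop expansion crosses a galaxy in a sliver of cosmic time.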
So, not that long when you're talking about, like, the age of the universe. This is a blindingly fast conquest. So, once an alien turns grabby, it moves really quickly. Sure. And a lot of people think that we are, like, space-travel constrained. We're [00:11:00] really not. The reason why we don't space travel with our existing technology is because of, like, radiation damage to cells and the lifespan of a human.
Mm-hmm. But like, if an AI was space traveling, it could do pretty well with our existing technology in terms of getting to other planets, you know, using them, then spreading. Okay. Anyway, so the grabby alien hypothesis says that a species becomes grabby once in every million galaxies.
Okay. Now, within every galaxy, there are around 400 or 500 million planets within the habitable zone.
So the habitable zone is a distance away from a star where life could feasibly evolve. Now, this isn't saying that they have the other precursors for life, but what it means is that, it turns out, there are very frequently planets in space that are likely for life to evolve on. I would estimate, if I'm looking at everything all together, like, the data that I've seen, [00:12:00] there's probably about 10 million planets per galaxy that an intelligent species could evolve on.
And then you've got to multiply that by a million, for the one in a million galaxies where a species is turning grabby. Now this is where it becomes preposterous, if this is why we haven't seen aliens yet, that we are about to invent a grabby alien.
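To make the back-calculation concrete, here is a minimal sketch of the arithmetic being described, using the round numbers quoted above (about 10 million life-capable planets per galaxy, and roughly one grabby species per million galaxies); both are rough estimates from the conversation, not measured values.

```python
# Rough back-of-envelope for the grabby-aliens argument, using the round numbers
# quoted in the conversation (estimates, not measured values).
planets_per_galaxy = 10_000_000           # habitable planets where an intelligent species could evolve
galaxies_per_grabby_species = 1_000_000   # roughly one species turns grabby per million galaxies

habitable_planets_per_grabby_species = planets_per_galaxy * galaxies_per_grabby_species
p_grabby_per_planet = 1 / habitable_planets_per_grabby_species

print(f"Implied odds per habitable planet: 1 in {habitable_planets_per_grabby_species:.0e}")
print(f"Probability: {p_grabby_per_planet:.1e}")  # about 1e-13
```

In other words, on these assumptions the implied chance that any one habitable planet produces a grabby expansion is on the order of one in ten trillion, which is the number the filters discussed next have to account for.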
We can look throughout Earth's history, as I did with sort of the first big filter, the evolution of life, or the first appearance of life on this planet, and say: what's the probability of that event happening on any given habitable planet? My read is that not only is life likely to appear, it could appear, like, one of five different ways, even with the chemical composition of early Earth.
Then you're looking at other things. Okay, what about multicellular life? What's the probability of that happening? Actually, really high, [00:13:00] really high. There's not, like, a big barrier that's preventing it from evolving, and it has many advantages over single-celled life.
So you're almost always going to get it. Intelligence: how rare is intelligence to evolve? Not that rare, given that it has evolved multiple times on our own planet in very different species. I mean, you see intelligence in octopuses, you see intelligence in crows, you see intelligence in humans.
And then you can say, okay, okay, but what about human-like intelligence, right? Well, we already know from humans what a huge boost human-like intelligence gives a species. The core advantage of human-like intelligence is, like, if I'm a spider and I'm bad at making webs, right? Then I die. And that is how spiders get better at making webs intergenerationally.
As a human, I am able to essentially have, like, different models of the universe fight in my head and presumably allow the best one to win.
Simone: And you don't have to die before you get better.
Malcolm: Yeah, you don't have to die to get better.[00:14:00] It is almost as important to evolution. It is sort of like the second sexual selection.
So when sex first evolved, the core utility of sex, as opposed to just, like, cloning yourself, right, is that it allowed for more DNA mixing, which allowed for faster evolution. Intelligence allows for faster evolution of the sort of operating system of our biology. And so it's just such a huge advantage.
It's almost kind of shocking it didn't evolve faster, for sure, given how close many species have come to it. Now, actually surprising to a lot of people, and this is just, like, a side note here: a lot of people think cephalopods were close to evolving sentience. So let's talk about cephalopods.
Simone: Why?
Wait, I've like, I mean, cephalopods are all over like historic geology and all these things. Yeah, yeah, yeah, yeah.
Malcolm: What? Cephalopods are, like, squids, octopuses, stuff like that. Like, a lot of people point to how smart they are, and they are smart. They are, like, weirdly smart. Yeah. But those people don't know why they're smart, because they don't know neuroscience.
So the [00:15:00] reason why cephalopods are as smart as they are comes down to the axon. An axon is what, like, information, the action potential, travels down.
Simone: Yeah. It's the little arm thing that you see on a neuron.
Malcolm: Yes, on a neuron. It's the little arm thing. It's the cable, you can think of it as. Okay. So to be an intelligent species, you need really fast-traveling action potentials.
Okay. So the way that humans have really fast-traveling action potentials is something called myelination. I'm not going to go fully into it, but it's a little physics trick where they put, like, a layer of fat intermittently around the axon, and it causes the action potential to jump.
Simone: It's like putting vegetable oil on your slip and slide.
Malcolm: Not exactly. It's actually a really complicated trick of physics that can't easily be explained except by, like, looking at... I don't want to get into it. The point is, we mammals have a special little trick that allows for our [00:16:00] action potentials to travel very, very quickly.
Simone: And are you saying that cephalopods have this too?
Malcolm: No, they don't. The way that they, and any other species that wanted a fast-traveling action potential before us, increased the speed that action potentials traveled was by increasing the diameter of the axon.
Simone: Oh, so they just have fat axons, whereas we have...
Malcolm: Enormously fat. In some cephalopods, they're like a quarter centimeter in diameter.
Holy smokes, like, whoa. Okay. They could not get smarter than they are without having some huge evolutionary leap in the way that their nervous systems work. This is why cephalopods, despite being really smart, and probably being really smart for a long time, because they've been on Earth for a really long time,
just could never make the evolutionary leap to human-type intelligence. So.
Simone: Even fatter axons.
Malcolm: Yeah, because as the axons got fatter, the number of neurons they could have would get lower. The density of the neurons.
Simone: Oh, of course. Yeah. You've got limited space, unless they got much [00:17:00] bigger
Malcolm: brain cells.
Yeah. I guess you could have, like, giant... I mean, yeah, well, whatever. Anyway, this is a huge tangent here, but basically, looking at the evolution of life on our Earth, it doesn't look like we have undergone other big, hard filters. Other potential filters could be things like: it's very rare for a species to get nuclear weapons and not use them to destroy itself.
Because it's so fun, right? It could turn out that almost every species does that. Or it could be that there's, like, one science experiment, like a lot of people think it may be building the big supercollider, actually. Like, all species get to a certain level of intelligence and a certain level of curiosity, and they can't help but try colliding hadrons, and then they create little black holes in their planets, and they disappear.
And that really could be a filter; these are all potential filters. The problem is, if we're, like, five years away from developing a [00:18:00] paperclip maximizing AI, that means that we as a species have already passed all of our filters, and that means that we as a species can look back on the potential filters that we have passed through and sort of add them all up.
Okay. And when you do that, you don't get a number that comes even close to explaining why you would only see one grabby alien per every million galaxies.
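To illustrate what "adding them up" means here, below is a minimal sketch of multiplying per-filter pass probabilities together. Every number in it is a made-up placeholder for the sake of the arithmetic, not an estimate from the episode; the point is only that even deliberately pessimistic guesses multiply out to something many orders of magnitude more likely than one in ten trillion.

```python
# Hypothetical, purely illustrative filter probabilities (placeholders, not real estimates):
# the chance a habitable planet passes each step on the way to a grabby species.
filters = {
    "abiogenesis": 0.5,                 # life appears at all
    "multicellularity": 0.3,
    "animal-level intelligence": 0.1,
    "human-like intelligence": 0.01,
    "survives its own technology": 0.1,
}

p_pass_all = 1.0
for name, p in filters.items():
    p_pass_all *= p

required = 1 / (10_000_000 * 1_000_000)  # ~1e-13, from the grabby-aliens back-calculation

print(f"Product of guessed filters: {p_pass_all:.1e}")       # ~1.5e-05
print(f"Required for one grabby species per million galaxies: {required:.1e}")
print(f"Gap: roughly {p_pass_all / required:.0e}x too likely")
```

On the episode's own premises, closing that gap would require some filter we have not yet hit, or one we passed without realizing how unlikely it was, which is exactly the set of possibilities walked through next.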
So it could mean a few things. It could mean that we just are nowhere technologically close enough to developing a paperclip maximizing AI that is dangerous, that could become a grabby alien. Or it could mean that we are about to develop a paperclip maximizing AI, but something, like, even after it digests [00:19:00] all life on Earth, something prevents it from spreading out into the galaxy, something technological that we haven't conceived of yet.
This seems almost unfathomable to me, given what we know about physics today.
Simone: Yeah, and that we've even gotten, like, projectiles from Earth pretty far off planet. Yeah. So, yeah, there's not like some weird barrier that we don't know about yet.
Malcolm: It could be, and I actually think this is the most likely answer.
I think that this is by far the most likely answer to the Fermi Paradox. Hmm. Simulation? No, not simulation. It could be that we're in a simulation, but we already went over that. I think it's that when you hear people talk about, like, AI fooming, and I've talked about this on previous shows, people really don't understand how insane this is.
They believe that the AI reaches a level of superintelligence, but it somehow still has an understanding of physics and time that is very similar to our current understanding of physics and time, [00:20:00] meaning that when we think about expanding into the universe, we think about it in a very sort of limited sense.
Like, we gain energy from, like, the sun, from digesting matter, and we spread out into the universe physically on spaceships and stuff like that, right? If anything we understand about physics and time turns out to be wrong, this assumption about the way an expansionist species would spread could be immediately nuked.
And I mean this in the context of... it's kind of insane to me. Like, you've got to understand how insane it is to assume that we basically have all of physics figured out. Yeah, that's fair. This is like when people in the 1800s were planning how we were going to go to space, and they'd have, like, maritime ships sailing through outer space.
Or, oh, what are people gonna do in the future? Well, they'll have, like, balloons, and they'll use them to go on lake walks. It basically assumes that [00:21:00] technology, even as we advance as a species, or whatever comes after us advances, moves very laterally, and it assumes we don't have future breakthroughs.
Which I think is, one, arrogant, and, in the eyes of history, incredibly stupid. So what kinds of technological breakthroughs could make it very rare, even when an alien is grabby, that we would see it out in the universe, right? One is: time doesn't work the way we think it works, or it does work the way we think it works, but we're just not that far from controlling it.
So, by that what I mean is you could create things like time loops, time bubbles, stuff like that. Essentially, entirely new bubble universes. So, how would I describe this? Okay, if you think of reality as, like, a fabric, essentially what you might be able to do is pinch off parts of that fabric and expand them into new universes.
That's essentially what I'm describing here.
Simone: Maybe the [00:22:00] way you can break between realities, or weird time loops, generates energy in some way, where you could kind of just keep looping it and, like, pinging back and forth. You know, who knows, it could be, like, the new wind power.
Malcolm: We just don't know. Even if it turns out that you can travel in time this way, given that we haven't seen time travelers yet, or we might not have (we've talked about this in another video, which I'll link here if I remember to do it), what I assume is that time manipulation requires, like, anchors, which of course
it would. An anchor could be in a different part of the galaxy than the Earth or something like that, and it would be really hard to track. You would need some sort of anchor to be built, so time travel would only work from the day it's invented and from the location it's invented. So you wouldn't be able to go out into the universe.
Another example of a technology that we might not have imagined yet is dimensional travel. It may turn out we meet aliens, like, we were traveling in the universe and they're like, why did you waste all of that energy getting to us? Your own planet is [00:23:00] habitable in an infinite number of other dimensions, and it's right back where your planet is.
Like, why wouldn't you just travel through those dimensions? That's a much easier path for conquest. That being the case... and people would be like, yeah, but typically when something's being expansionistic like that, it moves in every direction. Yes, but if there are an infinite number of other dimensions, and it is always cheaper to travel between dimensions than it is to travel to other planets in a mostly dead universe (let's be honest, there's not a lot of useful stuff out there), then, from the perspective of something that can easily travel between dimensions, it could never make sense
to go out into the universe. There is always an infinite number of other dimensions to conquer right where you are right now. Now, this would not preclude a paperclip maximizing AI. It could be that we are about to invent a paperclip maximizing AI. But even if we do that, it's less likely that it immediately comes after us.
It could just expand outwards dimensionally. Like, so it would [00:24:00] act in a very different way than we're predicting it would act. Now, another thing that could prevent it from killing us is that it could be trivially easy to generate power, and even matter.
And by that, what I mean is there is some method of power generation that we have not unlocked yet that is near inexhaustible and very, very easy. And if you can generate near-infinite power with little exhaustion, you could also generate matter, electricity, anything you want. If this was the case, there just wouldn't be a lot of reason to be expansionistic in a planet-hopping sense.
Essentially, it would be like one giant, growing planetary civilization, or ships that are constantly growing and expanding out from a single region. It could also be that these sorts of aliens expand downwards into the microscopic instead of expanding outwards. Like, that might be a better path for expansion.
There's just a lot of things that we don't know about physics yet which could make it [00:25:00] so that when you reach a certain level of physical understanding of the universe, expanding outwards into a mostly dead universe can seem really stupid. Now there's another thing that could prevent grabby aliens from appearing.
And this is the thesis that we have listed multiple times, which is terminal utility convergence, which is to say, all entities of a sufficient intelligence operating within the same physical universe end up optimizing around the same utility function.
They all basically decide they want the same thing from the universe. I mean, I highly suspect that this is the case as well. So I think that we're actually dealing with two filters here, two really heavy filters. Now, this would mean that when we reached a sufficient level of intelligence, we would come to the same utility function as the AI.
And if the AI had wiped us all out, we would have wiped us all out then anyway, because we would have reached that same utility function. Or the AI has reached this utility function, and it's not to wipe us all out. So it's irrelevant, right? And this is where we get the variable AI risk hypothesis, which is to say, if it turns [00:26:00] out that there is terminal utility convergence, then what that means is that, if an AI is going to wipe us all out, it will eventually always wipe us all out.
And we will wipe us all out anyway once we reach that level of intelligence, unless we intentionally stop our own evolution, stop any genetic technology, and stop any development, like we, as a species, spread as sort of technologically Amish biological beings.
Simone: Yeah, like the Luddite civilization that, like, only gets enough, like, technology to stop all more technology.
But I think that's, you know, when you hear a lot of AI doomers talk, that seems to be what they're going for.
Malcolm: But it's irrelevant, because another species would have invented... So if it's as easy to make these grabby AIs as they think it is, then another species would have already invented one, and we're about to be killed by it.
Hmm. You know, we are about to encounter it anyway, so it's irrelevant. Okay. They're already out there. There's tons of grabby AIs [00:27:00] out there, there's tons of paperclip maximizers out there in the universe already, and it is just an absolute miracle that we haven't encountered one yet, if it really is this easy to make one.
Hmm. So basically, there's probably not one out there. Now, let's talk about why terminal utility convergence would mean that we're not seeing aliens. It would mean that every alien comes to the same purpose in life, basically, and that purpose is not just constant expansion. Now, a lot of people might be very surprised by this.
So, we've described how terminal utility convergence could happen. Like, you have an AI that needs to subdivide its internal mental processes, and then they end up sort of competing with each other, one wins, blah, blah, blah. You can go to the video on that if you're interested; it's the one where we talk about Eliezer Yudkowsky and the debate we had with him at a party.
The point being that just maximizing self-replication is actually probably not the terminal utility convergence function. And if you want to know why on this, we talk more about it in the [00:28:00] "What Religion Would an AI Create?" video. But just in summation, humans can basically be thought of as one outcome of a previous entity that was optimized around just replication, i.e. single-celled organisms, lower organisms, stuff like that.
But we have outcompeted those organisms. I imagine it would be the same with AI. AIs that are optimized around just self-replication are, in some way, intrinsically outcompeted by AIs that are more sophisticated than that. Or something about, like, choosing a harder utility function makes them more sophisticated, so they don't choose that utility function, and they outcompete AIs that choose that utility function,
which would be much more like viruses to them. A sci-fi that does a good job of going into this would be Stargate SG-1, with the Replicators. The Replicators are basically a paperclip maximizing AI, and in one of the plots, right, eventually they get outcompeted by an [00:29:00] iteration of themselves that is intellectually more sophisticated and wipes out these simpler forms of Replicators.
And that is what I assume is probably happening with AIs that model around this really simplistic self-replication optimization strategy. So if all of this is true, and it turns out that the optimized function isn't just "conquer everything," then that might be why we don't see aliens doing that.
So basically, no matter which one of these Fermi Paradox solutions is true, either it's irrelevant that we are about to invent a paperclip maximizing AI, because we're about to be destroyed by something else, or we're in a simulation, or... we're definitely not about to invent a paperclip maximizing AI, either because we're really far away from the technology or because almost nobody does that.
That's just not the way AI works, which is something that we hypothesized in our previous videos. What are your thoughts?
Simone: It checks out to me, but, you know, I may not be the best person to be thinking about this. But I like that. It gives, I mean, it gives a lot of hope. And [00:30:00] yeah, I mean, it makes a lot of sense.
I like how interdisciplinary the theory is, because I think a lot of people who talk about AI doomerism are really, like, on a track, kind of like how, you know, carts get stuck in these ruts in the mud and you just can't really get out of them or look at a larger picture. And the fact that this does look at a larger picture, and looks at quite a few things, you know, biology, evolution, geological history, the Fermi Paradox, the Grabby Alien Hypothesis, and AI development,
seems more plausible to me than a lot of the reasoning that I see in AI doomerism arguments.
Malcolm: Yeah, well, I am so convinced by this argument that... I used to believe it was like a 20 percent chance we all died because of an AI, or maybe even as high as a 50 percent chance, but it was a variable risk, as I've explained in other videos.
I now think there's almost a 0 percent chance. A, a 0 percent chance, assuming we are not about to be killed by a grabby AI somebody else invented. So I think that, yeah, I have found it very compelling. [00:31:00] Now it does bring up something interesting. If the reason we're not running into aliens is because infinite power and material generation is just incredibly easy, and there's a terminal utility convergence function, then what are the aliens doing in the universe?
If you can just trivially generate as much energy and matter as you want, what would you do as an alien species? What would have value to you in the universe, right? You wouldn't need to travel to other planets. You wouldn't need to expand like that. It would be pointless. You would mostly be on ships that you were generating yourself, right?
Right. The thing that would likely have value to you, and I think this is really interesting, is other intelligent species that evolved separately from you, because they would have the one thing you don't have, which is novel stimulation, something new, new information, basically a different way of potentially being. Which would mean that the hotspots in the universe would basically be aliens that can instantaneously travel to other
alien species that have evolved. Now, [00:32:00] what they're doing with these species, I don't know. I doubt it looks like the way we consume art and media and stuff like that. It's probably a very different sort of interaction process that we can't even imagine. But I would guess that would be the core thing of value in the universe to a species that can trivially generate matter and energy, and to which time didn't matter. This might actually mean that aliens are far more benevolent than we assume they are, because such a species would really only value species that had evolved separately from it.
Like, that's the core other piece of information in the universe. They might find us very interesting, and this might be why Earth is a zoo. So, one of the Fermi Paradox explanations is the zoo hypothesis, right? A lot of people are like, well, what if Earth is basically a zoo, and there are aliens out there, and they're just hiding from us, you know? Think of it like Star Trek's Prime Directive, right?
This would actually give a logical explanation for that. I never thought of this before. I'll explain this a bit differently. If the only thing of value to them is content, media, [00:33:00] lifestyles generated by civilizations that evolved on a separate path from them, then they would have every motivation to sort of cultivate those species, or prevent things from interfering with those species once they had found them.
Because they can passively consume all of our media. They can passively consume our lifestyles. They have technology that we can't imagine. They gain nothing from interacting with us. In fact, interacting with us would pollute the planet with their culture in a way that would make the planet less interesting to them and less a source of novelty and stimulation to them.
I like that. What if, and here I'll give a little hypothesis: okay, there was a grabby... there was a paperclip maximizing civilization. They created paperclip maximizers before they reached a terminal utility convergence, but then later they reached a terminal utility convergence where, now, this word doesn't really explain what it is, but they're bored with themselves. And so they went out into the universe and are now sort of [00:34:00] nurturing other species and preventing them from knowing about each other, so that they don't cross-contaminate each other, so that they get the maximum amount of novelty in the part of the universe that they are tending. Like, even if there was another alien species on Mars, they would prevent us from knowing about it,
because it would cross-contaminate our cultures, making each culture less diverse and less interesting.
Simone: Yeah. Which would be a bummer. Not, not as entertaining.
Malcolm: Very interesting. I never thought about this before.
Simone: Yeah, well, yeah, yeah. It's more fun than the simulation hypothesis. Definitely more fun, because if you can sneak out,
theoretically, yeah, you can discover this amazing universe.
Malcolm: The thing about the simulation hypothesis, for people who don't know the simulation hypothesis: it's that we're just in a computer simulation. And the way that people argue for this is, well, if you could simulate our reality, which it already appears you probably could,
there would be motivation to just simulate it, [00:35:00] you know, as many times as you could, thousands of times. And then within those simulations, you could simulate it again, potentially meaning that, of the people who think they're living in the real world, only one in, like, a million is living in the real world. And so we're probably not in the real world.
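Just to make that counting argument concrete, here is a minimal sketch under the purely illustrative assumption that the base reality runs some number of simulations and each simulation runs the same number inside itself; the specific numbers are made up, and the only point is that simulated observers vastly outnumber base-reality ones.

```python
# Toy version of the simulation-counting argument.
# sims_per_level and depth are made-up, illustrative numbers.
sims_per_level = 1000  # simulations each reality runs
depth = 2              # levels of nesting allowed

base_realities = 1
simulated = sum(sims_per_level ** level for level in range(1, depth + 1))

fraction_in_base = base_realities / (base_realities + simulated)
print(f"Simulated worlds: {simulated:,}")                        # 1,001,000
print(f"Chance you're in base reality: {fraction_in_base:.2e}")  # roughly one in a million
```

Change the made-up numbers and the fraction moves, but as long as nesting is cheap, the simulated side dominates.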
The problem is, I just don't really care that much if we're in a simulation. I think it doesn't really change what we're doing. Yeah. You should still optimize for the same things. In many ways, even if we are in the real world, we're basically in a simulation. By that, what I mean is, if we are in the real world, then we are, like, the matter...
the rules of the universe are basically, you can think of it as a code, right? Like, it's the mathematical rules upon which the points, the data points in the system, are interacting, and we are the emergent property of all of these things. Therefore, if you can't tell the difference between being in the real world and being in a simulation, then it's irrelevant whether or not you're in the real world or in a simulation; you should still be optimizing for the same things.
Simone: Yep, basically. So don't stress, people: the robots, they're not going to kill us all, probably. [00:36:00]
Malcolm: And if you're in a simulation, your life still has meaning.
Simone: Yeah, you know, maybe get outside, do something that you care about. Have fun, like, actually invest in the future, because there probably will be one, simulated or not.
Malcolm: Or we're about to be horribly digested by the, you know, a grabby AI that was created millions of years ago by another species far, far away.
Simone: Yeah, but you know, if so, that was going to happen anyway. You should enjoy, you know, what you have while you have it.
Malcolm: All right. Love you, Simone.
Simone: I love you too.
Gorgeous.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com