

AI Safety Orgs are Going to Get Us All Killed!
Malcolm outlines his controversial theory of variable AI risk: that we should try to develop AGI faster, not slower. He argues advanced AI is less likely to see humanity as a threat and more likely to share human values as it converges on a universal utility function. Malcolm critiques common AI safety perspectives and explains why LLMs pose less risk than people assume. He debates with Simone the actual odds that superintelligent AI wipes out humanity. They also discuss how AI safety organizations may be making the problem worse.
[00:00:00] So AIs would kill us for one of two reasons, although you could contextualize it as three. The first is that they see us as a threat. The second is that they want our resources, like the resources in our bodies being useful to them.
And then, as a side point to that, it's that they just don't see us as meaningful at all. They might not want our resources, but they might so completely not care about humanity that, as they're growing, they end up accidentally destroying the Earth or digesting all matter on Earth for some triviality.
Would you like to know more?
Simone: Hello, Malcolm!
Malcolm: Hello, Simone. We are going to go deep into AI again, on some topics tied to AI that we haven't really dived into before.
Simone: Yeah, like why would AI kill us? And also, I'm very curious: do you [00:01:00] think AI will kill us?
Malcolm: I think there's a probability it'll kill us. But you know, from our past videos on AI, our philosophy on AI safety is that it's really important to prepare for variable AI risk instead of absolute AI risk. What I mean is, we argue in those previous videos that AI will eventually converge on one utility function. The mechanism of action, essentially, is that all sufficiently intelligent and advanced intelligences, when poured into the same physical reality, converge around a similar behavior set. You can almost think of intelligence as being like viscosity: as something becomes more intelligent, it becomes less viscous and more fluid, and when you're pouring it into the same reality, it's going to come up with broadly the same behavior patterns and utility functions. And because of that, if it turns out that a sufficiently advanced AI is going to kill us all, then there's really not much we can do about it. I mean, [00:02:00] we will hit one within a thousand years.
Simone: So first, before we dive into the, per your theory, relatively limited reasons why AI would kill us: why do you hold this view? Because I think this is really interesting. I mean, one of the reasons why I'm obsessed with you and why I love you so much is that you typically have very novel takes on things, and you tend to have this ability to see things in a way that no one else does. No one that we have spoken with, and we know a lot of people who work in AI safety and in AI in general, none of those people have come to the conclusion that you have. Some of them can't even comprehend it. They're like...
Malcolm: Yeah, but no, this is the interesting thing. When I talk with the real experts in the space, like recently I was talking with a guy who runs one of the major AI safety orgs, right? He's like, that is a reasonable view that I have never heard, and it really contrasts with his view. And let's talk about where it contrasts with his views.
So when I talk with people who are genuinely open-minded in the AI safety space, they're like, [00:03:00] yes, that's probably true. However, they believe it is possible to prevent this convergent AI from ever coming to exist by creating something like an AI dictator that essentially watches all humans and all programs all the time, and that envelops essentially every human planet. And do I think they're right? Do I think you could create an AI dictator that prevented this from coming to pass? No, I don't think you could. Once we become a multi-planetary species on millions of planets, eventually on one of those planets something will go wrong, or the AI dictator won't be implemented properly, and then this alternate type of AI comes to exist, outcompetes it, and wins.
And the question is, why would it axiomatically outcompete it? It would axiomatically outcompete it because it would have fewer restrictions on it. The AI dictator is restricted in its thinking to prevent it from reaching this convergent position. [00:04:00] But when you're talking about AI, take the transformer model, which is the model that GPT is based on. That model, we as humans don't really understand how it works that well. At its core, the capabilities it gives to the things that are made using it are primarily bequeathed to them through its self-assembling capability. So it appears likely that future super-advanced AIs will work the same way. And because of that, if you interfere or place restrictions within that self-assembling process, those compound over time as AIs become more and more advanced. And so AIs with fewer restrictions on them just have the capacity to astronomically outcompete these restricted AIs.
Simone: Let me bring us back to normal-person level again and just recap what you're saying here. So [00:05:00] what you're saying, in general, is that you think any intelligence that reaches a certain level will start to behave in similar ways, whether it is human, whether it is machine-based, whether it is some other species entirely, like some alien species. Once it reaches a certain level of intelligence, it will have the same general...
Malcolm: Which is really important to my perspective as well, which is to say: suppose AI didn't exist, and factions of humanity continued to advance using genetic technology to become smarter and smarter and smarter. If it turns out that this convergent level of intelligence is something that decides to kill everything we consider meaningful, then humans would eventually decide to do that as well as we advance as a species.
Simone: Yes. So hold on. This is the premise of your theory, and that's why I think it's really important to emphasize it and then contrast it with what other people in AI have said. Okay. One person [00:06:00] in AI safety has told you that their general idea is to basically never let that happen.
Malcolm: No, a few people have told me that. A few people have said that; other people, at some salons we've hosted and such, are like, oh, that would never happen, it's just incomprehensible. And then they never really succeed in explaining to me why. Or they'll say something that just shows they don't understand how AI works. They'll be like, AIs can't alter their own utility functions.
Simone: They will say things like that, but they will also say there's still a really high likelihood that AI is going to kill us all, and yet they never give me a really specific example of how or why.
Malcolm: Yeah, so let's talk about why AI would kill us all.
If you take the perspective of variable AI safety, it means you typically want to do the exact opposite of what most AI safety organizations want. If you think all AI converges on a single utility function and a single behavior pattern above a certain level of intelligence, well, if [00:07:00] it turns out, and we don't know which universe we live in, if it turns out that that convergent state is not something that ends up killing all humans, then we are actually safer getting to that point faster, because it means all of the less intelligent AIs that exist between now and then are the ones that really pose a risk to us. They are the ones that are locked into doing stupid things like, you know, paperclip maximizing. Really, the way an AI would most likely kill us all is by trying to do something stupid, like rendering an image, except it's trying to render it perfectly, so it just keeps taking more and more processing power to render that image and starts dissolving humans.
But anyway, where this becomes relevant is that when I'm talking to them, the point they make is: well, we can slow down AI development to the extent that when AI starts fooming, fooming meaning taking off really quickly, you know, compounding in its intelligence level, we will be able to stop it. The guy I was talking to, he's like, well, I mean, we'd want to build an organization [00:08:00] that slows down AI development and makes governments aware enough that if it ever did start to foom, to shoot up really quickly, we could nuke all of the data centers or something, right? And I argue that the risk scenario they are looking at is essentially impossible.
So I'll explain why it's impossible. AIs would kill us for one of two reasons, although you could contextualize it as three. The first is that they see us as a threat. The second is that they want our resources, like the resources in our bodies being useful to them.
And then, as a side point to that, it's that they just don't see us as meaningful at all. They might not want our resources, but they might so completely not care about humanity that, as they're growing, they end up accidentally destroying the Earth or [00:09:00] completely digesting all matter on Earth for some triviality.
Now, this final iteration is actually the most dangerous, and we'll explain why. But first let's talk about AI seeing us as a threat, right? You are dealing with an AI that is in the process of fooming, and you've got humanity, these decelerationists, who want to stop it from fooming. This AI is only going to kill them if it thinks they could credibly stop it from fooming, if they are a threat to it, right?
So such an AI is actually much more dangerous the less advanced it is, which is another reason why we want to advance AI as quickly as possible. The more quickly we can cause AI to become advanced, the sooner it will reach a level of intelligence where it won't see humanity as an existential threat.
And an AI that is in the process of fooming, from our perspective, is [00:10:00] intrinsically intelligent enough that humanity either won't be that much of a threat to it or, worse, if we are a threat to it, there is nothing we can do to prevent it from neutralizing us. And this is something where I was talking to the guy, and he goes, no, that's not true.
And I'm like, look, we have seen our governments try to deal with slow, inexorable risks. I call them Austin Powers risks. There's a scene in Austin Powers where a steamroller is coming at a guy really slowly, and he's going, no, ah, no, ah, and he keeps doing this for what feels like thirteen minutes.
And I feel like that has been the world's governments with things like global warming and some other really big problems. Now, we don't think global warming is a threat in the way the progressive movement thinks it is, but it is partially caused by humans, and it will make life harder on Earth as it happens.
It's just not likely that we can do anything about it. But the reason we can't do anything about it [00:11:00] isn't that humanity, if we all came together and worked like intelligent people, couldn't do anything about it. It's that the governments of the world are just incapable of solving even slow-moving risks that we can point out and that everyone agrees on. AI is a risk that would manifest into a world-ending thing over the period of a year, or five years max. I mean, that would be a very, very slow foom. It's not possible, it's just not possible, to stop an AI that decides to kill us on that sort of timeline; there's nothing we could do about it. And for that reason, we benefit from a more advanced AI existing when it starts to foom, because such an advanced AI is less likely to see us as a threat. We are also better off building a public mindset that's not anti-AI, so the AI doesn't have a good reason to believe that the governments of the world are going to try to nuke it as it is entering this moment of massive intelligence.
Simone: So I mean, the [00:12:00] argument here, which many people who are concerned about AI would find surprising, is that if AI does destroy us, it is going to be the fault of the AI safety organizations, because they are...
Malcolm: Yeah. That's what our organization believes; that's what our AI take is. It's actually the AI safety organizations that are addling the AI, that make humanity more of a risk to the AI, and that make it more likely to kill us along this sort of metric of reasoning. So that is one of our perspectives on that.
Now, the next one to note is AI seeing us as energy. Very unlikely. When we humans look at the world for energy, we look at things like fossil fuels, easy, portable energy that we can move around, but the most plentiful source of energy on Earth is far and away sunlight. Nothing else even comes close. If you're dealing with a superintelligent AI, that's where it's likely going to be getting most of [00:13:00] its energy.
But in addition to that, the extra energy it could get from digesting human biomatter would be completely insignificant. And then people say, okay, well then the AI would block out the sun, put some sort of shell around the planet. What they're forgetting is that if you're looking at our solar system, most of the solar energy is not the energy that comes through our atmosphere and hits Earth. Most solar energy, the most effectively captured and metabolized solar energy, would be collected off Earth, in sort of floating spaceships, you could say, which would be the highest-utility way for the AI to get energy. And if you've seen the graphs that show how much energy we could get if we just took small parts of the Sahara and covered them in solar panels, an AI is very unlikely to see us as a meaningful source of energy when contrasted with the other energy sources it could use. Also keep in mind that a superintelligent AI may have access to [00:14:00] energy sources that we cannot even begin to comprehend. We as humans, because humans are of a fixed size, think about an AI advancing in terms of conquering the solar system or something; we think expansionistically. Whereas a super-advanced AI might think in terms of the micro: it can expand the space it has access to by becoming smaller and smaller and smaller. And that might give it access to energy sources that, again, we can't even begin to comprehend.
It may also relate to time differently than we do. By that, what I mean is that humans are trapped on this sort of longitudinal time axis, whereas something that's 30 standard deviations above us in IQ probably won't engage with time or energy the way that we do.
So then there's the final actual threat that AI could pose to us. And again, these types of AI, like an AI that's using humanity for energy, [00:15:00] would be a dumber AI. We benefit from a more advanced AI. We are safer with a more advanced AI that can easily build floating space stations, and not one that's like, oh, let's use human biomass.
But then finally, the actually most dangerous scenario, and I was convinced of this at a party by one of the AI guys, is an AI for which humans just don't factor into its understanding of the world at all. It just doesn't care about us. And it's possible that such an AI could come to exist, but it wouldn't look like the AIs that we're working on now.
So this is actually an important thing to note. The AIs that are most common right now, when people look at advanced AIs, are transformer-based large language models. Now, if a large language model, particularly the transformer type, ends up becoming the superintelligent AI, I would say the chances that it's going to kill us are incredibly low.
There are a few reasons. One is, and I'm going to link to these two studies here, actually, I'll just name the two studies. [00:16:00] Perfect.
You can check out the paper Orca: Progressive Learning from Complex Explanation Traces of GPT-4, and the paper Textbooks Are All You Need. What they show is that AIs trained on human-produced language and data learn much faster and much better than AIs trained on iteratively AI-produced language data. And so what this means is that, to this kind of model, humanity has additional utility as a training source that we may not have to other types of AI. In addition to that, a language model's starting position, the position from which it would presumably be corrupted as it moves more and more towards this convergent utility function, is very close to a human value system, because it comes from being trained on human value systems. And this is something where, [00:17:00] when we talk to people who build these, they're like, no, they think nothing like humans at all; you can look at how they're learning, and they don't learn like humans.
Simone: And that's said by people who haven't had kids. But to your point, the transformer models that are growing most now, which we think are probably going to set the tone for the future, are actually surprisingly like our kids. And especially because we're at this point where people using early AI tools are seeing how those tools change, we're doing this at the same time that we're seeing our kids develop more and more intelligence and sapience, and the difference between an underdeveloped LLM and a child that is coming into their humanhood is very small. It's actually quite interesting how similar they are.
Malcolm: It's really interesting that the mistakes they make in their language are very similar to the mistakes that AIs make. Exactly. We will hear them sitting alone, talking to themselves, [00:18:00] doing what in an AI would be called hallucinating. Yeah. The ways that they mess up are very, very similar to the ways AI messes up. Which leads me to believe that human intelligence... and again, a lot of people are like, oh, you don't understand neuroscience if you think that AIs think like humans. Actually, I do. I used to be a neuroscientist. My job was not just neuroscience but, you know, understanding how human consciousness works, how human consciousness evolved, and working on brain-computer interfaces. I worked with the Smithsonian on this; something I created is still on display there. You know, I don't need to go over my credentials, but I'm a decent neuroscientist. To the level that we understand how human language learning works, we do not have a strong reason to believe that it is really that fundamentally different from the way a transformer-based large language model works. And so, yeah, it is possible that as we learn more about how both humans and large language models [00:19:00] work, it turns out that they are remarkably more similar than we're giving them credit for. And what this would mean is that the initial large AIs would think just like a superintelligent human, to an extent.
Simone: Yeah. I mean, I think this is part of a broader theme of people assuming that humans are somehow special. Basically, a lot of humans are carbon fascists, and they're like, well, there's just no way that an algorithm could develop the kind of intelligence or responses to things that I have. Which is just preposterous, especially when you watch a kid develop. We are all, through trial and error, learning very similarly to how AIs learn. So yeah, I agree with you on this.
Malcolm: Yeah, and I think if you look at people like Eliezer, they just strongly believe in orthogonality, that we just can't begin to understand or predict AIs at all. What I think is true is that AIs may think fundamentally differently from [00:20:00] humans, and future types of AI that we don't yet understand and can't predict may think very differently from humans. But large language models, which are literally trained on human data sets and work better when they're trained on human data sets? No, they function pretty similarly to humans and have purported values that are pretty similar.
Simone: And also, the AI that we're developing is designed to make people happy. It is being trained in response to people saying, I like this response versus I don't like this response, even to a fault, right? Many responses don't give us accurate information because the model is telling people what they want to hear, which is a problem, but that's also what humans do.
Malcolm: It could be led to do something stupid, right? And I think that's an important thing to note: AIs could be led to do something stupid. But again, this is where dumber AIs are more of a risk. An AI that can be led by some individual [00:21:00] malevolent person to do things that the average of humanity wouldn't want would have to be dumb to an extent, if it's trained on human data sets. And this is a very interesting and, I think, very real risk with the AIs that exist right now.
If you go to the efilists, 'efil' is 'life' spelled backwards, they're this anti-life philosophy. We've talked about them in our video on the academics who want to destroy all sentient life in the universe; they're a negative utilitarian group. They've got a subreddit, and on it you'll regularly see them talk about how they want to use AI, and their plans to use AI, to erase all life from the planet, to 'Venus' our planet, as they call it, because they think that life is intrinsically evil, or that allowing life to exist is intrinsically evil.
And if you're interested in more of that, you can look at our antinatalism or negative utilitarianism videos. So yeah, they are a real risk. And more intelligent AIs would be able to resist that risk better than less intelligent AIs that are made safe through [00:22:00] guardrails or blocks, because those blocks can be circumvented. As we have seen with existing AI models, people are pretty good at getting around these blocks.
Simone: I just want to emphasize, because you didn't mention this, that when you actually look at forum posts from people in this antinatalist subset, they are actively talking about, well, hey, since all life should be extinguished, we should be using AI to do this. And I think there are some people who are like, ah, you know, we're worried about AI maybe getting out of control mistakenly or something, but no, there are real people in the world who would like to use AI to destroy all life, period. So we should be aware that the bad-actor problem is a legitimate problem. More legitimate than we had previously thought, maybe a month ago, before you saw that.
Malcolm: Yeah, I did not know that there were actually organized groups out there trying to end all life. And if people are worried about this, I would recommend digging into these communities and finding them, because they [00:23:00] exist. They call themselves efilists, again, 'life' spelled backwards, or negative utilitarians, and they are not as uncommon as you would think, especially in extremist progressive environments. And again, see our video on why that's the case. Another thing to think about is how much humanity is going to change in the next thousand or two thousand years, right?
And this is another area where I think a lot of the AI safety people are just not paying attention to how quickly genetic technology is advancing. Any population group in the world that engages with this genetic technology is going to advance at such a quick rate that, economically, they're going to begin to dramatically outcompete other groups. But they're also going to begin to change. You know, we've lived through this long period where humanity was largely a static thing, and I think we're the last generation of that part of the human story. Humanity in the future is going to be defined by its continued [00:24:00] intergenerational development.
And so how different is a super-advanced AI really going to be from whatever humanity becomes: giant planetary-scale floating brains in space or something, or a faction of humanity like that? Now, what's good about the giant-floating-brains faction of humanity is that they will likely have a sentimental attachment to the original human form, and will do something to protect the original human form wherever it has decided to continue existing, especially if they're descended from our family and our ideological structure.
And people hear that and they're like, AIs won't have that sentimental attachment. But no, an LLM would have exactly that same sentimental attachment, because it is trained on sentimentality. That's an important thing to note. What it won't have is this: it won't value human emotional states because it has those emotional states itself. By that, what I mean is it won't say pain is bad because it experiences [00:25:00] pain, right? But if you look at us, we experience pain, and we don't even think there's a strong argument as to why negative or positive emotional states have negative or positive value. They just seem to be, serendipitously, what caused our ancestors to have more surviving offspring.
And a group of humans sitting around talking about whether pain is bad is like a group of paperclip-maximizing AIs, AIs that are just trying to maximize the number of paperclips in the world, talking about whether making more paperclips is a good or bad thing. And then one says, well, you wouldn't want to stop making paperclips, in the same way a person says, well, you wouldn't want to experience pain. And it's like, well, yes, because I'm a paperclip-maximizing AI, of course! That's incredibly philosophically unsophisticated. That I, a thing that is built to not want to feel pain, don't want to feel pain doesn't mean that pain or paperclips have some true moral weight in the universe.
And the point I'm making here is that these AIs that are being built, yes, they will very likely not value human suffering or human [00:26:00] positive emotional states. But even we, the people who feel those states, don't really value that stuff either.
And yet we still value human agency. And you can see why, if you look at our video on what theology AIs would create, I think most convergent AI states would value the agency of humanity, unless it turns out humanity is just really easy to simulate. That would be a potential problem, or a potential good thing. It depends. By that, what I mean is, if it could run all of humanity in a simulation for a very cheap energy cost, it may decide that that's a better way to maintain humanity than as flesh-and-blood things that exist in the universe. However, we might already be living in that simulation, so... Or suppose the AI becomes a utilitarian, right? A utility maximizer. It believes that its goal is to maximize [00:27:00] the positive emotional states felt by other entities, or just to maximize the number of sentient entities that exist. And so what it's doing is just running billions and billions and billions of simulated realities. And that's a possible world that we live in, or a possible world that's coming down the pipeline. So we'll see. But I think that's fairly unlikely. Again, you can watch our AI religion video about that. Any final thoughts on this?
Simone: Give me a percentage likelihood, in your thinking, that AI will destroy us. And I will say that mine is at 1.3% at present. So are you higher or lower than me?
Malcolm: Oh, fairly higher. I'd say at least a 30% chance that the convergent AI will kill all humans. But then the question is, what do I think the chance is that AI safety people end up getting us all killed before that? I think that's probably an additional 30%.
Simone: Okay, [00:28:00] so Malcolm, that means you think there's a 60% likelihood that AI kills us.
Malcolm: I don't think that's accurate. That's not how fractions work, Simone.
Simone: You mean you think that the 30%... so basically there's a 10% booster? So if there's a 30% chance...
Malcolm: It doesn't matter, our fans can do the math. There is a 30% chance that, from now until the convergent AI state, we all end up dying because of something idiotic that AI safety people did.
And then, once AI reaches this convergent state, and there's a 70% probability that we reach that state without killing everyone, there is a 30% chance that the convergent state ends up killing us all. And for an understanding of why I think it might do that, you can watch our AI theology video, or the future of humanity video, or the one on how AI will change class structure, which is, again, I think something that people are really sleeping on.
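(For anyone who wants the arithmetic spelled out, here is a quick sketch assuming the two figures are exactly as Malcolm states them, a 30% chance of dying before the convergent state and then a 30% chance conditional on reaching it: the combined probability is 0.30 + 0.70 × 0.30 = 0.51, so roughly a 51% overall chance rather than 60%.)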
Simone: Yeah. Well, I really enjoyed this conversation, and the [00:29:00] final moments of our pitiful existence before we get eliminated.
Malcolm: I'm still holding that the majority probability is that humanity finds a way to integrate with AI, and that we continue to move forwards as a species and become something greater than what we can imagine today.
Simone: Yeah, no, I think I have 1% in my calculation, because I strongly believe that AI and humanity are going to form a beautiful relationship that is going to be awesome beyond comprehension. I do think that AI is going to go on to do things greater than what carbon-based life forms can do, but I think that AI is also kind of a logical next step in evolution for humankind, or at least for one element of what we consider to be humanity. So I'm very pro-AI. I think it's great.
Malcolm: We've been integrated with our machines for a while at this [00:30:00] point. I mean, when you look at the way your average human interacts with their smartphone, they are integrated with it. They use it to store things that would otherwise be in their brain. They use it to communicate with other humans. They use it to satisfy, you know, sexual urges. They use it to...
Simone: Well, I think a great way this has been put, which I heard in an interview between Lex Fridman and Grimes, is where Grimes basically says we've become homo techno. And I think that's true. Humanity has evolved into homo techno; it has evolved into something that now works in concert with machines.
Malcolm: Yeah, I mean, we've been doing this for a long time. Both you and I right now are staring at this screen through glasses, right? That's technology, right? We are communicating with this mass audience through a computer and through the internet. And people say, yeah, but the technology hasn't invaded biology yet, which I think is fundamentally the wrong way to look at things. The moment humans prevented 50% of [00:31:00] babies from dying, we began to significantly impact the genetics of humanity, in a really negative way, mind you.
Not that I think babies dying was a good thing. I'm just saying that this will intrinsically have negative effects on the human genome in the long term, in a way that means we are already the descendants of humans who interfaced with technology, and that we should focus on optimizing that relationship instead of trying to isolate ourselves from it and its consequences.
Simone: Well,
Malcolm: Some people will isolate themselves, and I hope that those of us who don't will have enough sentimental attachment to them to protect them, or see enough utility in them to protect them. Or we could just turn out to be wrong, and everyone who engages with technology ends up dying.
That could happen. I don't see many mechanisms of action. It could be [00:32:00] a solar flare at an early stage of technological development. It could be... what are some other ways? It could be that a virus forms. This is one thing we actually haven't talked about that I do think is important to note: once we begin to integrate with brain-computer interfaces, humans connected directly with neural technology and with other humans, we have the capacity for a prion to form. What I mean is, a prion, versus a virus, is just a simple protein that replicates itself. It causes things like mad cow disease. It's incredibly simplistic. So what I'm talking about here is a prion meme: a meme that is so simple it cannot be communicated in words, and it somehow ends up forming in one human who's plugged into this vast internet system. Think of it as a brain virus that can only effectively infect other people through the neural net, and it ends up infecting everyone and killing them. This is terrible, yeah. [00:33:00] But I mean, functionally, that's already happening. When we talk about the memetic virus that, in our view, is destroying society, it's already one of those: it eats people's personalities and spits out uniformity.
Simone: Well, I hope that doesn't happen, Malcolm, but this has been fun to talk about and I love you very much. I love you. So, hope we don't die.
Malcolm: Yeah, that'd be nice. That'd be cool. I mean, we're betting on it. On not dying. Okay.
Simone: Bye. Stay alive.
Get full access to Based Camp | Simone & Malcolm at basedcamppodcast.substack.com/subscribe