

Based Camp: You Probably are Not Sentient
Embark on a deep exploration of the nature of consciousness, self, and the human experience in this thought-provoking video. We dissect the intriguing notion of consciousness as an emergent property of a memory compression system, comparing the mind to a building security system with diverse inputs. Our dialogue delves into how consciousness could influence automatic responses, the deceptive role of consciousness as a 'lying historian', and the perplexing interplay of actions, conscious awareness, and free will.
We challenge common assumptions about universal human experiences, shedding light on the absence of an internal monologue or mental imagery in many individuals. We probe into the role of language and narrative in shaping emotions, and how understanding our mental processes can foster improved interpersonal relationships.
Part of the conversation focuses on the potential decline in IQ due to genetic markers, the role of language acquisition in the development of consciousness in children, and how narrative building might be detrimental. There's a look at the future of humanity, discussing how integration with technology could enhance human experience and our consciousness's susceptibility to modeling others' behaviors and emotions.
The final segment delves into anthropomorphism, artificial intelligence, and our emotional reactions to robots. We share personal experiences with academia, independent research, mental health, the autism-schizophrenia spectrum, and our personal lives and relationship. Join us in this captivating dialogue that blends philosophy, neuroscience, technology, and personal reflections.
Below is a poorly translated transcript of the video. Maybe one day we will have fans to fix these up but for now this is what you get:
Hello Malcolm. Hello Simone! I love your response. I love that it is your signature greeting with people.
Very high energy, but I also think it's an element of your social autopilot. Not that I don't have a social autopilot, I'm on it right now, but I think that's a really interesting part of human existence, because for the vast majority of our lives, I don't think we're actually sentient, let alone sapient, not even really conscious, not even really aware of what's going on.
Oh yeah. And I think it's so arrogant when people pretend that they are aware of most of their lives. We talk about something called road hypnosis, where people look back on a drive and think, I don't remember what I was doing during the drive.
Their brain just shuts off recording. And the question is: how much of our life is road hypnosis?
And I think it's a huge portion of our life. This is what initially got us talking about consciousness early in our relationship: how do we at least enter moments of lucidity where we are aware of what's going on, somewhat sentient, just long enough to change things about the internal self-model that runs our autopilot? So that at least for the majority of life, when we are on autopilot, we are better serving our values: better people, more productive, more emotionally in control, et cetera.
And I think our thoughts on consciousness evolved in interesting directions from there, when we started really thinking about what consciousness means and why it might exist. So I think this will be really fun to talk about.
So why don't you talk a bit about what you think sentience is?
I think sentience, our experience of consciousness, in other words, is really an emergent property of a memory compression system. So imagine you have a building security system with tons of different inputs: a feed of doors opening and closing within the building, a bunch of different camera feeds, a chemical monitoring system coming in.
Everything's feeding into this one control room, then being put into a camera feed, then being stored in memory, and there's a man watching the security feed. And I think that's our experience of consciousness: our minds are synthesizing smell, sight, hormonal fluctuations, a lot of very complex inputs.
They're synthesizing them into something that can be compressed into a unified memory, which, if relevant, will be stored in long-term memory, and which may in turn influence automatic, instinctual responses. And because this memory is being codified, and in the moment it's being run through like a camera system,
we're getting the impression that there is some kind of observing, conscious driver that is running consciousness.
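The "control room" analogy above can be sketched as a toy program. This is purely illustrative; all names and numbers here are invented, and it is in no way a model of how the brain actually works:

```python
# Toy sketch of the "security control room" analogy from the conversation.
# Many raw feeds are fused into one compact record; only salient records
# are kept, which is the "road hypnosis" effect for the rest.

def compress(inputs):
    """Fuse many raw feeds into one small 'experience' record."""
    salience = max(inputs.values())
    return {"summary": max(inputs, key=inputs.get), "salience": salience}

class ControlRoom:
    def __init__(self, threshold=0.5):
        self.long_term_memory = []
        self.threshold = threshold

    def step(self, inputs):
        # Local feeds are fused into one unified record (the "camera feed").
        experience = compress(inputs)
        # Only salient experiences are written to long-term memory.
        if experience["salience"] >= self.threshold:
            self.long_term_memory.append(experience)
        return experience

room = ControlRoom()
room.step({"sight": 0.9, "smell": 0.2, "hormones": 0.4})  # salient: stored
room.step({"sight": 0.1, "smell": 0.3, "hormones": 0.2})  # dull: dropped
print(len(room.long_term_memory))  # 1
```

The point of the sketch is that anything referencing the past can only see the compressed records, never the raw feeds, which is the claim being made about memory in the dialogue.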
If I'm going to run this back to you, it's almost like what you're saying is: this guy sitting at the feed is collecting all of these different camera inputs, all of these different sensory inputs, and they are encoded into this single, quote unquote, experience, which is being written onto the hard drive of this computer. And when he references what happened in the past, when anybody references what happened in the past within this big security array, they are referencing this encoding.
And it is because they are referencing this encoding that it creates the false perception that the way this encoding works is the way these things are being experienced in the moment, and that there's some intentional driver shaping each decision through that interface.
Whereas the interface only actually has an effect insofar as
the memory itself influences automatic reactions. And I think the research supports this. We automatically respond to things; we automatically start taking action in response to a stimulus before we have any conscious understanding that we're doing it.
Yes, and our memories, absolutely. MRI studies have shown this. And while our memories will influence those responses, our current experience of consciousness is not in the driver's seat. It is just passively experiencing this encoding of memories.
It believes it's in the driver's seat. And I think what's really interesting is that it will apply this feeling of consciousness to any experience you're having or any action you're taking. So when you're doing open brain surgery on someone, you need to keep them awake to avoid accidentally cutting a part of the brain you're not supposed to.
So they'll check, right? You can do things like apply a small amount of electricity to a part of the brain and get the person to move their hand, and then you ask them, why did you move your hand? And they'll say, oh, I felt like moving my hand. And you can also see this with split-brain patients.
These are patients whose corpus callosum has been severed, so their right brain and their left brain function pretty independently of each other. You can cover one eye and communicate with one part of their brain and not the other.
So you can tell one part of their brain to pick up a Rubik's cube and try to solve it. Then you cover the other eye and ask, okay, why did you do that? And they'll say, oh, I just felt like solving a Rubik's cube. I always wanted to do this. And you can do this with more complicated things.
So there's this experiment, a really great one, where they would give people pictures of attractive women and say, which is the most attractive? And then they'd do a little sleight of hand later and ask, okay, why did you say this one was the most attractive? But it wasn't the one they chose.
They'd actually replaced it with another picture. And you can do this with political beliefs as well, and all sorts of other things. Most people will say, oh, I chose this person for X, Y, and Z reasons, and go into detail about why they chose that person, even though that wasn't the person they chose. Which shows that a lot of our consciousness, a lot of the way we describe our sentience, is more like sense-making of our environment.
We know we made X decision, but X decision was actually made completely outside of our sentience's control. And then we have this little lying historian in our head who says, no, I made the decision, I made the decision, I make every decision. But he's also recording the history that we remember.
So then he's going through and saying, okay, I made the decision for this and this reason. And it's not that he doesn't have any say. This is where he does have a say, and it's something that you mentioned, which is that he can encode emotions into the things we're doing. Actually, no, emotion isn't the right word, because emotions do matter on their own; let's say emotional narratives.
Emotional narratives, yeah. They can encode positive or negative modifiers, and they can shift the narrative. They can change the camera angle, or add sad music to something, essentially, to make it seem like a sad scene.
I'm sure you've seen the YouTube video of the Mary Poppins trailer recut with scary music, and it just seems, oh yes, horrifying, yes. Like that. So that's how we can change the narrative. Yeah, that is how we can change the narrative.
And the first time I was ever introduced to this idea that we take action before we are consciously aware of it, the person discussing it said there are a lot of implications to this, because it would lead many people to believe they don't have free will and to just say, oh, none of this is my fault, I didn't consciously make this decision anyway. We would say that's really not quite the right conclusion, because you do have the ability to color how you perceive reality.
It's just not in the immediate, synchronous way you would expect. I would say this is the myth of humanity versus the actuality of humanity. And we would argue that we likely evolved this ability because it was like a compression algorithm for communicating ideas to other people.
I actually don't suspect that great apes have this sort of internal thing we call consciousness, because they didn't need to communicate these experiences. It's a really good compression algorithm for linear experiences over time. But one of the big lies that happens throughout this process is that it convinces us we are a singular entity, when in fact our brains function much more like we see AIs function, with individual instances running.
And we can see this with the corpus callosum split I mentioned earlier, which basically means we have two largely separate parts of our internal mental processing that happen separately from each other. This idea that the decisions you make happen before they enter your conscious mind basically means you have another part of your brain that is making the decision and then delivers it to the conscious mind.
When we were talking about the idea of a security camera system with a bunch of different feeds: a lot of the processing is done locally at those various security cameras before everything gets centralized into this sort of communal feed, with many of the quote unquote decisions being made at those local levels.
And so we have this illusion of ourselves as a singular entity, which is created by the way this sentience processor works. But it is just an illusion. So when we say, oh, we don't really have self-control, or we're not responsible for our decisions, I think that actually overstates the degree to which we exist in any meaningful sense close to how we think we exist.
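The "decisions made locally, narrated centrally" idea can likewise be sketched as a toy. Again, all names are invented and this is only an illustration of the analogy, not a neuroscience claim:

```python
# Toy: local modules "decide" first; a central narrator only explains
# afterward, from the recorded log, which is the "lying historian" idea.

def local_module(stimulus):
    # The "camera" reacts on its own, before any central processing.
    return "withdraw hand" if stimulus == "heat" else "do nothing"

def narrator(action_log):
    # The central feed sees only the recorded actions, then invents reasons.
    return [f"I chose to {action} because I felt like it"
            for action in action_log]

log = [local_module(s) for s in ["heat", "light"]]
print(narrator(log))  # a post-hoc "reason" for each already-taken action
```

The design point is that `narrator` never has access to `stimulus` at all; it can only rationalize from the log, which mirrors the split-brain confabulation examples earlier in the conversation.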
And so then there's this added layer of complexity, or maybe confusion. You shared with me an article saying that a very high percentage of people don't have what we would describe as an internal monologue.
And another high percentage of people can't even create images in their mind. So what we're describing as consciousness is also not something that is universal as part of the human experience, which is interesting. Yeah. Because I think most of us who experience consciousness as we're describing it would have a very hard time understanding even what that means.
I don't know, maybe someone watching this YouTube video doesn't have an internal monologue. Wonderful. It's hard for you to model that, but I suspect the variance within the human condition, in terms of how things are processed, is probably a lot larger than we give it credit for, and it will be even larger in the future.
The statistic I just cannot stop mentioning, because it's something more people should know: if you look at the heritability of IQ right now, and you look at the selective pressure, so you look at the number of kids that people who have these markers have versus people who don't have them, which you can see because there are genetic markers,
we're likely looking at a one standard deviation shift down in IQ in the next 75 years, in developed countries at least. This is probably going to affect developing countries later, so I guess good for them; all the geniuses in the world will be in Africa or wherever.
But in places where you have this post-prosperity fertility collapse situation, when we think about how quickly and how much human IQ can shift up or down, we use this one marker, IQ, but I suspect it's linked to all sorts of things about how we process reality.
So actually, I wanted to dig in a little more on the subject of kids, because I think that as we've become parents, we've developed a more complex understanding of how consciousness develops, because we see it start to emerge in our kids.
I think there's definitely a point at which we see consciousness blossoming, and it's not that one day our kids aren't very conscious and the next day they are. Consciousness, for example, is starting to emerge more and more, especially in our three-year-old. It's just beginning to emerge in our two-year-old, and I think a lot of that has to do
with where they are with language processing. I think it really influences it. Well, and that's why I say I suspect it evolved alongside language, to compress ideas. But I think this is where you can see how the system can break, in a way that can be very useful to understand in relationships. So this isn't just theory or whatever.
So one of the things you'll often see one of our kids do is: he'll be in a bad mood, but he won't understand the concept of generally being in a bad mood. So he'll start crying and say, I want this, give me that toy, and then you get him the toy, and it doesn't stop the bad mood.
And so then it's whatever he notices next: close the door, or move that chair. Whatever is currently causing the littlest bit of discomfort, he thinks is the core cause of the bad mood, of why he's angry or what he's angry about. And as humans, I think this happens as well, and this is really bad.
When a friend tells you you're justified in being angry or something like that, this little narrative maker in your head says, ah, now you get to be angry, now you're socially justified to be angry, and you will feel very angry about something. Or you might be in a generally bad mood,
and your partner comes into the house and does something that annoys you in the slightest, and then you create the internal narrative that you are in this bad mood because of what your partner did. When you keep in mind why you're feeling these things, and you try to stay fully in touch with the way your brain is actually working, it leads to a lot more harmony and a lot fewer fights in relationships, because you have language for: I am in a bad overlay state right now.
Which just means I'm in a bad mood generally, but I'm not actually mad at you or anything specific. Hold on, though. I think you've touched on something very interesting there, which is that maybe sometimes consciousness and narrative building hampers more than helps us. For example, the toxic girlfriend who
has a bad dream in which her boyfriend cheats on her. She wakes up angry at him, mad at him for something he didn't actually do. Or maybe one day she's just in a bad mood, but then she makes up a narrative that it's because her boyfriend didn't bring her flowers, or doesn't appreciate her, or did something mean.
The presence of consciousness and narrative building would cause her to turn what might be a very transient bad mood into something that builds a grudge over time and cumulatively ends up killing the relationship. So sometimes consciousness hampers us more than it helps us.
What I love about what you're saying here is your idea of what it means to be meaningfully human, and the spectrum of humanity: you become more human the more you take mastery and ownership over these evolved quirks of the way your brain works, and don't allow them to control your actions.
Your actions are more logically decided, and decided based on as close to an objective view of reality as you can get. And so from the perspective of humanity that you convinced me was a good one, because this wasn't the one I had before: somebody who has a dream and then can't
logically understand that it is not a justified reason to be mad at somebody, that person is meaningfully less human than another person. And so then what does it mean to be fully human? It means to have total mastery over these things. And that is something we don't have. But I think it helps people understand, because a lot of people hear the level of disdain with which
we talk about things like sentience and love and happiness and other human emotional states that a lot of people venerate, and they don't understand where that's coming from. But then wouldn't that make an LLM more human than we are?
People may not realize that, in some ways, a large language model is more sophisticated than we are, and it's also not bogged down by the need for hunger, human failings, hormones, all these sorts of pollutants. Well, not pollutants; they're very instrumentally useful for biological humans in a modern, globalized society.
But often, with the type of knowledge work humans are expected to do, it's pretty counterproductive. And then I think this comes to your goal for yourself, or your goal for an iteration of yourself: your idealized iteration would strip out your emotional shortcomings, be they love or happiness or hatred or pain or greed.
And I'm not that way, by the way. I am not as bought into this philosophy as Simone, and I would not strip those things away for myself. I think they add something that I feel, ideologically, still has some value. But I don't know, maybe you feel that way too and you're just...
I'm mixed on it. One, I'm deeply uncomfortable being human. I really don't like my body. I really don't like being human. I don't like the corruption to our objective functions that human weaknesses cause. But my general stance is: if this is what I have to deal with, if I've been given a meat puppet, I'm going to use it to the max.
I'm going to play the game. You've given me a crappy little battle bot? I'm going to take that thing and I'm going to destroy everything, even if it's the worst machinery ever. This is the way she talks about pregnancy. She's like, I have a uterus, I am going to wreck that thing, I am going to have so many babies.
I'm going to shred it, if that's what it was meant to do. Yeah. Then, as a woman, I reach the plains of Valhalla by dying in childbirth. Let it happen. Don't worry, Malcolm, I promise I'll play that clip at your funeral if you die in childbirth. Thank you. I really should probably plan that out.
But yeah, I feel conflicted. Yes, if this is the hand we're dealt, I'm going to play it, and I'm going to play it hard. But at the same time, I don't really aspire to that. I don't think that has to be me. And I guess maybe it's more that AI and machines are my Beatrice in Dante's Inferno:
this idealized version of humanity that I know I am not, and that I do not aspire to be, but that I deeply admire. I don't need to become it. I don't need to be with it. I just see it as a better iteration, and as naturally and morally superior. Does that make sense? So for now, what you hope is to make our kids superior in that way, our kids?
Oh, for sure. But our kids are still biological. They're still human. So I appreciate this, but I think the next generation is going to be the first that integrates with tech. I know, you're saying our generation is going to integrate with tech. I'm sure that AI models will be trained on, if not us, family members or our kids,
or a combined version of us, which would be even cooler. But I still think that for a while we're going to be biologically human and limited by the shortcomings of biological humanity. There's one other element of consciousness that I think you downplay. You used to not downplay it as much, and I don't know why that has changed, maybe because you're so focused on the role that language plays in consciousness. But I do really think that humanity's focus on modeling
the actions of other animals and humans plays a role in our development of consciousness. Because, one, there's... yeah, let's talk about this model for humanity. It's the model of humanity that we use in The Pragmatist's Guide to Life, which is our first book, which is why I don't talk about it much, because it's an older idea that I had.
When you're trying to model other people's behavior, what you do is you have a mental model of them, which is like an emulation you're running within your own head
of the way you think they are going to act and the things you think they are thinking. This is how you're able to have arguments with little simulations of other people in your head: you have modeled them and you've modeled you, and you are arguing with this different entity. And actually, when I was a neuroscientist, one of the spaces I focused on was schizophrenia.
And what I actually think we are seeing when people hear voices is a lowered activation threshold for this. Using TMS, transcranial magnetic stimulation, you can hyperactivate parts of a person's brain. If you hyperactivate the part associated with saying letters and you put a letter in front of somebody, they won't be able to help but say it, because you have primed them with a vision of that letter and you have lowered the threshold.
I think what's happening with schizophrenia is something similar to that: the system they use to apply mental models to other things gets activated too easily. It can be activated by the slightest thing.
They look in a store window and think, ah, that must have been done with intentionality, there must be some thought process behind the way everything was arranged. Or they see something innocuous in the environment, like a helicopter, and think, why is a helicopter there?
There must be a person in it; they must be thinking about me. Or they begin to hear whispers. This is why whisper hearing is associated with schizophrenia. Auditory hallucinations are much more common than visual hallucinations; visual hallucinations are incredibly rare.
But anyway, that's what's happening with schizophrenia. So the question is, okay, what does this have to do with the regular person? What it has to do with the regular person is that I think people have a sort of internal mental model of themselves, which is used to prime emotional reactions to things.
So this little sentience box in your head, the way we talked about it: what it's doing, when it's judging whether or not you should react emotionally to something and how you should react, is testing what's happening in this simulation, what we would call our sentience, against this little mental model that's running of the way it thinks you should be feeling.
And it's saying, oh, does this mean he should be feeling anger? Oh, does this mean he should be feeling happiness? And then it outputs that emotional state by telling you that you should be feeling this. The way you can see this is that if somebody justifies a particular emotion, like, you should be really angry about that,
often a person will become much angrier and begin to spin away. Or: how could you let your boyfriend do that to you? And then this mental model has been adapted to feel angrier, and you will actually experience much more of that emotion. But what were you talking about, if not that? The role, in general, that modeling things played in developing human consciousness: maybe what happened is, one, humans have
an evolutionary advantage if they are able to model predators and prey, because then they can anticipate the moves of these organisms before they make them. And two, that ability would start, just as with schizophrenics, to get misapplied to that compression algorithm of memory that's being formed.
So it's a mixture of language, narrative building, and our modeling of things: we're literally anthropomorphizing ourselves, if that makes sense. That's a good way to put it. And first of all, they're people with schizophrenia, not schizophrenics; they're not defined by their condition, sorry.
But we see this in how easily we anthropomorphize things. I think it's very hard not to anthropomorphize something like a dog, right? You see a dog, you can see its happiness, you can see it's worried about things, and you perceive it as experiencing these emotions the same way a human does, even though it probably doesn't.
And you can see this when people kick those robots, you know the ones? Oh, yes. Oh my gosh, yes. I see somebody kick over one of those robots and I feel so bad for it. I'm like, how could you do this to this poor thing? I know logically the robot's not experiencing any of that. Now, when you're a human and you're anthropomorphizing yourself, you have no way of knowing that you're not feeling these things in a real sense.
If we struggle not to anthropomorphize robots, then how do we know the robot's not suffering? How do we know, if its objective function is to run and kick the ball into the net, that it's not experiencing some kind of suffering? If you put googly eyes on a soccer ball,
people will feel bad for it. Simone, I know. I'm just trying to think of things that people can definitely empathize with, when I'm talking about this anthropomorphizing of things that most people don't think we should be anthropomorphizing. I'm saying that if you didn't know whether or not it could feel emotions, and everyone around you said it could, you would 100% believe that robot was feeling emotions as soon as you saw it kicked.
Because you feel so bad when it gets back up and tries to walk again. And it's the same way with humans: if you didn't know, if you didn't have hard proof because you hadn't gone through all the studies like I have, and you didn't know that humans probably don't have full control of this sentient aspect of themselves and that it's likely irrelevant, you would totally anthropomorphize humans.
And so I love this way of putting it, Simone. Very interesting thought on your part. There is a subreddit, I don't know if it still exists, it's NSFW, where people put googly eyes on butts. Do you think those people are anthropomorphizing the butts?
Is that part of what's fun about it? You and I loved... no, it's more me. I try to figure out what is making people tick behind weird NSFW subreddits. But I'm wondering, because that one is an outstanding example. Well, we'll do more broadcasts on that; subscribe if that's what you're interested in: deep dives on why people are engaged.
Because that's what The Pragmatist's Guide to Sexuality was: a meditation on this. Why are humans turned on by what turns them on? Because obviously we're very interested in the way the human mind actually processes these things. Why did I leave science? Because I didn't feel like real research was being done anymore.
And I felt like there were specific narratives, and it was toe the line or else. And I'm glad that we've reached a level of financial security where we are able to talk about these things and research these things, because we actually do a lot of independent research. If you're wondering how we get to these ideas, and the data that leads us to them, go to our books; that's where we discuss it all.
But yeah, it's really fun, and there is just so much low-hanging fruit, because academia is not doing the level of work I think it should be in these areas. So there's one more thing that I think consciousness, and sapience in general, deserves some credit for. An easy conclusion to draw from our theories around consciousness, especially since we see it as an illusion, is to say, oh, the Collinses don't value consciousness; they think it's an illusion, therefore it doesn't matter.
To the contrary, I think it could easily be argued that sapience is one of the things we think is most valuable, most interesting. It's what distinguishes humans from other organisms; it's what makes us us. But more importantly than that, it is this narrative building, whether it's illusory or not.
It is what enables us to edit our objective functions. That is the one differentiating factor. Any non-conscious entity, any entity that doesn't have this narrative-building effect, this weird recording and encoding system and modeling system, cannot question its actions. It cannot look at the compression of all the inputs and the narrative that is being woven and say, should we change the narrative?
And I've seen critiques of consciousness where people totally miss that, where they say consciousness can get in the way of things, that it evolved because it worked, not necessarily because it's superior. I think they're missing the core point here: consciousness has enabled humanity to pivot in ways no other species on earth has ever done.
It's what allowed us to make the leap. I completely agree with you. And there was a final point I wanted to close out with. There was this fun video clip of us talking on Piers Morgan, where you are talking and you can see me moving my mouth to your words, and people might wonder why I'm doing that.
This actually relates to something we were talking about in the video. We are both on opposite ends of the spectrum. If my model of schizophrenia is correct, you basically have an autism-to-schizophrenia spectrum, which is how much you innately mentally model others, with people who are autistic or have Asperger's
not innately running mental models of other people whenever they're interacting with them, and people on the schizophrenia side of the spectrum not being able to help running mental models even when there are no humans around. And we always say Simone is diagnosed with autism, so she's definitely on the autism side of the spectrum.
And I am almost certainly, when I look at myself, on the schizophrenia side of the spectrum. I don't hear voices or anything like that, but I really struggle not to mentally model people I'm engaged with, to the extent that I basically almost pass out after social situations; I find them so exhausting.
If I'm at a big party, I'm just constantly modeling everyone. And that's what was happening on that podcast. I was in a heightened emotional state where I really cared about what she said. So I was running through the words in my head as she was saying them and trying to process how she would respond to something.
And I couldn't help but move my mouth because of that sub-level of stimulation. Like I talk about, people can't help but say the letter when that part of their brain is stimulated with TMS, and that's what was happening there. But there are reasons why we have autism and schizophrenia in the human genetic code, why it hasn't been evolved out of us.
And it's because both of these extremes are useful. Autism can make you able to act more logically about the world around you, not being encumbered by constantly mentally modeling others. And then there's my ability: people often will say it's eerie how much I can tell what other people are thinking, to the level where, to some people, it can feel like I can read their mind in a conversation. And I think that is why you have these people on the schizophrenia side of the spectrum. And then sometimes they just get a little too much of these genes, and it leads them to hear voices constantly instead of just having a really hyperactive ability to mentally model anyone around them.
Yeah, no, 100%, Malcolm is on overdrive. He'll sometimes be thinking about conversations with other people while we're walking. And I can always tell because he gets so deep into them that he's literally gesturing. Like we're driving in the car, one hand is on the steering wheel, and the other hand is gesturing a silent conversation he's having with someone he anticipates speaking with in the future, or reliving a conversation he had in the past.
And he will have these aftershocks from when we socialize, where he feels the stress or pain of saying something not quite right to someone. And it hits him like a ton of bricks, and he will visibly crumble and cringe. And it's not just cringe. Yeah. It's like somebody just kicked me in the nuts or something. Yeah, it looks like he has been physically hurt by something. And that is not something that I can even begin to imagine. And I do think that it's a lot less stressful to be on my end of the spectrum and to just not know that other people hate you. I just blink. Yeah. I'm just like, doop, nothing going on there.
It's such a good partnership, and I think it was one of our main goals, throughout our books and throughout our lives, to understand how humans think and process things and what's really happening in the human brain. I started my career as a neuroscientist and a philosopher, and that was my interest.
It's like, what's really going on? And being able to be in a relationship with somebody who sees the world so differently has given me such insights that I would never have come to on my own. I just admire that so much about you, Simone, and I admire that you have taken me to where I am, which is somewhere I never could have reached without your guidance.
And I love you so much. I love you so much too. You're the superhero that I always wish existed, and I still worry that I'm going to wake up from a coma at some point and find out. Me too. You're the sidekick that actually does everything. I might be the superhero, but she's the hacker nerd in the background that actually makes everything work.
And if the hacker nerd went away, the superhero would have nothing. That is so our relationship. I have nothing without you actually doing all the detective work and telling me where to go next. That's a massively inflated estimation of my contribution. Every morning, every calendar, I just follow her instructions.
I don't manage my calendar at all. I'm operating on Simone driving me. What was the one thing, like the thing from Aliens? Oh, yeah, like a power loader. We're not separate people. I'm the power loader, the alien suit that you're using to punch through reality. You're Ripley. Oh, okay, that's the other way around. We'll see. That's the way we both feel about each other. I adore you. I love these conversations, and I know we have to pick up the kids now. But I think you're gonna make another dish tonight, so I'm gonna have fun. Oh, yes.
Another Based Camp cooking. We have a little side playlist, if anyone's seen it, where I try to come up with new dishes. Let's see if I get it right. You get to see the Collins household at night, what happens after. All right. See you soon. Love you.
Get full access to Based Camp | Simone & Malcolm at basedcamppodcast.substack.com/subscribe