Speaker 2
Wouldn't it be great if you could go platinum in quality? Not quantity. Bro, this guy, you know, he might not be platinum in quantity, but this guy's quadruple platinum in quality. Yeah. Wait, did I say quality two times? I don't even know what I said, but
Speaker 1
you know, point taken point taken. Yeah.
Speaker 2
But to me, honestly, man, if that were a thing, I would absolutely strive for that, because I think all of us have always known that, yeah, of course we want the numbers. You and I both, whether we like it or not, just by doing something creative, the goal is number go up, right?
Speaker 1
If it's tied to the same economy, that's how you pay rent and so forth. Right. Yeah.
Speaker 2
Yeah, man, think about that. If you could somehow make a qualitative shift. And actually, I don't know, AI probably could make that distinction.
Speaker 1
Let's steelman it. I think it's even more fucked up. Because if AI,
Speaker 2
if AI starts looking at, hey, is this person leveraging controversy? Is this person using low hanging fruit to appeal to people's lesser nature? Is this person doing all of these things to make number go up? Then their quality rating goes down. But are these other people doing a net positive on society? Like clearly, clearly you are, you know, clearly like you're you're taking people to this very deep introspective healing cathartic place with your music, with your gatherings. So your quality rating in my algorithm would go up. You know, like, imagine if we can build quality. Yeah.
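A toy sketch of what the "quality rating" described here might look like in code. Every signal name and weight below is hypothetical, invented purely for illustration; nothing here is an actual platform's algorithm, and a real system would need real classifiers behind each signal:

```python
# Toy "quality rating" sketch: hypothetical signals (each rated 0-1)
# push the score down (controversy bait, appeals to people's lesser
# nature) or up (net-positive, deep/introspective content).

def quality_score(signals: dict[str, float]) -> float:
    """Combine per-signal ratings into a single 0-1 quality score."""
    penalties = {"controversy_bait": 0.5, "outrage_framing": 0.3}
    rewards = {"net_positive_impact": 0.6, "depth": 0.4}

    score = 0.5  # neutral starting point
    for name, weight in rewards.items():
        score += weight * signals.get(name, 0.0)
    for name, weight in penalties.items():
        score -= weight * signals.get(name, 0.0)
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# Engagement bait bottoms out; deep, net-positive work maxes out.
bait = quality_score({"controversy_bait": 1.0, "outrage_framing": 1.0})
depth = quality_score({"net_positive_impact": 1.0, "depth": 0.8})
```

The hard part, of course, is not the arithmetic but deciding and measuring the signals, which is exactly the judgment call the conversation is circling.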
Speaker 1
It's your algorithm. I mean, we have to write that algorithm. And I had a friend, this is more than 15 years ago, who was applying to work at Google DeepMind, the experimental arm, and they were working on AI stuff. Back then it wasn't much of a public thing. And he told me that in the interview process (he got the job) they asked him, what do you think we should teach AI, or program into it, essentially? And he said, and this came to him in an ayahuasca journey, which is where this gets even stranger, he told them, I think we need to teach it how to love. Because we have to teach it that, and then it'll start to do what you're saying. But I question whether we human beings have the capability to put aside our short-term financial interests or power interests to say, let's program it to care about how something makes you feel, or whether there's a net benefit. Because we're not doing that with social media, for instance, or even in the development of AI itself. Supposedly, if you believe the people who are building this, it's an existential threat for us. And if that's true, they're also not stopping, because they're like, we're in an arms race with other countries and other companies and we can't lose. So right there, we can change it, but we choose not to, because we're saying, no, the other guy or gal won't. Right.
Speaker 2
Yeah. Where do you fall on that? Like, do you think you've gathered the requisite information to form an opinion on that? Let's galaxy brain the shit out of this. Let's bro science it. Let's galaxy brain it. Let's do what we do. Um, so yeah, specifically the question of AI, the notion of it being an existential threat, does that hold up for you?
Speaker 1
Yeah. My opinion just has a little bit of a twist. I don't think it's Terminator style, where the AI itself takes over the military and is killing us. I think it's handing us the axe to kill ourselves. And I say that because it's already the case. Fairly unsophisticated, let's say, AI algorithms are at this point picking what you see on YouTube, or even your Google searches, and certainly on social media. That has increased polarization. That has created an actual inability for government to function even as well as it did before. And we are killing each other by indecision, whether it's around climate initiatives or anything we could have collective agreement on.

COVID was a perfect example. A perfect leveling event, where this will affect every human being, no matter how rich or poor, no matter where on the planet you live. You would think, now we can unify around something. And we didn't. And why? Largely because the information we were ingesting about it was corrupted, and we corrupted our minds, and we willingly allowed it to happen. We knew these programs were running and changing the way we think. We have evidence of this. We see what's happening, and yet they're making a ton of money, and they have too much power, and it is not stopping, even though people now are fairly aware of it. And they're fully addicted, addicted meaning they can't stop. So that is AI shifting human behavior.

Now just turn that up by a million X, and not just on social media, that would be only part of it. You don't even know what's true anymore, it's hard to know now. Imagine if you really can't find out anymore, even basic things, even medical facts or basic history. Because now anything could have infinite fake articles about it, or even real articles taken and turned into any argument you want. Yeah. That is a strange world to live in.
It's just like what we live in, but more. And I would think if we increase the polarization, and you add on top of that a climate that's collapsing, and you add on top of that the rich getting richer and the disparity of wealth, we know from past history, at least for the wealth part, that people revolt. It's a matter of time. I mean, it happened in 2008. We just put our fingers in the dam and we moved on, but we did not change the system. The system has only increased the financial, you know, the debt ratio and so forth. So in that way, I think AI is an existential threat if we don't change the way we use it or program it, because we'll just do what we've been doing, but far, far, far more. Especially now, where even certain engineers are like, I'm not quite sure how it works, because we started having these models all talk together and we don't quite know. Well, that seems pretty wild, if it's out there in the public and it's inside a capitalistic, profit-driven growth system. Right.
Speaker 2
Yeah. All really good points, and a good nuanced take. Um, something I want to look into, because I feel like this must already be a known philosophical thought experiment, or maybe even an equation: what does it mean when there's an agent or technology that has an infinite capacity for good, but it only requires one or very, very few bad actors to either wipe everyone out or do massive, massive damage to civilization? How do you morally navigate that? Because I think that's what we're approaching. You have a technology that seems like it might be able to wipe out diseases, that might be able to
Speaker 1
or create new diseases, right? That's the
Speaker 2
Right, right. It'd be very easy. Yeah. Let's say 99% of people use it for something net neutral or positive, or just maybe a little negative that's not really hurting anyone except themselves. Just a little fun. But then 0.01% of people, what if all they would need to do is get the jailbroken version of the full-blown ChatGPT-4 or whatever it is, which I understand you can already do if you know how to run it on your own server and shit, and they're using it for nefarious purposes, and they're doing it purposefully. There has to be a tipping point, right? Because just imagine any technology that has infinite capacity for good, but it only takes a tiny, tiny micro-percentage of people to do irreparable harm. At what point do you say no to that technology? And not that I'm saying we should say no, or that I think we even could say no. The ship has set sail, that's the point. I don't know if we can. Yeah. I don't know if there's any choice. We're so deep into it already. I literally use AI almost every day in my creative output at this point. I use Midjourney all the time. I use ChatGPT fairly often. And it's so helpful, man, to bring creative visions to life and not have to steal other people's art directly. And then what's funny is you still get accused of stealing people's art because you're using an AI art generator. It's
Speaker 1
a piece of art. Yeah. Yeah.
Speaker 2
It's like, that's a whole different conversation that I've had very many times. But it's funny, because basically someone in the same comment was like, I love this video so much, but you stole a bunch of people's art by using AI. You should have just used actual human beings' art. It's like, so I should have just stolen actual human beings' art? You basically told me: you love it, you're mad at me, don't steal art, but then steal art, all in the same comment, you know? But anyway,
Speaker 1
Yeah, maybe we don't have a choice. Why don't we?
Speaker 2
We don't? We don't. I think there always has to be some level of choice that matters. But I don't think we have a choice in the sense that we're living the Pandora's box. We're living the Promethean myth. This is a great opportunity for mythopoetic reflection: there are these things you don't go back from. Once something happens, you don't go back from it. And AI is one of those things we're not going back from. There are too many factors, too many things in motion. It's silly to think about trying to put it back in the box at this point. But how much does it worry me? The jury is still out for me on how much I feel like I should be worried. The one thing that does genuinely give me concern is a paper that was published around the time ChatGPT version 4 came out, where these engineers were using the full unlocked version of it and identifying all these epiphenomenal things it was learning to do that it was not programmed to do. I don't remember what they were off the top of my head, but it was like, that is crazy. If it's gaining these epiphenomenal abilities... one example I heard was a language it shouldn't have known. It figured out some Indian language. And I think they covered this on that 60 Minutes piece, if anybody saw that. They briefly mentioned, like, yeah, we did not give it the necessary information to know whatever dialect, like the
Speaker 1
Bengali, it figured it out.
I think, largely, in the conversation I'm finding people are too focused on the current state of AI. You have to take into account that it can be learning exponentially. It doesn't matter what it's doing right now; that will be such a moot point once it learns this, and then learns that, and then it can grow rapidly. There will be a point where it crosses past human beings, it already probably has in some ways, and then it can race off. And yeah, another thing we're not talking about much is quantum computing. Yeah, right. Yeah. It's a real thing that's getting close, with qubits coming into play. And again, every country is on this, and every major company, IBM and so forth, is working on this. Yeah. You put that together with AI, I don't even know what that means. All I can think about is the ability to compute, because some people said, oh, we don't have the computing power anyway for this to really take off. And I'm like, I hear that. Um, but what if that increased exponentially as well? Right.
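The computing-power point has a simple back-of-the-envelope version: fully describing the state of n qubits classically takes on the order of 2**n complex amplitudes, so each added qubit roughly doubles the classical bookkeeping cost. A minimal illustration (this is just the exponential, not a claim about any specific machine or timeline):

```python
# Why qubits change the computing-power picture: the classical
# description of an n-qubit state holds 2**n complex amplitudes,
# so the cost of simulating it doubles with every qubit added.

def classical_amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes in an n-qubit state vector."""
    return 2 ** n_qubits

small = classical_amplitudes(10)  # 1,024: trivial on a laptop
large = classical_amplitudes(50)  # ~10**15: beyond ordinary RAM
```

That doubling is why "just add more classical compute" stops being an answer fairly quickly.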
Speaker 2
Um, yeah. Yeah. I remember seeing, it was a public-facing thing Google released maybe two years ago. They built a whole specific building and team for quantum computing, and they were talking about the things that are possible with it. And it is mind-blowing. To your point, it's like saying 48 trillion gajillion; you might as well just say that, because we can't visualize it. And that's what they're saying. And what's remarkable is that we have figured out how to parlay extraordinarily limited human intelligence into superior forms of intelligence that even we don't understand. Like the fact that we have somehow put that fire in a box that you can push a button on, close the lid, and then have this thing unfold like a fractal of intelligence. And then at a certain point, we don't even know what's going on in the box anymore.
Speaker 1
It is remarkable, isn't it? I mean, it's so crazy. And we built it. There's something to be proud of there. Have you ever seen maps of the internet? I don't know, it's more like a diagram. Okay. And it looks like a mycelial sphere, of course, because it's around the earth. And then you also look at a neuron. All three look almost the same. It's a random thought, but it's hard not to step back and be like, it's sort of a Marshall McLuhan idea, that we're the sex organs of the machines. I mean, if you really step back, what are we doing? It's so incredible. We've literally put fiber optic tubes all over the planet. Right, the connectors, those are our tubes, and it's satellites too. But the point is, we have built something that we don't even understand now, what it is or where it's going. And yet we continue to build it, because we see applications, even while we recognize fully that there are many, many more things it may do on its own that we don't even know about yet. Totally.