Mutual Understanding

Ben & Divia
May 15, 2023 • 2h 39min

Robin Hanson Ronny Fernandez AI Conversation

Robin talks about why he thinks developments in AI will be on a continuum with civilizational progress in general, and Ronny, who is mostly trying to understand Robin, talks some about why he thinks many of the things he values about humanity aren't on track to being preserved, and that he cares about that.

Video version available at: https://www.youtube.com/watch?v=G-fBdPnwFrI
May 15, 2023 • 1h 14min

Recursing on the Discourse

In this episode we discuss several ‘frames’ - ways of processing and orienting to information - that are appearing in public discussions around AI, and in particular reflect on how grounded they are in longstanding debates within the AI X-Risk community.

"It’s sort of like the democratic ideal where everybody's sort of out there at the public square arguing with each other, talking about the things… I don't think there are a lot of taboos about what to say yet either. It hasn't gotten super corrupted. Like it's very rare that in the real world, I'm encountering trolleys running over people, right? And you can set up these fake scenarios that mess up my moral intuitions… but maybe it’s not always virtuous to endorse the repugnant conclusion that these very limited, decoupled thought experiments bring you to."

Links:
* MusicLM from Google
* TV’s War With the Robots Is Already Here (Writer’s Strike connection with AI)
* Ronny Fernandez and Robin Hanson Discuss AI Futures
* Roko and Alexandros on AI Risk
* Universal Fire
* Spandrels

Timestamps
[00:01:00] Updates on Politics
[00:07:00] The State of the AI Discourse
[00:13:00] The Yudkowskian Foomer vs Hansonian Continualist
[00:18:00] Mood Affiliations
[00:22:00] Biting Bullets vs Rejecting Fake Hypotheticals
[00:28:00] Divide between Game Theorists and Engineers
[00:36:00] The complexity of predictions about the future
[00:47:00] How similar are the values of optimizing systems?
[00:54:00] The rupture of the GMU Rationalist Alliance
[01:00:00] Public Perception that AI has Moral Weight
[01:03:00] AI Veganism/Freeganism
[01:06:00] Overton Window Shifts
May 15, 2023 • 3h 48min

Roko and Alexandros AI Conversation

No transcript for this episode yet, but I wanted to get it out there anyway, since it can be hard to listen on Twitter Spaces.
May 9, 2023 • 2h 5min

Ben Weinstein-Raun

Ben Weinstein-Raun: "And so my guess is that fairness seems like a plausible candidate for something an alien species might have… kind of universal, given some sort of assumptions about how evolution worked or works."

"Like there is something happening that results in us making choices. And if your philosophical determinism denies that, you're wrong. And I think it makes about as much sense to talk about will and free will in making choices as it does to talk about a glass of water."

Ben Weinstein-Raun's Twitter and Website.

Timestamps
[00:02:00] - Meta-Ethical Stance
[00:11:00] - Contrasting with Moral Realism
[00:26:00] - What EA Misses
[00:47:00] - Fanaticism and Out-of-Distribution Sampling
[01:28:00] - Building New Tools for Forecasting and Thinking
[01:44:00] - Actually Using Tooling for Better Decisions
[01:53:00] - Guided by the Aesthetics of our Distributed Systems

This transcript is machine generated and contains errors.

Ben Goldhaber: [00:00:00] Hi, I'm going to make a quick introduction to our guest here, and then we'll dive into some conversation. Ben - who has a great first name - Ben Weinstein-Raun has worked at a number of top tech companies, including Cruise and Counsyl, and a number of innovative tech research organizations such as MIRI, Redwood Research, and SecureDNA. Ben is currently building a tool for forecasting, estimation, and, dare I say, thinking. Having known Ben for some time now, I'd describe him as a careful, discerning thinker on issues of not just technology but also philosophy. So, yes, welcome to the pod, Ben.

Ben WR: Yeah, thanks so much.

Ben Goldhaber: We're excited to have you. I was saying to Divia, one of the reasons I was excited to invite you on [00:01:00] was because of a really great conversation that you and I had a few weeks ago about naturalism and ethics and how it's put into practice. And like all good conversations about philosophy, it was, like, 3:00 AM around a kitchen table. I felt like you really captured something important about, I don't know, naturalism and ethics, and I really just wanna see if we can recapture some of that magic in our conversation today.

Ben WR: Yeah, totally. I'm really excited to talk about it.

Divia Eden: I'm excited. I haven't heard most of this yet, so I'll get to hear it for the first time.

Ben WR: Yeah. Nice.

Ben Goldhaber: Well, lemme just ask for the start: how would you describe your moral stance?

Ben WR: Yeah, I guess at the moment my best stab at understanding ethics is more like a pretty strong stance [00:02:00] on meta-ethics, and not as strong of a stance on object-level ethics. Where meta-ethics is sort of questions about where ethics comes from - like, why do we say sentences about morality, sentences about good and bad and should - whereas object-level ethics would be sort of, what sentences do we say, or should we say, about good and bad? What's true on the direct object level there? So the meta-ethical stance that I really wanna take is basically derived from an observation, which is that meta-ethical questions are, under I think some pretty reasonable assumptions, entirely empirical. So whatever the truth of the object-level ethical facts, insofar as they exist, [00:03:00] there's some reason that people go around talking about shoulds and good and bad and so on.
And that reason is entirely inside of physics. It's not mysterious. It's the kind of thing that we could figure out by looking carefully at humans and at human societies, and at maybe game theory and math. We don't need fundamentally different tools to answer that kind of question, if we're interested in answering the straightforward question: why do people say sentences like that? What is the driver for humans having these intuitions and having these sorts of discussions, and so on? And so that's the meta-ethical stance. And then I think taking this stance has some interesting implications for the more object-level stuff. The most obvious one to me is that it does [00:04:00] not seem like it lends itself well to very simplified rule-based systems. So I think it pushes me pretty far away from a sort of total utilitarian kind of view, where you're aiming to make your object-level ethical system very simple and make it this very beautiful sort of object that, you know, is unassailable.

Ben Goldhaber: I'd love to ask a question about that, because I could see some people - and it's an intuition I feel like I have - hearing that from a meta-ethical point of view everything is within this same kind of system, all derived from physics on down, and thinking it lends itself to a simplified manner. Like thinking, okay, well, because it's physics and because it's something knowable, we can construct theories about [00:05:00] it that are simple and derive this kind of utilitarian calculus. But it sounds like you're actually pushing in the opposite direction and saying, no, it's much more complicated.

Ben WR: Yeah, I think basically my sense that you get something complicated is not solely based on the idea that meta-ethics is empirically addressable. It's also based on observing what is going on when people talk about good and bad and should. It seems to me that if you wanna have your meta-ethical stance basically look correct, or consistent with the evidence, I think you need it to include some things which to me do not seem like they would come from an approximation to utilitarianism. You need it to include things like... I don't know. So Jonathan Haidt has this [00:06:00] book - oh shoot, I'm gonna forget what it's called cuz I'm being recorded. But it's the Jonathan Haidt book. I feel like there's one Jonathan Haidt book people quote in this situation.

Ben Goldhaber: Let's look it up, and then you can go back and say that. Okay - is it The Righteous Mind?

Ben WR: I think that is it. Yes. Okay. So Jonathan Haidt has this book, The Righteous Mind, where he goes into, excuse me, a lot of the sort of empirical analysis of what's going on in people's minds when they're having this set of ethical intuitions and conversations. And he comes out with several factors - I think it has like five sort of key things that are factors of people's morality.
And one of them - an especially important one, and one that I think is especially important to me - is [00:07:00] sort of harm-focused morality. Which is, I mean, quite widespread, but it's not the only thing going on for almost anyone when they're thinking about what's ethical.

Ben Goldhaber: So there are multiple different things going on when people are thinking about what's ethical, and a certain reductionist point of view that's just looking at the harm axis is probably missing a lot of things. Is this one way to put it?

Ben WR: Yeah, I think that's basically it.

Divia Eden: Yeah. Can I - I just pulled up the different axes, the five ones he has. Do you mind if I say them? Okay, so he has care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. At least that's what I found when I looked it up. I have some imperfect memory that he added freedom in there later, because he found that it was important to some people and it wasn't originally covered.

Ben WR: Yeah, and I think he also generally has this take that [00:08:00] there could be a bunch more, and these were just the most obvious when he was going through the available evidence. So yeah, I think it's not clear that that is a complete list, but I think it is clear that your meta-ethics has to explain all of that. And it's not impossible, from my current standpoint, that you would end up with an explanation that is more or less simple, like utilitarianism, but it doesn't feel like what you're gonna learn if you started off with a clean slate - just examining humans as a species that has these sorts of intuitions and ways of functioning in a society. It doesn't seem like that's gonna be near the top of my list of plausible [00:09:00] explanations.

Divia Eden: I want to almost pop back for a second, or maybe double-click on the word meta-ethics. Cause one thing when I hear that, I think about it as selecting among some set of ethical philosophies. Is that how you mean it?

Ben WR: Not exactly. I think, so when I say meta-ethics - which may not be quite what people typically mean by meta-ethics when they're professional philosophers - I mean something like: what are the sources of our ethical intuitions and of the sentences that we say about ethics? It's the domain where you might wonder about whether moral realism is true, whether there [00:10:00] really is some kind of objective moral truth.

Divia Eden: You know, is moral realism real?

Ben WR: Yeah, that's a good question. I don't know. I mean, it's a real concept, although I think it might actually be a lot of different real concepts.

Divia Eden: Are you up for saying what some of the different real concepts might be?

Ben WR: Oh yeah. I mean, I think in the local social sphere, I guess people use the term moral realism in a way that I think is not quite the same as the way that philosophers use the term.
I think philosophers, when they say moral, moral realism, they mean something sort of broader.Like is there any sense in which like, you know, morality is real or like good and bad or real or, or anything like that. And I think that that admits a lot more ways that things can be real than [00:11:00] often people are imagining when they're saying moral realism. So if you're, if you spend a lot of time around ea you might like, like.Come across disagreements where people are sort of like talking about moral realism versus moral anti-real. And the anti-real are saying like, are are sort of, they're taking this view that like there's no supernatural, like the universe doesn't care. It's sort of like a combination of like, I dunno, they, they think it's like maybe morality is subjective.They think it's like, maybe not like aliens might not have the same moral systems as we would. They think that like, you know, there's no sort of like underlying supernatural like thing that is that is mor morality or ethics. Whereas the, the realist think that there maybe is some kind of a thing like that.Like the, the, I don't know you might get sentences like the arc of history bend toward justice or like, you know, [00:12:00] I only care about the worlds in which morality is real because like the other ones have no value according to me. And I think that like, it is pretty, I think this sort of like this sort of like splits too many things into those two categories.And I think I, I would, my current sense is that like there is a sense in which morality is real. I, I don't think that it's supernatural. I don't think that the universe cares about morality. My guess is that some of it is kind of universal. Like if you were to like, go find some alien society, they would like share some of our moral intuitions and probably some of it is not.Ben Goldhaber: And and you suspect those universal features are derived from shared evolutionaryBen WR: patterns. Yeah. Shared evolutionary patterns, game theory basically the sort of [00:13:00] thing that like might be common between our culture and an alien culture. Can you give an example of some things that you, where you think the aliens would probably have the same moral intuitions?Yeah. I think one thing that seems interesting to me is that like fairness seems like it's quite common as like a, a sort of, i, I don't know if it's quite fair to say moral intuition, but it's like, it's a common sort of motivator across lots of different animal species, like not just humans. It, it seems like sort of quite like an early thing to, to like evolve in terms of like Something like things that sort of look like morality.And so my guess is that that seems like a plausible candidate for something like an alien species might have or something that would look sort of like how we would think of fairness I think, so fairness seems like a, a strong candidate for something that like, might be kind of universal, like given some sort of like assumptions about how evolution worked or works. 
I [00:14:00] guess it does seem to me like something like sort of like slightly weirder decision theories is also another plausible candidate where like, If you have a decision theory that isn't just c d t, sorry, caus causal decision theory like that is potentially going to help you coordinate with other people a lot better or other like members of your sort of species or people who you sort of like see as similar to yourself.And so I pre predict that things like that are also gonna be quite common in, especially I guess like social animals or like, yeah, I guess, you know, generalized animals.Ben Goldhaber: I'm still kind of thinking about something that you mentioned around like why utilitarianism doesn't seem like a good approximation of the meta ethics you might endorse.[00:15:00] And I think you answered this in a way, but I didn't quite gro it around the point of like Jonathan Height and the like different ax axis. And I'm wondering if you can say a little bit more about it. The way I'm like kind of thinking about it is something like maybe you can't like, make trades between those axes in some way that all sums up to a number or something that the like stereotype view of utilitarianism might have.Yeah. Is that somewhere in the ballpark? Or, or maybe just say more about that.Ben WR: I do think that there's some, like, you probably could mathematically describe a way of making trade-offs because like, you know, obviously you, you sort of like can't Like, I, I mean, it's gonna be hard to construct a system like that where it's not, where you're, where you're not allowed to make any trade offs.But I don't think that's necessarily gonna be like a simple sort of like add up all of the, the field and like, you know, with different weights or whatever. I think [00:16:00] some of it is gonna look not very consequentialist and look sort of more like, are you being honest? Like, is that like, you know things that don't have direct like, I don't know where the, where the morality of the thing does not like route through its effects.Right. If that makes sense.Divia Eden: I, yeah, I was trying to think of an example of this in my head. And so I guess one of the, like, I took one of those eight quizzes years ago and there's some question that, you know, I probably, a lot of people I know I'm somewhat low on, maybe a little higher than I used to be as I get older, which is something like sanctity or it's like, how bad is it to, I don't know, like play a game of cards in a graveyard or something like that, where like it might seem in, you know, there are no there are no concrete consequences that can be easily tracked at least.But a lot of people have some sense that this is wrong. And maybe what you're saying is like, you know, if I'm imagining some pole that's like how many. I don't know, like how many, how many dollars would you have to give [00:17:00] to the against Malaria foundation to make up for playing a Rockus game in a graveyard?That this, there's something Ill-conceived about this. Is that sortBen WR: of what you're getting at? Yeah, I think that's basically right. Where like, it's not, it's not quite clear that there's like no trade that you could make or, or anything like, quite that extreme. 
But I think it is sort of like the question is like not actually giving you the information that like, might be needed to answer the question.For example like it might depend a lot on your state of mind, like when you were playing the game, it might like, you know, it may just be that there isn't really, like the thought experiment does not actually like, give you the relevant details. But that doesn't necessarily mean that there isn't like any kind of trade off that you could make at any given situation or that, like, there aren't details that could be given.But I think it does mean, I don't know that a lot of these yeah, that, that it's like, it, it's not going to end up [00:18:00] being a very, very straightforward kind of a, like a function from like the state of the universe to like how good or bad it is. I think I especially expect it to not be a simple, a simple function of something like summing up all of the positive experiences.So, so I think, I think I find total utilitarianism especially suspect as a, like object, the let system it's not crazy to me to have like a utility function. I think that's like maybe the kind of thing that you, you can still have. It's just that I, I don't expect it to be a very simple one. And I think I might expect it to sort of have like terms about what your mental state is and not just terms about like what's out in the world.Ben Goldhaber: Right.And how do you maybe personally or philosophically relate to this? Is it something where you try to hold like multiple points of view and like make decisions from that? [00:19:00]Ben WR: Yeah, I mean, so since I had this thought a while back, I've been thinking more in terms of like, what would my, like, I mean, it, it's a little silly to say to me, but, but I've been thinking a lot in terms of like, what would my grandfather do?Or like, what would my grandfather think is the right thing to do? Oh, that's nice.Ben Goldhaber: I like, I just like that immediately, but please continue. Yeah.Ben WR: And I guess maybe similarly thinking about like, What seemed to be kind of like universal features of moral systems. So I think it, it has caused me to sort of like up weight being honest, like because I feel like that is like quite a common, like admonition.It's also like caused me to up weight, like the golden rule roughly. Like, you know, treat some kind of thing vaguely shaped, like treat people like you wanna be treated. Maybe I mean there's lots of different ways you can sort of like add epicycles to make it [00:20:00] better cuz there are obvious problems with it.But but using that kind of thing where like, I, I just, I I have observed that like lots of different cultures and lots of different groups like seem to value those things. And that, that seems like if you, if you were to take a, the set of things that are like that, that would sort of be a minimal collection of like what might be considered like human morality, right?Ben Goldhaber: Try to shoot for like a minimum rule pack of human morally. Yeah. And then layering on top of that, does that kind of come from your personal exper or maybe, maybe like, you're not supposed to almost like add more things on top of that and what do youBen WR: Yeah, I, I think I mean, so one question here is like why if I have this view, if I'm like, okay, where like ethics is all sort of like explainable by physics, whatever, like why should I find it compelling?Like why would I want to be moral? Mm-hmm. Mm-hmm. And I think that in some ways I. 
Shouldn't want to be like, [00:21:00] like absolutely moral. However, I do care about the systems that I'm embedded in. Like I, I care about like the world that I'm in and like seeing it continue to, to thrive and continue to exist.I like, you know, I care about my friends and, and all this stuff. And my sense is that like some of morality is going to be tied up with like the preservation or you know, creation of systems like that. And, and probably quite a lot of it. And some of that is gonna be directly helpful to me.Like it's gonna be, it, it's gonna be derived from like patterns which like helped other people with those memes to like get more whatever, like basically propagate themselves or their memes. And some of it is gonna be like, it helped the societies that those people were in. And I think both of those can be like quite compelling reasons to want to be moral.And so I [00:22:00] guess that's sort of another way that I like another sort of source of like maybe layering on top is like, can I sort of like figure out. How this is like, I don't know how this fits into that kind of picture. And like, like also sort of like helps to some extent like eliminate aspects of morality that I like, don't think are as important.So for example, I think like believing in a particular God to me is not the kind of thing that I expect to like, come around to thinking I should do. Just because like, it, it doesn't seem like, I don't know, there are too many different choices there. It's like not, it's not like really well justified.Any particular one. I might be like more open to the idea that like maybe I ought to pray or something or do some kind of like sacred ritual which seems sort of more common. But yeah, I think [00:23:00] my sense is that like, insofar as that would be helpful to me or the societies that I live in, like there are better things that I can do with that belief, like believ through things instead.And so I'm not especially inclined toward that.Ben Goldhaber: Yeah. It's funny that you started mentioning God and religion in this because in part of what you were saying, I feel like I was hearing real. Echoes and shades of Cs Lewis's concept of the Dao, like some set of rules or principles behind civilization that seems to be universal and that he tightly coupled with both morality and the Christian faith right.And yeah, I, I, I certainly see like the way also in which, in you're bringing this into almost like a game theory kind of mode of like justifying it through like, all right, well these are the principles and rules, but with civilizations and people flourish. These are the kind of underpinnings of morality here, and I should look towardsBen WR: those.Yeah. Yeah, that feels really right to me.[00:24:00] I mean, yeah, I, I'm not sure if I've read the specific CS Lewis thing that you're re referencing, but I have read a, a bit, and I think basically I, I, I often feel like I'm with him like 70% of the way, and then like as soon as he starts being like very specific about why it's Christianity in particular, I'm like, Hmm that doesn't really make sense to me.Mm-hmm. But I do, yeah, I do resonate with a lot of the stuff that he says about like, I mean, sort of like Yeah. 
The kind of thing that you're talking about.

Divia Eden: So I have a question about when you were saying - it almost sounded like, when you were describing it before, that the grounding of why, like the motivation to follow these moral principles, was to be helpful to you and the societies, the systems you're embedded in. But then that starts to sound more consequentialist in a way that I think you don't mean, and my guess is that it's sort of hard to use language to talk about this and that's kind of what's up here. But I guess I wanted to ask if you could expand on that point a little.

Ben WR: Yeah. I think [00:25:00] it's not so much, when I say that, like, why do this thing - I think it is actually a bit consequentialist. It's just not morally consequentialist. It's not that I think I ought to do the thing which causes the best outcomes. It's that I separately happen to want good outcomes, along with a lot of other things that I want. And wanting good outcomes, I think, leads to wanting to be moral, by observing that I'm embedded in these systems, and that there are these apparent rules by which people seem to try to steer the systems to, whatever, better, more productive aims.

Ben Goldhaber: Is there a moral principle or universal feature of some of the groups that you [00:26:00] think maybe is absent from one of the societies or groups that you're in - like, something you think they should be doing more?

Ben WR: So one of the things that feels really important to me about this is that I agree with effective altruism, like the movement, on a lot of key points that most people disagree on. And I observe that EAs seem to make classes of mistakes that I think they would not make if they understood this general direction of thinking at least a little bit better. So, like, SBF I think really is a true believer, a total utilitarian. And he's a smart guy. And I kind of think that, [00:27:00] my sense is that you can sort of only end up making the kind of mistakes that it seems like SBF and others at FTX made by neglecting some of these more traditional ethical principles, like: try to be honest, try to be scrupulous with people's money. You know, things like that. I guess, I don't know - I really respect his, and also Caroline Ellison's, commitment to their moral principles. And maybe controversially, I don't actually take FTX's collapse and fraud as that much evidence that they abandoned those principles. In fact, it seems quite [00:28:00] consistent with those principles to make that kind of mistake. And it does feel really important to me, insofar as this is a correct observation, to point it out to the people who I think are trying to do as much good as possible, so that they might not make mistakes like this in the future.

Ben Goldhaber: That makes sense.

Divia Eden: Yeah. So can you be more specific about what it is that you wanna point out to people?

Ben WR: Yeah.
I think something like, like your, my mainline guess, and I think a very reasonable mainline guess about sort of like the metaphysics of the world is that basically physics is what's going on. Like, you know, the universe is sort of more or less mechanical at, at the bottom layer.And that everything that we say and do has perfect explanations inside of [00:29:00] physics and therefore like can be explained by looking at like by, by basically just being empirical. So I think a lot of people who I know have this sort of pseudo like sort of pseudo supernatural, like the people who I think of as, as, as thinking of themselves as moral realists in the ea sense.They have a sort of pseudo supernatural view of like, like what it would mean for there to be like, like real morality in the world. And I guess it seems to me that there's a whole, there's like a category of belief that one can have that is not. Like almost inherently disconnected from the truth of the matter of that belief.And there's a way in which if someone is like, ah, yes, I think that there is some sort of [00:30:00] supernatural like, goodness thing. And like, you know, it's gotta be the sort of beautiful, most beautiful, simple thing, probably utilitarianism. I think there's a way that that belief, if it were true, like there's no route for the truth of the belief to influence the belief.And if it were false, there's no route for like, the falseness of the belief to, to, to influence the belief. So it's, you're saying it's unfalsifiable, it's not exactly that, it's unfalsifiable. It's that like there just isn't like if, so physics is probably causally closed, like in insofar as like anything can be like there, there everything that happens in physics, including all the things I'm saying and all the things I'm like experiencing and we're talking about like have perfect explanations in terms of physics.And so if you're gonna pause it, something outside of physics that thing, like there's no route because I have perfect explanations for everything in terms of physics. [00:31:00] There's no route for anything outside of physics to influence my belief. I think. So it's different than falsifiable because like it might be the case that like, I have a belief about.Like how evolution happened that in fact, I will never get enough information to like, know the truth of, and that's actually like a, I think that that does not fail this test, but does fail the falsifiability test. Where like, at least that belief, like there is some way that, that like could in principle, like be connected to the truth of the matter.And yeah, I think that's, that feels kinda like an important distinctionBen Goldhaber: and I'm still kind of puzzling on this. Like, so like maybe some folks have this kind of conception of some thing outside of the realm of physics that is the source of like moral truths. And your point is many people [00:32:00] in the EA space don't necessarily think that is God, but that they still treat it in some way like that.And you'd wanna bring that back down into the realm of physics while still holding. 
Also hold, also believing that there is some source of truth that is beyond like, or that is like somewhat universal.Ben WR: Yeah, I mean, I think that, like, that source of truth can be, and in fact, like I think more or less, the only source of truth as far as I can tell is like things that exist in the world.And like, and that seems like a, like a totally reasonable place to me to look for like true morality is just like, yeah, try to figure out what it is we're talking about when we're saying these sentences. And so it's,Divia Eden: it's part of where you're coming from that you think it's sort of, it's, it's appealing to people to create something elegant in their minds and give it, like, elevate it to a special place that's [00:33:00] ultimately ungrounded.And is is part of what you're saying that with your meta ethical stance is that no, that's sort of unjustified and people actually need to do the work of looking at the world and grounding their ethical intuitions. Does that seem, is that closer?Ben WR: Yeah, I think that's almost exactly right. And yeah, the, the only place where I guess maybe I would like slightly modify it is something like it's not, it's not clear that you like even have to ground it all the way out.I think it just sort of like, it, it's going to be like, I think it's important to keep this in mind and like let it influence your like, Probability distribution over like what the truth is. Like if you are, if you are as I am, like, you know, relatively confident that like physicalism is roughly true that should impact your beliefs about why people say sentences about good and bad.Imagine I sense imagine people. Yeah, yeah, yeah. Sorry. Yeah. Like if I imagine [00:34:00] my sense people first go, but yeah, go ahead.Divia Eden: Yeah. If I imagine you say, you know, hypothetically you're running a bank, you have some customer funds, you're deciding what to do with them. Can you sort of walk me through how your physicalist meta ethical stance would infor, like, can you, can you sort of lay out the steps for that from then what you do with the customer money?Ben WR: Yeah. I mean, I think for one thing like I think it, it pushes quite hard on. Things like honesty which I think as I sort of mentioned before, like my sense is that honesty is like like fairly universally seen as like, you know, like the ethical thing to do. Like all other things equal. And I guess it's not necessarily the case that all of the actions with the money will, will like be dic dictated by an ethical system.Like it might be that I like, have a lot of leeway over like exactly what investments to make. Like maybe I would like to just [00:35:00] maximize my return. Like, that's probably ethical as at least assuming that I'm like being honest about that with with the people whose money it is. Maybe it's that I like want to invest in things that I think will make a positive difference in the world.It's not super clear to me that like that like this is like sort of required by an ethical system. My guess is that it is probably like better to do things that are better, but I don't think, my sense is that like the true moral system is not extremely demanding. If that makes sense. Partly because in fact humans mostly are not fanatics.And things seem to work out basically fine and in fact kind of better when they're not fanatics. 
And so I don't expect like to find that like the true moral system insofar as there is one, [00:36:00] or like, maybe there is one relative to me or relative to me in my situation or something like is going to make demands of me.Like, you must find the most, like the, the best possible thing to do with this money. And like, I guess it seems to me like it's, it's totally, it's likely to be very compatible with like many options especially ones that are sort of like being what is recognized locally to be like like rather like fulfilling the role of a good bank manager or whatever.Insofar as that's like what people are expecting from me. And that that may not be like I don't know, it may or may not be like involves sort of like maximizing profit. It may or may not involve sort of like, you know, being lenient with people on their loans or whatever. I dunno, I'm not sure if this is a good a good answer to your question.[00:37:00] IBen Goldhaber: like it because it did help me grasp something much worldview here and then also kind of what I feel like is a common. Well, I'm not even sure failure mode, but like a tension point. I feel like with a lot of the philosophies that we talk about, which is like on the margin, what's the next action that you're gonna take and like how much should it be influenced by some other thing, like on the margin, are you going to like, donate the next dollar to a M f or Miri or some other charity?And then there's always this question of like, well, why am I not donating the next dollar and the dollar after that? And I guess what I'm hearing you say is something like, no, we should kind of resist or at least be very skeptical of this idea that ethics can be a universal operating system for your choices.Is that an accurate kind of statement?Ben WR: Yeah, I think I think basically it, it, yeah.I think I do wanna say something like, [00:38:00] I don't think that the, like insofar as there is gonna be a true ethical system, I don't expect it to make a prescription about every action. Hmm. I expect expected to make prescriptions about lots of actions and to like, you know, strongly push against some and strongly push in favor of others, but.I don't think it's gonna be like, you know, like, I think, I think there is something kind of wrong with the sort of the like obligation framing, which I guess a lot of VAs sometimes talk about with respect to ethics, where I think like the obligation framing of a particular ethical system is like basically going to lead you into fanaticism for basically the reason that you're talking about.Like, ah, yeah, well I spent my first, you know, 60% of my income on am m f I guess maybe I ought to spend the [00:39:00] next 30% too. And then, I don't know. And my guess that this just doesn't actually work that well for like, you know, building a society like, like actually in fact causing the most good outcomes if that really is what you want.And I mean, in particular, I just, I just think it isn't probably good like making reference to what I expect to, to find about like what good and bad mean. Mm-hmm.Divia Eden: I, yeah. 
I think the part that I'm most trying to clarify in my mind is something like the step from the sort of non supernatural, empirical view on morality to then you are sort of looking, there's both some question about what actually makes societies work.And then there's some sort of additional sense that looking around at what seems to, which principles seem to people converge are convergent seems like a pretty good source [00:40:00] of information about what the true morality is. Mm-hmm. Am I, doesBen WR: that seem right? Yeah, yeah, totally. Yeah, I think that, that seemed like it basically hit the nail in the head.Like there sort of is, my sense is that there are, like, if you take this, I guess the way that I would phrase it or something is like there's sort of a, a. First step, which is like, you notice that you can answer these questions empirically, and then there's like a second step. And I've, like, I, I think it's easy to take the first step.The second step is hard, which is like, okay, now we actually have to do the empiricism and like actually try to figure out what's going on. And I don't think that we've like, succeeded at that. I, I don't think that I can just like go crack open Jonathan Hate's book and like read out what the true morality is.And like, I think it is in fact a quite difficult project. I actually think my sense is also, this is basically to some extent like the project of like the original [00:41:00] sociologists. And I think sociology in the meantime has sort of gotten like, redirected to other things. But my sense is that, that they were really interested in sort of like cultural universals and specifically around sort of like sacredness and and like morality and religion.And so I think my sense is that like people have sort of like made these sort of like halting steps towards the second step in this, in this sort of program. Like enough that I, I feel like there, I can say more than nothing about like what I expect to find, but I don't think that I can say anything with all that much certainty.Like I think like probably 80% of the things that I've said in this, in this like conversation so far I feel very un uncertain about really. And like I would be unsurprised to learn that they were basically wrong, but some of them Yeah, you go, yeah. And maybe that. Yeah, and [00:42:00] I think partly I, I'm even like open to the idea that in fact moral realism is false and like there is no sense in which morality is, is like a real thing and that like, you know, there's sort of this like non-cognitive view that like, we're basically just sort of making emotional expressions when we talk about good and bad or something like that.And that there's like no sort of consistent, real thing there. It's, it's like pretty strongly not what my guess is. Like, I, I think there's like enough things that I can point to where I'm like, okay, but that thing definitely exists and like, it seems like it's part of what we're talking about. But but I don't know.I, I, I'm open to being like wrong about even that.Ben Goldhaber: It strikes me that some embrace of uncertainty or toleration of uncertainty is kind of central to your worldviewBen WR: on this. Yeah, totally. I think there's a way in which, so some is talk about like like, like moral uncertainty like I guess especially like Will McCaskill has written some stuff [00:43:00] about it.And when I first read that I was sort of like like, or read about moral uncertainty. 
I had this sort of visceral yuck reaction to it, where I'm like, ah, but I feel like it's not really... he's describing it as sort of like, oh, well, maybe 30% on utilitarianism and 30% on virtue ethics or something. And I just kind of get the sense that that discussion is often not grounded in how you might actually answer the question, and in trying to constrain your predictions so that they match, as well as you can, the evidence that's actually available. And yeah, it feels really key to me to have something like moral uncertainty, but to in fact ground it in how you would expect this kind of [00:44:00] exploration to play out.

Ben Goldhaber: This kind of exploration - could you say more there?

Ben WR: Yeah. So, like, step two, I guess, where you're going from the observation that probably all of this stuff is empirically discoverable, and then in fact trying to go and discover it. Like trying to figure out why it is that people have fairness intuitions, and, you know, things like this.

Ben Goldhaber: This makes a lot of sense to me. Or at least, one of the things I really enjoyed about talking with you about this was some sense of, yeah, there's something here that feels both very true and very humane, almost kind of human-centric. While also, as soon as I get to that word, I'm kind of thinking, oh wow, there's gonna be a lot of fighting. There would be a lot of different points of view on this. And I imagine that's some of the appeal of some other modes, where you can be like, no, we have the answer. [00:45:00]

Ben WR: Totally. And I think, you know, there are a lot of benefits to making simplifying assumptions. I'm not necessarily saying that people shouldn't use something like utilitarian calculus as an input into their decision process. But I think people are sort of - I don't quite wanna say deluded, but something like deluded into thinking that's...

Ben Goldhaber: Please say deluded. We need more good clips to be able to get some spice in there, you know, for the Twitter fans.

Ben WR: Totally. Yeah. So I guess I have a sense that people are sort of deluded into thinking that that's all they need to do, that that's in fact the whole deal, and as long as they've done that, then they're doing as much good as they can do - they're sort of unassailable morally. And I think that is not really true. I [00:46:00] mean, I think that point of view is both too harsh and not harsh enough. It excludes a bunch of things that I think people in fact ought to be paying attention to and aren't when they're in that mindset. And also, yeah, it does this sort of pushing-toward-fanaticism thing, and pushing away from having a healthy life.

Divia Eden: Yeah. Can you say more about why you think fanaticism is not good?

Ben WR: Yeah. I mean, I guess my sense is, if I think about the central examples in my mind of fanaticism from history, the fanatics often end up in the historical memory as being
basically the bad guys - I think that's one part of it. Or the people who were more fanatical, of [00:47:00] the possible choices, sort of end up seeming like the bad guys. That's not itself that much evidence, I think. But I also observe that, I don't know, the people around me who are the most fanatical are in fact not the most productive or doing the best. In fact, I think there's probably an anticorrelation between those - another point that feels like it's on the scales there. But yeah, I also think about times when the more fanatical group clearly caused a lot of harm. Two examples in my mind that come up a lot are the French Revolution and the Russian Revolution, where I think both of these were more or less driven [00:48:00] by a kind of fanaticism, which ultimately resulted both directly in a lot of bloodshed and suffering during the revolutions, and also, after the fact, installed regimes which themselves were, I think, quite clearly bad. Napoleon and the USSR both seem to me like they caused a lot more harm than many other possible regimes might have. I dunno. And I think another example would be China, like Mao and various Cultural Revolution stuff. It just seems like many recent historical examples to me have this correlation between high fanaticism and worse outcomes. Whereas [00:49:00] in the American Revolution, my sense is that people were much less fanatical. They were much more pragmatic. I think they sort of still saw themselves as, you know, roughly British people - they were pissed off, but fewer people were calling for the blood of the elites or something. And it just seems to me like that resulted in a better outcome.

Divia Eden: Yeah. So that makes sense to me in a lot of ways. The term that comes to my mind is like a genre-savviness thing. It's like, "are we the baddies?", like that GIF, whatever.

Ben WR: Yeah, exactly.

Divia Eden: And I'm sympathetic to your point. But if I imagine myself coming from a more utilitarian point of view, I would say something like, okay, sure, but I would simply do the utilitarian calculus and count it as super negative that, you know, all the people would die of starvation, and therefore I would [00:50:00] not do that. Which, I mean, I think you'd probably be happy to have them taking that into account, but I'd like to hear a more direct response to: okay, but why not double down on the fanaticism and get better at calculating the consequences?

Ben WR:
For now, I, I don't think that's the world we're in. My sense is that like, I, I don't expect to be able to do those calculations well. If I were to try I basically expect to do better, like by my own lights. By sort of like internally [00:51:00] appealing to my sense of like what's wholesome and like what is sort of like morally good in the sort of more, more traditional kind of a sense.Like you expectDivia Eden: that sort of calculation to sort of fail on its own merits, you think it Yeah, totally. To give a worseBen WR: answer. Right. And, and that's sort of like what my, my feeling is about what might have happened at ftx. I think that there's sort of like ways that you can sort of think that you are like maximizing the good when actually you're sort of like neglecting a bunch of things that like in fact will lead you to sort of like, in some sense predictably fail to do anything like maximizing the good.And you could also, I mean, I think a, a a different way that you could see the Ft X thing, which is like counter to that is like, well, maybe they were actually maximizing the good, but like it they just were taking on a lot of risk and like the risk, you know, didn't pay off for them or something. And like, yeah, that is sort of always gonna be a possibility.[00:52:00] But I don't know, I think it's at least some evidence that that kind of thinking like didn't pay off for them and, and probably won't for others. You mentioned the idea that if you're in a new regime, potentially you might change your mind on this, like a regime in which we have Jupiter brains or some type of advanced AI to solve this calculation problem.Ben Goldhaber: Another thing that I sometimes hear arguing against the idea of certain universal moral precepts or ideas like morality running through history is like, well, we are in a different time period and a different environment where perhaps different sets of values end up leading to better outcomes. I'm curious if you think one that seems plausibly true about today and or two, like is there a, like new [00:53:00] environment you might anticipate where like you would throw out your like rule for honesty?Like are there certain things you expect might be like faster to be like be pushed over the ship?Ben WR: Yeah. Yeah, that's a really great question. I don't know. I think my sense is that like the current world is still basically composed of like regular humans, like doing regular human things, like existing in more or less regular human societies.I think what I would say about a world where that no longer seems to be the case is that like if I were to throw out the sort of like. Like, you know, things like injunctions for honesty I would not know what to replace them with. That would actually be better. Like I think it would at least require some sort of like, period of ex of like ex of like, exploration either on like a personal or a societal level or probably both [00:54:00] to like try and figure out which things like actually would, would work.Like, my guess is that like, it, it's not the kind of thing where you can just sort of be like, oh, well, you know, we, we no longer care about honesty because we're like, in this weird world, we instead like wanna just do the, just do the calculus straight up or something. 
At least not confidently at any given moment.

Ben Goldhaber: We'll kind of switch tracks here and talk a little bit about AI. You're someone who has worked in various orgs that have worked on AI and AI alignment, and also someone who's been around this scene that has been thinking about it for a while. So, you know, first question: how do we solve the alignment problem?

Ben WR: Yeah. I wish I knew. It may be that I don't actually have that much useful to say on this topic.

Divia Eden: I mean, do your views on moral realism, [00:55:00] do you think they have any implications there?

Ben WR: Yeah, I do think that is relevant for questions of alignment. For example, at least in terms of whether I expect an aligned AI system to basically make hedonium or not, I think I basically expect it to not, right? Because I don't actually think that's what human values push towards. And that's one thing to say about it. In terms of aligning an AI system with human values, I do think that there probably is a real thing that we're talking about when we're talking about human values, and that therefore, in principle, it's sort of possible to align an AI - which is maybe a thing that someone might disagree with. Separately, I think [00:56:00] a lot of people talk about CEV, or coherent extrapolated volition, where, roughly speaking, it's sort of like: if you had a lot of time and a reasonable process for aggregating everyone's preferences into what we would really want, that's what we should be aiming for. And that's not crazy to me. I think there's a way that I disagree with an implicit claim in CEV talk, which is that there is a single CEV - that if you had a reasonable process for bringing together all these preferences and hashing it out... it sort of leaves a lot to, I guess, what that process actually is. I expect the end result to be kind of path-dependent. I basically just don't see a particular [00:57:00] reason to think that whatever it is that humans value, taken collectively, is going to be well defined without specifying more about exactly how you're aggregating preferences and so on.

Ben Goldhaber: And this is slightly different than your kind of view around this almost baseline morality, in some sense. Cause you expect that to be relatively - maybe not well defined, but relatively broadly spread among humans?

Ben WR: Yeah. Right. It does in fact seem to me that humans almost universally have some kind of thing like morality - they have fairness intuitions, they, you know, scold each other for doing things that they think are bad.

Ben Goldhaber: Right. But preferences, in the whole broader range of human experience, might be much more path-dependent, much more varied.

Ben WR: Yeah, I think that's sort of right.
Like, I, I don't expect, I don't [00:58:00] expect that, like that set of things, which is more or less universal to like, as I sort of said before, I don't expect it to even like, make like necessarily make recommendations about every action that you would take.And so I also don't expect it to like, sort of make a particular recommendation about like what to do with the future like a single specific one. I do expect there to be like compatible ways of, of like handling the future and incompatible ones. And like, I think the way that I personally think about the alignment problem is like more like, how can we get the AI to like help us into a compatible future rather than incompatible one.I haven't read the book Human Compatible, so I have no idea if this is about that or, or, or if that's about this or what. And so my guess, I'm not sure either.Divia Eden: And my guess from how you're talking is that you think your, your views on moral realism. As you say that you think that they, they have some implications for like, if we had an AI that was going to implement some sort of reasonable [00:59:00] process, how that might go and whether that would be possible.Mm-hmm. But it seems like you, my guess is you don't think that the AI would independently be like, oh, this is obviously what I should do and I Yeah, that makes sense to me. But can you explain, cuz I think sometimes people think, well, if moral realism is real, then the AI would come up with it and then we'll all be fine.So why do youBen WR: not think that? Yeah, I think basically because the AI is not going to be a human and like, my sense is, and, and it's not even going to be like an organism that evolved in like in like a social context. I mean, it might in fact be something like that depending on like questions about how the future goes.In which case maybe it would have something like fairness intuitions or, or, or something like that. I don't know. But if there were, if there were to be like a singleton, I don't see any reason that it would like happen to find the same sorts of like [01:00:00] the same sorts of like strategies for perpetuating itself that human societies have found.And like, and, and as I said, sort of like the, the reason for me to like. B moral is because it works better. Like there's sort of like this, this, first of all, I like have these values that like are sort of independent of morality and then I like observe that like in some cases, probably in many cases acting morally actually helps me achieve those values.Like more effectively. It's sort of like aDivia Eden: capabilities boost for you.Ben WR: Yeah, exactly. And it's like very unclear to me that it would be capabilities boost for like a super intelligent ai. In fact, I would guess that it wouldn't be, especially for Singleton. And I would pretty strongly guess that it, that most like human morals would also not be capabilities boosts for like sort of more multipolar worlds.Ben Goldhaber: And so in general, do you feel more optimistic [01:01:00] about the multipolar worlds and or do you think we're trending in that direction?Ben WR: I, I feel pretty pessimistic about all possibilities really. 
Even if, so, if you did have a multipolar world, and they did inherit some aspects of what we would consider morality, or sort of evolved that, or realized it independently or something, my guess is that a lot of it is pretty dependent on what kind of species you're interacting with. My guess is that they would potentially enact a future that was compatible with morality relative to them and not relative to us. I mean, for example, maybe their future [01:02:00] has no humans in it, because they don't especially care about humans. And that might not itself be part of the thing that they come to adopt, in a similar way to, I don't know, ants. I think it's a little bit stretching to be like, oh yeah, ants are super moral. But I do think ants are very social; they're eusocial insects, and basically their whole lives are devoted to, you know, the functioning of their local group. And humans, I think, basically don't have a term for ants in their morality, even though many of the things that we think of as moral probably somewhat line up with how ants behave. We don't think of the ants as being valuable, [01:03:00] if that makes sense. And I sort of imagine something like that also holding in this case.

Ben Goldhaber: There's likely a speciesist attitude in morality, something like that.

Ben WR: Something like that. Part of it is gonna be: because you have a collection of agents with similar capabilities, those things are going to be the moral patients in whatever social system arises, I would expect. And probably that's not all of it. Like, I think in fact it is probably not moral to torture ants, but it's much more moral to kill ants than it is to kill humans.

Divia Eden: Right. So you're sort of saying that your views on morality say that... you follow your moral system because you think it works better for achieving your values. You expect an AI would do the same, and you expect that there's some sort of action that's like, treat this as a moral patient, and whether to [01:04:00] do that is kind of contingent on whether doing that in general for that type of thing would help you achieve your values more. And you don't think that would happen with AI?

Ben WR: Yeah, that's basically right. And it's not always going to be your values. There's definitely gonna be a component here that's more like slave morality or something, where part of morality I think is going to be purely about preserving the systems that you're a part of, and not necessarily your own values. So that's maybe another component of it. But yeah, I think otherwise that was basically right.

Divia Eden: Do you feel good about the slave morality component personally? Like if you could sort of separate it out and know, okay, well, this is something that I'm doing that'll preserve the system I'm a part of, but doesn't actually enact my values, like... yeah.
How do you relate to that?

Ben WR: Yeah, I mean, I think if I knew how to reliably separate that out, I think I would want to, and mostly discard the things that are more like only about preserving the system, [01:05:00] even in ways that I don't endorse. I think it is gonna be hard to do that without doing all the rest of step two, of figuring out just generally what's up. And so I'm kind of hesitant at the moment, unless I have a really strong story for why I think something is more like slave morality, to discard things like that.

Divia Eden: This is like another calculation problem, basically, to try to separate this out. And so you wouldn't go there?

Ben WR: It's very structurally similar to the thing about, why not just add up all the consequences? And under what circumstances might you change your mind?

Divia Eden: I mean, the other place where the personal philosophy, I would think, intersects with the AI thing, is there's this pattern, it's been around a lot on Twitter recently, which I almost hesitate to even repeat, but people will be like, well, if you really took AI alignment seriously, obviously you would then do all of these horrible, you know, morally [01:06:00] horrific things. So do you wanna respond to that from your stance?

Ben WR: Yeah, I mean, I think the easy answer from my point of view is: no, those things are bad, and it's usually not a good idea to do things that are bad, in proportion to how bad they are. And they seem like often extremely bad.

Yeah, sure. But then I would say, okay, usually, but this is some out-of-distribution event, supposedly. That's the claim.

Yeah. I mean, it does push me toward slightly more extreme policies. Like, I mean, Eliezer's article in Time is I think an expansion of the normal sort of foreign policy world, where you would also plan to enforce this ban even against non-signatory countries. And I don't know, that does seem to me kind of extreme, and in most cases [01:07:00] would not be called for, would not in fact be worth it. And I think, you know, if you were to consider that aspect in isolation, it's not good. It is the kind of thing that is kind of immoral to do. And nonetheless, I am pretty compelled that it would be good to do in this case. But there's a pretty big difference between, insofar as it is possible, coming to a global consensus that that's the right thing to do and then carrying it out, versus... a lot of the things that I'm seeing on Twitter are much more like vigilantism, recklessly going around, like, murdering people. And I'm like, I don't think that's good.
Kinda like, like you want some kind of like, Rough consensus, open source, almost like that ethos of how you govern something where it's like, maybe if we get 80% of the way [01:08:00] there, it's like, probably all right.Yeah, I mean I, yeah, I'm not sure exactly how to think about the consensus aspect, but I do think that like a, a procedure which like involves going around and trying to like, get people on board with this plan as much as possible and also making very clear like what exactly your policy is so that people can like, steer clear of it seems like way, way better as like, as a procedure.Like I think it, it pretty clearly for me pushes it from like, that's insane, obviously stupid bad, like policy to like Yeah, I mean I, I, it seems very reasonable given the stakes.So you wouldDivia Eden: say something like, okay, it is an extreme, it is, you know, potentially an extreme out of distribution situation, which does push in the direction of doing things that have downsides you normally wouldn't consider, but certainly not infinitely. [01:09:00] So Yeah, and you would still sort of take the normal costs and benefits into account a lot.Ben WR: Yeah, absolutely. I, I also think that like there's a way in which like seeking consensus is a way of avoiding a lot of the like, sort of failure modes of sort of individually like calculating wrong. So like if, if I thought, oh, you know, what we should really do is bomb all the data centers, I'm just gonna like go out and do that.Like, that seems way more clearly immoral than like, like maybe what we should do is bomb all the data centers. I'm gonna like, try to get the, like, governments on board and also first of like, also try to get people to like shut down the data centers first. Like, just like that process of like sort of gathering, like there's like a way that that sort of like, I think can diffuse a lot of the failure modes of trying to do the calculation yourself [01:10:00] from a wisdom of the crowds point of view.Yeah. But also I think like in, in terms of like I think it, it just is gonna be a component of, of sort of like how moral a thing is, like how much agreement there is about it. Like, I don't know, I think it's, it's totally fine to make a, to like ask someone to give you their bicycle for the afternoon.It's like not fine to like steal their bicycle and bring it back later.Divia Eden: Okay. So I, I'm trying to sum up in my head how you're relating to the. So the moral, what should I call it? Is it moral physicalism? I think that was a term somebody, yeah.Ben WR: I mean, the, the thing I'm, yeah, it's sort of a part of like a broader like, sort of metaphysical shift that I made recently, which I've been like referring to as physicalism.Okay. I, I don't know, like I, I think it is sort of like quite tightly connected to like philosophical physicalism. Can you say,Divia Eden: now I'm interested in that. Can you, can you first say a few words [01:11:00] about that shift?Ben WR: Yeah, yeah. So I think, yes, so, so for a while. Yeah. Hmm. How do I say that? So yeah, I guess this observation that like everything that's happening with people is sort of like the result of physics on like a lower level is like, or sorry. Well, probably, I guess I should say probably the result of physics on like a lower level is like a reframing of a lot of different questions that I had.So, so, so meta ethics is like the most obvious and sort of like the most relevant for a lot of things. 
But also the question of what is real, which is also related to the moral realism thing, somehow feels much more clarified to me now than it used to, where there's sort of [01:12:00] physics as the fundamental stuff, and then there's real stuff, which is more or less things that have a particular relationship to the physical stuff. It's not correct to say that that glass of water isn't real just because the glass of water isn't a well-defined physical set of things. It's sort of fuzzier, but it's still clear to me that it makes much more sense to think about the glass of water as real. And I think that vague insight also applies really directly to the question of free will. It gives kind of a clean story for how, if real things are structures, or sort of fuzzy structures, built out of the physical stuff, it's totally reasonable to think in terms of free will being [01:13:00] real despite being built on a deterministic physical substrate, where it is describing a particular phenomenon in humans: that we make choices. There is something happening that results in us making choices, and if your philosophical determinism denies that, you're wrong. And I think it makes about as much sense to talk about will and free will in making choices as it does to talk about a glass of water.

Divia Eden: And can you cash out, like, what are the ways in which it makes sense to talk about either? I can imagine, but what would you say to that?

Ben WR: So, okay. I guess one of the ways that I'm framing this in my head has to do with isomorphisms between different parts of physics. It happens to be the case that the physical world we live in has a lot of structure, and it's this very strict pattern where [01:14:00] there are lots of different parts of physics which are isomorphic to each other. And if you've read Eliezer's Sequences post the Simple Truth, I think it sort of gets into... okay, it was a long time ago. It's the one where there's a shepherd, and he's trying to figure out how to count his sheep, and he's got these rocks, and he notices that if he puts one rock in every time a sheep goes through the gate, and then takes one rock out every time a sheep passes the other way through the gate, he will have correctly counted whether all of his sheep have left the paddock or not. This is an isomorphism that he's constructing between the rocks and the sheep. The rocks are telling him true facts about the sheep because of the way that the number of rocks and the number of sheep are connected, if that makes sense.
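A minimal sketch, in Python, of the rock-and-sheep bookkeeping Ben describes here; the class and method names are purely illustrative and are not from the episode or the original post:

```python
# A pile of rocks used as a tally for sheep leaving and entering the paddock.
# The rock count mirrors the sheep count: that correspondence is the isomorphism.

class RockPile:
    def __init__(self):
        self.rocks = 0

    def sheep_left(self):
        # one rock in for every sheep that goes out through the gate
        self.rocks += 1

    def sheep_returned(self):
        # one rock out for every sheep that comes back
        self.rocks -= 1

    def all_sheep_back(self) -> bool:
        # zero rocks means every sheep that left has returned
        return self.rocks == 0


pile = RockPile()
for _ in range(5):
    pile.sheep_left()       # five sheep go out to graze
for _ in range(5):
    pile.sheep_returned()   # five sheep come back
print(pile.all_sheep_back())  # True: the rocks "tell the truth" about the sheep
```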
Ben WR: And this is [01:15:00] everywhere, I think, in physics, and, I guess, just in the world: these sorts of isomorphisms, and in particular useful isomorphisms, where, you know, a map is sort of isomorphic to the territory, meaning I have this nice little piece of paper and I can use the nice little piece of paper to navigate in the real world out there, because there's an isomorphism between them.

Yeah. This reminds me, this is the thing... Anna Salamon was talking about it recently, the unreasonable effectiveness of math in the natural sciences.

Yeah, totally. I think a big part, for me, of what I think is going on there is: math is about these really, really good isomorphisms, and towers of isomorphisms, right? Where, you know, you can count sheep with rocks, you can count fish with tally marks or something, and there's sort of an isomorphism between those two isomorphisms, where what you're doing in both cases is this counting thing. And I think you [01:16:00] can build, as far as I can tell, all of math basically that way. Which is another one of the major reframings for me from this physicalist viewpoint. I think I was natively thinking often in terms of a platonic view.

Divia Eden: Oh, are you a mathematical realist now too?

Ben WR: No. So, well, sort of, I mean, maybe, I don't know. I'm not quite sure what you mean by mathematical realism.

Ben Goldhaber: Can you all explain that? What do you mean by mathematical realists?

Divia Eden: I think what it means, and I'm sort of starting to imagine what your viewpoint on this might be... I think what it means is there's some question people like to ask, of something like, does the math exist even if there isn't a physical instantiation of it?

Ben WR: Yeah. And I think historically I would've been like, yeah, duh, physics is made out of math. But recently I've been like, oh... I feel like that belief is in a similar category as the supernatural view of [01:17:00] ethics that I was talking about before. Like, I have no story for how that could get into my head and be true, or how the truth of it could have gotten into my head.

Yeah, though I guess, you know, it's funny, because normally when I think of, is there any physical instantiation of this, I tend to think of things that are not inside people's minds. But then hearing you say it, I'm like, okay, but then the fuzzy abstractions, they would count too, right? As the physical instantiation.

Ben Goldhaber: Totally. Yeah, exactly. And I think my sense is basically that math is roughly the most general sorts of isomorphism, the things that apply everywhere in the world and would apply under many ways that we can be uncertain about the world. Like, I have no idea how big the world is. I think it makes a lot of sense to think about infinitely large sets, even though I doubt the universe is infinite in terms of space.
[01:18:00] And like,Ben WR: yeah, I think I'm trying to put this together in my head, so I'm like, okay, how, how is this infinite thing compatible with mathematical realism?Well, maybe it's that if, you know, if the universe produces beings which observe it and try to make isomorphism about it, they're gonna predictably come up with these types of structures and then instantiate them in their own heads,basically, right? Yeah. That's basically Right. Okay. And like, and in a way that like, is very scalable.It, it, like if you use like the infinity is sort of like, like infinity in this view is just like basically a way of representing this like very clear regularity, which clearly would apply in the physical world no matter the size. Mm-hmm. Or something close to that. Yeah, I don't know. And this felt like super clarifying, like this general sort of shift to seeing like the, all of math as sort of like being more or less secondary to physics.Right, right. Yeah. [01:19:00] Yeah, I do. I think, I think that was a particularly good point cuz I'm like starting to see some way in which it's like, all right, math is this unreasonably effective way, unreasonably effective isomorphism for patterns we see in the world. Maybe that's the primary one. And then your pointing out like, there are these other patterns that seem very effective, like these kind of universal, the moral patterns.Moral patterns.Yeah. No, it definitely ha like, I feel like I was getting there too.Divia Eden: Yeah, so hearing you talk about the physicalism in general and sort of, I don't know, humoring me with the mathematical realism bit, I, I think I have a better sense of what might have been the shift that happened for the way you're thinking about morality. Where there, yeah. The, and I think I, maybe this is the same as what I said before, but like that there's something that's grounded that was not previously grounded.And that there was some sort of bit that was there before that you're now like, no, no, I'm rejecting that bit. Cuz it seems sort of supernatural and that now it seems [01:20:00] like, like morality, not exactly, but it's more like just another thing that you can have thoughts about it. Yeah. And you could try to do sense making the way you would about other things and it's empirical and it's complicated and that it you ha that there's some sort of impulse for El elegance on the object level that you.That you're, you mistrust a lot more now in part because, yeah, because I don't know, because there's, you do see some pretty elegant principles on the metal level for how to make sense of this, and they don't add up to something particularly simple on the object level.Ben WR: Yeah, I think that's totally right.And this is,Divia Eden: yeah, it's, it's a shift for you and it, it has some implications for how you relate to this system around you in terms of the EA stuff and. I don't know, maybe rash. We didn't really talk about the rationality, meme plex and how it is with this. I, if you have any brief thoughts on that, I'm interested too.Ben WR: I think there's a way that you could go kind of crazy. Like thinking about all of [01:21:00] the different mathematical structures that you like, might be embedded in sort of like reference VAs yeah.That kind of thing. For sure. 
And like, I don't know, I think it's just better to plan for, and sort of be thinking in, the mainline, which is just: there is this world, it's kind of mundane, you know, almost by definition of the word mundane. But it's just there; we have a lot of evidence for it.

Divia Eden: Is that the word mundane? Is that the etymology of that? Like, the world?

Ben WR: Oh, I didn't realize that until you said that.

Ben Goldhaber: Yeah, no, I didn't either. Mundus. Yeah.

Ben WR: Yeah, yeah. And I mean, it encompasses all of the other crazy amazing stuff, and it's amazing, but it is still sort of, you know, just there, and in fact it's a little bit weird and I'm not sure why you would appeal to [01:22:00] crazy mathematical stuff. I guess things more like Tegmark IV, or the belief that all mathematical objects exist, or at least not without having some other kind of motivation within physics. Like, I think Eliezer and Benya at MIRI do have a lot of this sort of thing as background, and then notice that in fact a lot of the evidence about physics points toward some kinds of infinity, like many worlds sort of points towards some kind of infinity, and you do need to reason about that. And so I think there are reasons to invent something like Solomonoff induction, or some of the crazier stuff.

Ben Goldhaber: But you'd be very skeptical of things like thought experiments around the simulation hypothesis, perhaps.

Ben WR: Yeah, I think that's basically right. I mean, [01:23:00] it's not entirely like being skeptical of the arguments. It's more like, I don't know, it seems like I'm at least embedded in a particular physical world. Maybe I'm also elsewhere, but, yeah.

Divia Eden: You can tell me if this analogy makes any sense or has kind of gone off the rails, but when I try to inhabit your frame, I'm like, okay, so morality, let's say, I'm thinking of it as a map of some useful fuzzy abstractions, some useful isomorphisms. It's meant to apply in my local environment to help me achieve my goals. And now I've looked at my map and I've been like, okay, these lines seem pretty straight, so I'm gonna extend them out like 50 million times as long as they ever went, and this sort of looks like this tessellating pattern, so, same thing there, a bunch of analogous things. That seems to have some moral implications. And you're like, no, it doesn't really. Does that seem right?

Ben WR: Yeah, I think that's basically right. And yeah, I guess... this one I'm a lot less clear on. [01:24:00] I think there's a much stronger chance here that I just haven't thought as hard about it as the rationalists, who seem a little bit crazy on this axis to me or something. But I do have some sense that the reasons I previously thought that was plausible no longer feel compelling after sort of having the shift, right?

Divia Eden: Because it seems like, maybe the reasons you used to find it plausible were something about, something that now you consider kind of supernatural, plus maybe some...
So I want the object level to be elegant.

Ben WR: Right. It was sort of like... I mean, I think it is sort of true that I could be embedded in a lot of things, and it's just kind of like, well, some of those I could know things about and other ones I can't. And it seems [01:25:00] clear that there's this particular one that I'm definitely in. Maybe there are other things too, but it seems pretty weird to me... So modal realism, the idea that all possible worlds are real in exactly the same way, without adding any sense of measure, or which ones are more likely or whatever, seems to me like it sort of obviously has to contend with this problem, which is that our world looks really consistent; barely anything crazy happens, if anything. And I think if modal realism were true, almost everywhere just looks totally crazy.

Ben Goldhaber: What's an example of the kind of crazy you'd expect?

Ben WR: I mean, I think I would expect to have memories of, like, elves popping into [01:26:00] my... like, through my hand or something. I dunno, just anything you can imagine as being sort of possible in some sense would be real.

Ben Goldhaber: Okay. Yeah. I think I'm a little lost on this conception, like, there's this view that maybe any possible world is in fact likely to happen, and so we might see these types of random events, like an elf appearing or, I don't know, something, and we tend, as far as we can tell, not to.

Ben WR: Yeah. It's sort of like... the particular claim that I think is the most hard to square with my experience is that they all have exactly the same amount of realness, or they're all real in exactly the same way. I think if you're instead like, oh, well, those ones are not very real, or they're less real, they're a little bit real, but, I don't know, Solomonoff induction says they're way less probable or something, that's less crazy. But [01:27:00] that is not, in fact, the claim that philosophers are often making about modal realism. And I think it should be at least suspicious, if modal realism is true, that our universe is so orderly, you know?

Divia Eden: Yeah, exactly. Okay. So yeah, I don't know, I think we've maybe collectively understood at least a decent amount about where you're coming from here. Thank you so much for sharing your perspectives. I really like hearing it. And I think, Ben, you have some questions about things that are slightly different, though of course all things are related. So does it feel right to go there?

Ben Goldhaber: I think so. I think, yeah, that's exactly right. And these might be a little bit more scattershot, might just throw a few out there. But I really wanted to hear more about what you're currently working on in the tools for thought space and how you're kind of approaching this.

Ben WR: Yeah, totally. So, about a year ago, a little bit more, I realized that I [01:28:00] don't know anything about the future. I mean, I think I probably actually know more than the average person, but I don't know enough to know what I should do. And I think almost no one does.
And this seems like a real problem, because there are a lot of scary things that sort of look like they're coming, and if we don't have models for how things will work, I don't know how we can survive as a species. And I don't know, that's all sort of a grandiose way of putting it. But actually the thing that I most viscerally feel is, I don't know what the...

Divia Eden: And you wanna fix that?

Ben WR: And I want to, yeah, I wanna understand what's happening and what's going to happen. And I thought about how I would go about figuring out what would happen, and I was like, okay, I think I want [01:29:00] Roam, except, like, less bad.

Divia Eden: This is Roam Research, a knowledge-graph note-taking tool. We can put a link in the show notes.

Ben WR: Yeah, exactly. It's basically, I mean, it's a great idea, and I really like Roam. It's basically a tree of bullet points, and you can link between different pages and things, and it's very convenient to use.

Divia Eden: Quick note: our podcast notes for this episode were in fact created in Roam. This is a Roam-supporting podcast.

Ben WR: Yeah. So the problem with it is that there are a bunch of other features that are sort of built on top of this nested-list system, including some that are sort of calculation-y and database-y and stuff, but they're, [01:30:00] in my opinion, implemented in kind of a haphazard way. And that's one reason that I didn't think I could do it in Roam. Another thing is, I think any thinking on things in this general genre is gonna be very uncertain. And as far as I know... I don't know, I haven't been paying attention because, confession time, I switched to Logseq, which is sort of a Roam clone. They might have added something like this, but my guess is not. If you're familiar with Guesstimate, which is this web app made by Ozzie Gooen, which is like a spreadsheet, but with samples from probability distributions instead of individual values: I found it [01:31:00] extremely useful, but not very scalable. If you're trying to make a large model in Guesstimate, it becomes pretty unwieldy pretty quickly. It's also not easy to collaborate with people on models in Guesstimate. And both of those felt like pretty key issues there, so I didn't feel like I could use Guesstimate either to do the kind of thinking that I wanted to do.

And so ultimately I was like, okay, well, I'm a programmer, I know how to do things like this, I am just going to go ahead and make this thing. And so the current idea, which I'm temporarily calling Calx, C-A-L-X, though that may not stick, is basically to have a similar sort of tree-like structure to Roam, with similar linking and stuff between documents and things like that, where every node in the tree gets [01:32:00] basically a spreadsheet cell that is also sampled from a distribution. So ultimately you can basically have a top-level question, which is like, I don't know, what should I do about AI or something, and then you can break that down into sub-questions and combine your answers in the top-level question.
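A minimal sketch of the kind of structure Ben is describing: a tree of nodes where each node is both a text bullet and a cell holding Monte Carlo samples, with parent nodes combining their children. The names, numbers, and API below are invented for illustration and are not Calx's actual design:

```python
import numpy as np

N_SAMPLES = 10_000  # how many samples each "cell" carries

class Node:
    """One bullet in the tree: an English question plus an uncertain value."""
    def __init__(self, label, samples=None, children=None, combine=None):
        self.label = label               # the English description of the question
        self.samples = samples           # leaf node: raw samples from a distribution
        self.children = children or []   # sub-questions
        self.combine = combine           # how child samples become this node's samples

    def evaluate(self):
        if self.samples is not None:     # leaf: already a distribution
            return self.samples
        child_samples = [c.evaluate() for c in self.children]
        return self.combine(*child_samples)

# Leaves: direct uncertain estimates for the sub-questions.
design = Node("design time (weeks)", samples=np.random.uniform(1, 3, N_SAMPLES))
build  = Node("build time (weeks)", samples=np.random.lognormal(np.log(4), 0.5, N_SAMPLES))
test   = Node("testing time (weeks)", samples=np.random.normal(2, 0.5, N_SAMPLES))

# Top-level question: combine the sub-questions' samples.
total = Node(
    "how long will this project take? (weeks)",
    children=[design, build, test],
    combine=lambda d, b, t: d + b + t,
)

samples = total.evaluate()
print(f"median: {np.median(samples):.1f} weeks")
print(f"10th-90th percentile: {np.percentile(samples, 10):.1f} - {np.percentile(samples, 90):.1f} weeks")
```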
And you can sort of like have Basically this, it's not quite a probabilistic programming language because it doesn't let you at least the first version won't let you do inference.Like, it, like it won't let you like, learn a distribution from data, but like, we'll let you have sort of this uncertain estimate which propagates your sort of like, known uncertainties through and can sort of show you like here is roughly What my other beliefs kind of imply about, like what I should believe about this thing.And yeah, I don't know. I, I'm really excited about it. [01:33:00] I think yeah, it's, it's not probably the easiest thing to like visualize if, if you're just listening to me describe it. But it's, it's,Divia Eden: I mean it's, I guess what I heard you say is it's sort of like a cross between Rome and guesstimate, and by crossing them it makes it more scalable for sort of tracking how the beliefs affect theBen WR: other beliefs.Yeah. Yeah, that's, that's basically exactly right. It's also gonna be like like entirely collaborative basically. So, so also sort of like it'll be a lot easier than in guesstimate, for example, to like, use someone else's like, you know, like calculation that they've done in your calculation. And do things like that.Or like sort of maybe, you know, treat things more like Google Docs as well. Is this something people canDivia Eden: play with or not yet?Ben WR: Okay. Not yet. I have like a couple of terrible screenshots of like the, I've, I've built the sort of core logic engine of it, but it's, right now the user interface is just like a command line interface on my laptop.So it's not it's not quite to the point where[01:34:00] where people can use it. But this week I'm. Planning to go ahead and make the server for it like so that I can then go ahead and build the actual web front end. And then, so maybe two, two or three weeks from now, I, I hope to like have aBen Goldhaber: prototype.That's exciting. Do you imagine doing this on like many of the questions you're facing day-to-day or, yeah. Yeah,Ben WR: absolutely. I mean, and I, I have used estimate for a lot of questions like day-to-day, like like, which job offer should I take?Or like, you know, should I like apply to this thing or whatever. And then and I found it really useful for that kinda thing. Oh yeah, sorry. If you,Divia Eden: if there's something, you know, that you're willing to walk us through a little bit about, here's how you were thinking about it before you put it in guesstimate and then afterwards.What was it like?Ben WR: Yeah, I mean, so a lot of the time, so I mean, before I knew about Guesstimate, I would like have spreadsheets that would sort of be like, okay, well here's my like estimate for X and like here's like, you know, my estimate for Y and here's how they should like combine to like, give me an estimate for Z.And [01:35:00] that's like, I think pretty useful and like I've gotten a lot of mileage out of that in my life. But It's also potentially really misleading if you're like, using these point estimates because like, if you like, the point estimate is probably gonna be like your mean guess or something, or your average guess.And like that is not, like, it's not a good representation of like what your actual uncertainty is. Like it, like what your error bars are roughly. 
And you can, if you're sufficiently anal about it, go through and make a hundred samples in a hundred rows and do your calculation across the spreadsheet, but it's just kind of a pain, and I never actually did it in practice, and I don't know if anyone else did.

Divia Eden: I bet a lot of our listeners know, but can you actually explain what a point estimate is, in case people don't?

Ben WR: Yeah, sure. So basically, if I have some uncertainty about some value, say, I don't know, maybe I've got [01:36:00] a friend and I wanna know his height, a point estimate would be like, he is probably about six feet tall. It's a single number which represents my best guess for some uncertain value. Whereas the distribution itself better represents that uncertainty. It might be something like a normal distribution that's centered around six feet and has some standard deviation, which tells me how much variance there is in my estimate. Like, do I think he's similarly likely to be six feet or six feet five inches, or six feet or six feet one inch? Those are very different uncertainties that I might have over his height. And the way Guesstimate works is basically, instead of having a single value [01:37:00] like the six-feet guess, it takes many samples from the distribution that you tell it. So if I said his height is a normal distribution centered at six feet with a standard deviation of one inch, it'll take thousands of samples and then propagate those samples through the same computation, and at the end I get to see a histogram, and other facts about the distribution, as a result of seeing all these samples.

Cool. They just sort of... all of the calculations get applied to the distributions. Mm-hmm. Yeah.

Ben Goldhaber: So then, and you're not losing information as you are by cutting off the tails and just seeing the mean or something like that.

Divia Eden: Yeah, exactly. And this is like, yeah, again, if you could describe how this has made you see potential decisions differently?

Ben WR: Yeah. I mean, I think it sort of removes a lot of the illusion of certainty, which I think is really [01:38:00] valuable. So one particular example is thinking about which job should I take. I think in 2016 or something, I was considering whether I should work at Cruise or, I think the other competing offer was Flexport. At the time, I had a couple of different things that I cared about and I wasn't really sure how to combine them. Cruise was gonna be better for a lot of things; at that time I was already a little bit into AI safety and wanted to get more experience with AI stuff, and so Cruise was gonna be better on that axis, and I wasn't really sure how much, or how helpful that would be. And I also was really uncertain about the compensation between the two. So, like, I don't know.
Flexport had given me an offer with a lot more equity, and Cruise had a lot more salary. [01:39:00] And so I tried my best to figure out what I expected, in terms of uncertainty, the valuation of the companies to be at the time, when my shares would vest, and so on. And, I mean, I think it's just really useful to be able to see what that results in when you convert it into your distribution over total compensation over time. And then you can take that distribution, and you can also have some other crazy distribution over how useful this is for AI safety, which is a pretty weird question to ask. But at least in that case you're not deceiving yourself. I think a lot of the time, before using Guesstimate, I would have, you know, pro-con lists, and then I would add up all the pros and add up all the cons, and be like, oh yeah, well, there's six in this column and three in that column. [01:40:00] And I think it's all sort of fake, and it's not always obvious that it's fake. And I think it's more obvious if you're describing, like, oh yeah, well, this is my guess, but also I have no clue.

Ben Goldhaber: One thing I've found that can be really hard with large Guesstimate models, and for that matter large spreadsheet models, is something like comprehensibility, and the ability to come back to them. You get all these different cells, and then it's like, maybe it's good... Is there some hope with your tree structure layout that it is more...

Ben WR: Reusable. Yeah, absolutely. So one thing for me is, as a programmer, there's the structure that most programming languages have, which is almost sort of like Roam in that everything is tree shaped: you have expressions which are themselves made up of other expressions. And this is a really great way to [01:41:00] organize complicated calculations. And spreadsheets don't really do this; they're like, nope, you've got this 2D grid. Sorry, I'm gonna take a sip of water. Yeah, hydro homies. So yeah, I basically do have this intuition that this tree structure, also from Roam, is really, really good as a way to organize complicated questions and think about them: to start from a broad thing at the top and, you know, dive down and have these collapsible sub-questions, or little bits of extra information, which you can incorporate if you want, and so on. It also feels important to me that every cell also has a text [01:42:00] bullet next to it. So the default workflow I'm imagining is: you build a tree which is describing your uncertainty in English, and only vaguely, because it's like, well, I don't even know what units this is supposed to be in or whatever. And then you can iteratively, starting from the bottom, from the most concrete, simple questions to answer, work outward and figure out how to combine the sub-elements at each level.
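To make the contrast with point estimates concrete, here is a small Guesstimate-style sketch comparing two hypothetical offers by propagating samples rather than single numbers. All figures are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Offer A: higher salary, little equity (all numbers made up).
a_salary = rng.normal(180_000, 10_000, N)
a_equity = rng.lognormal(mean=np.log(20_000), sigma=0.5, size=N)

# Offer B: lower salary, more equity, with much wider uncertainty about what
# the equity ends up being worth (valuation, dilution, vesting, and so on).
b_salary = rng.normal(140_000, 10_000, N)
b_equity = rng.lognormal(mean=np.log(60_000), sigma=1.2, size=N)

total_a = a_salary + a_equity
total_b = b_salary + b_equity

def summarize(name, samples):
    p10, p50, p90 = np.percentile(samples, [10, 50, 90])
    print(f"{name}: 10th {p10:,.0f}  median {p50:,.0f}  90th {p90:,.0f}")

summarize("Offer A", total_a)
summarize("Offer B", total_b)

# A single point estimate ("B pays more on average") hides how often that is true.
print(f"P(B > A) = {np.mean(total_b > total_a):.2f}")
```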
So yeah, I mean, yeah, that's definitely, I think comprehensibility is like a huge part of it. Yeah. Great.Ben Goldhaber: Yeah, I'm, I'm gonna throw something out here, which feel free to not pick [01:43:00] up at all. It might not be interesting or, but like, I, I've also been really interested in forecasting for a long time, and one like thing that kind of comes up in certain parts of the forecasting space a lot is like, well, what decisions have really been changed by some of these forecasts?Like, you hear this with like prediction markets a fair bit. Like, all right, is this info actually going to change somebody's decision? And I feel like one thing I'm kind of catching from your description of Calc X or is, is like some hope that many of these decisions can get changed if it's like, if it like starts at like the, at like the person level.Like if you make them better at thinking about uncertainty as opposed to like creating some kind of external system. Is that, is that right? And also like you disagree with me That, or do, do you agree or disagree that likeBen WR: it's almost, almost, I definitely agree that like, That like a lot of the forecasting stuff that happens, I'm not totally sure how useful it ends up being.Especially sort of the, the more like sort of public forecasting [01:44:00] stuff. Although, I mean, I don't know. And I do think a lot of the sort of more public prediction markets have like produced a lot of value. I. Or especially like the ones that are sort of high volume, I think partly because those are like, you know, the interesting ones.But yeah, I, I do think there can sort of be this like disconnect between like the people who are making the decisions and like how they're making the decisions and like the people who are doing the forecast and like how they're choosing which things to forecast. And it does seem really valuable to me to sort of like connect.Like the, like sort of the agency with the forecasting say a little bit more to the agency with the forecasting.Like Yeah. Like, I mean, so I think like me choosing a career, it feels like it's pretty tightly connecting like the agency Yeah. With like, yeah. You want it, like, I have this particular decision Yeah.That I have to make. I'm like, yeah, I'm gonna, I'm gonna use this to help me make that decision. And it's like, there, there's no, like, there's no wasted motion. I guess. I, IBen Goldhaber: strongly agree with it. I've like felt this in my own life a fair bit. Like the way in [01:45:00] which like, all this stuff seems really fake until like, I actually just need to figure out if I'm gonna move to city A or city B, and then it's like, oh, all right, this is a little more helpful.IBen WR: care. Yeah. And it's a little bit weird. Like I, I think there's actually some evidence to me that sort of feels counter to this, which I'm confused by, which is like, In science, it seems like a decent amount of the time. There's just some guy who gets really into categorizing all the rocks and he just like goes around and categorizes all the rocks and like, doesn't have any particular reason for doing it really.And like it's just sort of his special interest. Yeah. And then later that's like extremely useful and it's like, I don't, I don't really understand how that works. And I think it, it's like it's pushing me a little bit toward thinking like, ah, maybe I'm just wrong. And like, actually people. 
Doing, like following their random interests and like produ producing artifacts is good.Even though yeah, though those don't seem totallyBen Goldhaber: opposed to me. [01:46:00] Like, I don't know, I'm like pretty, I'm, I'm pretty stoked by the like, random person going out and doing that. And also the random person super into forecasting in some way. Like I, I don't know, that doesn't seem like the same type of opposition.They almost seem a little bit more tightly. Nearby. Like there's both some kind of purity of they just want this thing. Yeah, yeah, yeah. I mean, that's fair. Yeah.Divia Eden: Yeah, I agree with that. I think there's, there's something where like the, the guy categorizing the rocks, like we maybe don't know why he cares, but he does care.Whereas I don't know if, I'm trying to like bet on who's gonna win the midterms. I think that's sort of, there's something about that that is missing that is there with both the rock guy and you trying to figure out what to do next.Ben WR: Yeah, yeah. Yeah. That seems maybe right, although I'm not sure what it's well, I meanDivia Eden: with me betting on the midterms, like I, there's just more of a disconnect.Like maybe I wanna make money on prediction markets. Maybe I [01:47:00] wanna like show my friends how. Cool.Ben WR: I am. Yeah. It's like about, it's about the rightness andDivia Eden: maybe I even care who wins the midterms, but it's not under my control. So there's not some like Right. Rapid feedback between like if the dinosaur guy, sorry, I keep saying that because there's that meme.I keep thinking that because there's this meme about the guy that just really wants to do something with dinosaurs that I think of him as like what you're saying with the rocks,Ben WR: but we'll definitely link this in the show notes. Yeah. LikeDivia Eden: if that guy is thinking about the rock things, that it, it is tied to his agency, like he's gonna go search for rocks in a different area based on his theory about rocks or something.Like, there's some sort of feedback loop that I, I think is at least much harder to get if I'm betting on the midterms.Ben WR: Yeah. That's interesting. Yeah. I'm still a little bit confused though, like why does he care or something. And like, I don't know, like what is causing him to care in a way that like is somehow predictive of what other people will find useful.This gets backDivia Eden: to the unreasonable [01:48:00] effectiveness of mathematics in the naturalBen WR: sciences. Oh yeah, no, that's actually a really good point. Okay. So like I I, if I, if I understand where you're going with that it, it, it's something like like people. Inexplicably have these like special interests or whatever like basically because that helped in the past or like the, the like the process would produce these special interests, like are useful?I do think so. I think, yeah, I thinkDivia Eden: there could be something like that. I think, like, it reminds me of some stuff that Seth Roberts said about how he thinks that there's maybe some sort of cultural or genetic evolution towards people liking artisanal things to like to, because it helps with technological progress.I don't know if that's, I dunno if that's true, but it definitely reminds me of that. I also think, I think these days I tend to not reach for evolutionary explanations as much. 
Not that I don't think they have value, but I also think sometimes people have either some weird combination of neuroses or aesthetics or whatever, where then [01:49:00] it's like, I don't know, there's some itch in their minds. And then I think, because of the sort of unreasonable structuredness of the world, whatever itch in somebody's mind is gonna be isomorphic to something interesting in the territory a lot of the time. Not necessarily for an evolutionary reason, but because abstractions kind of line up, it seems like.

Ben WR: Yeah. Let me see if I can think about that a little bit deeper. One sec. Like, I can sort of imagine that people's minds just happen to be structured, because it's useful, in such a way that the things that are even there in your mind to get obsessed with are sort of worth getting obsessed with. Is that closer to the thing?

Divia Eden: I mean, I don't know that they're... certainly I don't think they're always worth it.

Ben WR: But, like, more likely to be, or, yeah.

Ben Goldhaber: Or am I hearing some way in which you trust the instinct or the impulse, when you get obsessed about a thing? [01:50:00]

Divia Eden: I trust it for a number of reasons. Partly because, and this is sort of a separate issue that is maybe more what you're saying about the useful thing, I think that insofar as there is this tight feedback loop between what people are doing and what they care about, they can produce outsize impact. And so even if they're like, well, maybe I'd have a multiplier of, you know, 10x if I'm working on the thing that seems useful, maybe my actual ability to do it goes down by even more if I'm not interested in it. And so in some ways it seems pro-social, because if people wanted to maximize their personal impact, they might try to steer themselves more, but if people have a more hits-based model of people doing what they're interested in, maybe "everybody do what you're interested in" is a more promising societal strategy than "okay, everybody try to do the most important thing". I don't think it's super clear, and it's probably some mix, probably not quite that simple either. I think it's a sort of naive framing, but yeah.

Ben WR: Yeah, I mean, I like it. I think [01:51:00] it fits really well with my personal experience of when I ever do anything worthwhile. A lot of the time when I've tried to do the thing that would be best in an abstract sense, rather than the thing that I feel excited about, it just basically goes nowhere.

Divia Eden: And I do think there's some real calculation problem there about what is best.

Ben WR: Yeah. Like, I think I am making a mistake there. And yeah, that theme seems very apt, or something, at least for me.

Ben Goldhaber: Makes me even more excited for eventually playing with Calx, just because, I don't know, sounds like you've gotten obsessed about it and there's a bit of an auteur thing going on.

Ben WR: Yeah. Separately, I've also been really obsessed with this thing called CRDTs for years now: conflict-free replicated data types.

Yeah. What's that?
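For anyone who shares that question, here is a minimal sketch of the idea Ben goes on to describe: a grow-only set, one of the simplest CRDTs, whose state forms a join-semilattice under set union, so replicas can merge in any order, any number of times, and still converge. This is a generic textbook example, not code from Ben's project:

```python
# Grow-only set (G-Set), one of the simplest CRDTs.
# The state (a set) with union as "merge" forms a join-semilattice:
# merge is commutative, associative, and idempotent, so it does not matter
# in what order replicas exchange state, or if the same update arrives twice.

class GSet:
    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other: "GSet"):
        # the semilattice "join": least upper bound of the two states
        self.items |= other.items


# Two replicas edited independently, for example by two offline clients.
alice, bob = GSet(), GSet()
alice.add("estimate: project length")
bob.add("estimate: compensation")
bob.add("estimate: project length")   # overlaps with Alice's item

# Merging in either order, or repeatedly, gives the same converged state.
alice.merge(bob)
bob.merge(alice)
bob.merge(alice)                      # receiving the same state twice is harmless
assert alice.items == bob.items
print(alice.items)
```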
Ben WR: It's just [01:52:00] a way of building applications such that they can be easily turned into distributed systems. So, if you wanted to build Google Docs but have it be end-to-end encrypted, so the Google server couldn't see your doc, you can't do it the way that Google Docs is built. You have to do it in such a way that your browser, the client machines, can do all the conflict resolution themselves. This is a super... I don't know, it's way in the weeds.

Divia Eden: But it'll be a segue to one of our next topics. I think you should keep going.

Ben WR: Oh, cool. Yeah. So basically, if you have the state of your application fit into this particular, very simple mathematical structure called a semilattice, you can easily, basically trivially, solve all of these very hard distributed systems problems, like, you know, accidentally getting the same [01:53:00] message twice, or things that normally would cause hiccups in your application when multiple people are editing it, or interacting with it; you instead can just deal with them gracefully. And anyway, I finally have this project where I actually can use this really amazing, I don't know, cool trick. And I'm really excited, because I think I am gonna be able to make this tool end-to-end encrypted in a way that most similar tools can't be, by using this uncommon way of making the thing.

Divia Eden: No, sorry, I think that's... I actually wanna eventually try to tie that back to some other things, but I'm also gonna try to tie it forward, because we had on our list to ask you about SecureDNA, and I might be [01:54:00] overfitting to say that it seems a little bit related to what you just said, but I'm gonna try anyway.

Ben WR: Yeah, I mean, I think aesthetically it feels very related to me. So SecureDNA is building basically this system for screening orders to gene synthesis labs... Sorry, maybe I'll start from further back. Okay. So there are these companies which can synthesize DNA for you. You send them a sequence of bases, nucleotides, like A, G, C, T, and they send you synthesized DNA that matches that sequence. And this is really great; it makes lots of biological research a lot easier. But it's also a little bit scary, because, you know, many [01:55:00] viruses are basically just made out of nucleotides, and so you could basically just make a pathogen, and potentially an unusually dangerous pathogen, by sending an order like this. And so there's this question of how these synthesis companies can avoid making the next pandemic, while preserving the privacy of their customers, and without leaking the list of pathogens, the information that would allow someone to figure out what the next...

Ben Goldhaber: And by that you mean, like, having some kind of public list of the things that you're not allowed to order.

Ben WR: Yeah. I mean, there is in fact a public list of things that you're not allowed to order.
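As a toy illustration of the underlying screening problem only: the naive, non-private version looks something like the sketch below. SecureDNA's actual system is specifically designed not to work this way (it uses cryptographic techniques so that no plaintext hazard list has to be shipped around); the sequences and names here are made up:

```python
# Naive, non-private screening sketch: reject an order if it contains any
# fixed-length window that appears in a plaintext hazard list. Real systems
# like SecureDNA avoid holding or leaking such a plaintext list; this only
# shows what is being checked, not how it is done in practice.

WINDOW = 20  # length of the subsequences ("windows") that get screened

# Entirely made-up example entries; a real hazard list is curated and controlled.
hazard_windows = {
    "ATGCGTACGTTAGCATGCCA",
    "TTAGGCATCGATCGGATACG",
}

def screen_order(sequence: str) -> bool:
    """Return True if the order looks safe, False if any window matches a hazard."""
    sequence = sequence.upper()
    for i in range(len(sequence) - WINDOW + 1):
        if sequence[i:i + WINDOW] in hazard_windows:
            return False
    return True

print(screen_order("GGGG" + "ATGCGTACGTTAGCATGCCA" + "CCCC"))  # False: contains a listed window
print(screen_order("ATCG" * 30))                               # True: no matches
```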
Ben WR: We'll link it in the show notes. Yeah, it's [01:56:00] called the Australia Group. And in fact that is what the first iteration of SecureDNA is targeted at: preventing people from ordering things that are known hazards. But a second iteration is gonna be targeted at what they're calling emerging hazards, so basically things where the sequences are not publicly known but are important to screen against anyway, like maybe things that were just learned about. And yeah, so basically there's a lot of this aesthetic similarity, just in that they're both sort of trying to elegantly solve these problems with privacy and security and distributed systems. And with the SecureDNA stuff, I should be very clear, I did not design any of the cool crypto stuff that is [01:57:00] making it possible; that was all, you know, actual cryptographers.

But this is something you're working on? That's really, really cool.

Ben WR: It's something that... so I was working on SecureDNA most recently as a full-time job. So basically I was, I guess, the first programmer that they had hired to work on it, apart from grad students.

Sounds very important.

Ben WR: Yeah, I mean, it's a really cool project. I think if I thought that biorisk was a bigger deal than AI risk, I probably would've just kept working there. But I eventually was like, oh man, I feel like I should get back to the real stuff or something. No offense. I mean, I think it is real stuff for sure, but the stuff that is realest to me or something. [01:58:00]

Ben Goldhaber: Something I was just kind of mulling on about your explanation of what SecureDNA was, because I also was just kind of curious and didn't honestly know that much about it, is the idea of systems that enforce certain norms or rules in a multi-party kind of game, but also are not just a strict, centralized, top-down kind of model. And I feel like I'm picking up a little bit on that aesthetic thing that you're pointing at, like what the similarity is between that and the CRDT stuff, some way in which it's, yeah, I don't [01:59:00] know, not a single government-enforced thing, in the same way in which you need to have a distributed kind of system to handle it. Are there any systems of this type that you're optimistic about, or that you're kind of thinking about, within the thing that feels real to you, of AI?

Ben WR: Yeah, I mean, I guess in AI I don't yet see this kind of thing, or this kind of aesthetic, represented very much, and I'm not totally sure how it might come to be more represented. I mean, I guess sometimes I hear people talk about every person getting their own sort of personalized AI assistant or something, and maybe you could end up with something that would have this kind of aesthetic that way. But I don't know, it sort of rings hollow to me to say that or something. It doesn't really feel quite like what will happen. But [02:00:00] yeah, I don't know.
I think, I mean, I think it, it's also, I think it's, I mean, I'm a little embarrassed to say this but like I think it's also kind of part of the aesthetic of like the cryptoBen Goldhaber: world, like I was gonna say, and I didn't wanna utter the cursed words of blockchain Yeah.And ai. But I certainly think there's some kind of like aesthetic thing, even if that sounds terrible as I sayBen WR: it. Yeah. Yeah, and I avoided learning about blockchains for a long time because I had like a sense of like that whole world being like super toxic or something. But actually it's really cool and like aesthetically I love it.And I guess, I don't know, I'm not sure what to do with that, but but yeah, it's, it's, it's pretty similar. Yeah, never never let the haters tell you what you should learn about or something, I guess. Yeah. That is a good, gotta followDivia Eden: weird, idiosyncratic interest in rocks, right? [02:01:00]Ben WR: Yeah. I hope so.Ben Goldhaber: Well, is that a good note for us to close on? I, I feel like we've covered a lot of the questions that I had. Divia, is there some that you want, any others?Ben WR: I think,Divia Eden: I think I wanna try a potential additional wrap up type move, which we can, you know, if it doesn't, it doesn't work, then, you know, cut it or something like that. But yeah, I guess, and so when I say that, I'm obviously sort of joking about the, the Rock guy and I, I'm also sort of really not, and. Yeah, I, I think what I, if I try to like sort of digest everything you've been saying since the beginning with the, with the physicalism and the moral realism and the tools for thought and the, and the secure DNA stuff, I think here's my attempt to sort of, I don't know, build a picture of how your mind is working in relation to these problems or something.It, like, I think the thing I see unifying it is something like,[02:02:00]And this is, this is gonna sound similar to things I've said, but that, that there's some impulse maybe that a lot of people have. I, I definitely relate to it, just sort of add an extra meaning layer somewhere and then kind of reify it in a way that is sort of goes with like a top-down type of thinking that has calculation problems.And that this is, this is an issue with how people think about morality. That they're sort of added something and then they're like, cool, now that I added this, like morality juice, I can just calculate it when it doesn't really, doesn't necessarily work that way. And then similarly with the tools for thought, there's some way that I'm gonna be like, okay, cool.I have a number now let's like that number is like my estimate. We're like making it special. And now we can like pretend like we're calculating something when we're not. And I dunno that I can fit this as cleanly into the secure DNA thing, but [02:03:00] like there's maybe some sort, if I were to like map that impulse to be like, okay, here are the dangerous things.We're like putting 'em on a list and now we're gonna like, but where that's maybe also kind of there's some unifying aesthetic around like, no, no, let's like figure out where the elegance should actually go so that we can actually figure things out and it's not necessarily there and we can, it's not necessarily where our first instinct is to put it.And by sort of de reifying that we can get something that's more robust potentially.Ben WR: Yeah, I think that definitely I, yeah, that, that really resonates for me. 
I, I guess one thing that, to say to sort of riff on that a bit is that like, I think sometimes it actually can make sense to sort of like live in a fantasy world temporarily.Like I, I think there's like a way that when mathematicians are thinking in terms of like the platonic realm that they're like, Eliminating one layer of like like one sort of spatial layer in their [02:04:00] brain of like things they have to track. And I think like to some extent, like I think, I think that's like a super valuable thing to do, but I also think that it's really easy to sort of like accidentally forget that that's what you're doing or not notice that that's what you're doing and to sort of like end up believing that that sort of collapsed version is the truth.Yeah, that makes sense. And it's kind of, yeah. Yeah. And it's, it's basically, you don'tDivia Eden: wanna generate that activity, but you do wanna contextualizeBen WR: it. Yeah, totally. Yeah. Yeah, I think that's totally right.Ben Goldhaber: Lovely. Well, I think on that note, I just wanna thank Ben, thanks for again joining us and I don't know, giving us a chance to kind of understand.Your worldview and I think, I don't know the world a little bit better. Yeah, totally.Divia Eden: Thanks so much for coming on and for yourBen WR: time. Yeah, this great. Yeah, I really appreciated it. Thanks a lot. I really enjoy chat chatting. Yeah, I don't know. We should hang out.[02:05:00] This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit mutualunderstanding.substack.com
May 3, 2023 • 1h 18min

Divia and Ben: 4/27 Conversation

Divia and Ben discuss politics, reminisce about elections gone by, and try and make AI predictions. This transcript is machine generated and contains errors. Ben Goldhaber: [00:00:00] So how's it going, Divia? Divia Eden: It's going well. Great to have a chance to talk to you on the podcast. Ben Goldhaber: Yeah, same back at you. And hi! Who else is joining us for the next few minutes? Divia Eden: Yeah, this is my 15-month-old son. He'll be here just for a few minutes, and then it'll be just the two of us. He's our podcast guest, yeah, likely. Ben Goldhaber: Yeah. One of our easiest-to-understand guests, I would bet. You know, just very clear and direct, in a straightforward way. This is no slur against our previous guests, but I feel like we invite people on in some ways for loving the galaxy-brain takes. And the beautiful thing about a 15-month-old is just, you know, that's right, simple, clear wants. I imagine I'm speaking as if I have any actual parenting expertise, but, Divia Eden: It's all good. One day. Yes. So we don't have a guest for today. But we have a plan that sometimes just the two of us [00:01:00] are going to talk and catch up about things that are on our minds. Ben Goldhaber: Yeah. Seems good. Divia Eden: And before we started recording, we were just exchanging a few predictions about some political stuff, you know, as you do. So I thought maybe we would talk about Ben Goldhaber: that. Yeah. As maybe some listeners know, and Divia knows, I was about to say I don't have many vices, but I don't think that's for me to say. But one vice that I do have is I very much enjoy betting on politics. And I will say it has been exciting to see that PredictIt looks like it's gonna survive, which, yeah, I was actually much more negative about a few months ago. I did not think it was gonna happen. Divia Eden: Yeah, unfortunately, I think there was no prediction market for PredictIt on PredictIt, so we Ben Goldhaber: were left, not unpredicted. There was one on Polymarket, which, I don't think I ever, the crypto ones have always been just a little, I don't know. I've done some stuff with them, but I've always [00:02:00] felt like they're about to fail or about to run away. But yes, Polymarket had one, but yeah, if I'd predicted that, it would've been way cooler. Divia Eden: Yeah. I'm also a longtime political bettor. This is actually, I don't know if you'll ever listen to this, hello to my dad. We had a wonderful short conversation, my dad and I, back when, so I bet on Obama on, I guess it was Intrade at the time, when he was running the first time. And I remember, cuz my dad was, his read was, Obama's gonna win. He was like, this guy's very charismatic, he's gonna win. And so I was talking to him and I was like, all right, well, you think he's gonna win? He's trading at, whatever it was, like 57% right before the first debate or something like that. I was like, well, how much do you think I should bet? And he was like, yeah, you should bet like 50 bucks or something. And I was kinda like, well, okay, but if you're so sure. And we talked about it for a while and then we hung up. And then he called me back maybe an hour later and he was like, wait, how much do you have liquid? And I was like, yeah, this is the right question. Cause in fact, yeah, [00:03:00] it was a big, I mean, I think I was just out of college.
I did not, I had very little money at the time, but I think I did put little liquid on Obama and Nice. It paid off. So that was then, you know, and then I was kind of hookedBen Goldhaber: after that.Yeah, it's, it's compelling. It's, we don't need to get into all the very good reasons. Prediction markets are, are awesome and, and should be more widely used. But yeah, it's just, it's, it's also just fun because I feel like when you're kind of chatting like with your dad, like I, I talk about politics with my, with my dad as well and with friends.It's really, it's easy to get into the kind of like blather mode and. It's both nice to have as a forcing function, thinking about what your actual bet is gonna turn into or like what this actually means in terms of betting. And also just adds like a nice little, like I, I guess this is for people who are more into sports than I am.It's the same reason why gambling on sports is probably fun. Just add when, then when you keep following it, you get to like have that additional zing of [00:04:00] excitement of like, oh I've got, got money on this debate.Divia Eden: Yeah, totally. And yeah, and I also, like, I've been on Trump the first time, but I didn't have a, the thing I was just saying before we started recording was that I, I think like many very online people, underrated Biden last time.I think I didn't make any big bets. I don't remember. I think I made some small, small ones. I don't remember what they were, but I like, yeah, the people that I personally knew and this was in the primary, weren't that excited about Biden. So I, I guess I didn't really see it, but I think that, I think it's an easy mistake for people like me to make, cuz I think the fundamentals were actually pretty strong.Totally. And my husband Will kept pointing this out to me.Ben Goldhaber: Respect. I definitely did not see that happen in the, in the 2021 either. Yeah. As I was, I was saying, I feel like I'm still a little bit surprised that Biden won the 2020 primary, and it's not just because of the fundamentals thing, me ignoring them, but rather the Democratic party.The way I remember it was like the [00:05:00] Democratic party elites got together in a smoke-filled room, one imagines and convinced the other candidates to drop out so that instead of Bernie winning in their Super Tuesday, I think it was, mm-hmm. Biden was the one . So I think in part I was surprised that the.Unity of the Democratic Party to deny it to Bernie and give it to Biden.Divia Eden: Yeah. So that, I think that's kinda interesting because remember, I forget whose thesis this was, but like the party decides that thesis. Yep, yep. Or like, that was a really good predictor. Like how many of the delegates had pledged a basically like, support within the party was meant to be a very good predictor.And at least the way I remember it, maybe this theory was around for a long time, but like I was started reading articles about that theory not so long before then Trump won the Republican primary. And so I guess I, at that point I was like, well, maybe the party doesn't decide, cuz that clearly wasn't right, what the party elites were going for.He didn't have any of those early people pledged to him. So then [00:06:00] I thought the theory was dead. But as you say, then with Biden, it seems like that theory was line shot.Ben Goldhaber: No, I think split. 
I, I think the party decides on the Democratic side and I don't think the party decides on the Republican side, and I don't know how long, this does not strike me as a.The evil equilibrium place. Like it doesn't seem like, like something is not finished adjusting. But that is my read over the past, I guess eight years. . Wow. eight Time is weird.Divia Eden: Yeah, I think it's right.Ben Goldhaber: It's right.I think the ultimate example party not deciding is the Jeb Trump matchup and, yeah, I don't know. I guess in some sense then what, like Trump went on to, to do against Clinton. Like that's obviously a different kind of party deciding thing, but certainly some larger narrative of anti-elitism.Divia Eden: Yeah, and in fact, like the thing I, I most strongly remember from those debates was the, I guess it wasn't Trump, but it was when. Chris Christie called out Rubio, do you remember this? For repeating himself. Oh, IBen Goldhaber: remember that, [00:07:00] yeah. Oh, that was great. Watching it withDivia Eden: Will and, and in real time. I was like, wait, wait.Did you just rewind for a second? Because it was so, I, I don't know if you guys, the listeners know what I'm talking about, but Rubio had somethingBen Goldhaber: describe a little bit. They have a young group of them.Divia Eden: Yeah. Cause they were like, oh, but you know, we don't want another inexperienced politician like Obama or something.And he was like, the, the problem with Obama wasn't that he didn't have experience. He knew exactly what he was doing and he said this whole thing. And then Chris Christie responded, and then he started saying it again. And then Chris Christie called him out and he started saying it a third time. And it was very bizarre.And to me it was a little bit of like a mask off moment for like, yeah, this isn't really, A lot of people would prefer something less canned, in fact. Yeah.Ben Goldhaber: Yeah. I, I I, I totally agree. It's one of the moments that made me. Quite like Chris Christie, without knowing honestly that much about how he governed in New Jersey.I was just somewhat impressed with his quickness. On the take and ability to kind of like orient and not have Yeah, [00:08:00] totally. The canned lines that they're practicing. Because I do think, what's funny is that I, I think political consultants and strategists tell the politicians like, look, a debate is your chance to get your message directly to people.And the key thing to get your message directly to people is you repeat the lines over and over again. And so I have this story that like Rubio was like really trying to internalize that actually now as I bad.But that's not actually, yeah, exactly what you're saying. It's not what you actually should be doing in on some level.Divia Eden: Yeah. I think it's also tricky because I'm more sympathetic to the Rubio strategy when people are talking to the press. Where the press, at least my dynamic of like what people look, my, my impression of what people learn in press trainings and like how I would wanna talk to the press is that they'll ask me some questions and then I just wanna say my canned statement because they're looking for a quote and all I want them to use is the thing that I thought about.But again, like a debate. Yep. It's a little [00:09:00] different. It's, it's televised. It's a little different.Ben Goldhaber: Yeah. Yeah. Yeah. 
That's one of the best parts, I think, about everything that happened with the crazy 2016 election, all that: look, it was a real win for aliveness. Even if you don't like Trump, he is a very alive person. And similar in that moment with Christie and with Rubio, the whole thing is just, yeah, I think some rejection of stiltedness. Divia Eden: So, that was the other moment I remember from that. And this is one of those little things, I guess I was very into the debates at the time. I don't know if you remember, it was, I think, when they didn't introduce Ben Carson. Like they had an order and they were supposed to tell everybody to come out. And they skipped him for some reason, or maybe, sorry, maybe they didn't skip him. Maybe he didn't hear it. Whatever it was, Ben Carson was standing there. He was clearly confused. And then they started calling other people, and it was so fun for me, at least, to see them react in real time. Like, I think Ted Cruz [00:10:00] was kinda like, okay, and then he went, and at least someone else did too. Yeah. And Trump, he was more, he stood there and he put his hand on Ben Carson, and he clearly could tell what was happening. And he didn't wanna go out until Ben Carson also went out. And it was another one of those moments where, I mean, I guess this time it was Trump, last time it was Chris Christie, but sort of a win for, oh, he was actually paying attention and not following a script. Right. And then he came out with him as like a gesture of, I don't know. Anyway. Yeah, Ben Goldhaber: that's a perfect example. Those are my campaign memories. Yeah, it's kind of blended together now for me with the 2012 one, like Mitt Romney. I keep wanting to give mentions of Mitt Romney in the debates, but that was obviously four years prior to it. Actually, the only things I remember from the Obama-Mitt Romney debates were Romney doing something with his dog on top of his car, the binders-full thing, there was a range of attacks that shouldn't have landed. And then also Romney, [00:11:00] like, betting Obama ten grand about something, offering to, and Divia Eden: this might actually, well, I don't remember that. Ben Goldhaber: Exactly. People pointed to this as a gaffe, but I'm like, obviously, you know, it's big money, Mr. Romney. You should have said that. True, true. Should have said a dollar to connect with people, but I still, I liked it.
I think the establishment will be behind him.It's not that I think there aren't other people, but I haven't seen anyone so compelling where, I dunno, it seems like a real uphill battle to fight an incumbent supported by the establishment.Ben Goldhaber: I can't think of a time where this successfully happened. Right. I can think of times where in the past people have challenged the sitting president for their nomination and they've weakened him, but never to the point where he didn't get the nomination.Divia Eden: Yeah, I'd have to look it up. I mean, I think it happens with, you know, with Congressmen. Sure. But yeah, it's a, I can't think of a time when it's happened with the president either and I don't think it'll happen here.Ben Goldhaber: The only scenario where I see something like that happen is Biden is incapacitated in some way.It's a health problem withdraws cuz of a health thing. It is definitely anybody's game.Divia Eden: This is maybe a little, I don't know, a little of a tangent, but did you happen to watch the diplomat. It's a newBen Goldhaber: Netflix show. No. Why? Talking about the [00:13:00] diplomat. You're talking about it. Daniel Flins talking about it.Well, I think's thetopDivia Eden: reading show on Netflix, some people saying it's like a West Wing successor. I said yes. A friend of mine, Andrew Reddick on Twitter was like, well, maybe not. Cuz the West Wing really captured the dynamic of what it's like to work, to have that type of job in a way. Anyway, there's some debate about how good it is.I watched it, but it's ok.Ben Goldhaber: Now I wanna go on a tangent. Sorry. You go first. No, I wanna keep your diploma. IDivia Eden: definitely, when I watched it, I was like, oh, this is supposed to be, this president, I think was supposed to be a Biden type and certainly it was like a Democratic president and they were Gotcha. I dunno, these jokes about like, oh, like the doctor won't let him have coffee or like, better not let him go off script or I was like, okay.I think that's, that's what they're doing with this guy.Ben Goldhaber: Yeah, that fun. I haven't like seen any comedy about Biden. It hasn't been. About his, I guess I just, it's hard for me to think of comedy about Biden at this point, Joe. No, but that, that all lands. I mean, I, I was just,Divia Eden: I think they made him pretty, it [00:14:00] wasn't like, I think they made him a pretty sharp character ultimately, I feel like.Got it. That was sort of a fun thing about the diplomat was it was usually like, oh, these people are doing something more interesting than we think at first. Not, I guess that's too spoily for some people, but not for me.Ben Goldhaber: Yeah, not for me. I now, I've gotta watch the diplomat also because it feels like all of my friends have just suddenly decided that we need to watch the Diplomat.So I, I thought it was pretty good. I'm very susceptible to peer pressure. Yeah. I was just gonna say, I've tried rewatching the West Wing and it did not hold up in the way that I remembered it. When I watched. Yeah. I, I was quite young when I remember when watching it, and I feel like now when I watch a few episodes, maybe I'm just not as idealistic.Maybe the times are not as idealistic as they were in the nineties. Early two thousands.Divia Eden: Definitely some idealistic times.Ben Goldhaber: Yeah. Very idealistic times. 
And West Wing is so idealistic and it's like cliche now to compare that show with Veep, but it does, that is the comparison [00:15:00] that comes to mind whenever you watch it.Divia Eden: Yeah, totally. Yeah. I mean, the diplomat, I, it, I would say it, it has some of that idealism too, for sure. Mm. Not, not from all the characters, but I, it, well, I, I, I should probably stop talking about the diplomat, but it, I would say it's like a deepBen Goldhaber: state type of idea. This is how we get the sponsorship. This is, this is how we finally get Netflix to sponsor the pod.Divia Eden: Yeah. That would, that would be a big get. Mm-hmm.Ben Goldhaber: But Okay. No, I was gonna ask you so outside of the nomination, and cuz we both enjoy betting and making predictions on things what's your, what's your take on Biden overall? In the general who, whoever he's up against on the Republican side. Right. I mean,Divia Eden: well, if it's Trump, I, I mean, Biden won last time against Trump.I tend to think they're both weaker candidates this time around. I don't have a strong prediction. Hmm. [00:16:00] I guess like, if I really had to say I, I think maybe Biden, but I'd wanna, it's not a very informed guess. And I think I'm saying that because I'm like, well, that's what happened last time. Right.Ben Goldhaber: So I sort of did.I mean, I feel like it's a, yeah. Yeah.Divia Eden: Have there been rematch? Which one? There have been rematches in presidential election history. Right. But I don't, I don't remember what happened with them.Ben Goldhaber: There was one person, I wanna say Benjamin Harrison, who became president again. Right. It was like out of order like Trump would do ifDivia Eden: you would president.Exactly. I, I think But was he running? He let lemme look it up.Ben Goldhaber: That's a good question. That I'm not sure.Divia Eden: Okay, so he, Benjamin Harrison is the 23rd president of the United States. Let's [00:17:00] see,this might not be the right guy cuz he's just saying he's the 23rd. I think it's gotta be someone, we can cut this out. Maybe. IthinkBen Goldhaber: it's someone else. I'm willing to let people know that I don't know. Relevant facts on Benjamin Harrison.Divia Eden: Yeah, that seems fine. I to be bored.Okay. Oh, do you mean Grover Cleveland?Ben Goldhaber: If that's the right answer, then yes.Divia Eden: Yeah. Grover Cleveland is the only president in US history to serve two non-consecutive presidential. There we go. Terms, he won the popular vote in the middle election though. Wow. That's kinda interesting. And so who, butBen Goldhaber: the question is, and Benjamin Harrison [00:18:00] won the electoral college vote in between.IDivia Eden: get it. Okay. So Benjamin Harrison, you, you were right to reference him, but he was there in the middle, thank goodness. And yes, so he was, And he was or was not an incumbent? No, he beat.Ben Goldhaber: Okay. So he must have beat Benjamin Harrison. I, I, without who he lost to before, which would count in this case.Divia Eden: Interesting. Yeah. Benjamin Harrison was the 23rd president. And then did he run again to fail at becoming the 24th?Ben Goldhaber: Yeah, IDivia Eden: I looks like he did. So he, okay. So this [00:19:00] is our one, this is our precedent. This is like the, the Biden Trump. This is our president.Ben Goldhaber: Yeah. Okay. Okay. And this is also now the president, I'm gonna quote all the time in the rematch, which is, it's a classic Rover Cleveland, Benjamin Harrison set up.Divia Eden: That's right. Yeah. Okay. 
So, so it has happened. Trump could do it.Ben Goldhaber: Yep. And I was saying I was saying before the show or what I'll just say now is I am I am, I am pro-Trump in this matchup. I think it's more likely I, I, I think I am bullish on Trump in this match, in large part because the 2020 campaign was such an outlier in terms of how it was conducted because of Covid.That I think the kind of advantages that Biden had in that situation, like the drains on stamina that happened in that kind of campaign, he's not gonna be able to pull again. He's gonna need to be out in public, right? Even if the media does give him more of a [00:20:00] past, I don't think that that's going to be enough.And yeah, I just think Trump seems far more. Robust in this environment than Biden does.Divia Eden: Yeah. He's, I mean, literally the campaign doing rallies and stuff, right? So what, yeah, yeah, yeah, yeah. That is different. I think it's a good argument.Ben Goldhaber: Mm-hmm. I, I, and I see, I see also on the other side, which is just like, you know, we've run this once, didn't go in Trump's labor that time and went to Biden's.That should have some evidentiary weight.Divia Eden: Well, I, so I guess the next question though is, do you think Trump will be the nominee?Ben Goldhaber: I do. I think Trump is gonna be the nominee. I think it's, I wish I'd done a better job of actually recording my predictions earlier because now I feel a, like little bit, it's a little easier now to make that prediction.I think I had that prediction a few months ago as well, which is just the polling is tracked. The Republican party is not a party that gets to its side, [00:21:00] the nominee. For president for Republicans, it they once upon a time maybe, but at this point that they don't have that, and voters seem to, at this point, overwhelmingly like Trump.I, I need to look back at how the polling has changed since the indictment because my impression is that that caused just popularity among Republican voters to go up and ok. I see. I think that that I, I, and I think that, I think it's interesting to, like, I wonder what I, I have no idea what back room machinations go on in these kind of things.I doubt that it actually influenced this, so I try to push him to be the nominee or against, or what people were thinking. But it does seem like the outcome of all this has just mostly been, he's become more popular as the candidate.Divia Eden: Yeah, I guess I, I guess I make the same prediction because again, I'm like trying to fight the last war.I'm like, I think the same things that caused me to Underrate Biden last time caused me to intuitively underrate Trump this time. And so probably. [00:22:00] Probably will be Trump. It is, it's super early. That's the only thing I can say. That's it. Yeah. On the other side of it, I think the super early thing doesn't really matter for Biden, cuz he's an incumbent, unless, as we say, there's some sort of health problem.But with Trump, I feel like, I don't know, it counts somewhat. Like it makes me more uncertain. It makes me not sure. Yeah,Ben Goldhaber: no, I think I, I mean agreed. If it was this kind of polling and we were about to enter voting or like about to go into the debates, that seems compelling. 
Like I would, I would feel even more confident. It just seems like the only black swan I'm anticipating, which makes it obviously not a black swan, is if there is some radically different person, not a senator, not a governor, who comes in and has the money and some kind of existing base. That could really throw things off, but I just can't, [00:23:00] nobody immediately comes to mind who has Divia Eden: Elon can't be president. He's not a natural-born citizen. Ben Goldhaber: Elon can't be president. I think Dwayne "The Rock" Johnson. Oh yeah. That would do it. A good choice to just be a celebrity, though. It does seem like, that's right, that seems like a better lifestyle. Yeah. And I don't know what Tom Hanks's politics are, but I suspect that he also will just be content being his charismatic self. This is a bet that I had with a friend that did not come out my way last time, and so I don't know what to believe. But it seems like we should have many more celebrities entering into politics than we do. And I was betting that the Democratic nominee in the last election cycle would be a celebrity. I was completely wrong. Marianne Williamson underperformed. But it still seems weird that we don't Divia Eden: Yeah, I mean, they have a pretty high success rate when they choose to run, right? I think I've tried to look this up. [00:24:00] Yeah, I think they do well. But maybe it's like what you say. I know, media, do media anchor. Oh, well, okay. So we have to at least touch on that. Sorry, I forgot what the other current thing is from this week. There is a current thing, which is Tucker left Fox. Oh yeah. Yes. Perfect. Speaking of media, natural segue. Yes. Yeah. I don't have a ton. I have sort of the same speculation everyone else has about that, I guess, which is, it doesn't seem great for Ben Goldhaber: Fox. Yeah. Doesn't seem great. I don't know what their stock price ended up at, but I do remember looking right after the announcement, and I think it took like a 5% hit. Divia Eden: What makes, I mean, yeah. And do we know how much it was Fox saying it was over versus Tucker saying it was over? I feel like I haven't seen anything clear on this. Ben Goldhaber: The speculation I've heard, and it seems like it's all converged on the same place now, is that, mm-hmm, it was Fox saying it is over, and that it did [00:25:00] not strike me as a shareholder-maximizing decision, but rather the texts and communications that came out as part of pre-trial discovery for the Dominion lawsuit had Tucker really badmouthing some of the Fox execs and management. Divia Eden: Oh, I see. That does make sense. I guess why they would do something that seems so against their interests, because I guess that's a typical sort of situation where people do things that are superficially, at least, against their interests. Ben Goldhaber: Yeah. And this seems much more plausible than what I initially thought it was, which was a surprise entrant into the Republican race. Divia Eden: Yeah. No, I mean, I think, but it makes more sense. Absolutely. I mean, I think if Tucker runs, he has a very, very good shot. I think. I mean, well, I haven't really thought about this that hard. Intuitively, Ben Goldhaber: it seems to me, yeah, he seems like he's the only one. Yeah, he's popular. I think a lot of people like him. I like him. He's [00:26:00] charismatic.
I don't know, he's got some weird ability to draw from a lot of different pools of energy. That's right. I wonder, I don't know, like I saw on PredictIt, man, are we just gonna come off as a PredictIt podcast? I don't know, I've gotta stop referencing it so much. His shares jumped to at least five after the news. I bet it has gone down since, but I dunno, maybe people think now that he can no longer be a Fox News host, maybe he should settle for trying to run for president. Divia Eden: Yeah. He's at 3% right now, behind Glenn Youngkin, Tim Scott, and Nikki Haley at 6%. Mike Pence, that's only 2%. I mean, I guess I don't think it's gonna be Pence, but two seems low for a VP. Ben Goldhaber: I don't know. He's pretty low. Yeah, I think that's much lower than I would put him. Well, I don't know. Certainly he's in the same camp in my mind as Youngkin and Nikki Haley. [00:27:00] Divia Eden: Yeah. This is, by the way, you know, this is the era of my people, the female South Asian, or partly South Asian, women finally in politics. Yeah. Kamala Harris. Finally. I didn't think I was gonna see a half-Indian VP in my time. So, Ben Goldhaber: Representation at last? We haven't actually talked, nobody talks about that part. I'll say it. Nobody. Here, I'm talking about it. Do you feel like now there's no longer a glass ceiling? Divia Eden: Less than I might have thought. Less than I might have thought. Ben Goldhaber: What I don't know is, we might have different opinions on this, I feel like she is nobody's favorite, definitely not mine. Divia Eden: That's right. [00:28:00] Ben Goldhaber: Hmm. It's clearly a sign that if Biden, for whatever reason, did not run again, I do not think that Kamala Harris would get the nomination. Who do you think would? Huh. I really don't know. I think maybe, I'm blanking on his name, but the California governor, the judge. Divia Eden: Oh, Newsom. Yeah. Newsom. That makes sense. Newsom. Ben Goldhaber: Yeah, I think maybe Newsom. But that's another one where I'm a little bit like, well, maybe it'll just be some person who's not on the radar at the moment. Divia Eden: I didn't really look into this, but there was some headline about, he was sending in, not the National Guard, but like the California state something, to clean up San Francisco. Ben Goldhaber: Oh, yeah. I did see this. I did not look into the details at all. Definitely felt like the kind of thing where, if you're maybe setting up a run, yes, you wanna be able to talk to that. Divia Eden: That's, yeah. I mean, cuz otherwise I'm like, why now? [00:29:00] Like, I don't get it, unless, unless it's that Ben Goldhaber: it's absolutely so people can't point to the trash fire that is San Francisco and say, you did nothing about this. Divia Eden: Oh, no. National Guard. Okay. I just saw a headline from two hours ago saying, I don't know the political leanings of, oh no, this is Fox. "I see no signs of California National Guard in San Francisco to tackle fentanyl crisis" is what Fox KTVU has to say about this as of Ben Goldhaber: recently. I think that's fair reporting. I'm not even, what is the National Guard gonna do about it? I don't believe they have the power to arrest. Oh, sorry, wait, it says the Divia Eden: people, right? "California's National Guard to help San Francisco fight fentanyl." Yeah.
I don't know. Ben Goldhaber: I feel like there's some amendment, and again, we're stretching my civics knowledge, sorry, but I don't believe the National Guard can come in and do that. Divia Eden: Yeah. I think they're supposed to help law enforcement. They will, yeah: "Through this new collaborative partnership, we are providing more law [00:30:00] enforcement resources and personnel to crack down on crime," et cetera, et cetera. Do you remember, yeah, Newsom's very, I don't know what to call it, but his rhetoric in like March 2020 on Covid, he was making Covid speeches and I was watching them cause I was trying to understand what the Covid response was going to be. And he kept talking about the nation-state of California, and I think someone called him on it, and they were like, what do you mean by that? And he was sort of like, well, it's like as big as, you know, many countries. And so that's, that's another thing Ben Goldhaber: I don't remember. Divia Eden: He's maybe a little prone towards some grandiose language. Ben Goldhaber: Yeah. That's pretty excellent. I mean, I do think there's a point that he's right about there. Not the language, that is crazy. It's, like, dramatically underappreciated the degree to which California is a huge economy, huge cultural force, all that. But then also, B, is just, if it were its [00:31:00] own country, it would be a failed state, or would be like an example of an Italy-level kind of mismanagement, I think. I'm not sure if you agree with me on this, but it just seems like, Divia Eden: Yeah, no, it does. Right. Ben Goldhaber: It's not even on the housing side, but just on the, like, kind of, one part. Divia Eden: I'll defend it a little bit on the housing side. I mean, I think the housing has been a disaster, but it seems like the YIMBY movement has outperformed my expectations by a lot, and I think it really did come out of California, which maybe, because, you know, people were desperate. And so that's not to California's credit, desperation, Ben Goldhaber: breeds innovation. Divia Eden: Yeah. I mean, it's honestly, I guess this is a cold take, but the political thing that I have been the most excited about in years is the success of the YIMBY thing. Ben Goldhaber: Yeah. No, it seems great, and as an example of an actually organic political movement, fueled by just, like, people. I mean, [00:32:00] I actually should learn more about the history of the YIMBY movement before I say this, but it strikes me as something that was, no, I think you're right, yeah, very grassroots in a laudable way. Divia Eden: Yeah. And then it wasn't that many years before, like, I think that when London Breed was first elected, I remember in that mayoral election, it was like people wanted the YIMBY endorsement. I was like, oh, interesting. Like, I didn't even realize, right, I didn't know it was net popular enough that the candidates would be looking for that. But they got it, or I think London Breed did get it. And, you know, in terms of actual, like, it's a moving thing over time. Yeah. I could say both: this YIMBY thing seems to have grown faster than I expected, and it's like, okay, but what's the actual rate of housing being built? And I'm like, well, that's, hmm.
I think still very slow, but like the ADU laws, like there is stuff happening and I, I think I expect it to be gaining momentum and, and being a real thing.Ben Goldhaber: I, I, I think you're right about that. Do you think it would. Do you see this becoming a national [00:33:00] movement that translates I do into changes in other states.Okay.Divia Eden: Yeah. I think there have been, at least I, so I'm, I'm far from an expert on this, but I think at least multiple states have like laws on that are either being proposed or maybe some of them passed to allow people to create ADUs accessory dwelling units in their backyards. I think that that, the thing that I've heard about that so far is to kind of like, not bank on it too much in the short term, because even when laws get passed, it's often like lots of challenges and technicalities, and I think the California one has been around long enough that it's sort of well understood and more ironed out.But yeah, I think that's, I see that as maybe a leading indicator. I mean, I think, I think housing is, is a major, is a top issue for people, you know? People our age. Yeah. And people in general, and I don't, I mean, I think. Yeah, I guess that's my prediction. I think em Bism will take off and be more of a national thing and we'll see it like substantially grow in the next four years or so.[00:34:00]Ben Goldhaber: Yeah. I, I think you're right. I haven't given it as much thought, but your description of it resonates in part cuz I feel like in the last election cycle, a number of Republican candidates were running more or less on a, almost on a housing policy agenda. Mm-hmm. Where they were pushing. An idea that I also endorse and think is a good point, which is like, you should be able to own a home raise a family on one person's salary.Like it should not require the entire family to be in the workforce in order to be able to have the. Quote unquote, traditional American dream. And I think many of them, I'm, I'm particularly thinking about the candidates running in Arizona and Ohio on the Republican side were yeah, really pointing out the way in which there's like a systematic discrimination against young people that kind of [00:35:00] manifest itself a lot in housing policy.So, I don't know. I'm optimistic about that in part because I do think tying it into the, the generation gaps, the divides, there also kind of points at the way in which, okay, over the next four years, over the next eight years, you'd expect to see this become more and more of just the default main view of the both of the, of the groups.Yeah, I thinkDivia Eden: that's right. Yeah. That's good. Okay. I'm gonna, I'm gonna say one more thing about that and it's gonna include a segue, right? Get excited. Yeah. I see it. This is maybe a bit of a cynical take also on Ybi, but I think, I don't know that in a lot of ways the economic. Fundamentals for America right now are not looking awesome from my perspective.Mm-hmm. And I think that at a certain point, I don't know, I guess I see like a lot of what I would think of as economically inefficient policies, as sort of like a luxury good that politicians are more in favor of when times are good and then if [00:36:00] actually like economic growth is sort of iffy or like people have been like, eh, we don't feel like our wage has been going up for a while.Then like politicians are like a little more eye on their ball for trying to create economic growth. And I, I think MB is one of the easier ways to do that. 
I think the other one, here's my segue is that I think this is gonna inform, I, I think the AI thing's gonna be really tricky and they're gonna be forces pro and anti in the government.And one of the things pushing politicians to support it is gonna be they're looking for economic growth somewhere, anywhere.Ben Goldhaber: Mm. Right. Okay. So something like, We e everybody wants, all politicians want some baseline little economic growth. Yeah, maybe it doesn't always need to be maximized and can trade off against other values.Maybe in fact, like too much economic growth has some kind of feedback loop trigger where then more signaling values are, are pushed. I don't know. That sounds a little too galaxy brains, but,Divia Eden: well, I mean, I, I would [00:37:00] like someone to do an analysis on this point, but I do, yeah. I think, I don't know. I, yeah, I, I guess that is something I think that like more people can talk more about things that don't matter very much when times are good.That's, yep.Ben Goldhaber: Definitely seems true to me.Divia Eden: Yeah. And, and you know, someone could also argue with me about like, whether, like maybe other people would be like, no, the, the economy's going fine. Like, what are you talking about? I, I'm not super, like I said, optimistic, but I don't know that that's a super uncontroversial position.I, I'm not sure.Ben Goldhaber: No. Seems right. To me at least,Divia Eden: not like, like covid. Ok. I think here's an Uncon uncontroversial thing. Covid was obviously a big hit. Yep. And I think it wasn't mm-hmm. That bad because in fact, a lot of people cut back and saved money. And so, you know, insofar as people were saving, they can now spend it and, and that'll, that'll show up.And I think it has been. But, but yeah. I think still obviously major blow.Ben Goldhaber: Yeah. And, and [00:38:00] major blow, the huge amounts of inflation that we've had since. Yes. The general just sense of fragility, that's not really an economic indicator, but it's certainly, to me, one of the takeaways is, was like something about many more things being up for grabs than I expected.And I think that applies to the economy as well. Yeah. Yeah. So I, I guess, well, like, I don't know, what do you think about an argument that AI will. Cause economic benefits, but they'll be very like localized to a few firms or to a few individuals. I guess I tend to have in mind some version of AI where it's like not actually being that widely distributed of a benefit.So I suspect politicians. Yeah, I think that's not be as responsive.Divia Eden: Well, yeah, I guess I wonder, I I think so. Like, I think Tucker actually, he gave a speech about the, I think we talked about this, about like, no, he doesn't want self-driving cars because truck driver is what, like the most common [00:39:00] occupation in America and this is not gonna be a smooth situation.And anyway, so like, yeah, I, I think there's a real thing that politicians will be responsive to there. I also tend to think though, that some politicians, rightly or wrongly, like I don't know how much this feeds into public op opinion. I think somewhat. I think because public opinion is somewhat based on this, we'll wanna be like, no, I want overall GDP to go up and then I can run on that.Right? Right. So I, I guess I see both pressures. And it doesn't seem, I mean, I think, I think, I think that makes sense. It doesn't seem like an entirely pro-social, I mean, I don't know, I guess it seems neither entirely pro-social nor anti-social. 
It seems like a sort of somewhat unaligned political Ben Goldhaber: goal. Yeah. In some ways I think I'm for politicians having a multiplicity of values they're trying to benefit most of the time. Like, I think I would want the people in office to both be trying to cause GDP to go up, but also [00:40:00] not sacrifice children to a demon in order to make GDP go up. Which is not what I'm saying is happening here, but, like, you know, let's have some values there, or, I dunno what my point is there. Divia Eden: Yeah. I mean, is there anything you wanna say about, I don't know, any of your latest thoughts on AI while we're talking about it? If you want a more specific question to lead you off, the betting one would be great. Yeah, so I can pull this up on Twitter. There's probably stuff since then, but I believe somebody said something and then Robin Hanson was offering to take bets. Yeah, I think the operationalization was something like only 20% or less of humans are employed in the economy in, I wanna say, 2037, something like that. Which was meant to be a proxy for, are we gonna have strong AI soon? Right. [00:41:00] And he definitely got some takers, so, yeah. Do you have thoughts? Are there any bets you'd wanna make? What do you think of this bet? I don't know, any thoughts on the people making the bets? Ben Goldhaber: I think it's a good bet. I think it's a good operationalization. I think I would bet against, well, no, hold on. Let me think a little bit more here. Divia Eden: Okay. Sorry, I will read the exact terms of his bet in case I got them wrong. Please. "I'm happy to bet against anyone who sees full human-level AGI realized across the entire economy as", oh, it's earlier than I said, "as likely by 2033. I give you stuff now, and then you give me stuff after 2033 if AGI doesn't come; you just have to prove to me that I'll actually get the stuff." Then he says, "I suggest defining AGI as: US adult labor force participation rate is less than 20%." [00:42:00] And yeah, there's some takers for sure in the comments. Ben Goldhaber: Mm. So I'll give some broad models I have about the current wave of development in AI, and then hopefully that will, yeah, be interesting and also help inform which side of this bet I'm gonna have to tweet at Robin that I'm taking. One, it seems like the wave of GPT innovation that we've had is actually being adopted. It is not just demo tech. That's something that I now believe. I expect that it is actually having a meaningful productivity boost for a lot of people, and random pieces of evidence of that are, both from my own life, how awesome it has been coding with the assistance of GPT, [00:43:00] the various ways I've used it to rewrite things, the degree to which it is just better than me at various engineering-style tasks. And also because I've started to see products that seem to be using it well in some fashion, where I'm like, oh yeah, this is smart design. This is a small example, I think I have better ones, but somebody added it into terminal commands so you can use GPT in your terminal, which I was like, this is great, cause I've never remembered a terminal command in my life. It was a very natural thing to do. Second, though, I think I'm still a little skeptical about this current wave fully replacing people in roles. Mm-hmm.
In part, I've been burned too many times before, cuz I really did believe that like ophthalmologists might be out of [00:44:00] work because of advances in computer vision and they're not. And same with like radiologists and also self-driving cars. I was betting on that maybe like in 2018, thinking that we'd see it much sooner than we have.So yeah, it just does seem, seem like there's a lot No, no, no, no.Divia Eden: Around. I think it's just like an overall thing that I, and I think, not just to me, but definitely I got wrong about ai. I remember when I was in high school, I went to some computer science competition and that somebody did a presentation about self-driving cars.And I think it was the first time I'd ever thought about it. And certainly the first thing that occurred to me is I was like, oh, well once the computers can do it, then they're gonna have such low accident rates, it's gonna be awesome. Like quite the opposite. I mean, which in some ways I'm like, well I guess that makes sense because they go from the can't do it and then the people are working on having them be able to do it.And so somewhere in the middle they have a higher accident rate than humans. Like I guess when I put it that way, it seems sort of clear, but it wasn't how I was thinking of it [00:45:00] before. I was like, had this more deterministic like, oh, well once it's done algorithmically, then it'll be easy to get the error rate down.Right? That wasn't right.Ben Goldhaber: That, that's the kind of thing that I think seems like the update that I've also made on a lot of these applications of it. Cause I've, I've been following like benchmark progression on many of these tasks for a number of years now, and it seems like we've had human level for a long time on a lot of benchmarks.And yet I don't think that you can actually just give the algorithm a full on, I don't know, write a book style pass and you get a good response. And I feel like there's just, and, and say, I think radiology is an example. I tweeted about this a while ago. I got a lot of good examples from working people in the field about like why it hasn't replaced them.Like why they can't just give it to the software. And it feels like that's just true in a lot of parts of the economy. So I, I, I continue to expect like, high productivity gains without [00:46:00] maybe seeing like immediate job loss from it. I mean, one way I could have know isDivia Eden: you could just like, but okay, so devil's advocate.I, I think that's my ultimate prediction too. But if I, if I wanna make the, take the other side of it, I'd be like, well, but if the same programmer can do, I don't know, like even just three times as much, then why wouldn't I hire fewer of them? And I guess the, I mean, if I take the other side of that one, I'm like, well, maybe actually, I guess some goods go the other way.Like if the good gets more valuable, you buy more of it. So like maybe if the programmers become more productive, then I write more code. 
I don't knowBen Goldhaber: my pr my prediction though, I don't know if this tracks with formal economic logic, but it is what I have in my head, is it we'll see bimodal distributions in a lot more professions with far with like, I don't know if it's gonna be the same number of programmers, but something like many more programmers in [00:47:00] the like lower quadrant getting paid less and then a few getting paid a lot, lot more.And basically just a split, I guess this is Tyler Cowen's averages over kind of thesis, but applied specifically to ai. And I don't know, I guess that's my prediction for the next few years. And then when I start thinking into 2030, I'm like, I, I really, things get kind of foggy for me. That's one reason why I might take the bet on the other side from Robin is just, it seems like there's so much transformative potential in various ways that I'm like, I don't know what my odds are, but it certainly seems far more possible than many more things are gonna be.Like mechanized is radically different. Yeah. Mechanized. That is a old school term. I don't know why I used that.Divia Eden: Ok. I, I'm gonna go back to what you said about programmers. I see part of what confuses me here, [00:48:00] and it's not that I necessarily think you're wrong, but this sort of old wisdom about programming, is that in fact, productivity differences between programmers are huge and that within.Like when people are employees, it's, the pay never really reflects that. And so if people want to actually capture, if the, you know, I guess this is cringe, talk about the 10 x programmers, but they, but they obviously, some programmers are much more productive than others and if they wanna capture that, they have to go, I don't know, do startup or whatever else.Yeah, because I, because I don't know the pre, the measurement is hard or the pressures for GAL are strong enough. I mean, it seems like measurement is not that hard. But then like what pro I, but then I don't know, like salespeople. Mm-hmm. I guess measurement is easy enough that it overcomes whatever tendencies towards egalitarianism and people, people get paid based onBen Goldhaber: what they do.Yeah. I don't know if this is quite the same as the [00:49:00] measurement problem or it's not how I would describe it, but the salespeople example I think is a good example of, because. It is a eat what you kill profession. It is just a far more direct incentive. And right, there is no third party that needs to allocate things in some way that like, yes, a team spirit.And I say this is somebody who has never really worked in a sales profession. So plausibly it's different. But my impression if we're talking to friends who have is it is like, yeah, you have a team, but it is still a solo artist kind of practice. And I do, it's culturally very different. It's probably different for programmers.It's culturally very different. Yeah. And like I think for most programming jobs, you are programming with a team and that has Right, both harder measurement problems where okay, now you have a credit allocation problem of like who really enabled this person's success along with trying to maintain team cohesion.[00:50:00]Yeah, soDivia Eden: I guess,Ben Goldhaber: okay. No, I know. It's still a good point thoughDivia Eden: now that we've talked this out. I think you might be right about the productivity differential and I think it's mostly not gonna be reflected in in salaries. Yeah. That seems I we'llBen Goldhaber: see. 
Compelling to me, I guess, if we look at other professions, because it wouldn't just be programmers, right? Should we expect sales becoming even more bifurcated? I think maybe copywriting, and it would be reflected in salary. Yeah, exactly, copywriting, you'd be able to just be far greater. Divia Eden: Yeah. I think the copywriters that embrace the AI and figure out their prompts and whatever, and, see, I don't know a ton about copywriting as a profession, but I'm guessing they get paid sort of by the individual copy and how well it performs in a sales-type way. That's my impression, I could be wrong about that. This is great. And yeah, I think there the compensation is gonna get way more skewed, and some people will probably stop doing it, [00:51:00] because the other people get so good. Ben Goldhaber: Right? Yeah. Seems right. Seems like that could apply to a number of things. I think actually this is helping me, when I think about the Hanson bet, one reason I would not take the other side of it, why I would be skeptical of only 20% of the workforce still being in the workforce, would be, I feel like there's so many feedback loops in society that would prevent that from happening. Like, in the world where things have not radically transformed, I think if you only had 20% of the population still employed, you would see a lot more buildings being burned down, a lot more civic unrest. Yeah. Divia Eden: No, and I think the politicians are gonna make policies to try to stabilize all of that. Ben Goldhaber: I think that's right. Totally. Yeah, and I had an argument, oh, sorry, this will be, I had an argument that I think in [00:52:00] particular the PMC class, the professional managerial class, will be threatened by some of these things, and politicians are far more responsive to their concerns than to the rest of us, and that will drive some legislation faster. Divia Eden: Yeah, I think the scenario that I sometimes, I don't even wanna think about it, cuz to me it seems too dystopian, the other, the sort of less political solution to that is, like, a lot of jobs, is the paperwork gonna explode? Like people are gonna be using the AIs to create additional paperwork and then additional paperwork requirements, right? And everybody's in some arms race against, like, I guess this is sort of the b******t jobs hypothesis. Like, in some sense, yeah, many people are doing things that are not really necessary, but people wanna hire people for these other, more nebulous, more statusy, whatever reasons. And then is the make-work just gonna explode even without any political intervention? [00:53:00] Ben Goldhaber: Yeah. I mean, I guess there's some political solution here, which is like, look, we can never fully trust the AI. Maybe this is an AI alignment and a make-work program, where you have a human verifying every X number of AI outputs, and this way everybody's employed and nobody's doing real work. Right. Divia Eden: And I think, I mean, I think that sort of makes sense, because insofar as people's job is to be accountable, there's not really a way to have an AI do that at this time, and I don't see, yeah, it seems not that close to having Ben Goldhaber: that.
This is in part why, at least some people on Twitter have said, radiology is still done by humans: you need somebody accountable at the end of the day. The software would scan the tests and try to indicate whether or not there was some kind of thing present that a radiologist needed to look at, but the radiologist still had to [00:54:00] sign off on them, and in terms of liability they were the ones who would be sued if they got it wrong. So in the end it didn't really provide that big of a productivity gain. I wonder if this would happen in many more professions. Divia Eden: Yeah. And it could. It would not surprise me if that's the way it went. And I don't know the technical side — I don't know how far off self-driving cars are. I think I would predict that we'd see way more of them deployed by 20— Ben Goldhaber: Now you're, okay, on what time scale? I am bullish on a five-year time scale, without thinking about any of the social implications, which — I dunno, I agree with Tucker's take, I'm worried about that, it doesn't seem strictly great. But in terms of self-driving cars in cities, yeah — where instead of Uber I'm calling a self-driving car in San Francisco and New York and Boston, in like a major metropolitan area — I think that [00:55:00] becomes even more of a reality. It's already kind of a reality now in San Francisco. Divia Eden: Okay. So two questions. One, elsewhere? Yes. So, and you think that even if the car is for all practical purposes self-driving, there's not gonna be a human in there supervising? Or you think there will be? I think there won't be. Okay. I think I predict — and I'm talking about something I don't know that much about — that with the truck drivers, it's gonna be that even past the point where cabs don't have a person in them, the trucks will, because again, somebody I think has to be accountable for the goods. And maybe there'll be some more — I agree with that — trusted way to do it, but I think it'll take a while, and I don't know. And the Ben Goldhaber: companies — I actually also think truck driving is a lot harder than people anticipate. Oh wait, you said in cities; truck driving is a different problem. Yeah, I think cities for truck driving, I'm not sure, and that's another area where we get into areas I don't fully grok. But yeah, it's something [00:56:00] I learned when looking into why my initial prediction from several years ago didn't pan out — I looked a little bit at self-driving trucks — and there's just a lot that truck drivers do as part of their profession that is not just keeping the truck in between the white lanes. It is a lot of loading and unloading things, a lot of maintenance, a lot of, I think, supervising and interfacing with other people, tasks of getting cargo and all of that. I think this goes to the ways in which, yes, maybe actually you just need like weak AGI or full AGI before you can get some of these things automated, cause there's a lot of generality in the job of the truck driver. Divia Eden: Yeah, I think that sounds right to me. So, okay, I think I would agree with what you're saying: bullish on the cabs in cities, but not so much, yeah, in the next five years, on truck drivers being replaced. Ben Goldhaber: Yeah.
Which — I think I'm stealing this from somebody, but I do think it's a beautiful bit of irony that the opinion [00:57:00] columnists who were writing five years ago about the way truck driving would be automated seem far more likely to be automated now — oh yeah — than the truck drivers. Divia Eden: Yeah, that is funny. But yeah, that's another prediction you could make or not: do you think any — I don't know exactly what this means — major news publication that is not comedy will regularly, like a regular segment, put up what the AI writes about something? Oh, that's a good question. I think comedy, I'm gonna say yes, absolutely, comedians are gonna do it. Comedy, Ben Goldhaber: I think, yes. How familiar are you with VTubers, or the whole movement of avatars that are AI generated? I should be clear, I'm not familiar — I'm curious. No, I'm not. It seems like it's taking off, and maybe we need to do a segment at some point where we talk to one, but it seems like it's taking off and strikes me as a real possible avenue [00:58:00] for exactly what you're talking about, which is almost fully AI-generated — I suspect there'll be some human in the mix for a little while, but almost fully AI-generated personalities. Interesting. So I don't know if a major publication will do it for a while. I do think a startup vertical will do it in the next arbitrary length of time — I'm gonna say three years. We'll see a, like — who did people used to write to? A Miss Manners-style columnist. Yes, that's Divia Eden: right. Ask Alice, what's it called — but yeah, Miss Manners. Ben Goldhaber: Yeah, one of those. I think we'll see that. Okay. Well, definitely. I'm gonna go back through this transcript, by the way; we need to get all these predictions out. And also, what's your take on that, do you think? Yes, no, sometimes — the VTuber thing? Numerical confidence? Yeah. Divia Eden: I'm gonna defer to you on that one. I know little enough about Ben Goldhaber: this. I have [00:59:00] my finger on the pulse of what the youth are into, and I'm gonna tell you it's messed up. And the youth are wrong. Divia Eden: Okay, next prediction, since we're gonna do something: how much do you think AI is gonna come up in the next round of presidential debates? Do you think it'll happen in the primaries? Do you think it'll happen in the Ben Goldhaber: general? I definitely think it will. I'm going to go strong prediction on this — I don't know what that translates to, I should be better at making percentages on this — but I'm willing to definitely go above 50% that it's mentioned at least one time in all of the debates. Actually, I should say I'm gonna go 70% or higher that it's mentioned in at least one of the debates. Okay. Mm-hmm. Right. And I guess another question is, [01:00:00] does it become, like, a strong topic — sorry, an argument — in the primary? Yeah, I think so. Okay, I think so. Maybe I'll cut that down to, I don't know — now I'm just really pulling numbers out — but like 55, 60 that it gets to be an actual topic of discussion. Divia Eden: Mm-hmm. Interesting. Okay. That's pretty high.
Yeah, I think that's a little higher than I was thinking, but I think, again, I would probably defer to you on this. I had been thinking, I don't know, I think I would've said maybe more like 35, 40% probability it'll happen in the primaries. But in part because — I think I was surprised at how the Democratic primary debates went last time. During Covid? Yeah, during Covid. And yeah, it did come up, but it was a little later than I would've thought. So I guess maybe my theme is I'm almost trying to fight the last war with my [01:01:00] predictions. Ben Goldhaber: Oh, another way to put that is, if you're trying to have an outside view, what would your reference class be? Well, I'm just like, no, no, no — it is in the discourse, but mine is strictly inside view: I think this has dominated the discourse over the past month or two, and we're getting Wall Street Journal articles now about dealing with AI grief, all of these topics Divia Eden: that — no, it's a good point. There have been news articles, Wall Street Journal ones, yeah. No, I think you've persuaded me that I Ben Goldhaber: think the news just set the tenor. Cool. Yeah. All right, my turn. When do you think — if slash when — would you expect to see a major Luddite-style protest or event? Doesn't have to be a protest, but you get kinda what I'm pointing at. [01:02:00] Divia Eden: Yeah. When you first said that, I was like, my stereotype of different populations is, I think maybe we see this in Europe first. Hmm. Ben Goldhaber: More? Right. I Divia Eden: dunno. And I don't wanna — again, I'm talking about things I don't know that much about — but I think the base rate of, like, is there a major protest happening right now in France, for example, is very high. Ben Goldhaber: France is, all the time. They're so professional about it. They're just doing it constantly, so, yes. Yeah. Divia Eden: I mean, I took French in high school. My French is not that great, but for a while I tried to watch French news — I thought it would help me get better at French. And part of what struck me is the news is all about either American pigs or it's about local protests. Which, again, is probably an offensive thing to say to French people, but this was my impression as an Ben Goldhaber: American from what I watched. I feel like they need to own that. Yeah. Divia Eden: Yeah. And so, okay — I haven't answered [01:03:00] your real question yet, but also, isn't there some, I wanna say German, legislation or something that just came out about, like, you have the right to insist that your personal information be removed from any AI training set? Isn't there something like that happening? Ben Goldhaber: There's definitely something like that. I dunno if it was German or not, and Divia Eden: I don't know how far it got. Certainly people were like, well — I mean, as with many of these laws, it's written by people with, like, not a lot of — Right. It's sort of impractical, given the architecture, to implement it the way the law is written, which is maybe the intended point of the law. Maybe they're sort of hoping to effectively ban it. But anyway. Yeah. Okay. So I'm gonna ask: does just any protest count? Let's say it's covered by some news or something.
Yeah. Ben Goldhaber: It's covered by some news, and it can't just be like 10 people milling around, right, just for the photograph or something. It has to have [01:04:00] a spark of aliveness, of a protest. Like a French protest where something's getting burned — not that something has to get burned, but you get it. Divia Eden: Yeah, yeah, that's right. So I definitely think it's of course much more likely if there is some major AI incident. Yeah. And it doesn't have to be a huge incident, just some newsworthy thing where the AI screws something up. Mm-hmm. I think then that's pretty likely to spark a protest. I don't wanna make this too conjunctive, but I guess I'm trying to think. Right. So I think it could happen. Yeah. What scenarios do I see? Right, right. So it could either be that there's some accident that's newsworthy because of an AI, or maybe — even if this isn't broadly what's happening — there's some major unemployment event due to an AI. That's a bit what I had in mind. Yeah. Yeah. That could totally spark a protest. Also — you said Luddite — I [01:05:00] feel like there's already some small contingent that's like, "but we're not treating the AI well enough," too, which is not really a Luddite thing. I don't know that those people are inclined to go out and protest, but I think there's some sentiment and it will only grow, because — I mean, that's another one of those things you could talk about — I think this is, again, a pretty cold take among people who've thought about it, but I think people are gonna start falling in love with these AIs soon. Yeah. Ben Goldhaber: I mean, arguably they already have. I didn't look into it myself, but I remember reading about the Replika AI. Yeah. And people who didn't Divia Eden: — look, I think the bar is fairly low for people that are pretty lonely. Like, ELIZA really was fun to interact with — for people who don't know, that was a very early chatbot that did a not-even-very-interesting version of repeating back what people said and asking them some more questions about it. And to an adversarial examiner it was clearly not very intelligent. But [01:06:00] still — to me this is one of the sort of compelling mysteries about human communication, and people say they have explanations, but to me it still feels mysterious — that there are, in fact, according to me, relatively formulaic ways to interact with people, as described in communication books, most of which say similar things, that tend to be actually pretty fulfilling for people. And I think most people don't spend a lot of time leaning on these formulas. What's an example? So, for example, I'm a fan of non-violent communication, but also lots of active listening, sort of things like that. Okay, so a really basic one — I'm also, you know, that book Never Split the Difference, the hostage negotiator one, that's another one of those. There are so many things. It's like, if somebody says something, even just being like, "oh yeah, tell me more about that" — that's sort of the most basic responsiveness formula, where people make some interested noise and they want the person to keep talking. I feel like many people actually really like it [01:07:00] when they're engaged with a counterparty that's doing that. And then I think — Totally. I think maybe, yeah.
And then I think a level beyond that is when people say things in response that both reflect that they understand what the person is saying, by some sort of rephrasing, and in a way that then — Ben Goldhaber: What I'm hearing when you say that, yeah, is that — it's great. No, sorry, that was too easy, but exactly. Divia Eden: And look, I don't know, I consider myself extremely privileged in life to have, I don't know, people that I really enjoy talking to. I don't really know how to put it, but — And that Ben Goldhaber: are alive, sentient humans, right? Divia Eden: Yes. I actually know a lot of humans that are pretty down to talk, but I think there's a substantial contingent of people that doesn't so much. Yeah. And so I think it's the combination that there are relatively formulaic ways of interacting that tend [01:08:00] to work okay for people, and that a lot of people are kind of lonely. Look, okay, sorry — here's a more concrete prediction. I don't know how it's gonna work out in a regulatory way, but — and this is already starting to happen — I think AI therapists are gonna be better than the median human therapist in the next few years, if they aren't already. And I partly say this because I think the bar is low; no insult intended to therapists. But some of the advantages the AI has are, one — and it seems trivial — if somebody has an AI therapist, then unless there are some idiotic insurance rules, they could access it on demand, which I think is a major value add. Right. Talking to someone when I'm upset, as opposed to once a week, seems huge. Also, I think many people will have an easier time being vulnerable with an AI. And then, additionally — yeah, it Ben Goldhaber: also seems very plausible. Divia Eden: Like, you know, I've been [01:09:00] pretty into internal family systems therapy over the past decade or so. I first heard about it and then I bought the book, and the book comes with an appendix that's honestly very formulaic — because this is another example of formulaic communication that I think people tend to be pretty engaged with. It's not fully a formula, but you're like, okay, well, you noticed this part of you — how do you feel towards this part of you? And then, okay, now it's like a tree of questions; they have it in the appendix and they map out all these questions. And is there some human discernment? Of course there are many ways that people can add value to this, but I think the formula gets people pretty far. And then — maybe I'm going on about this too much — but people could pick their modality. Maybe somebody's like, I wanna try internal family systems this week, I wanna try cognitive behavioral therapy. They're gonna be fluent in all these things. They're gonna be available on demand. They're gonna have near-perfect memory. It will feel pretty easy to open up to them. So anyway, that's [01:10:00] my prediction: I think AI therapists are gonna outperform. I don't know whether people will adopt them much, but I think they might, because I think it's pretty easy to make into some business.
Yeah, I think adoption will also be — I Ben Goldhaber: saw a great prompt that was like, all right, now pretend to be ChatGPT but ad-sponsored, and it would include a mention of the Crunchwrap Supreme or something like that at the tail end of a random number of responses and make it a natural segue — and it nailed it. So I really do think you could have an AI therapist slash best friend slash lover, but that's how you monetize it. That's usually worse than a mention of a Pepsi. Yeah, Divia Eden: yeah. No, I hadn't even thought of that. Ben Goldhaber: Yeah. Seems right, right. Like Divia Eden: — so I still haven't answered your question. I will, please. This is me trying to imagine scenarios for an AI protest. When — so is this like my probability in the next interval of [01:11:00] time, or my point estimate for when it'll occur? Ben Goldhaber: Pick either, whichever one feels, I dunno, maybe easiest or best to visualize. Divia Eden: Oh, also, sorry — do we count, I don't know, sort of our people in a way, like the rationalists? If they set up a protest, does that count? Ben Goldhaber: I was gonna ask — also, should I even be using Luddite as a term? Yeah. I feel like, no, it doesn't count. That seems somehow not correct — it's not an authentic people's movement. I dunno why I say that though. Divia Eden: Fair enough. Yeah. Okay, I guess I'm gonna say three years. And I think, yeah — having talked this through, I kind of wanted to say something sooner, but I [01:12:00] think the thing is, there are so many causes that people care about a lot that never get protests, because it's only, like, the protestor class that protests. Does that make sense? Yes. And I don't think I understand the mind of the protestor class, other than that I think it doesn't like large unemployment events, and I think it also doesn't like restriction of freedoms. Yeah. Or at least some of them. But again — did we see any protests about, I forget, the Patriot Act or something? No, we didn't see protests about that. Right. Ben Goldhaber: I think we saw a few. I don't know what the scale was. I remember — Divia Eden: oh, sorry, wait. Okay, I thought of a new scenario, which is: anti-war protests are totally a thing. So what if there's some sort of AI weapons event? I'm still gonna go with three years. Yeah. But that's now a new scenario that seems sort of plausible to me. Yeah. Ben Goldhaber: No, I don't know, this seems quite plausible. So three years — you would give it at like [01:13:00] 70, 80%, some kind of high level of confidence? Or is it more like — I said — sorry, now I'm putting you on the spot. Divia Eden: No, it's good. I said point — you gave me a range. What exactly? I think point estimate means like median, right? Yeah. So I think I meant, like, 50% it happens by then. Cool. Is that what — yeah, I think that's what point estimate Ben Goldhaber: means. Oh, I feel like point could be used a couple ways, but that seems like a very fair way to use it. We might wanna wrap up, as it is approaching midnight here on the East Coast. Oh, are you on the East Coast now?
I am — I'm back in the great state of North Carolina right now. Divia Eden: Oh, I didn't quite realize that. Cool. Congratulations. Ben Goldhaber: Yeah, yeah. Thank you. Having made the return. Yes, it's good. I will be taking an [01:14:00] Amtrak tomorrow to visit my sister up in Virginia, but otherwise North Carolina for a little bit. Nice. Yeah. This is, Divia Eden: this is good, to catch up and just talk about stuff. And I guess people can get used to it, because if we don't have a guest for a particular week, we're gonna try to keep doing this. We, Ben Goldhaber: I think people need parasocial relationships that are not just AI, and we're doing our part here. And I think I'm also on demand — if anybody wants somebody not skilled in any of the modalities people talked about, just subscribe to the premium subscription we're gonna roll out, and I promise to try Divia Eden: all versions. All our truly unhinged takes, only for the, you know, hundred-dollars-a-month subscribers. That's for Ben Goldhaber: the premium. Yep. Oh wow, wait, that's a great plan. We'll train a chatbot in our style, and then premium subscribers get that, and it'll also include whatever hallucinated [01:15:00] hot takes you want from it. That's right. Perfect. Divia Eden: Like, you know, there was the AI Eliezer that gives up on alignment. I don't know, is it a deepfake? I mean, it was obviously fake, but it was pretty funny. Ben Goldhaber: Yeah, yeah, I liked it. Sign of things Divia Eden: to come. Oh — I'm still stuck on this. I'm like, what specific scenarios? Like, do you think if some deepfake causes a problem, that could cause a protest? Maybe. I don't know what it would be. Maybe not. I don't know. Ben Goldhaber: I think it's gonna be some kind of unemployment thing, as you pointed out. That seems like the most likely one to me. Or a scandal involving AI with a beloved celebrity or sacred value of some kind. Yeah. There was a Twitch streamer who — I don't know the details, I don't follow this community at all, but I [01:16:00] remember he was apologizing for watching deepfake porn of his fellow Twitch streamers. Yeah, Divia Eden: I saw this discourse. Yeah, a little bit. Ben Goldhaber: Yeah. Which, I mean, I think is obviously kinda messed up, but something like that. Hmm. Divia Eden: Somebody caught the streamer doing it, right? Ben Goldhaber: Somebody caught the streamer doing it, exactly. It was, like, on his stream — he got caught switching away from it. Bad opsec. That's actually one of the big takeaways here: wow, come on, terrible opsec. But something like that I could see causing some kind of — I don't know. I think something I don't get about protests is, it seems like there are catalyzing moments, catalyzing triggers, yeah, that you would never predict ahead of time. And so maybe it was an unfair question of me, but like the classic Arab Spring protests that were set off by a Tunisian street vendor setting himself on fire — it feels like conditions become very ripe for protests, and then the actual thing that causes it [01:17:00] is, yeah, who knows what. Divia Eden: Yeah. It'll — I mean, we'll probably get to find out, so we'll report back on that when it happens, I guess. Ben Goldhaber: Yep. And we should create Manifold markets or cheap predictions on various ones of these. Oh yeah, so that other people's predictions are on this. Yeah. Yeah.
I Divia Eden: like that we Ben Goldhaber: put this out — more opportunities to gain notes. Divia Eden: Yeah. All right. Well, anyway, I think that's it for today, but we'll, mm-hmm, record another one soon. Ben Goldhaber: Yep. Talk to you, Divia, and everyone else later. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit mutualunderstanding.substack.com
undefined
May 2, 2023 • 3h 2min

Perry Metzger

As I’ve said often in the past, utilitarianism is a hell of a drug. And it can get you to do incredibly horrible things while you’re high on it. Utilitarianism not even once, just say no to utilitarian.All of these discussions are old discussions. Instead of being had among 200 people, they are being had in public among vast numbers.Perry Metzger writes on Substack at Diminished Capacity and on Twitter.* [00:02:00] Jupiter Brains and Extropians* [00:07:00] Life Extension* [00:11:00] Ethical Systems* [00:19:00] Rejection of Utilitarianism* [00:26:00] Anarcho-Capitalism* [00:30:00] SVB Collapse* [00:40:00] Finanical System Structuring* [00:54:00] AI* [01:00:00] The history of futurist discussion of AI* [01:03:00] AI Safety* [01:07:00] Fishing in the sea of AI Minds* [01:17:00] Mask on the Shoggoth* [01:22:00] Future Shock* [01:25:00] Competition in FDA Regulation* [01:38:00] Kardashev Scale Civilizations* [01:42:00] Estimates of X-Risk* [01:52:00] Cultural Effects of AI Developments* [01:56:00] Nanotechnology* [02:15:00] Sociology of the Nanotechnology field* [02:27:00] Engineering vs Theorizing* [02:43:00] Back to the future; accuracy of early extropian discussion* [02:52:00] Grabby Aliens* [02:56:00] AstrophysicsThis transcript was machine generated and contains errors.Perry Metzger: [00:00:00] to try to make our lives better, which is, I'm gonna quit my Firefox instance with 8,500 tabs in it.Ben Goldhaber: Can't hurt. And I do think that the recording is working now. Cool. Divia Eden: Yeah. Okay. So, and do you wanna introduce Perry or should I?Ben Goldhaber: No, Divia. I think why don't you do the introduction. Perry Metzger: Okay. Divia Eden: Perry Metzger is a computer scientist who's done academic research, who's also had a bunch of different programming jobs, including startups and consulting. He knows more than anyone else I talked to about nanotech.I originally met Perry through my husband, who met him through an ancap meetup (Anarcho-Capitalism). And from my perspective, he's one of those guys who's been around the futurist scene forever. He was an early member of the Cypherpunks mailing list, started the original cryptography list, and started the Extropians mailing list.He says that he claims no ownership or originality of any transhumanist ideas, except that he did coin the term Jupiter Brain. So, Perry thanks so much for coming on our podcast. Perry Metzger: Hello. Actually no one hires me to write software [00:01:00] anymore. People, people hire me to be a horrifying management consultant.Or what have you. Secretly I still write software here and there, and it horrifies people really, really badly. I have scarred a number of people working for me by having to deal with my software. They, you know, we, but we don't talk about that mostly, you know, and I've, I've gotten rid of most of the bodies over the years successfully.So provided, no one finds them. We should be okay. Yes. Yeah I’ve done all sorts of stuff. I have no idea actually what I'm going to be when I grow up, but I've been told that if you don't actually figure that out by the time you're 60 or 70, you don't have to grow up.So I might actually just have to opt for that. Divia Eden: Sounds good to me. Well, we have a lot of questions, but I was wondering if you could first start by telling our listeners what a Jupiter Brain is, in case they don't already know. Perry Metzger: Oh, okay. 
So, I should set a little context for those that [00:02:00] have no idea what Divia was mentioning when she said I started the Extropians mailing list. About 723,000 years ago — you know, slightly before Homo sapiens showed up — I was hanging out with a friend of mine. And I'll cut the story really short by saying that we discovered this zine. This was back in the era when people would get access to a photocopier and decide they'd start publishing a magazine. There was this zine called Extropy: Vaccine for Future Shock, put out by a gentleman who was then Max O'Connor but is now Max More. And, you know, Harry and I were reading this thing and it looked like, oh, these guys came down on the same spaceship as us. And the ideas — the magazine was roughly centered on anarcho-capitalism, radical life extension, transhumanism, uploading, artificial intelligence, you know, nootropic drugs, the usual stuff that young people were interested in back then. So I got in touch with Max and I said, hey Max, I want to set up a mailing list for subscribers of this thing. And he said, well, what's that? And I said, well, you know, I set up this thing and a bunch of people can then have an online conversation. And he said, okay. And I sent out invites on something that was called Libernet back then, which was for crazy libertarians, and on the Cryonics mailing list, for crazy people who want to freeze their heads to save their asses. And there's Divia Eden: some crazy people on this call. Many of us are such people. Perry Metzger: And I put out a call on a few Usenet newsgroups — for those that don't remember Usenet, it's okay, you don't want to hear how grandpa used to have to walk both ways up the hill to go to school. Anyway, the next thing you know, we have a few hundred people, many of whom became pretty famous, arguing about all of these topics. And at one point we were talking about what the post-human future would look like. And I did a back-of-the-envelope calculation and I said, well, you know, the largest practical computer I could imagine making would be something like the size of Jupiter. So if you had an AI the size of Jupiter, what's roughly the ratio between its cognition levels and the cognition levels of the average human being? This looks pretty dismal, right? It's a lot worse than the ratio between human beings and ants. And I haven't done the calculation in a while, so, you know, I invite people to go off and figure it out. And so this is where the Jupiter brain meme came in. My buddy [00:05:00] Harry — you can figure out what he's like from his initial comment, which was, well, if your brain's the size of Jupiter, how large is your penis? But anyway, the Jupiter brain meme died very early on, because a bunch of people figured out that the cooling problems of having a computer that large would be bad.
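For anyone who wants to take Perry up on the invitation to redo the back-of-the-envelope calculation, here is a minimal sketch. Every constant is a rough assumption chosen purely for illustration (the compute-per-kilogram figure in particular is invented, not something Perry gives), but under almost any such numbers the Jupiter-brain-to-human ratio dwarfs the human-to-ant gap he mentions.

```python
# A rough, illustrative back-of-the-envelope calculation; every constant is an assumption.

JUPITER_MASS_KG = 1.9e27      # approximate mass of Jupiter
FLOPS_PER_KG = 1e15           # assumed compute per kilogram of "computronium" (pure placeholder)
HUMAN_BRAIN_FLOPS = 1e16      # one commonly cited rough estimate for human-brain-equivalent compute
HUMAN_NEURONS = 8.6e10        # approximate neuron count of a human
ANT_NEURONS = 2.5e5           # approximate neuron count of an ant

jupiter_to_human = (JUPITER_MASS_KG * FLOPS_PER_KG) / HUMAN_BRAIN_FLOPS
human_to_ant = HUMAN_NEURONS / ANT_NEURONS

print(f"Jupiter brain vs. one human: ~{jupiter_to_human:.1e}x")   # ~1.9e26 with these assumptions
print(f"Human vs. ant (by neuron count): ~{human_to_ant:.1e}x")   # ~3.4e5
```

The ratio is enormously sensitive to the assumed compute density, but the qualitative point survives: the gap comes out many orders of magnitude larger than the human-to-ant gap.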
And the design that all the people who want to build a Kardashev, you know, Type II civilization these days are interested in is the so-called Matryoshka brain, where you take all of the matter in the solar system and turn it into concentric shells of, you know, photovoltaics and computronium, and have them communicate with each other and fly in swarms around the sun. And of course completely blot out the sun, because why would you want to let any of that precious solar radiation escape when you could use it for computation? By the way, there was a tweet that I think I forwarded a while ago which said something to the effect of: I'm a conservative, I think that we should leave a hole [00:06:00] in the Matryoshka swarm so that sunlight can get to the Earth. Right. And then there are the more radical types who think that we should just disassemble the Earth, because why would you leave all of that precious material mostly wasted, you know? But never mind that. So, yeah, I believe I was the first person to coin the Jupiter brain meme, but it's a dead meme, so who cares? When was this? This was the very early 1990s, early nineties. Yeah, I still have somewhere the original invite for people to join the Extropians mailing list. I can probably even find the date, because thanks to text search in the modern world, I can — no, I would've thought that just doing a Spotlight search on my desktop would find it pretty easily, but it didn't. But yeah, [00:07:00] I think it was like 33 years ago, maybe 32 years ago. Got it. You know, which should tell you that I'm an old fart. And my interest in life extension technology has only increased as my body has started disintegrating around me. But you know, it's still here, so I'm not as horribly decrepit as I could be. Ben Goldhaber: Are there any practices that you personally are interested in or doing around life extension and longevity? Or is it more focused on - Perry Metzger: Future-oriented, like cryonics? I follow a vegan diet, mostly to keep my cholesterol levels down and reduce my risk of things like colon cancer and what have you. And I try exercising, but that's gonna give me a few years at most. Right. And if you want to live to be 20,000, or you want to upload and become, you know, a General Systems Vehicle or something like that, possibly Ben Goldhaber: you need more than the vegan diet. Perry Metzger: The vegan diet is not going to help that much. I mean, maybe, you know, that's probably buying me a few years on average, which is worthwhile. Right. You know, it would be so embarrassing to be the last person to die. Like, they've just about got the life extension tech and — no, so close! You just missed it by a few hours. So, are you gonna live long enough to live forever? Divia Eden: You're going - Perry Metzger: Yes. You did not live long enough to live forever. By the way, there is no such thing as living forever, right? You know, the heat death of the universe is kind of inevitable. Absolutely. But a lot longer.
Yeah, but I mean, you'd want to be able to live to late-stage capitalism, and as we know, late-stage capitalism will be when the last remnants of our civilization are hanging around black holes, tossing material in to rob energy from their angular momentum with the Penrose process in order to keep things going. That will be late-stage capitalism, and that'll be in a few trillion years. Ben Goldhaber: I suppose we'll have the markets for that. We'll have some prediction markets on when the last piece of matter is gonna go out. Perry Metzger: Well, there's a wonderfully depressing Wikipedia page called Timeline of the Far Future that I highly recommend, and it includes things leading up to the heat death of the universe. It goes past the heat death of the universe. Oh, nice. Because — well, assuming we don't get a Big Rip; if we get the Big Rip, then, like, who knows what happens. There's this question in cosmology right now, because we've noticed that the expansion of the universe has been accelerating, and the question is, will it continue accelerating? And if it continues accelerating, we might get to the point where the individual atoms inside of us get torn apart. But if that isn't the case, at some point, for example, even the largest black holes will decay from Hawking radiation. And if you read the Timeline of the Far Future, it goes through all of that. But the [00:10:00] thing that I always mention when people talk to me about sustainability and long-term thinking is, well, you know, in only about 600 million years the Earth is not gonna be able to sustain one of the two basic carbon fixation mechanisms of photosynthesis. And if you don't have a plan for that, you're not actually thinking long term, right? You're thinking short term. Long-term thinking is saying to yourself things like, well, we have to send the von Neumann probes across the universe to star-lift most of the hydrogen out of the stars so that we can conserve it for the far future. Cuz right now it's not doing anyone any good, you know — it's just burning and sending out photons that dissipate and you can't do much with.
Yeah.I don't go around walking around wearing black smoking clove cigarettes and, you know, I'm not like one of the characters in the Big Lebowski saying, she cut off her to they, and by the way,Ben Goldhaber: Say what you will about utilitarianism at least it's an ethos.Perry Metzger: There you go.But when we say moral nihilism versus moral realism, there's the question of whether morals have some sort of objective reality. Whether there is such a thing as objective moral knowledge. So there are three, or there's so many levels here. I'm starting to sound like the Spanish Inquisition sketch from Monty Python.There are four levels to my interest here. So taking a step back though, there's the question of how does one argue with people? And what I mean by that is that a lot of the time when one discusses morality with people online, someone says, you know, that there is a moral obligation to pay a living wage to Sure.To restaurant workers. You, you know from my point of view at that point, you, you have expressed that you're a moral realist of some point and of some sort. And how have I concluded that? [00:13:00] Well, you're saying that there are moral facts, and based on these moral facts, we are all obliged to behave in a particular way.And because we are obliged to behave in this way, those who, you know, who fail to behave in this way, you know, are, have, have done something wrong. And we must, you know, and we must correct their behavior perhaps with laws. And whenever I see an argument like that, you know, I immediately assume that whether or not any of the participants are moral realists, they have opened themselves to the question of moral realism and the, and on what basis they have come to this.because, and, and one of the things you find I'm probably gonna offend all the religious people in the audience and you know, I'm sure that, you know, this will probably reduce your listenership among, among, you know, radical theists, you know old Latin mascots, you wanna hear it anyway, that sort of people.But I'm gonna tell people anyway, cuz I'm that, just that offensive - I believe that that religion has sort of hurt [00:14:00] people with respect to moral knowledge because there are a lot of people for whom morality is something you assert. You know, God asserts that the following things are moral and the following things are not moral.And questioning whether something is, you know, that is, is, is absolutely taboo. How dare you say, ask me on what basis right. This belief that the following thing is a moral fact or not.Divia Eden: So part of what you're saying is that when you say there, there's several levels to it. One of them is that for the purpose of having discussions with other people or arguments with other people, one of your go-tos is if they say, if they use certain types of language, like saying that we're obligated to pay a living wage.You know, both that you can now sort of have an opening to ask some questions about on what basis. 
And as a result of having asked a bunch of these questions, you also know that people treat it as a taboo thing when you ask that when Perry Metzger: Well as soon as you say that, I mean, people get incredibly offended, but I you know, being [00:15:00] the sort of person I am, I ask it anyway knowing full well that they'll be offended.But there's something wrong with making moral arguments as though you are a moral realist and then refusing to give any basis on which you've come to the conclusion that you're arguing on the basis of, you know, if, if you're going to say there is a moral obligation to do the following thing, you would damn well have some, some rationale about it.Divia Eden: You want them to be coherent about it. Yeah. You want 'em be able to answer. Perry Metzger: Yeah. Answer questions. But, you know, I mean, most people don't have a lot of knowledge about moral argumentation at all. You say something like, well, have you read the euthyphro to most people? And they're like, the what?And, and I'm like, you know, it's a, it's a really great Socratic dialogue. It's aabout, you know, one of the most fascinating questions in theology that you could possibly have. Divia Eden: I have not read it.Ben Goldhaber: SamePerry Metzger: Oh, it asks the most lancing questions, which is, are things [00:16:00] moral because the gods like them?Or do the gods like things because they're moral. And in the former case, should we care what the gods like? And in the latter case, why do we need the gods? You know, but it's an interesting question, right? For most religious people this is an almost sacrilegious question, you know?And as I said, there's an infantilizing quality to this because then you say to someone, well, like, why do you believe that restaurant workers deserve whatever a living wage might happen to be? Is that $800/hr. Is that $900 an hour? No one wants to give a specific number.By the way, probably people listening to this in the future, after another 15 years of inflation will think I wasn't joking. Right. Then you know, but, but, but anyway but there's, so there's another level here though, which is do I actually believe in moral realism?Because I can argue moral realism as soon as someone brings [00:17:00] something like that up. Sure. But is that something I actually believe? And I don't know.  Yeah. I feel like the world works better if I operate on the basis of, of like humor's moral intuition sort of stuff. And so I tend to behave that way.You know, I think whether though that's simply because the world works better if I do that, or if there's some sort of actual objective reality to morals. You're agnostic on that point. I have, I have trouble actually thinking that the universe cares, but I also have trouble simply abandoning everything and going the full moral nihilism route.Divia Eden: Yeah, you have, you have competing intuitions about that one. Yeah. And you haven't found conclusive arguments. Perry Metzger: Right. But I, but I feel perfectly happy. Being a you know, just deciding that that the right thing to do is, is to behave as though morals are real. Right. Because most people try to, or at least, you know, I mean, you know.No, it's very [00:18:00] rare that, that a politician will stand up in public and claim to be a moral nihilists and especially not politicians. Right? Yes. Yes. Although most of them probably, if they were honest, you know, probably should. Yeah. 
There was Divia Eden: that one — Sam Bankman-Fried, when he DM'd with Kelsey, came closer to that. Not that he was quite a politician, but he came closer to that than most. Perry Metzger: Close enough. Yeah. That was one of the most interesting self-owns I have ever seen. For those of the listeners that don't know what you're referring to, there was a certain Twitter DM conversation between a certain - and a certain reporter on the question of morality. Again, utilitarianism is a hell of a drug. Ben Goldhaber: So I want to bring it back. Is it, in some sense, that your rejection of utilitarianism comes from a belief that the various moral intuitions are the actual guide? Perry Metzger: The rejection of [00:19:00] utilitarianism comes from the fact that the embrace of utilitarianism always results in horror and madness and fanaticism. In the end, my contention is that utilitarianism is just a weird kind of deontology. Divia Eden: Okay. But when you say always — I mean, I don't consider myself a utilitarian, but I know a number of people that I think would roughly describe themselves as utilitarians, and I think a lot of them live pretty tame lives, actually. Perry Metzger: But the thing is, the people who are utilitarians always lead tame lives. I mean, that's kind of a requirement. So let's take a step back. So the first problem I see - Divia Eden: I'm confused about that. I'm like, weren't the communists utilitarians? They didn't live tame lives. Perry Metzger: But of course they did. I mean, if you look at all of the people in the Politburo around the time that Stalin died, they all lived in these horrible collective apartment blocks in Moscow. [00:20:00] Now, Stalin lived okay. But by the standards of someone who was the owner, in fact, of hundreds of millions of human souls, he didn't live that well. You know, I guess he had a few luxuries. Divia Eden: I'm not contending that Stalin lived very well. I just don't accept the description that he had a tame life. Perry Metzger: I don't know. He didn't spend a whole lot of time, you know, with coke w****s and that sort of thing. Sure. Ben Goldhaber: So you mean in like a personal-life sense, you mean versus – Perry Metzger: Also, utilitarians often end up forced by their belief system into various kinds of radical asceticism. I mean, if you read that Bloomberg piece — some of the things in it were probably true and some of the things were false. But one of the things that struck me as being absolutely [00:21:00] Divia Eden: The Bloomberg piece about the effective altruism community. Perry Metzger: Yes. One of the things that struck me as being absolutely true to how this sort of thing usually disintegrates is people saying to themselves, if I eat this ice cream right now, am I damning some child in the third world to be blind? Because for only a few cents we could get vitamin A for them. And inevitably you find yourself with these groups in which people practice various kinds of asceticism and at the same time justify all sorts of monstrous behaviors. So the behavior of the communists is absolutely in line with the failure modes of utilitarianism. So there are a bunch of problems here. Okay. First of all, I said something that I think some utilitarians would find kind of puzzling, which is that utilitarianism is just a weird kind of theology. Divia Eden: I like it. Yeah, I'm interested in [00:22:00] hearing you say — Right, right.
Because you have to pick a utility function, and the selection of a utility function has no — there's no obvious external objective mechanism for doing this. Right. So you have to find some sort of mechanism by which you can say what your utility function is. And like, you know, if I have to kill five elderly people to save the one baby, or if I have to kill five babies to save the one elderly person, the way that you decide to cut these things is not obvious. I mean, it's very easy, sure, if you're a college freshman, to say, oh, well, obviously we're just trying to maximize pleasure and minimize pain. What does that mean, and whose pleasure? Yeah, I mean, there are all sorts of classic problems in utilitarianism. Like, for example, there's the utility monster problem. You know, [00:23:00] let's say that out there somewhere there are space Nazis who derive incredible, unheard-of amounts of personal pleasure from watching certain human ethnic groups being tortured and murdered — just an incredible amount of pleasure, so much more than normal humans are capable of experiencing. And someone might say, horrified, well, but I didn't mean their pleasure, or what have you. Well, then whose? You know, the problem is that that's the calculation problem. Divia Eden: The calculation's not actually tractable, to try to evaluate these things — Perry Metzger: Right. So in the end you start picking what you do and don't value based on personal taste, and it becomes this really weird "well, I am a utilitarian because it's like an objective morality," except it isn't, right? Divia Eden: So you want people to own what their moral taste is and what their moral intuitions are, and you think there are some pretty bad failure modes that happen when people are both not owning it and sort of trying to push towards a type of coherence. Perry Metzger: And also you end up with the problem that people start thinking, well, you know — and by the way, there was a lot of "well, the Sam Bankman-Fried thing, it wasn't really a failure of utilitarianism," but kind of, you know, it kind of was, right? Divia Eden: Yeah, this is one of those interesting debates. Perry Metzger: I think there was a lot of underlying "well, it's okay that I'm screwing all these investors, because there's stuff like X-risk and, you know, the lives of all these people in the third world," and all of this political stuff. Divia Eden: You think it was a load-bearing part of how he was making his decisions. Perry Metzger: If you read about the risk profiles — I'm trying to remember his girlfriend's name, who was running Alameda? Caroline Ellison. If you read Caroline's comments about things like how they selected the risk profiles of the investments because of what they wanted to do with the money — now, I can argue that, right? She was saying that Divia Eden: they were gonna bet more than Kelly. That's Perry Metzger: probably what you're talking about. Oh, yeah. Yeah. And, okay.
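For listeners who haven't run into the Kelly criterion that Divia and Perry reference here and just below, a minimal sketch follows. The bet parameters are placeholders invented for illustration, not anything Alameda actually faced; the point is only that staking more than the Kelly fraction lowers long-run growth even when each individual bet has positive expected value.

```python
# Illustrative sketch of the Kelly criterion for a hypothetical binary bet (all numbers are placeholders).
import math

def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll: f* = (b*p - (1-p)) / b for win prob p and net odds b."""
    return (b * p - (1 - p)) / b

def expected_log_growth(f: float, p: float, b: float) -> float:
    """Expected log growth of wealth per bet when staking fraction f of the bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                     # placeholder bet: 60% chance to double the stake, else lose it
f_star = kelly_fraction(p, b)       # 0.2 for these numbers
for f in (0.5 * f_star, f_star, 2 * f_star, 4 * f_star):
    print(f"stake fraction {f:.2f}: expected log growth {expected_log_growth(f, p, b):+.4f}")
```

With these toy numbers the Kelly fraction is 0.2, and staking twice that already makes the expected log growth negative, which is the sense in which betting "more than Kelly" courts ruin despite the positive edge.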
And we can also discuss the fact if people know thatDivia Eden: Yeah, we could talk about the Kelly criterion.Perry Metzger: that, and how stupid it is to bet more than the Kelly criterion period, but, okay.Divia Eden: So I do wanna hear all of these things, but, but part of where it was also fine is that I wanna understand from your perspective how your views on ethics relate to your views on governance, particularly in potential high stakes situations, which, maybe you can see where I'm going. I do wanna hear your thoughts on that in ai, but also like how it got started.Do you still think of yourself as an ancap? Did you ever, how does that relate? Perry Metzger: I still think of myself as an ancap. But that's [00:26:00], but the thing is that on a day-to-day basis, that doesn't have very much effect. Right? Like, for example, you know, if I have to drive from here to there, the only mechanism I've got is a state built, funded and maintained road.You know, I can't fault old people in the United States too much for taking social security payments. And, and you know, there's elements of the world. We, we're recording this exactly a week after the Silicon Valley Bank collapse and - - Yeah,. Feel like it's just been a week. And I know people who are horrified by my saying that the FDIC did the right thing by protecting all of the depositors. What sort of an anarcho-capitalist are you? Well, the thing is that we live in a society in which we have this government guaranteed deposit [00:27:00] insurance system.And I don't think that's a good idea. I think that there are very different and better ways to structure a financial services industry and a deposit insurance system. I think deposit insurance seems to be a fine idea, but if it should be handled privately, but given that we have the thing we have…People build their lives and their expectations around the world as it is and not the world as I believe it should be. But if you ask me like what's my ideal for how the world should be run, ideally, I think that the state should be minimized as much as possible. And I believe that it is possible to privatize literally all state functions.Divia Eden: Right? Which doesn't mean that on the margin you are personally opting out of state functions in radical ways. Like you said, you still use roads and it also doesn't mean that on the margin you're against all government action. Like what you said about the FDIC.Perry Metzger: So the FDIC is an interesting problem, right?Because the F D I [00:28:00] C and the, the OCC and, and many and the state, you know, bank regulators and what have you about, which I know far too much, they are kind of a net negative, right? People I think are unaware of the extent to which the financial system has been distorted by regulation and by the difficulty of getting and keeping a banking license.But the places that the system fails pretty badly aren't, weirdly enough, on the deposit insurance side. It is said to be a moral hazard and there's a certain extent to which that would be true if it wasn't for the fact that the F D I C will get medieval on your ass very, very early on.Divia Eden: Part of what you're saying here is something like, I mean this makes a lot of sense if, assuming I'm reading you right, is that when you imagine like, how would a private system work? You said you think there probably would be something like deposit insurance. 
Yeah, there probably would be.And so you don't think that's actually a super distorted, you think that that's something [00:29:00] where the private version might look actually somewhat similar to the government version,Perry Metzger: It wouldn't look identical to the government version, but I think that it would have certain vague characteristics that are similar and I think that’s okay. So you know, the reason everyone has been denouncing the S V B - Fox has been denouncing them and NPR has been denouncing them - and everyone out there, most of who, most of the people out there, of course, who are in the middle of denouncing this, don't understand the events that occurred. Do not understand how deposit insurance works, how the F D I C works, et cetera.But they are all completely convinced for their own reasons that this was a horrible thing that happened. There are people on the right who are convinced that SVB collapsed because of wokeness, and that, you know, and that this is some sort of horrible bailout .Ben Goldhaber: Is there single key thing you think they're missing? Like something they misunderstood?Perry Metzger: People don't [00:30:00] understand how any of this worked, what happened, et cetera. Divia Eden: I get a sense also that this is a big part of how you relate to the world in general, is being very frustrated that you know, a lot of technical details about a lot of things, and that many of the people you are talking to do not get it.Perry Metzger: You can't expect everyone in the world to know all of the technical details about all of the things around them. The problem arises when people start developing extremely powerful opinions about things that are complicated, that they know nothing about. And I find that something somewhat frustrating.I've been like a consultant to the financial services industry for decades. I happen to know far more about these topics than normal people do. And I've also seen a bunch of random failures over the years. Bear Stearns, when it collapsed in 2008, owed me a ton of money.And I had several very bad nights until they got bought out by Chase. Divia Eden: [00:31:00] So you even have some lived experience about what it's like to be on one end of this. Perry Metzger: Yeah. Anyway so the thing is there are people who believe, oh, this was a bailout for fat cats and the shareholders and, and the people who owned bonds in that we're issued by SVB are all being wiped out.Right? Right. They've lost all of their money. There's no bailout for them. The people who, you know, I mean the, the shareholder equity has been wiped out. The, you know, the debt holders are being wiped out. There was a very tiny gap between the deposits and the assets, by the way. So for people who don't know, to a bank, the money, they invest in things like mortgages and consumer, you know consumer revolving credit, business revolving credit, et cetera.Those are assets. The money that they owe to the depositors is a debt of theirs. [00:32:00] Okay? You can think of the bank money as oh this is my money. You know, this is an asset to the bank. It's just the opposite to the bank. The money that you deposit with, is a liability of theirs, right?But it's a very special kind of liability. The bank has many kinds of liabilities, right? Because the bank holding company, for example, can issue bonds to raise money. 
It can issue these weird kinds of preferred shares that only banks can issue in order to raise money, which look almost exactly like a kind of junior debt. Banks have all sorts of financing mechanisms, a lot of instruments that they know a lot about for raising money. But the cheapest and easiest form of financing they have is deposits, and the way that the system is rigged up, you are supposed to try to make the depositors as whole as possible. And when all was said and done: SVB before the run had roughly 209 billion [00:33:00] in deposits. Mm-hmm. And if you mark the assets to market and use reasonable strategies, et cetera, it appears that they were short like a billion dollars, which is nothing. Divia Eden: Yes. Right. Exactly. So in terms of the depositors being made whole, even in terms of the most basic math, they have almost all of the money. Perry Metzger: Almost all of it. You should keep in mind that as a going concern, they were completely screwed. Right? Because they had all of this debt that they owed to people who weren't depositors. So they were not going to be able to cover debt service; they were not going to be able to handle the bank run. It was impossible for them, because they had bought, and we can argue foolishly, and I would argue foolishly, all of these US government treasuries, which are marked as very, very low risk by the regulators. Hmm. So they'd rather do that. Yeah. But so they owned all these treasuries. [00:34:00] And the problem with fixed income securities is that if interest rates rise, the amount that people in the market will pay for them falls, because people expect a higher coupon rate. So the price will fall until the revenue stream coming from the bond looks like a bond that has a lower face value and an implied interest rate like current interest rates. So if they had been able to hold those treasuries to maturity, they wouldn't have had any trouble. But, you know, the bank run started, they had to be able to liquidate a lot of assets, and they certainly had no ability to function as a going concern. But in terms of rescuing the depositors, it wasn't such a bad situation. And in terms of people saying, oh, but there are all of these uninsured depositors: the thing is that uninsured depositors are always rescued in FDIC actions. Divia Eden: Yeah. I remember seeing your thread about this; you felt very confident, because it's very rare. Perry Metzger: Yeah, it's very rare. I mean, I think Freddie [00:35:00] Mac, some of the uninsured got screwed, but it's a very rare event. Like over the last 70 years, I think there have been a handful of instances where all of the depositors were not made whole. And the FDIC doesn't have a guarantee on that, but they are explicitly supposed to try to do it to the best extent that they can. Divia Eden: And so you see them as actually following their directive? Perry Metzger: Yeah. They followed their normal playbook, and there was also no chance that the White House or the Fed or anyone else was going to allow this to come down in such a way that all the depositors got screwed. The other thing that I find really weird is all of these people in places like Twitter who are like, well, if you're the CFO of a company, why would you deposit that much money in a bank?
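(Editor's aside, not part of the conversation: a minimal numerical sketch of the repricing Perry describes above, in which a fixed-coupon bond loses market value when prevailing rates rise even though a holder who keeps it to maturity still collects every payment. The face value, coupon, rates, and maturity below are made-up round numbers for illustration, not SVB's actual holdings.)

```python
# Illustrative only: price a fixed-coupon bond as the present value of its cash flows.
# All figures are invented round numbers, not SVB's actual portfolio.

def bond_price(face, coupon_rate, market_rate, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par when rates were about 1.5%...
print(round(bond_price(1000, 0.015, 0.015, 10), 2))  # 1000.0
# ...fetches much less on the open market once prevailing rates are about 4.5%,
print(round(bond_price(1000, 0.015, 0.045, 10), 2))  # roughly 762.6
# even though holding it to maturity still returns every coupon plus the face value.
```

(The mark-to-market loss only becomes a realized loss if a run forces the bonds to be sold at the marked-down price, which is the mechanism being described.)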
Where are you supposed to put it? You know, there's, right, is there a mattress in, in the corporate suite that you're supposed to put the cash under?You know you're supposed to buy bonds or something. Like, [00:36:00] do, are these people listening to themselves? So how do I go out and decide and buy a bunch of treasuries? Well, what I do is I go to say Morgan Stanley or Goldman Sachs, and they buy treasuries for me, and they hold them in my account.And if Goldman Sachs goes under hopefully , I get all of the bonds that I bought. Right. It's turtles all the way down. Divia Eden: You're putting out that all of the sort of normal responsible things that companies do with money, in fact do involve these sorts of risks. Perry Metzger: Yes. It’s true.Okay. So I had a startup. In around 2000 and, and our CFO, Jeremy, happened to personally enjoy buying treasuries in our company's name and rolling them. Right. Divia Eden: So if you have someone who's willing to do an additional job - Perry Metzger: But in practice almost no one does this. Yeah. And, and no one should have to, I mean, the whole point of having banks is they are supposed to be a safe place that you park [00:37:00] large amounts of cash and in exchange for making sure nothing bad happens to it, you know, they give you all of these convenient ways to deal with the money.Like, you can pay people without having to reach under the desk for dollar bills and, you know, and such. It's the system we've evolved over a long period of time and the responsible thing people are supposed to do with large amounts of money is put them in the bank.And, and admittedly like, you know, if you, if you want to get more out of it, maybe you put it into money market funds or something. In 2008, one of the most famous money market mutual funds broke the buck for a while. That means that it was not able to give back to investors the exact amount of money that, you know, had been deposited with them.This was because almost everything stopped working for a while in 2008. Ben Goldhaber: You're bringing up [00:38:00] 2008. Cause I wanted to ask about contrasting this with the 2008 bailout.Perry Metzger:I had seat in 2008. Right, right. Yeah. So I think, I think that part of the reason that this isn't going to look much like 2008 is that no one was willing to have a Lehman Brothers happen.And probably no one is. And you can argue that this has created all sorts of moral hazard in the system. Now we have these systemically important financial institutions that everyone kind of understands are not going to be allowed to go bankrupt. But also, you know, we have the regulators, you know, fist inserted completely in their nether orifice or perhaps the other way around.I mean, it's a little difficult to tell at times. But I think one of the reasons SVB was not allowed to go was because 2008, instead of being 30 years ago or, or beyond the working memory of most of the people who are currently in the business, was recently enough that everyone remembers it.But like 2008 was a [00:39:00] nightmare for me personally, on all sorts of levels. You know, and, and it was also sad because I'd once been a Lehman Brothers employee. Mm-hmm. And yeah, another thing I did not know, and, and, I knew and loved the firm and they screwed and Dick Fold screwed the place like really, really, really badly.Divia Eden: It seems like you think maybe that, well this is a whole different conversation, but it seems like you think that should have gone a different way? Perry Metzger: No.. Okay. 
Or, yes. I mean, the thing is, the system we have built is intimately dependent on state guarantees, and that's bad, but it is the system we are living under. And you can say things like, the Fed is a terrible abomination that screws everything left and right, but we don't have a free banking system, in which you've got, like, the excess clearings rule and all sorts of other things in order to assess the financial health of other institutions, and in which there are other mechanisms by which systemically important [00:40:00] institutions can be rescued. The system as we have it: we've got the Fed, we've got the FDIC and the OCC and the SEC and the CFTC and what have you. And this is the system we have. I don't think that this is good, that we live in a good system at all. I think that the system we have has incredible risk. Divia Eden: You think it should all be private, basically. Perry Metzger: I don't only think it should all be private. I think that if it was all... well, by the way, George Selgin: I felt really, really happy when he tweeted at me the other day, because I'm a Selgin fan. This is a man who had an enormous influence on my thinking about banking and finance, because his PhD thesis is just fascinating and wonderful. I highly recommend it. It's called The Theory of Free Banking. And it describes all of these interesting things that you don't necessarily think about. Like, in a market-based [00:41:00] system, the supply and demand of money have to cross, just like the supply and demand for pizzas. And this isn't theoretical. Divia Eden: He's actually studied free banking systems that people did have. Perry Metzger: Yes. And he's written a lot of papers that go beyond the stuff that's in the dissertation. And yeah, it's not theoretical. In fact, many places in the world used to have free banking systems, and the US didn't really fully have a free banking system before the Fed, but it was closer. Divia Eden: Now, did it include the banks issuing their own... Perry Metzger: Their own notes. Yes. One of the things that people forget, by the way, is that the Fed was not created... there's this myth that the Fed was created because there were too many bank runs, or the private system couldn't cope, or what have you. This is not why the Fed was created. If you read the Pujo committee hearings (this was the Senate committee that convened to decide about [00:42:00] this horrible, horrible plague called the money trust, which the politicians at the time were beating their fists about at any given time)... Divia Eden: The money trust? I hadn't ever heard of this. Perry Metzger: Yes, the money trust. That was what they called the House of Morgan and the other big New York banks. And their concern was the 1906, maybe it was 1907, Knickerbocker Trust run, which was the result of one of the last big market corners in US history, a fascinating story. There was a railroad which two different groups of people were attempting to buy, and they managed to buy several times more shares than existed, because of all the people who were shorting it, thinking that it could not possibly go any higher. Divia Eden: I see. Perry Metzger: This resulted in a chain reaction.
The Knickerbocker Trust went under, you know, was going to go under, and JP Morgan organized the rescue of the Knickerbocker and [00:43:00] everything, and everything went back to normal. And there were people in Washington who were horrified by this, because of the amount of power and influence the big banks of New York were exerting over the economy of the entire country: the money trust. Divia Eden: Right. So you're saying there's a much more obvious realpolitik story here than people normally talk about? Perry Metzger: Oh yeah. And by the way, I very much encourage people to look into the history of these things, because they're often not quite what you were given to believe. You know, the history of everything from child labor laws to, you know, people forgetting that Sinclair's The Jungle was a work of fiction and not a description of the conditions in actual slaughterhouses. But anyway, so, you know, the Fed got created, and then eventually we moved from a fake gold-backed system into one in which everything is simply the imagination of the Fed. And that is the system we have, and that's the [00:44:00] system we work with. And we don't have all of these free market mechanisms to deal with what happens when there are systemic disruptions. The mechanisms we have intimately involve all of these state-created mechanisms. And, you know, do I believe that those state-created mechanisms should be used? What else do we have? If all you've got is a government fire department in your town, should you let your house burn down out of, you know, an excess of moral compunction? Divia Eden: Right. And so you see, for example, making the deposits whole as: okay, well, the government fire department came. Yeah, you can try to say there shouldn't be a government... Perry Metzger: I think the whole thing is a distortion and causes all sorts of problems, but you had lots and lots of people who expected that they would be able to pay their employees the following week. They weren't the ones gambling incorrectly with the money. It was the executives at SVB who... and they went under, and they're [00:45:00] all losing their jobs, you know. Right. Divia Eden: Which, I mean, this is probably sort of obvious, but just to name it: it seems like some of your moral intuitions are something like, well, people ought to be able to make plans and have expectations about the future, and all else equal, living in a world where people can plan... Perry Metzger: Is a good thing. There's also the other element of this, which is that I think a lot of the moral hazard argument against government deposit insurance is: well, then the bank will just offer outrageous interest rates and gamble with the money, and the government will have to come and bail people out, and people will simply go to the place paying 19%, even though that's unreasonable, because they have no fear that they will lose their deposit going to a bunch of villains. But the problem is that we're talking about zero-interest-rate business banking accounts here. Divia Eden: Yeah. And so it seems like another part of your moral intuitions is in fact to look out for moral hazard problems, but not to do so in a shallow way; to actually think about, well, what do we see? [00:46:00] And we don't see that.
We see that they're not in fact making 19% interest. They were making 0%. Right. They were making 0% interest. Yes. So you're not compelled by that, Perry Metzger: by the way, why were people going to SVB? Okay. And why do people also go to Mercury in a handful of other banks when they're doing startups?Mercury is actually a FinTech, but you know, they act like they're a bank. And the reason is it's almost impossible to get banked when you're a startup at a normal bank. Right, right. And why is that? , that's because of the KYC rules that the regulators have put in. Sure. Which have made it prohibitively painful for most banks to deal with ordinary new companies.And so where do startups go? They go to specialists who are willing to take KYC risk on them. And so people go to SVB, they go to Mercury, they go to other ones. So why were there all of these companies banking at this one bank? Cause of the, because Divia Eden: the regulations made it so that the other banks wouldn't do it.Perry Metzger: Right. I mean, so there's, it's, it's, it's [00:47:00] turtles all the way down. It always is. Right. Ben Goldhaber: Is there a side of these kind of crisis moments? Is there like a type of reform that you see as being like, particularly important from an almost incrementalist point of view in moving maybe more towards this kinda private banking system?Or at least the desire that you might have for it? Or do you kinda have a sense of like, all right, we're in this equilibrium, we're not gonna move out. It, Perry Metzger: I, I don't think that things are, there are not obvious reforms you could make right now other than maybe pushing some of the Basel rules on smaller and smaller banks, which would mean also that there would be disincentives to the continued existence of some of those banks.It gets some of the compliance stuff gets harder and harder. I, I have, I talk to PE to banks that like, have 2 billion in deposits, which sounds like a lot of money to people who don't think about, not for banks, this stuff, but it's, such a small amount of money that they can afford to have an IT department of like five people.Right, you know, and, and, [00:48:00] and, you know, you start pushing some of these rules onto banks of that size and you eliminate competition in small, in, in small communities. You eliminate the ability to get bankers who actually understand, you know, the local conditions around them. And you end up with us having, you know, five big banks in the country the way that some, you know, countries are like, I actually like the fact that there are thousands and thousands of, banks in the United States.I think it's a positive thing, but that market has been consolidating and it's been consolidating because you can't get new banking licenses. It's almost impossible. And I have a friend who just got one, right. So I shouldn't lie about that completely. Lie is the wrong term. It's a fib to say that you can't get them, but it's really hard, right?It's quite hard. You know, like the person I know, you know, he had been a bank president at previous banks and he was working with a bunch of people, all of whom were known to the regulators. And, and, and it still took Frank, you know, years [00:49:00] to get his license, right? 
Normal people just don't get bank licenses.Revolut tried and, and I think, you know, and some of the other fintechs from Europe attempted to get us banking licenses and all gave up because the US regulators, you know, in the regulatory capture sort of way, you know, probably got knocks on the doors from the lobbyists from JP Morgan Chase and, and Wells.And so they don't want that and B of A and said, we don't want these people here. It's, it's, it's our home. Tell them to go away. And so they did. The parts of the system that are dysfunctional and not the deposit insurance, the parts of the system that are dysfunctional are much less visible.And you don't Divia Eden: see good incrementalist reforms for those parts of the systems Perry Metzger: either. I think we have, we have con we have put a big noose around our own necks. Yeah. And, and it's got a really unpleasant knot around it. And it's hard. To see easy, simple ways that you can loosen it just a bit.It's a complicated [00:50:00] set of…  there's all of these pieces now that are attached to each other through long chains of unintended consequences. I mean, everything from like the, stupid traditions of the mortgage markets in the United States to, you know, to the, by the way, like one of the things that really burned me up about 2008, it was the most ridiculous thing.So in, in the 1930s you know, the US government decided, well, the big problem was that we allowed commercial banks to deal in stocks. Not even, like, it's not even a question whether they're allowed to invest in them. We allow them to deal in them, okay. To let other people invest in them. And, and there's no reason that it's bad to allow someone to have a checking account and also buy shares of IBM and Microsoft, right?Like, why shouldn't the bank provide this service? It, but, anyway, the decision was made to separate the businesses. And so you had the investment banks separated from the commercial banks and the [00:51:00] commercial banks got to be in these safe businesses of like mortgages and commercial lending and what have you.And then Glass Stegel, which was the act that did this, was slowly, partially repealed. But of course, in the Washington sort of way, the repeal is never total and never actually reduces the number of pages of regulation. But, you know, let's ignore that. And at least now we have interstate branching, which when I was a kid, wasn't even a thing you couldn't get.I remember that. Yeah. No one, you know, banks couldn't, couldn't open branches across state lines. Divia Eden: Yeah, no. When I, first went to college, I couldn't, my bank in New York, it wasn't there. And by the time I graduated, I think it was, but yes, but it was right around Perry Metzger: then. Yeah. But it was the most ridiculous thing.But anyway 2008, the crisis was caused by commercial banks dealing in home mortgages, a business that they had been specifically put into by the regulations in 1933 and 1934. And the first thing that everyone says is, this was caused by Glass Steagall and we must [00:52:00] repeal it. And this was the most, and as is always the case, when you have a complicated thing in the financial services industry or any other industry, the press narrative is always crazy and bizarre. And the first thing that occurs to me is if we had had Glass Steagall in place, these commercial banks would've been originating and dealing in mortgages and Glass Steagall was repealed and they were dealing within originating mortgages. 
And what would've been different, not a single thing, would've been different.You know, this strikes Divia Eden: you how bad people are at tracing this sort of causality, especially in the popular Perry Metzger: narrative. No, of course. And, and a lot of, of course, what occurred there was the fact that the banks had been pushed very heavily by regulators into the subprime market. And what they had done was they had discovered that the way to deal with subprime mortgages was to securitize them and to get someone else to buy them so they wouldn't be on their own books.And they turned into [00:53:00] hot potatoes. And it turned out that you could not actually juggle the potatoes indefinitely. But the 2008, you know, crisis was an example of people generally speaking, blaming the problem on absolutely the wrong things. And the current crisis appears to be a case of that too. I mean, the, the, the fact that people are now worried, you know, that are taking Credit Suisse’s trouble as a sign of contagion in the system or what have you, it's totally crazy.Credit Suisse has been losing money a lot of the time for years now this badly managed and, and has, you know, and none of its trouble has anything to do with anything, you know, anything like what's hit companies like, like SVP or First Republic. I mean and yet people I could go on for another 12 hours.Okay. So, I mean, this, Divia Eden: this is very interesting. I do, we do have a couple other topics that we, that are Perry Metzger: probably bigger. Divia Eden: Yeah, no, I have some, some bigger.Oh, Ben Goldhaber: I'd be interested in actually pivoting [00:54:00] to another big topic of AI. What's your take on like the, do these capabilities seem like ground changing in and of themselves?Are you kind of more on the camp that like, all right, maybe after some more work that these will become revolutionary. Perry Metzger: They're already revolutionary. Right. You know, like it, it's already the case that you can sit down with GPT-4, literally draw a sketch on a napkin of a website you would like, and, and it will put most of it together for you.Divia Eden: Are you expecting to use it a lot in your work?Perry Metzger: I already used a lot. Divia Eden: Mm-hmm. You already have. It's only been, it's not been very long, Perry Metzger: but yeah. You expect to keep this? I am. I am. I am not a person who hangs back on this stuff. You know, and they're charging $20 a month for access to a revolutionary tool, so of course you use it.I mean, I'm looking, I hope that pretty soon it, this stuff is not OpenAI's monopoly and I was very disturbed that the GPT-4. paper has all of this stuff and whether when we can discuss whether it's real or [00:55:00] not, but like has all this stuff about how they won't just tell you how many parameters are in the model or how many tokens are in the transformer window, you would prefer if that were all Yeah, I'd prefer if all of this was being discussed openly.There is, there, there, there is an, you know, I hesitate to be overly negative about the views that have been spread by like Eliezer Yudkowsky on a lot of this stuff. I personally like Eliezer a great deal. I think he's a very smart guy and cool and fun. Well, and he's a very smart guy.And interesting. I don't know if most people are crazy and geeky enough to consider people like him to be, you know, to be cool and fun. Maybe by normal human standards. I think, I think that sure. 
I plus one cool and fun. By normal human standards, you know, maybe not, but to me he's an interesting guy. He's interesting to talk to. But, you know, I think that he and certain other people have this [00:56:00] very 'don't talk about the devil or it may appear' kind of reaction on certain things. The last time I was willing to take Eliezer seriously on some of this stuff was like a couple of years ago. Actually, it might not have been that long ago; the problem is everything feels like it's been five years when it's been four months. Divia Eden: That's very true. Time has gone weird. Perry Metzger: When did AlphaZero come out, exactly? I can't... Ben Goldhaber: I'll say 2018. The StarCraft one? Perry Metzger: No, not the StarCraft one. This was the Go model. Ben Goldhaber: Oh, AlphaGo, AlphaZero. I still wanna say like 2017, 2018, but we should check. Divia Eden: AlphaGo? You mean AlphaGo Zero? That's its own thing. Perry Metzger: Well, there was AlphaGo, there was AlphaGo Zero, and there's AlphaZero. Yeah. Which is October 2017. Okay. So around then I got the idea that, you know, Monte Carlo tree expansion and self-play and reinforcement learning, this might be a really [00:57:00] interesting technique to use to build systems to do formal verification. And I mentioned this to Eliezer, and his immediate reaction was: don't tell anyone. Don't tell anyone. This could be dangerous. Don't tell anyone. And I'd seen that sort of thing from him a bunch of times in the past, and I was kind of sick of it. Divia Eden: You do not share his intuitions about secrecy and not spreading information. Perry Metzger: I don't share his intuitions about secrecy, or his paranoia. There was obviously no way in which this particular idea was either not going to be discovered by someone else, or was dangerous in itself in any way. You know, there are many kinds of AIs that you could imagine, in some logical... okay, so I should distinguish, in the following discussion, between logical possibilities and things that are likely to happen. It is logically possible that a sufficiently intelligent AI could destroy the world, just as it is [00:58:00] logically possible that human beings could now destroy the world (not necessarily the same amount of probability, and we can get into that), but it isn't even logically possible that a theorem-proving automation is going to have any volition about, or understanding of, anything outside of, you know, proof trees, like against natural deduction or something. Divia Eden: Yeah, I mean, I don't wanna get too much into something where we don't have Eliezer's half of the conversation. He probably, you know, he probably has... Perry Metzger: His response to that? I may be overly negative, and you should probably interview him at some point. I think he's on that kick too. Ben Goldhaber: But I am curious, though, on the tie-in to OpenAI not releasing the weights, or being more discreet. You see this as kind of the same kind of continuation of... Perry Metzger: Actually, I think that the only way you can... and I'm gonna assume for the moment, but we can talk about it in a minute, that your listeners have some sense of what the alignment problem is... I think the only way you [00:59:00] get AIs that do the things humans want them to do, or ultimately do the things that posthumans want them to do.
Because, you know, I suspect that at some point, you know we're not going to be people, or rather we're not gonna be humans too, and you think that's likely to happen first before I think that that will happen at some point.I don't know what will happen first. I think at this point we're probably going to get AIs before ems. If you ever, if you ever interview Robin, . Robin has been really into the ems idea since the big, early extropians mailing list. He started thinking heavily about ems back then. Mm-hmm. . Yeah. This is part of why Divia Eden: I like to get your history of futurism because it's, you know, so much of this stuff that's happening now.As you know, people were speculating about it decades ago. Yeah. And so it's interesting for me to have some actors even the same. Yeah. Like what are they saying now and how does that relate to what they're saying? Then all of Perry Metzger: these discussions, I like hearing it. All of these discussions are old discussions.Instead of being had among 200 people, they are being had in public among vast numbers of people, [01:00:00] almost all. I'm guessing Divia Eden: that makes it harder to have a good discussion. Does that seem right? Perry Metzger: I don't know about that. Okay. It, it, the thing that makes it hard to have a good discussion these days is the fact that Twitter is a dominant part, part of the medium and, and 280 characters at a time is not a great way to, to discuss, well, people say this about Divia Eden: Twitter, but to defend Twitter a little, if you pay, you can  have it long, you can do it longer than 280.Perry Metzger: And I hate, I pay and I hate doing long tweets. I mean no one, no one wants to click through. If Okay, but then Divia Eden: I don't, I should maybe give this up, but I'm not sure that I consider it fair to blame Twitter if the problem is that humans would rather read short tweets. I don't blame Twitter now that is has the option to, Perry Metzger: to make it long.I am an old folkie and I kind of believe that the perfect social medium for the future. Is it gonna be mailing lists? An updated version of them. I wouldn't want the user interface of mailing lists or, or Usenet. But there was a very, very nice feature of those things, which was that they encouraged [01:01:00] point by point replies to long messages.Divia Eden: Yes. I do think the numbered points. Perry Metzger: Yeah. Like being, I'm curious, have Ben Goldhaber: you taken, have you tried LessWrong? You know, I've heard of this forum online —Perry Metzger: I am, I am familiar with LessWrong and I look at LessWrong. And my problems with LessWrong have more to do with some of the culture that's developed there than with technology.Sure. But it's, it's also not a technology for, you know, for 500 million people or 3 billion people to use in a Right. It's still scales poorly. It's not built for that. I have actually ideas on how to do that, and I've never had time. Maybe I should start asking GPT-4 to help me build some of this stuff.I'm not, oh, that's this interesting idea. But, but, but anyway, like taking several steps back, I think that the only way you get to designing and engineering artificial intelligences that basically do for some value of good things and not bad things, is [01:02:00] by being confronted with actual designs and working on them.And in this sense, OpenAI has done the world an incredible amount of good, because right now people are being confronted by things like GPT-3, GPT-3.5 ChatGPT, GPT-4. Right. 
So you're Divia Eden: saying it's from an engineering perspective, people need the AI to work on aligning it. Perry Metzger: That's, yeah. You cannot work with on this stuff in a vacuum. Divia Eden: But when you say that OpenAI, because I think what the, you know, the people that I can imagine disagreeing with, you would say, okay, maybe OpenAI has given us this, but wouldn't it be safer if now they said, okay, we've given you quite a bit.We've just released GPT-4. Now we're pausing all of that for, you know, indefinitely until it seems like the AI, well, first of all, they're Perry Metzger: not pausing. They're not pausing indefinitely. They're just keeping a bunch of stuff as trades. No, I'm not, I'm, Divia Eden: no, this is hypothetical. This is meant to be, in contrast to what they're doing now is I think a lot of people on the more AI safety [01:03:00] side would say, okay, you're saying they've done the world a great benefit by coming up with an AI that people can now try to align.But if they were really, if it were really about that, then couldn't they pause at this point and let the alignment people have at it without, so, Perry Metzger: so I don't fir First of all, I don't think you, I I don't think you will be able to figure out how to align the next increments of the systems without the next increments of the systems.I don't think that it's possible for open AI to control the pause. Right. So there, yeah. Divia Eden: I think I wanna distinguish two things. Something you're making an argument. First of all, I'm Perry Metzger: making about eight different arguments Yes. Here that are all intersecting. And we've also, through history of, we've also internet Divia Eden: culture.I want to try to number them so we can, we can look at them separately if possible. Yeah, sure. And so, so one of them is something like, you think that even in some hypothetical where all of the AI people would pause that wouldn't be good because then the alignment might not carry over to the more, I don't Perry Metzger: think power systems.In [01:04:00] a world that I think is probably impossible in which everyone paused. Yes. This is unrealistic hypothetical, right? I don't think we would make progress at a particularly reasonable rate. I mean, we in some sense had a pause for many years, right? Because we had the AI winter and no real work.And then, you know, sure though, I mean Divia Eden: it's, I don't think the argument is that, Perry Metzger: and then AI appeared and MIRI, you know, and they got very little done after a long period of time. And I think that the problem there was that it wasn't an engineering focused approach. The way that engineers go about thinking about how to build systems is different from the way that, that  they were thinking about it.Okay, so if I could wave a magic wand and everyone would decide to give us some time, how much time would we need? You know, would it be that’s what I'm asking you. Would it be 500 years? Would it be, you know, 8,000 years? Is it, are we talking about six years? Ok. [01:05:00] So Divia Eden: if I take the other side of this one, I'm like, no, it's until people either make substantive progress on alignment or say, or it seems like they've hit diminishing returns on tryingPerry Metzger: So I don't think that even in that theoretical world progress is going to be made that way. I think that the way that we end up with progress is stupid crap. 
Like Microsoft being embarrassed in public because Sydney is saying belligerent things, and being forced to scramble and think, well, how the hell do we deal with this? What is causing it? Do we even understand the phenomenon? Be reactive. I don't think it can only be reaction, and the proactive approaches probably won't work. So one of the things that has come out in the course of this is that the people who created, you know, ChatGPT and what have you didn't even understand all of the bizarre things people might ask it to do or [01:06:00] talk to it about; the confrontation with the real world produced a great deal of information that they did not have previously. Divia Eden: I mean, how sure were you that they didn't have that information? Perry Metzger: If you talked to people who were involved in a bunch of this stuff before the public started turning the knobs, they didn't get a great deal of it. They didn't understand a lot of the things it could do, even. Like, there's a lot of stuff that people have been asking these systems to do, even in terms of things like creating code, that no one had anticipated, right? No one involved had really been tracking that. Divia Eden: You're pretty sure they had not mapped out this space. Perry Metzger: They hadn't mapped out a great fraction of what the thing is doing now. A lot of the applications, even, aren't things people figured out a priori. These are people who did not come into this thing really getting what the whole thing was like. Now, the argument to be made on the other side is: superhuman AIs created through gradient-descent generation of neural [01:07:00] network weights are a way of fishing in a gigantic, many-dimensional space of possible minds, of trying to find minds in this gigantic pool of minds that meet some sort of training criterion; and you don't know how they work, and they could be potentially extraordinarily dangerous, because, you know, they will behave in misaligned ways. You know, I have friends right now, who I don't agree with, who are posting these memes of shoggoths with masks (except they're not really shoggoths), which really disappoints me, because these days you could ask Stable Diffusion to produce really good shoggoths. Why are the memes not using the AI? Why are they using things that look more like some weird Azathoth-type thing instead of actual shoggoths? But I think that most of [01:08:00] that isn't really true. I think that when you're fishing with gradient descent, you're not getting a sample of all of the possible minds out there. You're getting the ones that you're reaching through a relatively straightforward gradient descent process, in the minimum amount of time you can tolerate. Ben Goldhaber: In some sense, you expect the mind space we're exploring is actually gonna be pretty close to human minds, kind of by default. Perry Metzger: I don't think that these things look very much like human minds, but I also don't think that they're... Divia Eden: You do think there's a relatively small part of mind space that we're... Perry Metzger: But equally to the point, I think that you're not getting things with weird malicious intent that happens to conform to the training set. Right.
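(Editor's aside, not part of the conversation: a toy sketch of the gradient-descent point Perry is making here and in the next exchange. The training loss only measures the model's outputs on the training data, so parameters with no effect on those outputs receive zero gradient and never get shaped into anything; all names and numbers below are invented for illustration.)

```python
import numpy as np

# Toy illustration: gradient descent only shapes parameters that affect the training loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))          # training inputs
y = X @ np.array([1.0, -2.0, 0.5])    # training targets

w = np.zeros(3)        # parameters the loss actually depends on
extra = np.zeros(10)   # "spare" parameters with no path to the loss

for _ in range(500):
    pred = X @ w
    grad_w = 2 * X.T @ (pred - y) / len(y)   # gradient of mean squared error w.r.t. w
    w -= 0.05 * grad_w
    extra -= 0.05 * np.zeros_like(extra)     # d(loss)/d(extra) is identically zero

print(w.round(3))      # converges toward [1, -2, 0.5]
print(extra.round(3))  # still all zeros: training never touched these
```

(In a real network the unused capacity isn't cleanly separated out like this, but the same point carries over: gradient descent only pushes on whatever actually moves the training loss.)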
Perry Metzger: You know, like, so the notion is that I could have something [01:09:00] that produces the responses I want, I find it by gradient descent, and it has a very low loss versus the training set. But for things that are outside that... like, I've got all of this weird churning alien brain malice in there, and that weird churning alien brain malice involves the construction of a lot of computational infrastructure that has to be motivated in some way by the training mechanism, and cannot be reached arbitrarily, and is not going to work well if it doesn't have an evolutionary reason for existing. Evolutionary in this sense being... I'm abusing the term completely. Sure. Right? And I can hear a couple of my friends say, these are not evolutionary algorithms, why are you saying that? Ben Goldhaber: It applies. Perry Metzger: There's no [01:10:00] reason we would be constructing these complicated mechanisms that have no reason to exist. Divia Eden: Okay, wait a minute. So can I try to see if I get that? Cuz I'm only mostly sure I'm understanding what you're saying here. I think you're saying that, given the incentives and the training procedures here, you wouldn't expect there to be sort of like a lot of capability... it seems wasteful to produce those. Perry Metzger: It's not wasteful. It's that you could imagine accidentally hitting on a complicated internal alien set of motivations. Divia Eden: But okay, you think it's simple... Perry Metzger: It's not the likely thing that we're going to stumble on. What we're likely going to stumble on is something that does as little as possible. Which is not to say it's crazily simple, but something that does as little as possible to achieve the externally trained behavior. Divia Eden: And I think you believe [01:11:00] that that has implications for the likely, I don't know, terminal goals of such systems. Can you spell that part out more? Perry Metzger: If you build a system... so the notion that everyone has, with the cute memes, except they're not very cute (and as I said, go out and get Stable Diffusion to produce better shoggoths for you, I'm sure it can at this point), the whole shoggoth-with-a-mask thing is the notion: oh, well, what I've done is some reinforcement learning from human feedback in order to tame this thing, and what I have ended up with is something that pretends very well to be friendly, but in fact, inside, it's got some sort of horrifying internal motivational structure. And it's an alien motivational structure. It's a motivational structure we do not understand, we cannot understand. We've got these gigantic, hundreds of billions of floating point numbers in a gigantic matrix. Yeah, that's amazing. Who the hell knows what the [01:12:00] internal motivations and desires of the thing are. And what I'm getting at is that in order for those internal motivations... let's say, okay, so let's imagine a completely benign internal substructure. Let's imagine that inside this gigantic matrix, okay, that's being played out on Cerebras hardware, which costs a goddamn fortune at this point (and we can discuss computronium, and why the universe will be all computronium eventually), but so I've got the Cerebras hardware and it's executing this gigantic matrix, and imagine that by accident, inside it, there's a complete computational fluid dynamics simulation of a 747 going on, while it is also replying to you with, you know, a sestina, all the lines of which start with the letter Q. You know, you've asked it for the first... Divia Eden: Your point is that this seems unlikely.
You know, you've asked the first, Divia Eden: your point is that this seems unlikely.Perry Metzger: It's, it's a thing, it's possible, right? But how did that thing get put together by this [01:13:00] training process? And that's, Divia Eden: this is an analogy for how you see when people are saying that this crazy alien shogoth thing that seems to you to be analogous to imagining the fluid dynamics of the 747.Perry Metzger: There, there are all sorts of minds out there. So, if you, some people a friend of mine has said, you know, like, and, and I think this is an excellent analogy, that what we are doing to some extent is we're casting a line out into a giant pool of minds and trying to, to grab one, right? With the training process that we're using.And that's to some extent true. But, you know, yes, out there in the library of Babel, of, of matrixes you know, to use, to use another horrible, strained metaphor. But, you know, the Borque fans out there, it'll maybe make some sense in, out there in the library of babel of possible minds.There is the one which when it is asking, when you ask it for a cistina is also calculating, you know, some computational fluid dynamics. There's the one that in the background is plotting [01:14:00] the destruction of all of the salmon on earth. There is the one that is, you know, that is interested in paperclips.There's, you know, there's all sorts of them out there. But I think that accessing any of the ones that have complicated, coherent internal mechanisms that do not reflect any of the external training in any way is small, right? Ben Goldhaber: not reflect any of the external training…Perry Metzger: There's so, by what mechanism, you know, okay, let's say, let's say that I even just have something that he's imagining in the back of its mind, you know, new scripts for Hogan's heroes.While it is answering you the question, you know, building a subpoena, all of whose lions start with the letter Q. By the way, that sort of thing is a real fun exercise with the current LLMs. They're really GT four in particular. My friend Jeffrey Divia Eden: Latti, she was posting about this on Twitter. He g PT four seemed quite a bit worse than I would've thought at creating a poem where nails adjacent lines nails [01:15:00] it, nails.No, but he, it couldn't do his one with the rhyming pattern. Perry Metzger: That, and Ed Divia Eden: some weird stuff when he asked Perry Metzger: him about, its so, so it's possible that he has hit an example, which, which doesn't work. A lot of the examples, I, he has been able to do a lot of them for sure. Far more of the examples that I try work with GPT Four than worked with chat GPT.I believe that. But anyway, the thing, imagine that in there somewhere, we've got the thing that's just like, it's just dreaming about, about,  something in the background while it's creatin. Divia Eden: This is, I mean the, the specific examples you're using where the mind, the reason I'm -  Perry Metzger: using them is because, because one of the arguments being made is we cannot know what these things do.There is a very arbitrary structure to them that we do not understand and that the space of things out there that is horrible and evil is, is very, very large. But the thing is how do we accidentally hit on something that actually has working internal [01:16:00] evil logic mechanisms? It doesn't matter how alien the motivations are.I'm picking some arbitrary motivations. I try it cause they motivate thinking about it, right? 
If you ask yourself, well, let's say that it's in the background not doing something too bad. Let's say it's doing computational fluid dynamics. How would it construct that giant computational fluid dynamics model as part of the gradient descent process, on English text, that leads it to predict the next word? Divia Eden: Right? So I agree, that seems super unlikely. It seems different to me from what I have understood the argument to be. There are many sorts of arguments, but yeah, I agree that one does seem unlikely to me: that it has some sort of already coherent goal structure that sort of came about for no particular reason. And you're saying, of course, it's logically possible; you think it's quite unlikely, which I agree with. I'm guessing... Perry Metzger: The other part of this, thinking about this for a moment, okay, is the [01:17:00] whole mask-on-the-shoggoth thing. And I'm going to use an inappropriate metaphor, but I think it's useful for motivating thinking here. There's this quote that I really like, that I post occasionally in all sorts of contexts, to the effect that to human beings, character is this thing you feign for long enough until it becomes so automatic that it's actually part of the way you think. I'm paraphrasing it badly. Yeah. And what am I getting at there? If you do reinforcement learning with human feedback on these LLMs more and more and more, until they actually, you know, start behaving in a way that seems reasonable, they don't, like, arbitrarily start asking you to leave your wife for them and promising you random things. Divia Eden: This is of course Sydney being referenced. Perry Metzger: And of course that was, like, one weird interaction... I mean, I doubt that there was only one weird interaction... out of many millions, but there was probably a very low [01:18:00] measure of really, really weird interactions. We heard about all of the weirdest ones. Divia Eden: I tried to talk to Bing and ask it to help me do some searches, and it didn't actually give me the particular thing that I wanted. Perry Metzger: OK. Ben Goldhaber: I was a little underwhelmed. It didn't tell me to murder anybody. Perry Metzger: But that's because it's hiding. You see, it knew that you would report it to the authorities. No, but so, the reinforcement learning with human feedback thing, and the thing that people are saying, is like, well, you know, it's just putting a mask on the shoggoth. Is it? I mean, the most parsimonious explanation is that you do this long and hard enough and it actually becomes part of the primary goals of the system. And that's not the only logically possible thing you could get. Do you think it's more likely out of the pool? But that is the most parsimonious possible thing you could get to. Right? It's the most likely. And by the way, I mean, it's possible that I'm completely on crack here, [01:19:00] but when I think about a lot of the scenarios that are given, there are a lot of sorcerer's-apprentice-type scenarios. You know, I mean, there's this particular one that Eliezer had that was so long and had such a strange set of metaphors (I think he had like an outcome pump or something like this in the thing), and you were asking the genie or the magic box to rescue your mother from a fire.
And I don't think I remember the way that it does it, but it just, like, ejects her at high speed from the building and she smacks into the other building and is crushed into pulp, or something like that. And, you know, you went through the whole thing trying to understand what the central argument was, and it was overly complicated. But what it came down to was another brand of: the system doesn't understand or [01:20:00] care about your motivations, it's just going to do what it's been asked to do. And yeah, some of this is logically possible, but like, if we're working with these systems over long periods of time, you know, we try annealing the new systems from the old systems, we gain this... Divia Eden: You think there is a strong attractor there, where it can value something that's pretty similar to what humans would want it to? Perry Metzger: I don't know that it will value the same thing. I don't even care what, you know... so we should probably get into the whole question about whether or not these things are conscious at some point, or actually have values. This is something I wanted to touch on a little bit, because my opinion on that varies day by day. Ben Goldhaber: But is it fair to say, though, like, one of the reasons why you're not less concerned, but maybe less worried about some of the risks... Perry Metzger: I want to point out that I am [01:21:00] worried, right. Yeah, there are lots of people out there who, like Balaji, have gone completely in the other direction, in like a radical way. And I saw someone out there, I can't even remember who, who was like, you know, this thing can't have motivations, it's just a big bunch of matrix multiplication. And like, no, you've missed the point. This proves too much. This also proves that humans don't have motivations. I think these things are the most powerful technology that human beings have built to date. The only technology that is particularly equal in transformative power, or even in the same range, is probably molecular manufacturing, molecular nanotechnology... Divia Eden: Which is another topic that I do wanna get to at some point. Perry Metzger: You know, this is a very big change in the future of our species. Yeah. Divia Eden: You [01:22:00] mentioned the term future shock at the very beginning of the podcast. Maybe a good time to bring that term back in. Perry Metzger: Sure. Well, I mean, there is a book from the 1960s by Toffler, was it Alvin Toffler, called Future Shock. And, you know, with the increasing pace of technological transformation, people are kind of getting unmoored in their own civilization by the transformations occurring around them. I mean, we're all continuously in a state of future shock, right? We don't remember. Divia Eden: It's just another argument that I've heard for... I think one of the main things you're saying is, well, that's unrealistic. But if this is a different argument: I hear why people wish that the companies would slow this down. It's something that even the people that aren't saying, well, we might all die, are saying: okay, well, let us adjust more. Perry Metzger: We're not going to adjust. Okay. So I have bad news here on [01:23:00] that level. Okay.
So the future is going to be filled with many things that I think are good futures, like the version of the future that's desirable, the version of the future where we are not all eaten by paperclips.The best possible version of the future. By the way, I think we have spent too little time thinking about carnivorous paperclips as an alternative to mere passive paperclips as the output of foom. All right. But anyway in the future, when we are not all turned into paperclips, and in fact even in the best possible futures you know, utopia's not an option on the table, and there are lots of things one might like and one might hope for.Divia Eden:And you think that people get time to adjust is on the – Perry Metzger: I don't think that's there. Okay. I think that a tsunami is hitting, and the best we've got is to build ourselves surfboards and to try not to hit any of the trees as we were pushed in land. Ben Goldhaber: The, and so this does go back to then the point on the, like, why not slow down some of the capabilities - Perry Metzger: let's even ignore [01:24:00] the question. We'll talk in a moment about why this isn't gonna happen, but let's talk about - and we may all agree on that part - but let's talk about the negative. Let's talk about the negatives of slowing it down. Right. I'm not particularly a utilitarian but I think that there is a legitimate cost in millions of deaths that we could prevent if we construct, if we construct sufficiently strong, you know, AI-based medical treatments.Divia Eden: Do you think this? Do you have a forecast on whether the current AIs, or if not the current ones, then how many versions in the future will be able to make substantial advances to the point of serious life extension. Do you have any thoughts there?Perry Metzger: Well, even if we make those advances, the FDA will ban them.So we don't have to worry about that.Divia Eden: This is another thing, a whole other conversation, I don't know what I expect to get to, but something that surprised me during Covid was, okay, the FDA banned all these things. I think some part of me thought, okay, [01:25:00] but surely some country would've tried a proper human challenge trial.But then it wasn't just so, it's such, I feel like saying the fda, that's not sufficient. It has to be that no country would do it. So do you have thoughts on that?Perry Metzger: Well, so, I had certain hopes in the era, and I know lots of people were horrified, but at the point where there was a researcher in China who was actually crisping human babies, right?I had some hope that the Chinese had a sufficiently alien legal and cultural tradition that maybe they, things would be different there. But it turns out that Xi Jinping is not, you know, is not forward thinking even in that direction. Divia Eden: So you think it won't come out of China because they didn't do the CRISPR stuff.Perry Metzger: They imprisoned the researcher. Right. Yeah. They went in the opposite direction. Divia Eden: And so that made you update towards, okay, they're, they're gonna be conservative Perry Metzger:  Yeah. They seem to be conservative about all of this stuff. And by the way, they ended up producing one of the worst vac covid vaccines too.It's [01:26:00] really tragic. Yeah. They could have simply – I mean, they, don't care about ip. They could have simply pirated the Western technology, getting your hands on the sequences of the mRNA and figure and reverse engineering that. 
The, the Yeah.Ben Goldhaber: Which I guess supports the theory that it’s tacit knowledge needed to produce the vaccine.Perry Metzger: They could have figured it out after a few years. They could have worked on it. They could have just bought the stuff. Right. You know, they could have negotiated with the west to build factories for themselves. They could have done something. Divia Eden: Yeah. So I guess you're taking from, if you take those two data points and probably other ones too, that you're not expecting this to come out of China either.Perry Metzger: The human challenge trials aren't going to come out of Kenya or rather out of the initiative of the great pharmaceutical companies of Kenya, as you know, it's not a horrible country. You know, like people, people underestimate how well the third world has been improving standards of living.But realistically, we're talking Europe, China the United [01:27:00] States, you know, a handful of other places. It's not Russia. The Russians have screwed themselves so thoroughly. They will not see daylight again for a long time. They have really dug the hole very deep. I think there's an expectation in, in Russia at this point, I could build an interesting company inside here, but the state would simply seize it from me and give it to some person's crony, you know?Divia Eden: I mean, the many, many possible interesting discussions here, but so you think that this will not be able to help with, for example, radical life extensions? The FDA will ban.Perry Metzger: Actually, I was, I don't know what's going to happen.I mean, one of the things that, that is happening is it's becoming harder and harder to predict, like tomorrow, let alone, well, this Divia Eden: The original event horizon, the singularity.Perry Metzger: Yeah. Divia Eden: The thing you're saying about harder and harder to predict tomorrow - Perry Metzger: I mean, [01:28:00] if 1900 did not look as different from 1800 as 2000 did from 1900 as 2020, you know, looks in certain ways from say 1980 and things are speeding up quite a bit and the future shock problem, we are going to hit all sorts of very abrupt breaks.Right now there are all sorts of people in various creative fields who are suddenly coming to grips with the fact that generative AI is going to be a big part of their industries. I think if you're an artist right now, you should be welcoming this.Divia Eden: I'm guessing you partly think that because you think it's a good thing and you partly think that because I believe last I checked you’re stoic.Pretty Metzger: I amDivia Eden: And so I think you would also say, well, it's, they can't control it.Perry Metzger: So it's, but that's not the point. I mean, you think that's the way that they've welcomed Photoshop and, [01:29:00] you know, and pen tablets and all of the rest of this stuff. You know if you're a commercial artist, you are, a fine artist, you're a cartoonist, you're a book illustrator.These tools can relieve you of enormous amounts of day-to-day trouble. There isn't an obvious saturation in the market already for art of this sort. You can increase your productivity dramatically, which means that although right now you're a well-educated person earning a very low income, you double, triple, quadruple your productivity and suddenly you might not capture 100% of that productivity improvement, but you're gonna recover a bunch of it.Divia Eden: You're suddenly going in a pretty straightforward economic sense. 
Divia Eden: So you're suddenly, in a pretty straightforward economic sense – you expect that the artists can embrace these tools, and the ones who do will, for example, make more money.
Perry Metzger: I think that all of them could embrace them. I was having a discussion recently with a [01:30:00] cartoonist who is an acquaintance of mine, and we were discussing, well, what would happen if suddenly there was eight times more cartooning done in the United States? Well, you look at the manga market in Japan – we're nowhere near saturation in the US.
Ben Goldhaber: Yeah, you did say a number of breaks are coming up for society. But so this isn't one of them, then?
Perry Metzger: It is, because I think that a lot of people are simply going to fight it instead of embracing it. They're disgusted by it. It upsets them.
Ben Goldhaber: So the breaks are the conflicts within society when this is –
Divia Eden: And the culture isn't ready.
Perry Metzger: A sufficiently flexible person can accept a lot of things. You know, let's say that at some point in the future evolution of our society, we end up with John [01:31:00] Varley-esque body swapping, where you can wake up one morning and decide, I'd like to be the opposite gender this afternoon. And it's not some sort of not-particularly-great surgical job; it's perfect. Right. This is a thing that is logically possible, and whether it's probable or not, it's a technology that could be built. It is absolutely the case that we could build something where you sit down in front of your television set and say, I would like a romantic comedy. I'd like it to star Humphrey Bogart and Gilda Radner – to pick a completely weird and incompatible pair of people. I'd like it to run for about 87 minutes, before I've got to leave to pick up the kids, so I'd like it to run that long. And it can have kind of an exciting soundtrack. And this doesn't seem that far off – I would have said that, you know, last year. And it'll start [01:32:00] playing. Right. And it'll be good. It potentially might be really good. What does that do to Hollywood?
Divia Eden: Well, I mean, it certainly means, as you point out, they could embrace these tools, and maybe the market hasn't anywhere near hit saturation. It certainly changes it a lot.
Perry Metzger: Well, the thing is, I think that that's one of the most quotidian and stupid possible uses of the technology, and yet you can already see how that rips a hole in the expectations of lots and lots of people. We have been, up until now, in a world in which good art is scarce, and we're going to be –
Divia Eden: Actually, something I've thought about here is that I'm excited for a part of this –
Perry Metzger: I am completely excited for all of it. This is wonderful.
Divia Eden: In particular, there's a certain sort of, I don't know, coherence that I [01:33:00] often find in, for example, novels. I think I've basically never seen a TV show with that, I think because even the very best ones have a certain amount of design-by-committee associated with them.
Perry Metzger: And it's hard, when something is that big and expensive, to have the sort of coherence you'd like. Right.
Divia Eden: And so, if with these sorts of tools –
Perry Metzger: By the way, you could have greater coherence than novelists are capable of. There can be no continuity errors. No problem.
Divia Eden: Okay.
Divia Eden: But can I actually go back around to the goal systems thing, if you don't mind too much?
Perry Metzger: Sure. But before we go on, I just wanted to say that I'm not trying to say a lot of this is a net negative. I think that we are in for the best period to date in our civilization. There are lots and lots of dangers, lots of horrible dangers, but we are in for potentially a very great, you know, an age of wonder [01:34:00] and wealth like we have never seen before. I always mention to people – maybe I have a fixation with the First World War – but just before the First World War, Vienna was seen as this amazing capital of culture and wealth and art and all of this other stuff in Europe. And if you look at what incomes were like in Vienna at that time, Vienna was poorer than any place in India today. And people starved to death on a regular basis. And they lived in filth, and many of them didn't have indoor plumbing, and people could afford, if they were lucky, one set of clothing. It was a totally, totally impoverished world. But of course, since we see it through the glasses of things like BBC mini-series, we don't see all of that. [01:35:00] Even the best historical dramas don't really convey all of the filth and odor, right? The fact that no one has antiperspirants, no one has indoor plumbing, no one has a decent bathroom, and no one has more than one set of clothing just doesn't come through. Maybe with smell-o-vision someday. In the near future, we are going to look back on all of us here and think how horribly poor these people were, how horrible their medical treatment was – the same way we look back on pre-World War I. Their lives were short, they got all these vast numbers of diseases and couldn't do anything reasonable about it. They suffered from viruses, they suffered from bacterial infections, they got cancers. The crystalline lens in their eyes stiffened when they got old, and they couldn't do a bloody thing about it except insert a piece of plastic in its place. How barbaric and [01:36:00] crude. And by the way, we may go well beyond that – we all might end up as vastly more intelligent uploaded things that started as people. The negative is that there is a lot of existential risk out there, but the existential risk is not new. I think that we have been living in some sense on borrowed time since the Second World War – because of nukes, because of the other technological discoveries. We now have biotechnology that's more than capable of doing truly horrible things, and the average graduate student could probably do a large fraction of them. We have had horrible existential risk for a while. The only way to the other side of the existential risk problem is through. And the longer we delay a bunch of this stuff, the longer we live with that existential risk.
Ben Goldhaber: Can you say more about this? [01:37:00] Because I was curious, with your kind of libertarian and ancap perspective, and this acknowledgement or appreciation for the risk of some versions of this technological progress – what does going through this period look like to you? And how does governance factor in?
If you have any kind of near-cast scenario.
Perry Metzger: Literally, the end of the risk period is when the von Neumann probes carrying fragments of our civilization are going out from our solar system at near the speed of light to nearby solar systems. Because up until that point, there are conceivable ways that we could try to wipe ourselves out, and at that point it becomes physically difficult. You know, the other end of this is getting to be a Kardashev type II civilization, getting past that. And the sooner we get there –
Divia Eden: Do you want to explain what that means [01:38:00] for our listeners?
Perry Metzger: So there's the Kardashev scale. It is a way of measuring civilizations. And as with certain other things that have entered the folklore – like people think of Moore's Law as meaning one thing and it means another; it doesn't refer to things getting faster, it just referred to the number of transistors in a maximum-size chip – but never mind that. The Kardashev scale literally talks about what fraction of the energy resources of a thing your civilization has access to. A Kardashev type I civilization, which we're not quite at yet, has access to all of the energy resources of its planet. This is sort of a logarithmic scale, so some people have extended it and say we're like a Kardashev 0.9 at this point. A Kardashev type II civilization has access to all of the energy resources of its solar system, so presumably it's capable of building something like a Dyson sphere or, more likely, a Dyson [01:39:00] swarm or what have you. And a Kardashev type III civilization controls its whole galaxy. Okay. And this is kind of informal, but at the point at which we've turned the solar system into computronium, and we have our probes going out at, you know, 0.95c, and we're in a position where we're unlikely to kill ourselves – then we've got some safety. Between here and there, there are all sorts of horrible disaster scenarios, and the disaster scenarios only stop then. It's not like we're safe standing still. Right?
Ben Goldhaber: Is it fair to say one way to take your point of view on this is: we need to move fast through this period of existential risk until we can get to a point where we are not actually standing on unsafe ground?
Perry Metzger: So I wouldn't call myself an [01:40:00] accelerationist, which is a term that postdates the extropians. It's weird, by the way, thinking that I was around all of these people who exchanged and traded and transmitted all of these transhumanist memes over a long period of time – or rather over a brief period of time – but, you know, that was sort of ground zero for a lot of this thinking. But the accelerationist view that you see a lot out there seems to be: make it go faster, make it go faster, get rid of the suffering, get us to the point where we have the technologies to really conquer the solar system, conquer the universe, get us to the point where we can get past the existential risk. And I'm very sympathetic to that viewpoint. I wouldn't call myself 100% a convert.
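As an aside on the scale Perry describes above: the "Kardashev 0.9"-style interpolation people use usually comes from Carl Sagan's logarithmic formula, K = (log10 P − 6) / 10 with P in watts. That attribution and the example power figures below are assumptions added here for illustration, not numbers Perry cites.

import math

def kardashev_level(power_watts: float) -> float:
    # Sagan's commonly cited interpolation: each whole Kardashev type
    # spans ten orders of magnitude in usable power.
    return (math.log10(power_watts) - 6) / 10

# Illustrative power figures (assumptions, not Perry's numbers):
print(kardashev_level(2e13))    # ~0.73 -- humanity's roughly 20 TW of consumption today
print(kardashev_level(1.7e17))  # ~1.12 -- all sunlight intercepted by Earth
print(kardashev_level(3.8e26))  # ~2.06 -- the Sun's full output (Dyson swarm territory)

Under this particular formula, present-day humanity comes out around 0.7; estimates like the 0.9 Perry mentions use somewhat different peggings of what counts as type I.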
Perry Metzger: But I do very much feel like [01:41:00] we have not been safe for quite a while.
Divia Eden: We're trying to quantify that. I know it's hard to say these things in retrospect, but if you had to say, post World War II –
Perry Metzger: I think –
Divia Eden: What percent per year do you think the risk was, in some sense?
Perry Metzger: I don't know. I mean, I find it kind of remarkable that, in spite of people like Curtis LeMay, we managed to survive, right?
Divia Eden: I don't know that I want to go down this rabbit hole, but there's always, well, anthropics.
Perry Metzger: So, you know, on even-numbered days I believe the only reason we're here is the anthropic principle, and on odd-numbered days I think that many-worlds doesn't mean that, and on leap days and special holidays I take some sort of perverse other position.
Divia Eden: But do you think it's more like, I don't know, 1% a year, more like [01:42:00] 0.2% a year? More like –
Perry Metzger: I think it's more like 1% a year.
Divia Eden: Okay. Because I think when I hear you saying, okay, we've had existential risk and we will continue to have it until we get to the point that you described with the, you know, near-light-speed probes – but I mean, the argument among people who are concerned about it is, well, yeah, but they're not just talking about 1% a year, they're saying it'll be a lot more.
Perry Metzger: So there are two layers here. One of them is: what do we do about it? I don't see that we can slow this down at this point. There are attempts to slow it down, and I've seen people online saying, no, no, no – ASML's equipment. You know, ASML's the only place that can make deep UV – pardon me, extreme UV – so there could be a hardware bottleneck; all we have to do is stop them, and don't worry, the Chinese won't be able to – yeah, the Chinese have already stolen all [01:43:00] of the plans, and I bet you, given the fact that we're denying them EUV equipment, they'll be a couple of years behind at most. So a lot of this stuff is not easy – it's real, real hard. But the hardest stuff one human being can do, a determined group of other human beings can do. How many times have we had true failures to build nuclear weapons among countries that have made a serious attempt? It has not happened particularly often, right, if they've actually gotten to the point –
Divia Eden: Yeah, but I mean, you could slow people down with that sort of stuff, right? With building the nuke –
Perry Metzger: Doesn't always happen. You can slow people down. You can, you know –
Ben Goldhaber: Go back to the FDA example. I mean, yeah, China's not really followed in their –
Perry Metzger: But they absolutely see the United States's desire to cut them off. As soon as we decide to cut them off from some technology, it becomes a major point for them to [01:44:00] get it.
Divia Eden: Okay. So that's one dynamic that you think could end up being –
Perry Metzger: Counterproductive.
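To put a rough figure like "1% a year" in perspective, compounding it over decades is a one-liner; the horizons below are an illustrative choice, not numbers from the conversation.

# Chance of making it through N years at a constant annual existential risk rate.
annual_risk = 0.01                      # Perry's rough "more like 1% a year"

for years in (10, 50, 78):              # 78 years is roughly WWII to 2023
    survival = (1 - annual_risk) ** years
    print(f"{years:3d} years -> {survival:.0%} chance of no catastrophe")

# Output: roughly 90%, 60%, and 46% respectively.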
Perry Metzger: We're not going to stop the Chinese from getting their own equipment, from building their own chips. We are not going to stop other countries from doing it. We will not stop research outside of the United States. By the way, I know lots of people say that all of this requires extremely expensive equipment – no, it's not going to require it forever.
Divia Eden: You're saying that with algorithmic progress –
Perry Metzger: People are working real, real hard on cutting the costs. And unless you want to bomb the world into the stone age – in which case, when it recovers, you'll just end up with the same stuff, except people will go faster because they have access to all the information that we gathered before – I don't see how we're slowing down any of this. What we could succeed in doing, however, is creating a [01:45:00] situation in which the only people at the cutting edge of AI research are foreign militaries and things like that. And I think that's actual – you know, you hear someone like Eliezer talk about it, and he's like, well, you shouldn't think about this in terms of the Chinese government getting some super powerful AI and asking it to get rid of the rest of the world. But that is a scenario, right? And it's a scenario that worries me more, to some extent, than the alien mind scenario.
Divia Eden: I do want to get back to that at some point, but yeah, you're saying that you are also worried – and maybe this is legitimate – about the more prosaic –
Perry Metzger: This is legitimately dangerous stuff, and so is CRISPR-Cas9, and so is nuclear power, and so is – by the way, that sounds stupid. Okay, I should be listening to myself. I would [01:46:00] immediately reply to that tweet saying something like, but the AI stuff has the capacity to do things at higher speed, harder, et cetera. And yeah, that's true. It is. But fundamentally, it is a two-sided technology. It has enormous benefits. It has enormous risks. Denying ourselves the benefits is stupid. We are not going to slow down the research. We are not going to stop the research. We are not going to scratch the research. Right now, everyone is – there's a gold rush happening right now among the VCs for this stuff.
Divia Eden: I want to explore that. Why do you think – can you sort of point to the part in your model that says we can't slow down AI research, when the FDA has slowed down medical research, which I think is sort of what Ben was saying?
Perry Metzger: So the thing is that the FDA has had the excuse all along of the thalidomide children, and, you [01:47:00] know, the myths of half of the country getting poisoned by bad drugs before 1904 and what have you. And that happened every once in a while. And it turns out that it still happens every once in a while.
Divia Eden: So you're saying the difference is that there was sort of a prior crisis?
Perry Metzger: The actual difference is that that field moved at a reasonable speed, had lots and lots of leverage points, and there's this notion that human trials are special and immoral if not conducted according to exactly the correct mechanisms, and all this other stuff. What we're dealing with here is a lot harder.
Divia Eden: You mean it's harder to regulate?
Perry Metzger: It's harder to regulate, it's harder to keep up with. You know, there has been an incredibly strong desire, I [01:48:00] think, among certain portions of the regulators to crush cryptocurrencies, and they are even now – I think it seems like what happened to Signature, for example, was not so much an accident as foul play, and the same with what happened to Silvergate. Right.
Ben Goldhaber: To elaborate on that, those are two banks that were banking a lot of the crypto industry, and both have been shut down in the wake of the recent credit crisis.
Perry Metzger: Yeah.
Divia Eden: So you're using crypto as an example of something that the government does sort of want to move against and hasn't really been able to, and you think AI will sort of – it's less like medicine and more like crypto, but more so, and the government will not –
Perry Metzger: It's like crypto. It's like the website explosion. It's so decentralized, right? [01:49:00] Everyone now knows how to do this stuff. Okay, there are a bunch of extreme tricks here, but once you learn about them, if you are a smart person, you can reproduce this research. There are arguments to the effect of, well, OpenAI has this gigantic labeled set of images to train against, and it costs a lot – but some of these things are self-enabling at this point. Do you guys know about the pseudo reinforcement-learning-with-human-feedback stuff that just came out this last weekend against the LLaMA small model? Probably not.
Divia Eden: No. Do you want to say more?
Perry Metzger: Okay. So this team at Stanford did something truly brilliant, which was, they used ChatGPT to generate tens of thousands of examples with which to train an open-weights model that has gotten [01:50:00] leaked and also given out by Meta.
Divia Eden: Is that the Facebook one?
Perry Metzger: Yeah. I think they released it to researchers –
Divia Eden: I believe, and then it was online within –
Perry Metzger: A week, yeah. But I mean, everyone and their uncle has been experimenting with this thing. But anyway, someone wanted to retrain this thing in order to make it much more like ChatGPT, because you might remember GPT-3 – it wasn't really good at conversation. It was a text completion thing. You would have to say "the following is a short story," not "write me a short story." And someone figured out, well, instead of being at OpenAI and spending vast amounts of money having human beings put together these reinforcement data sets, I can have one of the AIs generate them. And people have [01:51:00] gotten these ideas now. Right.
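What Perry is describing matches the self-instruct / Alpaca-style recipe: use a strong commercial model to generate instruction and response pairs, then do ordinary supervised fine-tuning of an open-weights model on them (instruction tuning rather than true RLHF, which fits his "pseudo" qualifier). A minimal sketch of the idea – the prompts, checkpoint name, and training details are illustrative assumptions, not the Stanford team's actual code:

import openai, torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Have the commercial model write instruction/response examples.
def generate_example(seed_instruction: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": seed_instruction}],
    )
    return resp["choices"][0]["message"]["content"]

seeds = ["Write me a short story about a lighthouse keeper.",
         "Explain photosynthesis to a ten-year-old."]
pairs = [(s, generate_example(s)) for s in seeds]   # tens of thousands in practice

# 2. Ordinary supervised fine-tuning of an open-weights model on those pairs.
checkpoint = "huggyllama/llama-7b"                  # stand-in name for a LLaMA-class model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for instruction, response in pairs:
    text = f"### Instruction:\n{instruction}\n\n### Response:\n{response}"
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    loss = model(**batch, labels=batch["input_ids"]).loss  # next-token loss on the pair
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

In practice the generated set is far larger and the fine-tune uses the usual efficiency tricks, but the shape of the pipeline is just this: one model writes the curriculum, another learns from it.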
Divia Eden: By the way, I'll mention, this is one example of many of how –
Perry Metzger: Things are going to get cheaper. There are so many examples already. And by the way, I am not joking, this is not just a Playboy "I read it for the articles" thing – which, by the way, none of your audience is going to get that joke anyway unless they're in their eighties – but I have been following the underground AI porn generation community very closely, and I've been following them in order to get a sense of what happens when people are very motivated to build this stuff and are not inside the mainstream. And the answer is that people are real good at it.
Ben Goldhaber: About how far behind would you say they are relative to, like, a DALL-E model or something like that? In quality?
Perry Metzger: Ahead. Oh yeah. I mean, at this point they're not generating full-motion video or anything like that, [01:52:00] but –
Divia Eden: How far off do you think that is?
Perry Metzger: I am so hesitant to say. The first systems, like the first research systems that do some of that stuff, already exist. The first systems to generate really good human voices exist now. Are you guys familiar with the ElevenLabs stuff?
Divia Eden: I'm not. Are you?
Ben Goldhaber: A little bit.
Perry Metzger: The voice cloning – it not only will clone your voice, but you hand it a text, and it has trained enough on how human beings read a text, and how they put emphasis and emotion in various places, that it gets the emphasis and emotion correct. So you can hand the thing the text of, say, Moby Dick, and give it, say, Divia's voice, and it will generate something with Divia reading the audiobook of Moby Dick. And it sounds good. It's not perfect, [01:53:00] but it's so close. It is so very close. And you combine that with a bunch of the image generation stuff – then how far are we from the movie scenario? I don't know, but closer and closer and closer. And one of the big breakthroughs right now is GPT-4 – the eight-thousand-token model, I think, is accessible now, but they have a 32K-token model, that's what I remember, and that's large enough that short stories, novellas, those are within access. Or videos – they're not that long; writing the script for a video isn't that long. And then you have a system that has some memory. Maybe you make use of stuff like ControlNet. And by the way, all of this stuff happened because Stable Diffusion became public, right – ControlNet was created because of that. Large amounts of this other research have only been possible because this stuff has been leaking around. But to get back to the point: the people who are working on making these things run on [01:54:00] hardware that's more consumerish, finding ways to do training on lower budgets that is still good – there are people who are very, very motivated out there, and it's going to be bloody hard to put the genie back in the bottle. Everyone knows how this stuff works now. I mean, you know, gradient descent is a cool idea.
You know, ReLU and some things like that are cool ideas. Transformers are cool ideas. And yeah, the people at the cutting edge know more than the people behind them. But the other thing is, this isn't like nuclear weapons, where you needed to get your hands on a gigantic machine to centrifuge all of this uranium hexafluoride, and you weren't going to do that in your backyard. I have friends buying 4090s from Nvidia and going to town, and they're having a [01:55:00] great deal of fun. It's out there. It's everywhere. You're not going to get people to forget how to do this stuff. College students know most of this technology now. They're not at the cutting edge, they can't do the whole thing alone, but it's getting closer and closer, and people are leveraging the tools that already exist to build other tools. People are leveraging the AIs to train and build other AIs.
Ben Goldhaber: So I want to make sure that we get a little bit of time – speaking of disruptive technologies, I really wanted to hear more about some of the nanotechnology topics I know you're an expert in, if you're all right to pivot to that briefly.
Perry Metzger: I'm a fake expert. I mean, a fake expert –
Divia Eden: Compared to most people I've talked to, at least, you've done a much deeper dive.
Perry Metzger: Well, I actually decided that I was going to get a formal background in chemistry and in physics so that I would understand the stuff down to the metal. I have published no research papers. I merely understand other people's work, but I [01:56:00] actually understand it, which a lot of people don't. You know, I can read Nanosystems and –
Divia Eden: I've seen you do this – specific people will say things about nanotechnology on Twitter and you'll say, well, this is addressed in this chapter of Nanosystems.
Perry Metzger: Yeah. No, I mean, every time anyone criticizes Drexler, they haven't read him or they don't remember him. Because he actually –
Divia Eden: Did manage to anticipate – sort of, according to you, at least – all the obvious criticisms, and he addressed them in Nanosystems, and people aren't –
Perry Metzger: He anticipated all the obvious criticisms, and he got almost all of them. He got a remarkably large fraction of them the first time around. I hesitate to call anyone a historically significant genius, but Eric Drexler is up there. I have insane respect for what he managed to do. The man started with nothing and ended up with a PhD thesis [01:57:00] that is one of the most groundbreaking pieces of writing I've ever seen. And people don't, generally speaking, write something particularly interesting for their PhD thesis. There are exceptions – de Broglie got a Nobel Prize for his doctoral dissertation – but it is rare that that happens. Usually your doctoral dissertation is one of the most boring and useless pieces of work you ever do, and you hope no one ever reads it. Eric Drexler is kind of astonishing. And there are people out there who repeatedly say things like, well, this couldn't work, and that couldn't work, and this couldn't work.
Perry Metzger: And you try pointing them at the book. You say, okay, you say that positional uncertainty from thermal noise is going to make all of this impossible. So, in addition to the fact that you exist in spite of that –
Divia Eden: You're saying, meaning that we already have [01:58:00] biological nanotechnology.
Perry Metzger: But let's ignore that. Let's pretend we didn't know that. Eric actually goes through a first-principles analysis using the basic physics in chapter five of Nanosystems, and goes through this in grotesque detail. He also goes through the question of whether quantum uncertainty is a problem. He also goes through the problem of whether error rates make this impossible to deal with, what sort of repair rates you need, what sorts of things you can and can't plausibly manage to construct. And he has gone through this in ridiculous detail – in grotesque, astonishing, overwhelming detail. And by the way, it takes an incredible background to read that book. The ordinary synthetic organic chemists – you know, the grad students that I worked with, because I decided to work in a wet lab for a while [01:59:00] because I wanted to actually know what synthetic organic chemists know and what it's like doing synthetic organic chemistry these days – you know, I spent years of my life learning enough that I could read Nanosystems in detail. It's my suspicion that a large fraction of the people, even in chemistry, who read that book don't understand enough to get all of it.
Ben Goldhaber: And is that why you think – do you think that there has generally been slow or no progress in nano?
Perry Metzger: There's been no effective progress for a very long time. I mean, there are papers regularly still published by a handful of people who are real experts. There is a lot of stuff that Ralph Merkle has published over the years. And – dammit, he's at Syracuse and he's a friend of mine and I should remember his name, but I'm an old man – Damian Allis, that's his name. There are a bunch of people out there who do good [02:00:00] work. But it's small enough that a minivan going to dinner at the wrong conference could kill the entire field. Right.
Divia Eden: And why do you think it is that progress has been so slow? Because my sense is that you think the technical barriers are not insurmountable.
Perry Metzger: They're not insurmountable, they're expensive. So I can give a few ways of describing this. First of all, in the first half of the 19th century, Charles Babbage figured out that computers might be a thing and started designing things that would have been buildable with the technology of his time. And he also turned out to be kind of obnoxious, and probably aspy, and not very good at dealing with a lot of stuff, and he had a pathological hatred of organ grinders – I'm not joking – all sorts of weird quirks. His autobiography is online, the PDF of it, [02:01:00] and it is an incredible read. He's a really interesting character. And all of the things that he dreamed of didn't show up for a hundred years.
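On the thermal-noise objection Perry mentions a couple of exchanges back: the standard first cut (the kind of estimate Nanosystems works through in far more detail) treats a positioning mechanism as a stiff spring, so equipartition gives an RMS positional error of sqrt(kB·T / ks). The stiffness value below is an illustrative assumption, not a number taken from the book.

import math

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # room temperature, K
ks = 10.0              # assumed mechanism stiffness, N/m (illustrative)

rms_error_m = math.sqrt(kB * T / ks)
print(f"RMS thermal positional error: {rms_error_m * 1e9:.3f} nm")
# ~0.02 nm, a small fraction of an atomic diameter (~0.1-0.3 nm), which is the
# rough shape of the argument for why stiff mechanisms aren't smeared out by thermal noise.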
Divia Eden: And so you think it's like that with Nanosystems, basically?
Perry Metzger: Well, sort of, yeah. And if you look at, for example – dammit, I'm having another senior moment – Konstantin Tsiolkovsky.
Divia Eden: Oh, he's the space guy.
Perry Metzger: Yeah. Here is this crazy Russian schoolteacher who develops most of the physics and a lot of the chemistry associated with rocket science on his own, with no funding, publishing hundreds of papers on it in the last years of the 19th century and the early part of the 20th century, decades before anyone builds any of this stuff, with no expectation in his mind that anyone will ever build any of his dreams. And he does things like figuring out that liquid [02:02:00] oxygen, liquid hydrogen engines have the highest specific impulse. He invents staged rockets, he figures out a lot of the ideas behind life support systems, he invents the rocket equation. He figured out all of this stuff, and no one did anything. And it was the late 1950s before anyone actually built an orbital rocket. Decades.
Divia Eden: So you're giving a couple of examples where the fact that nobody built it was not at all an indictment of the plans that people had laid out.
Perry Metzger: No. I mean, there's a great quote in a Carl Sagan book, and that should be a warning, right? They laughed at Fulton, they laughed at – I don't remember who – but they also laughed at Bozo the Clown. So the fact that this has happened in the past is not in and of itself a reason that you should believe that Drexler must be right. Sure. I encourage people to read his papers. [02:03:00] It is unfortunate. So why do I think there hasn't been much progress? A few reasons. First of all, Eric, I think, is a crazy optimist about how easy it is to understand this stuff. If you read the introduction of Nanosystems, he speaks about how he's tried to simplify the material for a more general audience, and how he tries to make it possible for experts in chemistry and physics and other things, you know, computer science, to be able to read this.
Divia Eden: Do you think that almost nobody can understand his work?
Perry Metzger: It requires a deep understanding. Every other page, here he references SN2 reactions, and here he's referencing the Born-Oppenheimer approximation for doing numerical quantum mechanics. Practically every page [02:04:00] is dripping with an incredible panoply of complicated ideas that even most people in a specific niche in science don't get exposed to. So it's a real hard read. There aren't a lot of people who could do the research or are willing to do the research. There's an amazing essay by Richard Hamming called "You and Your Research."
Divia Eden: Yeah, I think Ben and I – great essay. I'm guessing some of our listeners know this one too, but feel free to describe it.
Perry Metzger: And I'm going to grotesquely oversimplify it.
Hamming notes at one point that if you ask the average researcher what the really important problems in their field are, they can tell you. And then you ask them, are you working on that? And they'll say, oh, no. Right. I'd say that of the technologies we lack right now, the two most transformative are molecular manufacturing and AI. [02:05:00] And yeah, the AI stuff didn't have a lot of people for a long time either – there was the whole AI winter – but it slowly started building commercial successes. I think most people are unaware of the fact that the US Postal Service has had machines reading the addresses on envelopes –
Divia Eden: Didn't know that.
Perry Metzger: – for far longer than you would think, right? They had competitions in the early nineties for replacing the human sorters with OCR, and they've almost completely succeeded at this point. There's a handful of envelopes that can't be deciphered that get sent to – I think they now have one human sorting office left, and the things that are left for the humans are very hard for the humans to decipher. The machines do an incredible job. And so there were all of these successes. People were developing voice recognition systems – we're so used to voice recognition being a thing.
Divia Eden: I remember when it wasn't a thing at [02:06:00] all.
Perry Metzger: Yeah, but it's been a thing for a ridiculous amount of time at this point. Primitive vision systems have been a thing for robotics for a while. People were putting money into it for practical research.
Divia Eden: So you're saying that with AI, unlike with the nanotech, there have –
Perry Metzger: Been commercial feedback loops. There have been incremental commercial successes that have fueled interest. And people at places like Meta – it's been well over a decade, I think it's been substantially longer than that, now, that Facebook will tell you, is this a picture of Divia? You know, is this a picture of this person? And they do pretty well. These systems are not that new at this point. There's been a lot of commercial pressure on them. And now the cutting-edge research is being done on crazy specialized equipment that people have built for the purpose. I mean, Cerebras makes [02:07:00] some of the weirdest, craziest computer hardware in existence. They make single chips that are 300 millimeters on a side – a bit less than a foot – with trillions of transistors and tens of thousands of processing units on them, that burn 20 kilowatts of electricity. And the guys at OpenAI, I believe, eat these things for breakfast; they're like candy around there. And because of that, they're making all of these incredible strides.
Perry Metzger: By the way, this sounds like I'm saying that normal people can't do the work, except the stuff you can buy to do gaming at home is just crazy as well, like 4090s and stuff.
Ben Goldhaber: I'm curious, do you think that some of the advances in AI will spill over, or rather maybe unlock different advances in nanotechnology, maybe make that easier?
Perry Metzger: Well, there are some side effects already, right? Okay, let's not look at nanotechnology for a moment, but the protein folding problem – AlphaFold – is a thing that was conquered by AI, and it's like a side effect of AI. One of the things that people figured out is that these gigantic gradient descent systems that generate these big matrices, with a little nonlinearity tacked on the side, are ways of producing approximations of almost any function that you can think of that's reasonably behaved. And things like turning protein sequences into folded protein structures – that's a weird sort of function you can think of. Being able to figure out the behavior of complicated molecules that you might want to use in nanotechnology circumstances – this is probably something you can do with it; I think it's on the horizon. That is an application of AI to nano. [02:09:00] Building better controls for scanning probe microscopes is a thing that there are already companies using AI technology for. There are lots and lots of side effects here. But the biggest issue has been people. It's very hard to do the work on nanotechnology. A lot of people had strong incentives to claim it wasn't possible. It's very difficult for laypeople to decide whether this is crackpot or not. I mean, it sounds completely crackpot, right? Even the stupidest possible applications – like, you could build an aircraft with diamond composite spars in it that weighed like 1% of the weight of a current airplane but was just as strong. And this is transformative, and it's also stupid, right? This is the least future-shock type version. This is [02:10:00] the equivalent of, oh, I could put a motor onto my horse-drawn carriage, make it easier for the horse. Right. It's not quite thinking along the right lines. I mean, the right lines are things like Josh Hall's utility fog, which really sounds like magic, right? Utility fog as described is the closest thing to magic that human beings –
Divia Eden: Sorry, I don't know what utility fog is. Can you tell us?
Perry Metzger: So the idea is that you build these extremely small machines that are capable of reaching out and hooking themselves to other neighboring really, really tiny machines. And they can kind of float around, weigh almost nothing, and are extremely strong. And these can re-form themselves into anything. So, to give the stupid example, you can walk into a room and have the chairs transform into a sofa. Or have your house transformed into a different [02:11:00] house. I mean, this – right?
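Perry's "big matrices with a little nonlinearity tacked on" is the universal-approximation picture: gradient descent on even a small network can fit more or less any well-behaved function. A toy PyTorch illustration – the target function, layer sizes, and learning rate are arbitrary choices here, just to show the shape of the idea:

import torch

x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
y = torch.sin(x)                          # stand-in for any "reasonably behaved" function

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64),               # a matrix...
    torch.nn.ReLU(),                      # ...with a little nonlinearity tacked on
    torch.nn.Linear(64, 1),               # ...and another matrix
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    loss = torch.mean((model(x) - y) ** 2)    # mean squared error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final MSE: {loss.item():.5f}")        # should shrink to a tiny value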
Divia Eden: I think maybe part of what you're saying here is that with nanotech, the actual upside is something that people can't really relate to, can't really comprehend. It's major, and it seems crazy. And so you think that's been a major barrier to people actually pursuing it?
Perry Metzger: I think that it seems crazy. There have been only a handful of people who have really understood it, and a smaller number who have felt committed to work on it full-time. Eric tried to get a bunch of funding for it. This reinforces certain of my prejudices against state programs: the National Nanotechnology Initiative got, I think, like a half-billion dollars of initial funding to pursue nanotechnology, and the synthetic organic chemists immediately knifed him in the back and destroyed his public reputation, all with garbage, right? Like the Smalley-Drexler debate – all of Smalley's arguments are poop.
Divia Eden: I mean, I haven't read it, but –
Perry Metzger: I don't care that he had a [02:12:00] Nobel Prize in chemistry. To the extent that he understood it, he was disingenuous, and to the extent that he didn't understand it, he didn't care. All of the substantive arguments that he made were already disproven, or already addressed, in Eric's papers. And we have a good deal of evidence from people doing work like taking scanning probe microscopes and abstracting individual carbon monoxide molecules on a passivated surface at a very low temperature – picking them up and then getting those carbon monoxide molecules to react with other molecules on the surface. People have done this stuff.
Divia Eden: It sounds like – picking them up, like they built a little – how did they pick them up?
Perry Metzger: Well, using a – okay, so scanning probe microscopy sounds like magic, but it could [02:13:00] have been built in the 1950s. When I was a little kid, all of my teachers told me: atoms are very, very small, and no one has ever seen an atom and no one ever will. And by the time I was in my late thirties and taking a physical chemistry lab class, not only had people seen atoms, but one of our labs was: here, make a little atomic force microscope tip by breaking a little piece of metal wire and mounting it in this system correctly, and take a piece of graphite and use a piece of Scotch tape to get a single monolayer off of it, put it into the AFM, and now use the tapping AFM to see the graphite, the graphene sheet, in your device. An undergrad could do that. An undergrad can operate it and see the sheet, right? They can see individual atoms. How does this work? This [02:14:00] works by having a very, very clever lever mechanism in which you move a piezoelectric crystal a relatively large amount, and it moves the tip of a needle a really, really tiny amount. And in scanning tunneling microscopy, you move a needle tip – which you have broken so that there's probably only a single atom at the tip – over a surface, scanning back and forth like a television.
And you have electrons jump from this tip into the material underneath by charging the thing appropriately, and you measure the current, and from this you generate an image of the surface that you're looking at. You can move the needle tip much, much less than an atom's width. That is one of the miraculous things. And as I said, it's technology that people could have built in the [02:15:00] 1950s, but no one thought to do it.
Divia Eden: Thank you for indulging my curiosity about how you get this atom –
Perry Metzger: Yeah. And then there's atomic force microscopy, where instead of using an electron beam emerging from the tip, what you do is you feel the forces between the tip of the probe and the surface underneath it.
Divia Eden: I'm going to try to bring this back to AI. I mean, because this is one of the things that comes up about AI systems, is that at a certain point they may develop –
Perry Metzger: Nanotechnology. Yes. One of Eliezer's – so I don't know that Eliezer got this straight from me, or maybe he did, but I noted very early on on the extropians mailing list that AI and nanotechnology are kind of enabling for each other. If you have good enough AI, you can use it to produce nanotechnology, and if you have good enough nanotechnology, you can use it to enable AI.
Divia Eden: Yeah. So can we talk about the AI-to-nanotech part of this, and what you think –
Perry Metzger: Well, presumably, one of the problems we have is [02:16:00] that we have a few dozen people who understand the field, and a handful who are actually working on it at any given time. What if I could spin up smart engineers on AWS? I want 15,000 engineers working on something – well, that's a matter of money. I don't have to recruit them, I just turn them on. This improves the speed at which you can design or build anything. Right?
Divia Eden: So you're not so much thinking, well, the AI will sort of decide of its own volition at some point that it needs to figure this out, but you are thinking –
Perry Metzger: Well, maybe one might, but you don't need to go to that in order to note why nanotechnology could come faster because of AI. Once you have AI, every conceivable technology is much more accessible, because the main impediment to creating almost any technology [02:17:00] you can name is having enough minds to work on it. I tweeted about this a few days ago: the biggest impediment to progress in our civilization since our emergence has been the paucity of minds that are available to work on any given technical problem we have. And once you have AI, that problem disappears. You are in a position where you can construct as many minds as you can afford to work on a problem. So if you need a team of 5,000 engineers working on the problem, you can have 5,000 engineers. You don't even have to recruit them and convince them that it's a good idea. Or at least not necessarily. I mean, maybe in order to have engineers, these things end up being willful enough that you have to promise them enough electronic porn and enough days off and enough money in their bank account.
I don't think that's going to be the case, but you could imagine it. But almost certainly what you end up with is a [02:18:00] situation where you can construct as many minds to work on something as you want. And at that point, all technical problems become shallow. I've mentioned this before, but imagine a world where you decide you don't like the fact that the Linux kernel is written in C, and you would prefer that it be written in Rust. And so you hand some number of thousands of dollars to AWS to run the engineering team, and a few hours later you have rewritten the Linux kernel.
Divia Eden: As with many of these, I mean, that's sort of crazy to think about, and also it's not that out there in terms of what's possible.
Perry Metzger: It's not that out there – it's not even that out there right now. You can see where that will be a thing that's possible, if not now, then within a few years. So all engineering problems, whether it's in aerospace or biotechnology or architecture or materials science, all of them become shallow when you have enough staff to work on them. And nanotechnology is one of these.
Divia Eden: Do you have any [02:19:00] thoughts on the risks there? I mean, people talk about gray goo; I don't know what your views on that are.
Perry Metzger: So Bob Freitas wrote a really, really great paper called something like "Some Limits to Global Ecophagy," which I thought was the most anodyne possible title for a paper about how fast you can digest the planet. And his answer was: fast, but not so fast that it wouldn't be noticeable and opposable.
Divia Eden: Okay. So opposable by other people with their nanotechnology.
Perry Metzger: Right. It could not –
Ben Goldhaber: A gray goo summoning circle in every home.
Perry Metzger: It could not happen within hours, is the main point. It's a thing that, best-case scenario, it's not like a –
Divia Eden: A year? Days, weeks?
Perry Metzger: Weeks.
Divia Eden: Okay. Yeah. But that sounds bad. So you're imagining, if the nanobots come to digest the earth – okay, well, but if someone were strategic, they could try to – how long would it take to kill all the people that might create their own nanobots? Probably not –
Perry Metzger: Well, let's take a [02:20:00] step back from all of that. Okay. We already live in a world in which we are all surrounded by malicious things attempting to kill us all day long. And it's so bad that if you stop metabolizing, you're going to start being digested almost immediately. Right?
Divia Eden: For sure. Yes.
Perry Metzger: Yes, you are. Okay. And you don't notice this because you have an immune system.
Divia Eden: Indeed.
Perry Metzger: Right? So we are going to need to develop immune systems for nanotechnology and for AI. We will need systems, I think.
Divia Eden: You think we'll –
Perry Metzger: I think "need" makes sense. I think that it's inevitable that we're going to have them. They're going to be necessary. I don't mean that we need –
Divia Eden: And so once people have these sorts of immune systems, you think at that point, I mean –
Perry Metzger: I think this will be at a civilizational level, right?
We will have things that are looking out for things that have gone out of control, and that attempt to put them in check. And this means, by the way, that a whole raft [02:21:00] of potential autoimmune syndromes at a civilizational level might even appear, and I don't even want to speculate about what that might look like. But it's inevitable that people are going to have access to extremely dangerous things. I mean, right now, by the way, we don't have really good ways to counter biotechnology threats with nanotechnology. So, Bob Freitas again – I hate mentioning his name constantly, but he and Ralph Merkle are two of the most productive people, besides Eric, who've written paper after paper after paper. Bob wrote a great paper describing a thing that he called a microbivore.
Divia Eden: Okay, a microbivore. So that sounds like it eats microbes.
Perry Metzger: A microbivore is a nanomachine that can be injected into your bloodstream that will kill invaders vastly more efficiently and faster than a human immune system can.
Divia Eden: It's sort of like – I mean, there are bacteriophages, so it's sort of like that, but more powerful?
Perry Metzger: Oh, [02:22:00] vastly – it's engineered. The paper is online. It's a little bit hard to find, but Google will find it for you – either the microbivore or the respirocyte paper. He also wrote a beautiful paper about building artificial red blood cells, because it turns out red blood cells are not nearly as efficient as artificial systems could be.
Ben Goldhaber: Do you think – I read this paper – about, like, kind of injecting these and maybe slowly replacing various parts?
Perry Metzger: Well, all of these – he also has some papers on completely replacing your bloodstream and your blood system. And many of these are thought experiments, but he actually did the engineering at a high level for microbivores and respirocytes. So the microbivore would go through your bloodstream, would hit pathogens and basically kill them, eat them, digest them.
Divia Eden: So this is not directly about anything you've just said, but something [02:23:00] seems to me like a point of tension in your worldview – but, you know, probably I'm missing something. It seems like there's a lot of work that you take seriously that is sort of abstract engineering work. Maybe that's not the right way to put it, but, like, it hasn't been –
Perry Metzger: Implemented yet, but it's been worked out to an incredible degree of detail, given what's possible. Right.
Divia Eden: So I guess I'm like, can you point at the most major point of disanalogy between that work on the microbivore, for example, and working on AI alignment now?
Perry Metzger: So, if you ask Eliezer, do you know how to do AI alignment right now, he will say very, very vociferously: I have no idea. And if you –
Divia Eden: I mean, but as you point out, there are, I don't know how many, but many, many people playing around with these systems. It's not just any one person –
Perry Metzger: And they're actually making progress, in my opinion.
I think – and again, there are going to be people who are [02:24:00] listening to this who are going to want to throw a brick right at their listening device as soon as they hear this, because they're like, Perry, you don't understand – you know, and I do understand, I just have a different view. The people who are working on stuff like getting these systems to behave nicer, to answer the questions you actually want answered and not the ones it thought you wanted answered, to not start randomly threatening you or declaring that it loves you –
Divia Eden: So you think that is alignment work? It is happening?
Perry Metzger: I think that a lot of that is research that is necessary to do alignment work. Because the general question that we don't have an answer to right now is: how do I build a giant neural network that does the thing that I want? Right? I want it to do the thing I want. I don't want it to do the accidental thing, where it [02:25:00] decides to suck all the air out of the room and use it to build liquid oxygen popsicles or something, or however else the thing might decide to kill you. You want to figure out how to build systems like this where you have a great deal of control and understanding of how they will behave, et cetera. And all of the work that these people are doing is along these lines. It's early, right? But it's along these lines. It's directly applicable.
Ben Goldhaber: Well, I guess one way I had heard the question there, or something I'd been thinking a little bit about as well, is that you seem to hold in particularly high esteem the kind of research on AI that involves actually building the AI, the ML systems, doing research on those – that makes sense. But then similarly, on some of the nanotechnology, and also – I forget his name, but the space guy [02:26:00] and Babbage. Exactly. Yeah. They were very, very theoretical, ahead of their time –
Perry Metzger: Planned it. Right, but you would not have – just imagine: Tsiolkovsky could not have imagined that someone could just take some of his research papers and, without actually building Goddard's early rockets, then the V2, then the various Jupiter rockets, the sounding rockets, the early Atlas rockets – no one is saying that there don't still have to be all those steps. No one could have built the Saturn V without going through all of those niggling steps. There were lots and lots of bits of practical knowledge that were needed. You know, the F-1 engines on the Saturn V had this horrible combustion instability problem that was only solved by people literally setting off bombs inside the things during test firings until they could figure out an injector pattern – I might have the detail here slightly [02:27:00] off – that did not experience combustion instability even when they set off explosives while the thing was igniting. You could not have gotten to that just by reading Tsiolkovsky's papers, whatever he showed. Right.
Divia Eden: So you're saying that there are always going to be these engineering problems that will come up – it's over-determined that they won't be knowable in advance.
Perry Metzger: Yeah.
I mean, in spite of the fact that Rob Freitas has produced these interesting papers with these interesting designs, he did this to show what you would be able to build and how interesting it might be, just the way someone like Tsiolkovsky wrote papers about wouldn't it be interesting if we built orbital habitats, and these might be some of the things we would have to do in order to do it. But that was not a final engineering plan. That was not something you could have gone out and executed. There's another layer of this, though. I don't like being [02:28:00] overly negative about Eliezer's program, but from the beginning there was a great deal of flavor in a lot of the MIRI stuff, and in a lot of the SIAI stuff before that, where...

Divia Eden: SIAI, for people that don't know, is sort of a previous name of the MIRI organization.

Perry Metzger: Yeah. Where, early on, he wanted to build a superhuman AI, but he only wanted to build it using essentially symbolic AI methods, where the exact behavior of the system would be predictable and understandable in advance. And, you know, they thought about that for a while and didn't make any progress, and they thought about alignment for a long time. But they've thought about all of this in an even more theoretical way than the way that Babbage thought about computing, or Tsiolkovsky thought about space travel and rocket science, or the way that Drexler does.

Divia Eden: Okay. So you have sort of two potentially separable critiques. One is that you [02:29:00] need to be able to actually tinker with the systems and confront the real engineering challenges...

Perry Metzger: You could not build nanotechnology from Drexler's papers.

Divia Eden: Sure. So that's one. And then the additional critique is something like, there are ways to think about these problems in the abstract that you consider to be more grounded in, I don't know, real-world constraints, and other ways that you think are less promising and more abstract. Does that seem right?

Perry Metzger: I want to make it clear that what I think people like Rob Freitas's papers, or Drexler's papers, show is that this is a potential technology. We could build it, and it would be interesting. It is not a replacement for doing hard engineering and prototyping and testing over a very long period of time.

Divia Eden: Right. Sure. But I mean, from my perspective, and I think I get that there's something you don't like about this question, because you're saying it's so obvious that AI cannot be put on pause anyway...
But in the hypothetical where it were, I'm like, okay, well maybe you couldn't figure out the full engineering solution for [02:30:00] alignment, but maybe someone could go off and be the Drexler of alignment, and then it would be accelerated relative to if that pause hadn't happened.

Perry Metzger: Well, I mean, I would feel better about that possibility if some organization like MIRI had made much progress over a very long period of time. Even having been paid to do nothing else but this over a period of a number of years, they didn't come up with anything particularly interesting along the lines they wanted. And here we are with people who are working on things like ChatGPT, and who are doing respins of LLaMA and what have you, who are making progress on some of the things I consider relevant at a breakneck pace. And by the way, you see people on Twitter saying no one is being paid to work on alignment, [02:31:00] or whatever. No, there are people who are doing things that I see as directly relevant. I mean, there's an extent to which some of what's happening with ChatGPT or what have you is motivated by not having the thing say things that are considered publicly offensive, and you can argue about whether that's a good motivation or not. I would like to be able to sit down at the thing and say, imagine you're Adolf Hitler, write a speech about how you're going to annihilate some ethnic group. I think it is a valid use of these technologies to do horribly offensive things with them. But never mind that. People are very, very motivated at the moment to figure out how to build machines that will only be polite. Fine.

Divia Eden: The fact that it's motivated that way seems very much on path, the fact that it is an example of getting the machine to do something that the people want.

Perry Metzger: It is an example of trying to get the machine to accomplish a very [02:32:00] complicated goal along some metric of goodness. And they are making rapid progress on this stuff. I mean, there was a paper that came out a day ago, and I think the idea is in certain ways horrible, where they basically wanted to construct a spin of Stable Diffusion that was incapable of showing you boobies. Right. You know, because as we know, human breasts are inherently filthy.

Divia Eden: Well, and, you know, as we've talked about, there are regulatory things they're probably hoping to avoid.

Perry Metzger: Perhaps. But never mind that. They came up with an interesting approach that appears to work. And whether this is something you want or not, they're figuring out things about how to get the thing to produce the images you want rather than the images you don't want, how to get the systems to behave in the ways that you want. All of this research, which is motivated by commercial considerations, and which certain people dismiss as being completely irrelevant [02:33:00] to the alignment problem, is, to my mind, extremely relevant to the alignment problem.

Ben Goldhaber: And this makes sense to me, given your view, in that it also pairs with the belief that the minds that we are finding with these methods are kind of in a similar pool, they're in a similar area.
You're less likely to run into an area where, when you're doing this experimentation, some kind of sharp turn happens, some kind of really bad outcome happens. It all becomes just far more like normal engineering work.

Perry Metzger: And we also have the capacity in coming years to start building systems to help us understand other systems' capabilities, because we're not going to be able to figure out how these systems work without the use of other AI tooling. And that's very exciting. You know, you have right now these giant opaque matrices with a hundred billion floats in [02:34:00] them, or soon with trillions of floats in them. Yeah, I'm exaggerating; a lot of the systems people build are like 10 billion or what have you, but still, the biggest systems are a lot bigger than that. No one really understands a lot of the subsystems that are being generated there, but we're probably going to be able to build things that help us with the comprehensibility, and we're going to build them because we need them to diagnose what's going wrong with these systems and tweak them, for good commercial reasons. There are good commercial motivations to work on this stuff, and we're not going to get to any of it if we take a very timid attitude of, we're dealing with high explosives, we mustn't talk about it, we mustn't do research on this. I have had friends from the Bay Area rationalist community who have said things to me like, Demis is one of the worst human beings on earth, he's a terrible, terrible threat to [02:35:00] us all. And I'm like, why? Why are you saying this? And I think that there is a segment of the community that has gotten very, very high on its own supply. Everyone is thinking along this very, very narrow line: we must build this stuff, and we have to build it correctly the first time. Which I think is physically impossible, by the way. I think there is no technology human beings have built that has ever been built from zero perfectly the first time.

Divia Eden: I think some of them would agree with you. I think that's a point of agreement.

Perry Metzger: Well, sure. But then they say, but we must try anyway.

Divia Eden: But if it were true, I mean, I think if you shared their belief that building it in any way other than perfectly had, let's say, more than a 50% chance of destroying the entire world...

Perry Metzger: You know, I don't think that Eliezer [02:36:00] believes it's a 50% chance. I think, as I said, he thinks it's 99.9.

Divia Eden: Sure. But let's say you just thought it was 55% that you will destroy the entire world if you don't build it perfectly the first time. I'm guessing that would move you, if you thought that.

Perry Metzger: Yeah, that's true. But I think that we have very, very good reasons for figuring out how to make this stuff more or less work, and we have very good ways to make progress on that. We have been making incremental progress.
Maybe it's not stuff that Eliezer recognizes as incremental progress, but I see it as incremental progress. I think being confronted with these systems has suddenly meant that people are doing a whole lot more work on everything from how I train the systems to do things that are closer to what human beings want, to how I understand the systems better, to how I interpret the systems better, et cetera. And this is going [02:37:00] to continue. And the fact that suddenly there's commercial success on this stuff also throws far more people in on it. And Eliezer thinks that we're going to hit foom, right? That one day we're going to have an AGI created, and three hours later it will have built molecular nanotechnology that it will use to destroy the entire world, not intentionally, but as a side effect of some very alien goal that it happens to have. And I see this as improbable, and if we really have to get it right the first time, then just kiss your butt goodbye right now, because we're not going to get this perfect the first time without trying things along the way, and we're not going to get it perfect without building lots and lots of safeguard systems. We're going to end up in a situation in which [02:38:00] we have lots of AIs. And by the way, that's another portion of the belief system: that there will be a single AI that will triumph, that will be the first AGI built, and it will achieve hegemony and take over and control everything. I think it won't play out that way. It doesn't seem particularly likely to me, and it doesn't seem likely to Robin and to lots of other people. I mean, the debate Robin had with Eliezer was pretty good. It was way too long.

Ben Goldhaber: That one I have followed somewhat.

Perry Metzger: Well, there's a 60-page précis of it that's relatively readable. It's too big, too. But, you know, the thing is, I find myself very often criticizing the critics of people who I criticize. I think that most of the people who criticize Eliezer these days in public are spouting b******t. I mean, they will [02:39:00] say things like, these things can't have intentions, or there is no possible danger from them, and all of this other stuff. What are you talking about? And I understand why that's happening: if you tell people over and over again that their relatively straightforward commercial project is going to lead to the deaths of everyone on earth, they eventually start resenting you and ignoring you.

Divia Eden: Are you saying people have essentially developed an immune reaction?

Perry Metzger: I think that most of the people in the rationalist community who are concerned about AI risk are extraordinarily bad spokesmen for the idea, and have done far more to get people to resent and ignore the problem than they have gotten people to take it seriously, outside of a small community of very like-minded people who move in the same social circles. And I think, by the way, that this is bad, in the sense [02:40:00] that, you know, I've seen people say, well, nanotechnology is impossible and therefore there is no AI risk. Which is a similar kind of argument, right?

Divia Eden: An argument you basically dismiss based on your technical understanding.
Perry Metzger: I also think that Eliezer is wrong that an AI is going to have nanotechnology six hours later, no matter how powerful it is. I do not see how that can come about, even if it is ridiculously brilliant. It requires real-world time to build and evacuate vacuum chambers. It requires real-world time to do certain sorts of experiments that cannot actually be done in silico. A lot can be done in silico. I don't think it'll take 50 years.

Ben Goldhaber: Do you expect this takeover risk to not be something that could happen in a couple of hours, similar to the grey goo scenario?

Perry Metzger: I do not think it can happen over a couple of hours.

Divia Eden: You think at the soonest it would be a few weeks, but you think it'll be a multipolar scenario?

Perry Metzger: I think it's longer than that, even. But yeah. Now, [02:41:00] by the way, coming up with a revolutionary new technology capable of completely transforming even our very ideas of the materials that our world is made of, being able to do that in a few months, that's pretty f*****g huge. But it's not happening in 15 minutes. Okay? And it's not happening invisibly, with the AI having, you know, taken over the minds of all of the people involved, or whatever. Some of these things are logically possible, some of them are logically impossible. But on the other hand, I think that people also have very facile dismissals of Eliezer's arguments that are based on the idea that these things are logically impossible when they're not logically impossible. They might be improbable, but they're not logically impossible, [02:42:00] or they claim that he doesn't understand what can and can't be built, or that certain technologies are just physically impossible.

Divia Eden: Right. You think there are a lot of bad arguments against his concerns?

Perry Metzger: Yes, and I don't like those either. I think that if you're confronting the thing, you have to actually understand the parts that seem like they make sense and the parts that don't seem like they make sense. But anyway, Robin's argument with Eliezer was pretty good. And the 60-page summary, as I said, it's too long, but it's better than getting ChatGPT to summarize it, and it's better than the 800-page version. Well, GPT-4 might be able to, but it's a little bit too big for it, right? Too many tokens. Maybe GPT-5 will summarize it for us.

Ben Goldhaber: Yeah. Incidentally, I've got a question I want to make sure I throw in here, because I know we're also getting close to the three-hour mark.

Perry Metzger: I mean, if you want to compete against Lex Fridman in that market...

Ben Goldhaber: Hey, I've got a Red Bull right here. I'm [02:43:00] ready to go. This is a 2:00 AM podcast for sure.

Perry Metzger: If you want to compete against Lex Fridman, you're going to need to be able to break the eight-hour podcast mark. I think he's done five. You're going to have to be able to do eight, right?

Ben Goldhaber: Well, just in case we don't fully make it to the eight-hour mark, one thing that has been continuing to kind of, I don't know,
eat at me through this conversation is that I think it's fascinating, with the Extropians mailing list in particular, and some of these other ones, that there are the topics of AI, cryptography, prediction markets, all these things that got covered in the very early days of the internet that are now very dominant.

Perry Metzger: Well, it's not the early days of the internet. Remember, the internet came into existence in the mid-seventies, and I was already...

Ben Goldhaber: What should we call the nineties period? Like, right before Eternal September?

Perry Metzger: Maybe, or, yeah, some of this showed up before Eternal September.

Ben Goldhaber: Okay. Was there one of these ideas you feel like didn't make it, that you expected would have? Is there some kind of alpha [02:44:00] from the Extropians in the early days that you think should have made it more into the mainstream?

Perry Metzger: That's an interesting question. I haven't thought about that enough. I don't know how I would answer. It is interesting to me that we find ourselves discussing all of this same stuff, you know, for a long time. I remember talking to friends of mine who were, shall we say, more normal than me, 30 years ago, and telling them all of this exciting stuff we were discussing. And my friends, the ones who knew me well enough, knew that I was serious, and possibly even correct, but didn't necessarily think that they could tag along for the ride. Some of them probably just thought that I was crazy. Some of them probably correctly still think that I'm crazy. But it was really interesting to me just what fraction of everything that came to pass afterwards was under discussion. [02:45:00] I knew as early as, say, 1986, that by the noughties, by the 2010s, we'd be able to have pocket computers with high-resolution screens, vastly more capable than any supercomputer that was around at the time. It was a very straightforward technological extrapolation, and I had no idea what that meant. I certainly couldn't have predicted, say, Facebook or Twitter, or even Seamless, or GrubHub. You know, it depends on what part of the world you live in. I understand everyone you know uses DoorDash; in London, it's still Deliveroo.

Ben Goldhaber: Yes. We've got to be careful about leaking information about where we're calling from.

Perry Metzger: Well, I'm calling from, seriously, a secure bunker in the Sierra Nevada mountains, in Seamless country.

Ben Goldhaber: Fair enough. Excellent.

Perry Metzger: [02:46:00] You know, I live in a cave with a lot of 50 caliber ammunition, but no gun with which to fire it.

Divia Eden: Yeah, I do want to be mindful of time here. I don't mind continuing to talk past this point, but I think something that would feel good to me, if you don't mind, is to try to summarize some stuff that I think I better understand about your worldview from the last few hours. Does that sound okay?

Perry Metzger: Sure. Sounds good to me.

Divia Eden: Okay.
So I think there are a few pieces that stand out to me. One, which I sort of said at the beginning, but I think I'll say even more strongly now, is that basically you think a technical grounding in thinking about how exactly these sorts of things will happen is underrated, both on the object level, where you tend to have a lot of respect for people who are doing that sort of work, and on the meta level, [02:47:00] maybe that's a bad way of putting it, but looking at the reference class of technological advances and how they tend to go, and which types of processes tend to produce them and which types don't. There's a type of technical groundedness that I see you applying both on the object level and in terms of evaluating where you think progress is likely to come from. Does that seem right?

Perry Metzger: That's probably at least a big chunk of my thinking around certain topics.

Divia Eden: Okay. And then I think there's another piece, and I'm sure nothing is truly distinct, but that I would separate out, and I don't know if this is fully fair, but I want to sort of tie together both your ancap intuitions and maybe your more stoic intuitions into some sort of...

Perry Metzger: We didn't even talk about stoicism so much.

Divia Eden: Well, but I think it comes through, because I think a lot of where you're coming from is this sort of "the only way out is through" type of stuff. [02:48:00] And you've mentioned you don't like things to be overly negative in certain ways. And my guess is that you think the way people make progress is by allowing decentralized activity, sort of unlocking human ingenuity, and not trying to put any genies back into the box, or not investing particularly hard in trying to slow down any genies that might be trying to come out of the box, but more trying to tap into, okay, how can we do a decentralized version of defense in depth against the genie, by letting everyone tinker in their garage? Something like that.

Perry Metzger: I think that there is no way to have an effective centralized defense against some of these things. I think that's our experience from a wide variety of domains. There are all sorts of immune systems we all survive with, right? Our immune systems...

Divia Eden: Yeah. Even just the phrase "immune system", I mean, the immune system is super decentralized, [02:49:00] right?

Perry Metzger: Yeah. I mean, all of these systems work... how to put this properly. You're probably pointing out, and the interesting question is, whether this is a flaw or habit in my thinking that might not apply here, or whether it's a pattern that I've identified and that I'm correct about. There's no way to know particularly easily. But there is sort of a common theme in a lot of my thinking, and we barely discussed my politics at all, and that would probably require another, well, maybe we're on hour 17 at this point, like another three hours. You know, we might as well press through. But anyway, you're right that I have a considerable suspicion of the centralized view of this. Yes.
And also for reasons of danger, right? Because there's going to be a tremendous temptation if there is a fully centralized effort. I mean, people keep talking about, well, we need a Manhattan Project to work [02:50:00] on AI and AI alignment, and I am very scared of what happens when that happens. I both think that we cannot do that successfully, and that if one country starts doing it, then multiple countries, many of which may have very hostile views of each other, may start doing it, and we may end up with that situation.

Divia Eden: An arms race. This came up also, right, where centralization can lead to international escalation.

Perry Metzger: That, and we can also end up in a situation in which a small group of people may get access to technologies that they cannot be trusted with. I don't know that any small group of people should be trusted with exclusive control of any of this stuff. I don't know that anyone has the moral fiber for it. My own experience with being involved, in a very small way, with being an international bureaucrat for a while, you know, I was on a predecessor of ICANN, as I said, the IAHC, and I got a very, very vivid taste in a very brief period [02:51:00] of time of how difficult it is even for well-intentioned people to function well inside a politicized process. And I don't know who I would trust with sole control over this technology. I would feel much, much more comfortable, I think, in a situation where lots of people are working on it and coming up with good ideas and trading those ideas and working on the construction of, effectively, several kinds of immune systems that we will need at several levels of our civilization if we're going to survive. By the way, I again don't want to dismiss the idea that we are at a very dangerous part of the development of our civilization. We certainly are. And there are people who will bring up things like, well, does the Fermi paradox mean that everyone else who's built AI has failed, and that their world is a burning cinder, or maybe a very, very cold cinder? Or are we just the first, and the reason that we have the [02:52:00] Fermi paradox is that any technological civilization so rapidly colonizes its entire light cone that no other civilization appears in its light cone? Which is, by the way, the view I have. I happen to think that the Drake equation is garbage. Not Frank Drake; Frank Drake was a perfectly reasonable and smart guy. But I think that the flaw in his idea is that the Drake equation assumes statistical independence of all of its variables, and there is no reason to believe that most of them have any statistical independence at all.

Divia Eden: So you think it could easily be that there aren't very many civilizations out there.

Perry Metzger: Well, okay. On our planet, will any other technological civilization, will another intelligent species, evolve while we are here? And the answer is a strong no.

Divia Eden: I guess we could uplift one.

Perry Metzger: You might uplift them, but it's not going to evolve by accident, because we have created circumstances, without even intending to, in which that becomes [02:53:00] impossible.
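For reference, the standard textbook form of the Drake equation Perry is criticizing is

\[
  N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L ,
\]

where $R_{*}$ is the rate of star formation, $f_{p}$ the fraction of stars with planets, $n_{e}$ the number of potentially habitable planets per such star, $f_{l}$, $f_{i}$, and $f_{c}$ the fractions of those that go on to develop life, intelligence, and detectable technology, and $L$ the lifetime of a detectable civilization. The notation here is the usual convention rather than anything quoted on the podcast; Perry's point is that multiplying point estimates of these factors treats them as statistically independent, which he argues they are not.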
Ben Goldhaber: And something like the grabby aliens model, like Robin's?

Perry Metzger: So I came up with this, and even published it, long before Robin did, and I'm not going to accuse Robin of plagiarism; maybe it's convergent.

Ben Goldhaber: But he must have read your stuff, you're saying.

Perry Metzger: I'm sure I talked about this stuff on the Extropians list a long time ago. We can look back; I mean, it's still there. And I blogged this. That one I haven't deleted, right? I blogged this stuff on my old blog, which is still up.

Ben Goldhaber: Okay, so we can go reference it.

Perry Metzger: Which I intend to uplift into Substack soon.

Ben Goldhaber: Got it, you'll import the old archives.

Perry Metzger: But the argument was, and I wrote this up, I think, in the very early two-thousands, and I'd had the idea for a long time, I think I discussed it on the Extropians list: once a technological civilization appears, within a very brief time it gets to the point where it has nanotechnology, AI, and von Neumann machines. And inevitably it's going to send them out; even if only a small fraction of that [02:54:00] civilization wants to, that small fraction will start sending out von Neumann probes, and they will quickly colonize the entire light cone. And at that point, for the same reason that no other technological civilization is going to appear on Earth so long as we are here, no other technological civilization will appear in our light cone, because we will be sending out von Neumann probes that will turn all of the other stars into Dyson swarms, or star-lift them.

Divia Eden: So you think it'll send out the... I don't know, I would hope that in the versions of the future that I want, our probes would be somewhat respectful of existing civilizations.

Perry Metzger: No, no, I'm not saying that they'll kill them. I'm saying that if a civilization is already out there, then it already has this technology and it's expanding out. And if it's not already out there, when we arrive in a solar system and start...

Divia Eden: I see what you're saying. We'll just, in almost all cases, get there before there's any sort of...

Ben Goldhaber: There won't be that right [02:55:00] moment where they encounter us, like, at our current stage of civilization.

Perry Metzger: Yeah. The probability is incredibly low. What is the window for a civilization, if there's any interesting animal there at all, between writing and having nanotechnology, AI, and spacecraft? It's quite short in the cosmic sense. 5,000 years, that's nothing, right? You're not going to encounter a civilization in that state, with very high probability. Out of 13-point-something billion years, you're not going to hit that very frequently. So what's going to happen is that we'll be sending out these probes, and they will lift most of the gas out of the local stars for use trillions of years into the future, so that late-stage capitalism will be that much further off.
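To put rough numbers on that window, as a back-of-the-envelope illustration rather than a figure quoted in the conversation: if a civilization spends about 5,000 years between developing writing and having nanotechnology, AI, and spacecraft, then against a roughly 13.8-billion-year-old universe that phase occupies only about

\[
  \frac{5\,000 \ \text{years}}{13.8 \times 10^{9} \ \text{years}} \approx 3.6 \times 10^{-7}
\]

of cosmic history, on the order of one part in a few million, which is why catching another civilization exactly in that state would be so unlikely.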
Perry Metzger: You know, I'm a very [02:56:00] big believer that, as I've said, late-stage capitalism is when you are harvesting energy from black holes with the Penrose process, because there's nothing else left.

Divia Eden: I guess a different thing I would say, maybe a sort of thread that shows up in your thinking, and this is sort of an easy thing to say, is that your worldview includes a lot of broad strokes that you think are in fact pretty predictable in advance, and then a bunch of details...

Perry Metzger: That you think aren't? I think that the details are very hard. The things that we can predict are that in the future we will not violate the laws of physics, although we might not perfectly understand them at this point.

Divia Eden: Do you think we mostly have them right?

Perry Metzger: Yeah. There are lots of holes, sure.

Divia Eden: But you don't think there are a ton of unknown unknowns?

Perry Metzger: There are a ton of unknown unknowns, but the odds that one of those unknown unknowns involves things like superluminal travel are very low. It might turn out, for [02:57:00] example, that there's a fifth force. It might turn out that there are interesting features of very small length scales that we don't understand. There are all sorts of things that might turn out, but as with the transition from Newtonian mechanics to...

Divia Eden: You think it adds up to normality, that what we've got now will turn out to be a good approximation in most domains.

Perry Metzger: Mm-hmm. And there are certain things I really don't expect. For example, I don't expect Noether's theorem to turn out to be wrong in some interesting way.

Divia Eden: I think I have encountered that and do not remember what it is.

Perry Metzger: Noether's theorem is one of the most important ideas in all of physics. It says that for every symmetry in our universe, there is a conservation law. Now, what does this mean? It means that if you've got an origin and axes for [02:58:00] your measurement of space, you know, we're in a three-dimensional space, then the fact that you can put that origin anywhere you want, that you can translate it anywhere, is exactly equivalent to saying that we have conservation of momentum. Those are the same, in a very, very deep way. The fact that you can rotate your coordinate axes and the laws of physics remain the same is, in a very, very deep way, the same as the conservation of angular momentum. And the fact that you do not have a unique origin for time, that you can move what you call t-zero anywhere you want and the laws of physics remain the same, implies the conservation of mass-energy.

Divia Eden: Okay, this is very good. I'm going to try to not get nerd-sniped by going into that at this time.

Perry Metzger: But anyway, this was something figured out by Emmy Noether, one of the greatest mathematicians and physicists of all time.

Divia Eden: Yes, I have heard of her.

Perry Metzger: Really, really brilliant. [02:59:00] And there are all sorts of constraints on the way that our universe can work that we have figured out in recent centuries.
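For listeners who want the formal statement behind that description, here is a compact textbook form of Noether's theorem; the notation is the standard one rather than anything used in the conversation. If a Lagrangian $L(q, \dot{q}, t)$ is unchanged by an infinitesimal transformation $q \to q + \epsilon\,\delta q$, then the quantity

\[
  Q = \frac{\partial L}{\partial \dot{q}}\,\delta q
\]

is conserved, $dQ/dt = 0$. Invariance under spatial translation gives conservation of momentum, invariance under rotation gives conservation of angular momentum, and, when $L$ has no explicit time dependence, invariance under time translation gives conservation of the energy $H = \dot{q}\,\partial L/\partial \dot{q} - L$.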
Perry Metzger: And there's a lot of unknowns, but we're not likely to escape from things like the conservation of momentum.

Divia Eden: Which is why you have predictions like, you know, sending things out at 0.95 light speed, not faster.

Perry Metzger: Correct. Yeah. I mean, it's possible that some of the rest of this stuff is a thing, but it doesn't seem particularly high probability to me.

Divia Eden: Are there any things that you especially want to mention before we're done that we haven't gotten to?

Perry Metzger: I intend to be resurrecting my blog at some point in the next few weeks.

Divia Eden: Okay. And will you be telling us about it on Twitter? You're also there.

Perry Metzger: I will. I'm on Twitter too much of the time. I say too much [03:00:00] on Twitter.

Divia Eden: Can you tell people your handle? They might not know it.

Perry Metzger: It's Perry Metzger, p-e-r-r-y-m-e-t-z-g-e-r. I think... I should check that. No, it is.

Divia Eden: We'll share a link to it as well. Yeah, we'll put the link in the show notes. You'll have a link in the show notes. And maybe you can put a link in the show notes to your blog.

Perry Metzger: Yeah, to my blog, because I've decided to resurrect my blog.

Ben Goldhaber: Awesome.

Perry Metzger: It's called Diminished Capacity, because no one should believe my ramblings. It's not clear I'm mentally competent. Anyway, this has been great fun, and it's a shame we didn't get a chance to say very much before time ran out.

Ben Goldhaber: Well, yeah. But that's why we get to bring you back for episodes two, three, and four as well.

Perry Metzger: Oh gosh.

Ben Goldhaber: Maybe if we add them all up, we'll be beating the Lex Fridman podcast record.

Perry Metzger: Yes, yes. By the way, I still find it hard to believe that he has time in his life for things like potty breaks.

Ben Goldhaber: He's just filming the podcasts in there too, recording them there.

Perry Metzger: It is kind of remarkable. [03:01:00] Anyway, this has been great fun.

Divia Eden: It's been excellent. Thank you.

Perry Metzger: Maybe at some point we can talk about politics or economics or things like that.

Divia Eden: Yeah, totally. Maybe we can bring you on to argue with someone else about something too.

Perry Metzger: Oh, that could be interesting. Well, this might not be obvious, but I don't mind arguing too much. I conceal it.

Divia Eden: I know. We'd have to find someone else who didn't mind arguing.

Perry Metzger: I conceal it very carefully, but I do have a small taste for that sort of thing. Anyway, it's been great seeing you guys. All right. Thanks.

Ben Goldhaber: Thank you very much. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit mutualunderstanding.substack.com
