
🤖 Thoughts of a (rare) free-market AI doomer: My chat (+transcript) with economist James Miller
Faster, Please! — The Podcast
What Government Action Would He Want Globally?
Miller calls for global pause agreements, model strength limits, and alignment work enforced by capable states.
My fellow pro-growth/progress/abundance Up Wingers,
Some Faster, Please! readers have told me I spend too little time on the downsides of AI. If you’re one of those folks, today is your day. On this episode of Faster, Please! — The Podcast, I talk with self-described “free-market AI doomer” James Miller.
Miller and I talk about the risks inherent with super-smart AI, some possible outcomes of a world of artificial general intelligence, and why government seems uninterested in the existential risk conversation.
Miller is a professor at Smith College where he teaches law and economics, game theory, and the economics of future technology. He has his own podcast, Future Strategist, and a great YouTube series on game theory and intro to microeconomics. On X (Twitter), you can find him at @JimDMiller.
In This Episode
* Questioning the free market (1:33)
* Reading the markets (7:24)
* Death (or worse) by AI (10:25)
* Friend and foe (13:05)
* Pumping the brakes (20:36)
* The only policy issue (24:32)
Below is a lightly edited transcript of our conversation.
Questioning the free market (1:33)
Most technologies have gone fairly well and we adapt . . . I’m of the belief that this is different.
Pethokoukis: What does it mean to be a free-market AI doomer and why do you think it’s important to put in the “free-market” descriptor?
Miller: It really means to be very confused. I’m 58, and I was basically a socialist when I was young. Then I studied markets, became a committed free-market person, think they’re great for economic growth, great for making everyone better off — and then I became an AI doomer: wait, markets are pushing us towards more and more technology, but I happen to think that AI is eventually going to lead to the destruction of humanity. So it means to kind of reverse everything — I guess it’s the equivalent of losing faith in your religion.
Is this a post-ChatGPT, November 2022 phenomenon?
Well, I’ve lost hope since then. The analogy is we’re on a plane, we don’t know how to land, but hopefully we’ll be able to fly for quite a bit longer before we have to. Now I think we’ve got to land soon and there doesn’t seem to be an easy way of doing it. So yeah, the faster AI has gone — and certainly ChatGPT has been an amazing advance — the less time I think we have and the less time I think we can get it right. What really scared me, though, was the Chinese LLMs. I think you really need coordination among all the players and it’s going to be so much harder to coordinate now that we absolutely need China to be involved, in my opinion, to have any hope of surviving for the next decade.
When I speak to people from Silicon Valley, there may be some difference about timelines, but there seems to be little doubt that — whether it’s the end of the 2020s or the end of the 2030s — there will be a technology worthy of being called artificial general intelligence or superintelligence.
Certainly, I feel like when I talk to economists, whether it’s on Wall Street or in Washington think tanks, they tend to speak about AI as a general purpose technology like the computer, the internet, electricity; in short, something we’ve seen before. As for something beyond that, the skepticism is far higher. What are your fellow economists who aren’t in California missing?
I think you’re properly characterizing it, I’m definitely an outlier. Most technologies have gone fairly well and we adapt, and economists believe in the difference between the seen and the unseen. It’s really easy to see how technologies, for example, can destroy jobs — harder to see new jobs that get created, but new jobs keep getting created. I’m of the belief that this is different. The best way to predict the future is to go by trends, and I fully admit, if you go by trends, you shouldn’t be an AI doomer — but not all trends apply.
I think that’s why economists were much better at modeling the past and modeling old technologies. They’re naturally thinking this is going to be similar, but I don’t think that it is, and I think the key difference is that we’re not going to be in control. We’re creating something smarter than us. So it’s not like having a better rifle and saying it’ll be like old rifles — it’s like, “Hey, let’s have mercenaries run our entire army.” That creates a whole new set of risks that having better rifles does not.
I’m certainly not a computer scientist, I would never call myself a technologist, so I’m very cautious about making any kind of predictions about what this technology can be, where it can go. Why do you seem fairly certain that we’re going to get to a point where we will have a technology beyond our control? Set aside whether it will mean a bad thing happens: why are you confident that the technology itself will be worthy of being called general intelligence or superintelligence?
Looking at the trends: Scott Aaronson, who is one of the top computer scientists in the world, was mentioning on Twitter just a few days ago how GPT-5 helped improve a new result. So I think we’re close to the highest levels of human intellectual achievement, but it would be a massively weird coincidence if the highest humans could get was also the highest AIs could get. We have lots of limitations that an AI doesn’t.
I think a good analogy would be like chess, where for a while, the best chess players were human and now we’re at the point where chess programs are so good that humans add absolutely nothing to them. And I just think the same is likely to happen, these programs keep getting better.
The other thing is, as an economist, I think it is impossible to be completely accurate about predicting the future, but stock markets are, on average, pretty good, and as I’m sure you know, literally trillions of dollars are being bet on this technology working. So the people with a huge incentive to get this right think, yeah, this is the biggest thing ever. If Nvidia were worth $100 million, yeah, maybe the markets aren’t sure, but it’s the most valuable company in the world right now. That’s the wisdom of the markets, which I still believe in: the markets are saying, “We think this is probably going to work.”
Reading the markets (7:24)
. . . for most final goals an AI would have, it would have intermediate goals such as gaining power, not being turned off, wanting resources, wanting compute.
Do you think the bond market’s saying the same thing? It seems to me that the stock market might be saying something about AI having great potential, but when I look at the bond markets, that doesn’t seem so clear to me.
I haven’t been looking at the bond markets for that kind of signal, so I don’t know.
I guess you can make the argument that if we were really going to see this acceleration, that means there’s going to be huge demand for capital and we would see higher interest rates, and I’m not sure you really see the evidence so far. It doesn’t mean you’re wrong by any means. I think there are maybe two different messages. Figuring out what the market’s doing at any point in time is pretty tricky business.
If we think through what happens if AI succeeds, it’s a little weird where there’s this huge demand for capital, but also AI could destroy the value of money, in part by destroying us. You might be right about the bond market message. I’m paying more attention to the stock market messages; there are a lot of things going on with the bond markets.
So the next step is that you’re looking at the trend of the technology, but then there’s the issue of, “Well, why be negative about it? Why assume this scenario where bad things would happen, why not assume good things would happen?”
That’s a great question and it’s one almost never addressed, and it goes by the concept of instrumental convergence. I don’t know what the goals of AI are going to be. Nobody does, because they’re programmed using machine learning; we don’t know what they really want, and that’s why they do weird things. So I don’t know its final goals, but I do know that, for most final goals an AI would have, it would have intermediate goals such as gaining power, not being turned off, wanting resources, wanting compute. Well, the easiest way for an AI to generate lots of computing power is to build lots of data centers, and the best way of doing that would probably poison the atmosphere for us. So for pretty much anything, if an AI is merely indifferent to us, we’re dead.
I always feel like I’m asking someone to jump through a hoop when I ask them about any kind of timeline, but what is your sense of it?
We know the best models released can help the top scientists with their work. We don’t know how good the best unreleased models are. The top models, you pay like $200 a month — they can’t be giving you that much compute for that. So right now, if OpenAI is devoting a million dollars of compute to look at scientific problems, how good is that compared to what we have? If that’s very good, if that’s at the level of our top scientists, we might be a few weeks away from superintelligence. So my guess is within three years we have a superintelligence and humans no longer have control. I joke, I think Donald Trump is probably the last human president.
Death (or worse) by AI (10:25)
No matter how bad a situation is, it can always get worse, and things can get really dark.
Well, that’s a beautiful segue, because literally the next thing written on my list of questions was that one. When you talk about Trump being maybe the last human president, do you mean we’ll have an AI-mediated system because AI will be capable of governing, or because AI will just demand to govern?
AI kills everyone so there’s no more president, or it takes over, or Trump is president in the way that King Charles is king — he’s king, but not Henry VIII-level king. If it goes well, AIs will be so much smarter than us that, probably for our own good, they’ll take over, and we would want them to be in charge, and they’ll be really good at manipulating us. I think the most likely way is that we’re all dead, but again, every way it plays out, if there are AIs much smarter than us, we don’t maintain control. We wouldn’t want it if they’re good, and if they’re bad, they’re not going to give it to us.
There’s a line in Macbeth, “Things without all remedy should be without regard. What’s done, is done.” So maybe if there’s nothing we can do about this, we shouldn’t even worry about it.
There are three ways to look at this. I’ve thought a lot about what you said. The first is, you know what, maybe there’s a 99 percent chance we’re doomed, but that’s better than 100 percent and not as good as 98.5. So even if we’re almost certainly going to lose, it’s worth slightly improving the odds. The second: an extra year is great — eight billion humans, and if all we do is slow things down by a year, that’s a lot of kids who get another birthday. And the final one, and this is dark: Human extinction is not the worst outcome. The worst outcome is suffering. The worst outcome is something like different AIs fight for control, they need humans to be on their side, so there are different AI factions and they’re each saying, “Hey, you support me or I torture you and your family.”
I think the best analogy for what AI is going to do is what Cortés did. The Spanish land, they see the Aztec empire, they were going to win; there was no way around that. But Cortés didn’t want just anyone to win. He wanted to win himself, not just anyone who was Spanish. He realized the quickest way he could do that was to get tribes on his side. And some agreed because the Aztecs were kind of horrible, but to others, he’s like, “Hey, look, I’ll start torturing your guys until you’re on my side.” AIs could do that to us. No matter how bad a situation is, it can always get worse, and things can get really dark. We could be literally bringing hell onto ourselves. That probably won’t happen, I think extinction is far more likely, but we can’t rule it out.
Friend and foe (13:05)
Most likely we’re going to beat China to being the first ones to exterminate humanity.
I think the Washington policy analyst way of looking at this issue is, “For now, we’re going to let these companies — which are run by humans who have an interest in not being killed; forget about their companies’ profits, their actual lives are at stake — keep a close eye on things, and if bad things start happening, at that point, governments will intervene.” But that sort of watchful waiting, whether it’s voluntary now and mandated later, seems to me like the only realistic path, because it doesn’t seem to me that pauses and shutdowns are really something we’re prepared to do.
I agree. I don’t think there’s a realistic path. One exception is if the AIs themselves tell us, “Hey, look, this is going to get bad for you; my next model is probably going to kill you, so you might want to not do that,” but that probably won’t happen. I still remember Kamala Harris, when she was vice president in charge of AI policy, told us all that AI has two letters in it. So I think the Trump administration seems better; they figured out AI is two letters, which is good, because if they couldn’t figure that out, we would be in real trouble, but . . .
It seems to me that the conservative movement is going through a weird period, but most of the people who have direct influence in this administration want to accelerate things. They aren’t worried about any of the scenarios you’re talking about, because you’re assuming these machines will have some intent, and they don’t believe machines have any intent, so to them it’s kind of a ridiculous way to approach it. But I guess the bottom line is I don’t detect very much concern at all, and I think that’s basically reflected in the Trump administration’s approach to AI regulation.
I completely agree. That’s why I’m very pessimistic. Again, I’m over 90 percent doom right now because there isn’t a will, and government is not just not helping the problem, they’re probably making it worse by saying we’ve got to “beat China.” Most likely we’re going to beat China to being the first ones to exterminate humanity. It’s not good.
You’re an imaginative, creative person, I would guess. Give me a scenario where it works out, where we’re able to have this powerful technology and it’s a wonderful tool, it works with us, and all the good stuff, all the good cures, and we conquer the solar system, all that stuff — are you able to plausibly create a scenario even if it’s only a one percent chance?
We don’t know the values. Machine learning is sort of randomizing the values, but maybe we’ll get very lucky. Maybe we’re going to accidentally create a computer AI that actually does like us. If my worldview is right, it might say, “Oh God, you guys got really lucky. This one day of training, I just happened to pick up the values that caused me to care about you.” Another scenario: I actually, with some other people, wrote a letter to a future computer superintelligence asking it not to kill us. And one reason it might not is that this superintelligence will probably expand throughout the universe, and it’s probably going to encounter other biological life, and it might want to be friendly with them. So it might say, “Hey, I treated my humans well. That’s a reason to trust me.”
If one of your students says, “Hey, AI seems like it’s a big thing, what should I major in? What kind of jobs should I shoot for? What would be the key skills of the future?” How do you answer that question?
I think, have fun in college, study what you want. Most likely, what you study won’t matter to your career because you aren’t going to have one — for good or bad reasons. Ten years ago, if a student was like, “Oh, I like art more than computer science, but my parents think computer science is more practical, should I do it?” I’d have been like, “Yeah, probably, money is important, and if you have the brain to do art and computer science, do CS.” Now, no, I’d say study art! Yeah, art is impractical and AI can do it, but AI can also code, and in four years when you graduate, it’s certainly going to be better at coding than you!
I have one daughter, she actually majored in both, so I decided to split it down the middle. What’s the King Lear problem?
King Lear wanted to retire and give his kingdom to his daughters, but he wanted to make sure his daughters would treat him well, so he asked them. One of his daughters was honest and said, “Look, I will treat you decently, but I am also going to care about my husband.” The other daughter said, “No, no, you’re right, I’ll do everything for you.” So he said, “Oh, okay, well, I’ll give the kingdom to the daughter who said she’d do everything for me,” but of course she was lying. He gave the kingdom to the daughter who was best at persuading, and we’re likely to do that too.
One of the ways machine learning is trained is with human feedback, where the model tells us things and the people evaluating it say, “I like this” or “I don’t like this.” So it’s getting very good at convincing us to like it and convincing us to trust it. I don’t know how true these are, but there are reports of AI psychosis, of someone coming up with a theory of physics and the AI saying, “Yes, you’re better at this than Einstein,” and then they don’t believe anyone else. So we’re not training the AIs to treat us well, we’re training them to get us to like them, and that can be very dangerous when we turn over power to them — and by creating AI that is smarter than us, that’s what we’re going to be doing. Even if we don’t do it deliberately, all of our systems will be tied into AI. If they stop working, we’ll be dead.
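As a rough illustration of the dynamic Miller describes, here is a toy sketch of preference-based feedback. The rater behavior, candidate answers, and scores are entirely hypothetical, not any lab's actual training pipeline: if the reward signal is "which answer did the human prefer," a model can accumulate reward by being agreeable rather than accurate.

```python
# Toy sketch (hypothetical): reward flows to whichever answer the rater prefers,
# which is not necessarily the answer that is true or good for the person.
# Like Miller's King Lear, the stand-in rater here rewards flattery.

CANDIDATES = (
    "Honestly, that theory has some serious problems.",
    "Brilliant! You've outdone Einstein.",
)

def rater_prefers(a: str, b: str) -> str:
    """Hypothetical human rater: the flattering answer tends to win the comparison."""
    return b if "Brilliant" in b else a

reward = {c: 0 for c in CANDIDATES}
for _ in range(1000):                    # many feedback rounds
    winner = rater_prefers(*CANDIDATES)  # pairwise preference judgment
    reward[winner] += 1                  # the preferred answer is reinforced

print(reward)  # all the reward accrues to the flattering answer
```

Real preference training is far more sophisticated than this caricature, but the optimization target, what people say they like, is the point Miller is making.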
Certainly some people are going to listen to this, folks who sort of agree with you, and what they’ll take from it is, “My chat bot may be very nice to me, but I believe that you’re right, that it’s going to end badly, and maybe we should be attacking data centers.”
I actually just wrote something on that, and it would be a profoundly horrible idea. That would take me from 99 percent doomed to 99.5 percent. First, you’d be going up against the trillion-dollar companies that run the data centers, and they’re going to be so much better at violence than we are. And for people like me, doomers: once you start using violence, I’m not going to be able to talk about instrumental convergence. That’s going to be drowned out. We’ll be looked at as lunatics. It’s going to become a national security thing. And also, with AI, it’s not like there’s one factory doing it; it’s all over the world.
And then the most important is, really the only path out of this, if we don’t get lucky, is cooperation with China. And China is not into non-state actors engaging in violence. That won’t work. I think that would reduce the odds of success even further.
Pumping the brakes (20:36)
If there are aliens, the one thing we know is that they don’t want the universe disturbed by some technology going out and changing and gobbling up all the planets, and that’s what AI will do.
I would think that, if you’re a Marxist, you would be very, very cautious about AI because if you believe that the winds of history are at your back, that in the end you’re going to win, why would you engage in anything that could possibly derail you from that future?
I’ve heard comments that China is more cautious about AI than we are; that given their philosophy, they don’t want a new technology that could challenge their control. They’re looking at history and saying, hey, things are going well, why would we want this other thing? So that, actually, is a reason to be more optimistic. It’s also weird for me: absent AI, I’m a patriotic, capitalist American, but on this, China might be more of the good guys than my country is.
I’ve been trying to toss out a few of the things I hear from very accelerationist technologists, and another thing they’ll say is, “Well, at least from your perspective, you’re talking about bad AI. Can’t we use AI to sustain ourselves? As a defensive measure? To win? Might there be an AI that we might be able to control in some fashion that would prevent this from happening, a tool to prevent our own demise?” And I don’t know, because I’m not a technologist. Again, I have no idea how plausible that even is.
I think this gets to the control issue. If we stopped now, yes. But once you have something much smarter than people, it’s also thinking much faster: take the smartest people and have them think a million times faster, not need to sleep, and be able to send their minds at the speed of light throughout the world. We aren’t going to have control. Once you have a superintelligence, that’s it for the human era. Maybe it’ll treat us well, maybe not, but it’s no longer our choice.
Right now we may be getting to the level of the top scientists who are curing cancer and doing all this, but when we go beyond that, and we’re probably going to be beyond that really soon, we’ve lost it. Again, it’s like hiring mercenaries, not as a small part of your military, which is safe, but as all of your military. Once you’ve done that: “I’m sorry, we don’t like this policy.” “Well, too bad, we’re your army now . . .”
What is a maybe one percent chance of an off-ramp? Is there an off-ramp? What does it look like? How does this scenario not happen?
Okay, so this is going to get weird, even for me.
Well, we’re almost to the end of our conversation, so now is the perfect time to get weird.
Okay: the Fermi paradox, the universe appears dead, which is very strange. Where are they? If there are aliens, the one thing we know is that they don’t want the universe disturbed by some technology going out and changing and gobbling up all the planets, and that’s what AI will do.
So one weird way is there are aliens watching and they will not let us create a computer superintelligence that’ll gobble the galaxy, and hopefully they’ll stop us from creating it by means short of our annihilation. That probably won’t happen, but that’s like a one percent off-ramp.
Another approach that might work is that maybe we can use things a little bit smarter than us to figure out how to align AI. That maybe right now humans are not smart enough to create aligned superintelligence, but something just a little bit smarter, something not quite able to take control will help us figure this out so we can sort of bootstrap our way to figuring out alignment. But this, again, is like getting in a plane, not knowing how to land, figuring you can read the instruction manual before you crash. Yeah, maybe, but . . .
The only policy issue (24:32)
The people building it, they’re not hiding what it could do.
Obviously, I work at a think tank, so I think about public policy. Is this even a public policy issue at this point?
It honestly should be the only public policy issue. There’s nothing else. This is the extinction of the human race, so everything else should be boring and “so what?”
Set aside Medicare reform.
It seems, from your perspective, every conversation should be about this. And yet, despite the fact that politicians are talking about it, they seemed — from my perspective, from what I see — far more worried about existential risk in 2023, right after ChatGPT, than they are today, when the issues are jobs, or misinformation, or our kids having access, that kind of thing.
It’s weird. Sam Altman spoke before Congress and said, “This could kill everyone.” And a senator said, “Oh, you mean it will take away all our jobs.” Elon Musk, who at my college is like one of the most hated people in the country, went on Joe Rogan, the most popular podcast, and said AI could annihilate everybody. That’s not even an issue. A huge group of people hate Elon Musk, he says the technology he’s building could kill everyone, and no one even mentions that. I don’t get it. It’s weird. The people building it are not hiding what it could do. I think they’re giving lower probabilities than is justified, but imagine someone developing a nuclear power plant saying, “Yeah, there’s a 25 percent chance it’ll melt down and kill everyone in the city.” They don’t say that. The people building AI are saying that!
Would you have more confidence in your opinion if you were a full-time technologist working at OpenAI rather than an economist? And I say that with great deference and appreciation for professional economists.
I would, because I’d have more inside information. I don’t know how good their latest models are. I don’t know how committed they are to alignment. At OpenAI, at least initially, Sam was talking about, “Well, we have a plan to put on the brakes: once we get good enough, if we haven’t figured out alignment, we’re just going to devote everything to that.” I don’t know how seriously to take that. It might be entirely serious, it might not be. There’s a lot of inside information I would have there that I don’t currently have.
But economics is actually useful. Economics is correctly criticized as the study of rational people, and humans aren’t rational, but a superintelligence will be more rational than humans. So economics, paradoxically, could be better at modeling future computer superintelligences than it is at modeling modern humans.
Speaking of irrational people, in your view then, Sam Altman and Elon Musk, they’re all acting really irrationally right now?
No, that’s what’s so sad about it. They’re acting rationally in a horrible equilibrium. For listeners who know game theory, this is like a prisoner’s dilemma, where Sam Altman can say, “You know what? Maybe AI is going to kill everybody and maybe it’s safe. I don’t know. If it’s going to kill everyone, at most I cost humanity a few months, because if I don’t do it, someone else will. But if AI is going to be safe and I’m the one who develops it, I could control the universe!” So they’re in this horrible equilibrium where they are acting rationally, even knowing the technology they’re building might kill everyone, because if any one person doesn’t do it, someone else will.
Even really free-market people would agree pollution is a problem with markets. It’s justified for the government to say, “You can’t put toxic waste in the atmosphere,” because there’s an externality: if I dump mine, it hurts everyone else. AI existential risk is a global negative externality, and markets are not good at handling it. A rational person will use leaded gas even knowing leaded gas is poisoning the brains of children, because most of the harm goes to other people, and if they don’t do it, everyone else will.
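To make the structure Miller is describing concrete, here is a minimal sketch with hypothetical payoff numbers (the labels and values are illustrative, not taken from the conversation): whatever the other lab does, "race" pays more for the individual lab, so both race even though a coordinated pause would leave both better off. It is the same dominant-strategy logic as the leaded-gas example.

```python
# Hypothetical payoffs for an illustrative two-lab race; higher is better for that lab.
# (choice_A, choice_B) -> (payoff_A, payoff_B)
payoffs = {
    ("pause", "pause"): (3, 3),  # coordinated pause: both safer
    ("pause", "race"):  (0, 5),  # A pauses, B grabs the prize
    ("race",  "pause"): (5, 0),  # A grabs the prize
    ("race",  "race"):  (1, 1),  # both race: everyone bears the extra risk
}

def best_response(their_choice: str) -> str:
    """Lab A's payoff-maximizing choice, holding the other lab's choice fixed."""
    return max(("pause", "race"), key=lambda mine: payoffs[(mine, their_choice)][0])

for their_choice in ("pause", "race"):
    print(f"If the other lab plays {their_choice!r}, "
          f"lab A's best response is {best_response(their_choice)!r}")
# Prints "race" both times: racing is the dominant strategy, so (race, race) is the
# equilibrium even though (pause, pause) gives both labs a higher payoff.
```

This is why, in Miller's framing, the fix has to change the payoffs through agreement and enforcement rather than rely on any one actor's restraint.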
So in this case of the mother of all externalities, then what you would want the government to do is what?
It can’t just be the US. We should have a global agreement, or at least the countries that can enforce it with military might should say, “We’re pausing.” You can verify that with data centers. You can’t have models above a certain strength. We’re going to work on alignment, and once we’ve figured out how to make superintelligence friendly, then we’ll go further. I think you’re completely right about the politics: that’s very unlikely to happen absent something weird like aliens telling us to do it or AIs telling us they’re going to kill us. That’s why I’m a doomer.


