
🤖 AI risks and rewards: My chat (+transcript) with AI researcher Miles Brundage
Faster, Please! — The Podcast
Intro
Jim introduces the episode and guest Miles Brundage, framing the discussion on AI's transformative potential and risks.
My fellow pro-growth/progress/abundance Up Wingers,
Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small.
Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI’s potential while mitigating harms. We discuss the evolving expectations for AI development and how to grapple with the technology’s most daunting challenges.
Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack.
In This Episode
* Setting expectations (1:18)
* Maximizing the benefits (7:21)
* Recognizing the risks (13:23)
* Pacing true progress (19:04)
* Considering national security (21:39)
* Grounds for optimism and pessimism (27:15)
Below is a lightly edited transcript of our conversation.
Setting expectations (1:18)
It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.
Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and the CEO of a leading AI company, and when I asked each of them how AI might impact our lives, the economist said, “Well, I could imagine, for instance, a doctor’s productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that’s far better than what’s currently available.” So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, “Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances.” On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology?
Brundage: It’s a good question. I don’t think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.
It’s kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there’s maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, so I can’t really speak to all of the details of substitution, and augmentation, and all the policy variables here. What I will say is that the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely to arrive in well under 10 years — and certainly within 10 years things will change a lot.
It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious. But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we’ve seen before with the Internet and the PC.
It’s hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There’s still debate between the “next few years” crowd versus the “more like 10 years” crowd, but that is a much narrower range than we saw several years ago, when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, “Oh, it’s like maybe 10 years or so, maybe five years for very high levels of capability.” So I think there’s been some compression in that respect. That’s one thing that’s going on.
There’s also a way in which people are starting to think less abstractly and more concretely about the applications of AI, seeing it less as this kind of mysterious thing that might happen suddenly and more as something incremental, something that requires some work to apply in various parts of the economy and that has some friction associated with it.
Both of these aren’t inconsistent; they’re just kind of different vibe shifts that are happening. So getting back to the question of whether this is just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we’ve seen. I think ChatGPT’s adoption going from zero to double-digit percentages of use across many professions in the US, in a matter of a high number of months or a low number of years, is quite stark.
Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?
No, I wouldn’t be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I’m not sure that there’s a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it’s of the same caliber as electricity, which essentially converts one kind of energy into various kinds of useful economic work. Similarly, AI converts electricity into cognitive work, and I think that’s a huge deal.
Maximizing the benefits (7:21)
There’s also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.
However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as to minimize the potential downsides?
I think we are not. Something I sometimes find frustrating about the way the debate plays out is this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — the idea that there’s an inherent tension between mitigating the risks and maximizing the benefits. There are some tensions, but I don’t think that we are on the Pareto frontier, so to speak, of those issues.
Right now, I think there’s a lot of value being left on the table in terms of fairly low-cost risk mitigations. There’s also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. Because I write a lot about the risks but am also very interested in maximizing the upside, I’ll give just one example: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and hospitals have one or two people on the IT team, and they’re kind of overwhelmed by these hackers, who aren’t even always that sophisticated, but are perhaps more sophisticated than the defenders. That’s a huge problem. It matters for patients’ lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.
And I don’t think there’s that much interest among the Big Tech companies in helping hospitals have a better automated cybersecurity engineer helper — because there aren’t that many hospital administrators. . . I’m not sure if it would meet the technical definition of market failure, but it’s at least a national security failure in that it’s a kind of fragmented market: there’s a water plant here, a hospital administrator there.
I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders more so than the usual customers of cybersecurity companies like Fortune 500 companies.
I’m confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?
I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement. Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they’ve taken. It’s typical for them to say, here’s our safety strategy and here’s some evidence that we’re following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, and assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes, but state them very confidently in a way that could pose risks to users of the technology.
That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years. But as the technology gets more high-stakes, and there’s more cutthroat competition, and there are maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information they share, I think things could kind of backslide, or at the very least not advance as far as I would like, from the perspective of making sure that there’s sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic? Making those decisions, and making informed capital investments, seems to require transparency to some degree.
This is something that is actively being debated in a few contexts. For example, in California there’s a bill, SB-53, that includes transparency requirements along with a few other things. But in general, we’re at a bit of a fork in the road in terms of how certain regulations will be implemented, such as in the EU: Is it going to become an adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those questions, and then there are just “does the law pass or not?” kinds of questions here.
Recognizing the risks (13:23)
. . . I’m sure there’ll be some things that we look back on and say it’s not ideal, but in my opinion, it’s better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing . . .
In my probably overly simplistic way of looking at it, I think of two buckets. In one bucket, you have issues like: Are these things biased? Are they giving misinformation? Are they interacting with young people in a way that’s bad for their mental health? And I feel like we have a lot of rules and a huge legal system for liability that can probably handle those.
Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it’s machines taking over or just being able to give humans the ability to do very bad things in a way we couldn’t before. Within that second bucket, I think, it sort of needs to be flexible. Right now, I’m pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.
I think that’s a reasonable distinction, in the sense that there are risks at different scales. There are some that are these large-scale catastrophic risks, which might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don’t exist, or Claude saying that it fixed your code when it actually didn’t, and the user’s too lazy to notice, and so forth.
So there are these different kinds of risks. I personally don’t make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact much sooner than many people expected. But in any case, I think similar logic applies to both: let’s make sure that there’s transparency, even if we don’t know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.
It seems good that they share what they’re doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, “Well, we did a rigorous test for hallucination or something like that,” that’s actually true. That’s what I would like to see for both what you might call the mundane risks and the more science-fiction risks. But again, I think it’s kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.
I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.
These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens on small businesses. I personally think this is avoidable.
There are going to be mistakes. I don’t want to be misleading about how high-quality policymakers’ understanding of some of these issues is. There will be mistakes, even in a case like California’s, where a kind of blue-ribbon commission of AI experts produced a report over several months, which then directly informed legislation, with a lot of industry back-and-forth and negotiation over the details. I would say SB-53 is probably the high-water mark of fairly stakeholder- and expert-informed legislation. Even there, I’m sure there’ll be some things that we look back on and say were not ideal, but in my opinion, it’s better to do something that is as informed as we can make it, because it does seem like there are these kinds of market failures and incentive problems that will arise if we do nothing, such as companies retrenching and holding back information in ways that make it hard for the field as a whole to tackle these issues.
I’ll just make one more point, which is about adapting to the compliance capability of different companies: How rich are they? How expensive are the models they’re training? That, I think, is a key factor in the legislation I tend to be more sympathetic to. Just to make a contrast, there’s a bill in Colorado that was kind of one-size-fits-all, regulating all kinds of algorithms, and that, I think, is very burdensome to small businesses. Something like SB-53 is better: it says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.
Pacing true progress (19:04)
. . . some people . . . kind of wanted to say, “Well, things are slowing down.” But in my opinion, if you look at more objective measures of progress . . . there’s quite rapid progress happening still.
Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won’t have to answer it, but I just want to understand what you meant by it: “A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI.” What does that mean?
What I was trying to get at — and I guess this is not necessarily just AI safety people; I sometimes kind of try to poke at people in my social network who I’m often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety — is that I think there’s a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches, in a way that kind of suggests: well, we’ve hit a wall, AI is slowing down, this was a flop, who cares?
I’m doing my kind of maybe uncharitable psychoanalysis here. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: “Well, we don’t have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever.” Maybe, maybe not.
I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn’t a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name. And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. To some people, though, it was kind of a flop, and they kind of wanted to say, “Well, things are slowing down.” But in my opinion, if you look at more objective measures of progress like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there’s quite rapid progress happening still.
Considering national security (21:39)
I want to avoid a scenario like the Cuban Missile Crisis or ways in which that could have been much worse than the actual Cuban Missile Crisis happening as a result of AI and AGI.
I’m not sure if you’re familiar with some of the work being done by former Google CEO Eric Schmidt, who’s been doing a lot of work on national security and AI. His work doesn’t use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science fictional: being able to launch incredibly sophisticated cyber-attacks, or to improve itself, or to create some other sorts of capabilities. And from that, I’m like, whether or not you think that’s possible, to me, the odds of that being possible are not zero, and if they’re not zero, some bit of the Pentagon’s bandwidth should be spent thinking about that. I mean, is that sensible?
Yeah, it’s totally sensible. I’m not going to argue with you there. In fact, I’ve done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI and in studying what the scenarios are, including AI and AGI being used to produce “wonder weapons” and super-weapons of some kind.
Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was written in collaboration with some folks there. I won’t spoil all the details, but if you search “Miles Brundage US China,” you’ll see some things that I’ve discussed there. Basically, my perspective is that we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China; Russia is less of a threat on the commercial side, at least — and making sure that we’re fielding national security applications of AI in a responsible way, while also recognizing that there are ways in which things could spiral out of control in a scenario of totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis, or ways in which things could go much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.
If you think, again, that the odds are not zero that a technology which is fast-evolving, and with which we have no previous experience because it’s fast-evolving, could create the kinds of doomsday scenarios that there are new books out about and that people are talking about; and if you also think there’s essentially a zero percent chance that we’re going to stop AI and smash the GPUs; then, as someone who cares about policy, are you just hoping for the best? Or are the kinds of things we’ve already talked about — transparency, testing, maybe that testing becoming mandatory at some point — enough?
It’s hard to say what’s enough, and I agree that . . . I don’t know if I give it zero, maybe if there’s some major pandemic caused by AI and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment and a large-scale deployment of AI is the most likely scenario.
Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There’s kind of some degree of safety and security standards. Maybe we won’t agree on everything, but we should at least be able to agree that you don’t want to lose control of your AI system, you don’t want it to get stolen, you don’t want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.
It also includes, I would say, third-party auditing where there’s kind of third parties checking the claims and making sure that these standards are being followed, and then you need some incentives to actually participate in this regime and follow it. And I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball other than obviously they don’t want to have their AI kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance or is there some other approach? I think that’s the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that’s enough, but I think right now it’s not even really clear what the rough rules of the road are, who’s playing by the rules, and we’re relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That’s harder to say.
Grounds for optimism and pessimism (27:15)
. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I’m more optimistic. . . I would say, in another respect, I’m maybe more pessimistic in that I am seeing value being left on the table.
Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, on net, made the world a better place?
I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a clearer, more optimistic or more pessimistic answer, but I’ll give you two updates in different directions, and I think they’re not totally inconsistent.
I would say that I have gotten more optimistic about the solvability of the problem in the following sense. I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI, almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don’t have AI, the next day you have AGI, and then on the third day you have artificial superintelligence, and so forth.
But we don’t live to see the fourth day.
Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot and there being no room for error. So in that sense, I’m more optimistic.
I would say, in another respect, I’m maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we’re not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires’ personal philanthropy or whatever, that, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.
It’s been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of an efficient policy market and people will take those opportunities. But right now it seems pretty inefficient to me. That’s where my pessimism comes from. It’s not that it’s unsolvable; it’s just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?