Faster, Please! — The Podcast

James Pethokoukis
Sep 12, 2025 • 33min

🛩️ Human aspiration and the legacy of 'To Fly!': My chat (+transcript) with filmmaker Greg MacGillivray

My fellow pro-growth/progress/abundance Up Wingers,

In 1976, America celebrated 200 years of independence, democracy, and progress. Part of that celebration was the release of To Fly!, a short but powerful docudrama on the history of American flight. With To Fly!, Greg MacGillivray and his co-director Jim Freeman created one of the earliest IMAX films, bringing cinematography to new heights.

After a decade of war and great social unrest, To Fly! celebrated the American identity and freedom to innovate. Today on Faster, Please! — The Podcast, I talk with MacGillivray about filming To Fly! and its enduring message of optimism.

MacGillivray has produced and directed films for over 60 years. In that time, his production company has earned two Academy Award nominations, produced five of the Top 10 highest-grossing IMAX films, and has reached over 150 million viewers.

In This Episode

* The thrill of watching To Fly! (1:38)
* An innovative filming process (8:25)
* A “you can do it” movie (19:07)
* Competing views of technology (25:50)

Below is a lightly edited transcript of our conversation.

The thrill of watching To Fly! (1:38)

What Jim and I tried to do is put as many of the involving, experiential tricks into that film as we possibly could. We wrote the film based on all of these moments that we call “IMAX moments.”

Pethokoukis: The film To Fly! premiered at the Smithsonian Air and Space Museum, at the IMAX Theater, in July 1976. Do you happen to know if it was the 4th of July or . . . ?

MacGillivray: No, you know, what they did is they had the opening on the 2nd of July so that it wouldn’t conflict with the gigantic bicentennial on the 4th, but it was all part of the big celebration in Washington at that moment.

I saw the film in the late ’70s at what was then called the Great America Amusement Park in Gurnee, Illinois. I have a very clear memory of this, of going in there, sitting down, wondering why I was sitting and going to watch a movie as opposed to being on a roller coaster or some other ride — I’ve recently, a couple of times, re-watched the film — and I remember the opening segment with the balloonist, which was shot in a very familiar way. I have a very clear memory because when that screen opened up and that balloon took off, my stomach dropped.

It was a film as a thrill ride, and upon rewatching it — I didn’t think this as a 10-year-old or 11-year-old — what it reminded me of was Henry V, Laurence Olivier, 1944, where the film begins in the Globe Theater and, as the film goes on, it opens up and expands into this huge Technicolor extravaganza of the English versus the French. It reminds me of that. What was your reaction the first time you saw that movie, that film of yours you made with Jim Freeman, on the big screen where you could really get the full immersive effect?

It gave me goosebumps. IMAX, at that time, was kind of unknown. The Smithsonian Air and Space Museum was the fourth IMAX theater built, and very few people had seen that system unless you visited world’s fairs around the world. So we knew we had something that people were going to grasp a hold of and love because, like you said, it’s a combination of film, and storytelling, and a roller coaster ride. You basically give yourself away to the screen and just go with it.

What Jim and I tried to do is put as many of the involving, experiential tricks into that film as we possibly could.
We wrote the film based on all of these moments that we call “IMAX moments.” We tried to put as many in there as we could, including the train coming straight at you and bashing right into the camera where the audience thinks it’s going to get run over. Those kinds of moments on that gigantic screen with that wonderful 10-times-35-millimeter clarity really moved the audience, and I guess that’s why they used it at Great America where you saw it.

You mentioned the train, and I remember a story from the era of silent film: the first time people saw a train on silent film, they jumped, people jumped, because they thought the train was coming at them. Then, of course, we all kind of got used to it, and this just occurred to me, that film may have been the first time in 75 years that an audience had that reaction again, like they did at first with silent film, where they thought the train was going to come out of the screen, to To Fly!, where, once again, your previous experience looking at a visual medium was not going to help you. This was something completely different and your sense perception was totally surprised by it.

Yeah, it’s true. Obviously we were copying that early train shot that started the cinema way back in probably 1896 or 1898. You ended up with To Fly! . . . we knew we had an opportunity because the Air and Space Museum, we felt, was going to be a huge smash hit. Everyone was interested in space right at that moment. Everyone was interested in flying right at that moment. Basically, as soon as it opened its doors, the Air and Space Museum became the number one museum in America, and I think it even passed the Louvre that year in attendance.

Our film had over a million and a half people in its first year, which was astounding! And after that year of its run, every museum in the world wanted an IMAX theater. Everyone heard about it. They started out charging 50 cents admission for the 27-minute IMAX film, and halfway through the season, they got embarrassed because they were making so much money. They reduced the admission price to 25 cents and everyone was happy. The film was so fun to watch and gave you information in a poetic way through the narration. The storytelling was simple and chronological. You could follow it even if you were a 10-year-old or an 85-year-old, and people just adored the movie. They wrote letters to the editor. The Washington Post called it the best film in the last 10 years, or something like that. Anyway, it was really a heady time for IMAX.

An innovative filming process (8:25)

It was one of those things where our knowledge of technology and shooting all kinds of various films prior to that that used technology, we just basically poured everything into this one movie to try to prove the system, to try to show people what IMAX could do . . .

I may have just read the Washington Post review that you mentioned. It was a Washington Post review from just three or four years later, so not that long after, and in the conclusion to that piece, it said, “You come away from the film remembering the flying, the freedom of it, the glee, the exaltation. No wonder ‘To Fly’ is a national monument.” So already calling it a national monument, but it took some innovation to create that monument. This isn’t just a piece of great filmmaking and great storytelling, it’s a piece of technological innovation.
I wonder if you could tell me about that.

We worked with the IMAX corporation, particularly Graeme Ferguson, who is gone now, but he was a filmmaker and helped us immensely. Not only guiding, because he’d made a couple of IMAX films previously that just showed at individual theaters, but he was a great filmmaker, and we wanted three more cameras built—there was only one camera when we began, and we needed three, actually, so we could double shoot and triple shoot different scenes that were dangerous. They did that for us in record time. Then we had to build all these kind of imaginative camera mounts. A guy named Nelson Tyler, Tyler Camera Systems in Hollywood, helped us enormously. He was a close friend and basically built an IMAX camera mount for a helicopter that we called the “monster mount.” It was so huge.

The IMAX camera was big and huge on its own, so it needed this huge mount, and it carried the IMAX camera flawlessly and smoothly through the air in a helicopter so that there weren’t any bumps or jarring moments, so the audience would not get disturbed but they would feel like they were a bird flying. You needed that smoothness because when you’re sitting up close against that beautifully detailed screen, you don’t want any jerk or you’re going to want to close your eyes. It’s going to be too nauseating to actually watch. So we knew we had to have flawlessly smooth and beautiful aerials shot in the best light of the day, right at dawn or right at sunset. The tricks that we used, the special camera mounts, we had two different camera mounts for helicopters, one for a Learjet, one for a biplane. We even had a balloon mount that went in the helium balloon that we set up at the beginning of the film.

It was one of those things where our knowledge of technology and shooting all kinds of various films prior to that that used technology, we just basically poured everything into this one movie to try to prove the system, to try to show people what IMAX could do . . . There are quiet moments in the film that are very powerful, but there’s also these basic thrill moments where the camera goes off over the edge of a cliff and your stomach kind of turns upside down a little bit. Some people had to close their eyes as they were watching so they wouldn’t get nauseated, but that’s really what we wanted. We wanted people to experience that bigness and that beauty. Basically the theme of the movie was that taking off into the air was like the opening of a new eye.

Essentially, you re-understood what the world was when aviation began, when the first balloonists took off or when the first airplane, the Wright Brothers, took off, or when we went into space, the change of perspective. And obviously IMAX is the ultimate change of perspective.

When I watched the entire film — I’ve watched it a few times since on YouTube, which I think somebody ripped from a laser disc or something — maybe six months ago, I had forgotten the space sequence. This movie came out a year before Star Wars, and I was looking at that space sequence and I thought, that’s pretty good. I thought that really held up excellently. As a documentary, what prepared you to do that kind of sequence? Or was that something completely different that you really had to innovate to do?

I had loved 2001: A Space Odyssey, the Kubrick film, and one of the special effects supervisors was Doug Trumbull. So we called Doug and said, “Look, I want to make the sequence.
It’s going to be short, but it’s going to pay homage to space travel and what could happen in the future.” And he guided us a little bit, showed us how to make kind of the explosions of space that he’d done in 2001 using microscopic paint, so we had to develop a camera lens that fit on the IMAX camera that could shoot just a very small area, like half an inch across, where paint in a soluble mixture could then explode. We shot it in slow motion, and then we built a starship, kind of like a Star Wars-looking — though, as you mentioned, Star Wars had not come out yet — kind of a spaceship that we then superimposed against planets that we photographed, Jupiter and Saturn. We tried to give the feeling and the perspective that that could give us with our poetic narrator, and it worked. It kind of worked, even though it was done on a very small budget. We had $690,000 to make that movie. So we only had one SAG actor who actually got paid the regular wage, that was Peter Walker.

Was that the balloonist?

Yeah, he was the balloonist. And he was a stage actor, so he was perfect, because I wanted something to obviously be a little bit overblown, make your gestures kind of comically big, and he was perfect for it. But we only had enough money to pay him for one day, so we went to Vermont and put him in the balloon basket, and we shot everything in one day. We never actually shot him flying. We shot him hanging in the balloon basket and the balloon basket was hanging from a crane that was out of the picture, and so we could lift him and make him swing past us and all that stuff, and he was terrific.

Then we shot the real balloon, which was a helium balloon. We got the helium from the Navy — which would’ve been very costly, but they donated the helium — and went to West Virginia where the forest was basically uncut and had no power lines going through it so we could duplicate 1780 or whatever the year was with our aerial shooting. And we had a guy named Kurt Snelling, who was probably the best balloonist at that particular moment, and he dressed like Peter in the same costume and piloted the balloon across. And balloons, you can’t tell where they’re going, they just follow the wind, and so it was a little dangerous, but we got it all done. It was about a week and a half because we had to wait for weather. So we had a lot of weather days and bad rain in West Virginia when we shot that, but we got it all done, and it looks beautiful, and it matches in with Peter pretty well.

Just what you’ve described there, it sounds like a lot: You’re going to Vermont, you’re in West Virginia, you’re getting helium from — it sounds like there were a lot of moving parts! Was this the most ambitious thing you had done up until that point?

Well, we’d worked on some feature films before, like The Towering Inferno and Jonathan Livingston Seagull, and things like that, which were involved and very complicated. But yeah, it was very much the biggest production that we put together on our own, and it required us to learn how to produce in a big fashion. It was a thrill for us. Essentially, we had about 10 people working on the film in Laguna Beach, and none of them, except for maybe Jim and I, had worked on feature films and complicated shoots with actors and all that; a lot of our team hadn’t. And so it was an adventure. Every day was a thrill.

A “you can do it” movie (19:07)
. . . we were celebrating 200 years of democracy, of individual freedom, of individual inspiration, getting past obstacles, because you can do it — you have that belief that you can do it.

There’s a version of this podcast where we spend a half hour talking about The Towering Inferno. I just want you to know that it’s very hard for me not to derail the conversation into talking about The Towering Inferno. I will not do that, but let me ask you this: the movie is about flight, it’s about westward expansion, but it came out for the bicentennial, and we’d gone through a tumultuous, let’s say, past 10 years: You had Vietnam, there was social unrest, you had Watergate. And the movie really must have just seemed like a breath of fresh air for people.

As you put the movie together, and wrote it, and filmed it, did you feel like you were telling a message other than just about our connection with flight? It really seemed to me to be more than that, a movie about aspiration, and curiosity, and so forth.

It was, and pretty much all of our films have been that positive-spirit, “You can do it” kind of movie. Even our surfing films that we started with, 20 years, maybe 10 years before To Fly!, you end up with that spirit of the human’s ability to go beyond. And obviously celebrating the bicentennial and the beginning of democracy here in this country and the fact that we were celebrating 200 years of democracy, of individual freedom, of individual inspiration, getting past obstacles, because you can do it — you have that belief that you can do it.

Of course, this was right there when everyone had felt, okay, we went to the moon, we did all kinds of great things. We were inventive, and a lot of that spirit of invention, and curiosity, and accomplishment came from the fact that we were free as individuals to do it, to take risks. So I think To Fly! had a lot of that as part of it.

But the interesting thing, I thought, was I had one meeting with Michael Collins, who was the director of the Air and Space Museum and the astronaut who circled the moon as Neil Armstrong and Buzz Aldrin were on the moon walking around, and here he is, hoping that these two guys will come back to him so that the three of them can come back to Earth — but they’d never tested the blast-off from the moon’s surface, and they didn’t know 100 percent that it was going to work, and that was the weirdest feeling.

But what Collins told me in my single meeting that I had with him, he said, “Look, I’ve got a half an hour for you, I’m building a museum, I’ve got two years to do it.” And I said, “Look, one thing I want to know is how much facts and figures do you want in this movie? We’ve got a little over a half an hour to do this film. The audience sits down in your theater, what do you want me to do?” And he said, “Give me fun. Give me the IMAX experience. I don’t want any facts and figures. I don’t want any dates. I don’t want any names. I’ve got plenty of those everywhere else in the museum. People are going to be sick of dates and names. Give me fun, give me adventure.” And I said, “Oh gosh, we know how to do that because we started out making surfing films.” And he goes, “Do that. Make me a surfing film about aviation.” It was probably the best advice, because he said, “And I don’t want to see you again for two years. Bring me back a film. I trust you. I’ve seen your films. Just go out and do it.” And that was probably the best management advice that I’ve ever received.

So you weren’t getting notes. I always hear about studios giving filmmakers notes.
You did not get notes.

The note I got was, “We love it. Put it on the screen now.” What they did do is they gave me 26 subjects. They said, “Here’s the things that we think would be really cool in the movie. We know you can’t use 26 things because that’s like a minute per sequence, so you pick which of those 26 to stick in.” And I said, “What I’m going to do then is make it chronological so people will somewhat understand it, otherwise it’s going to be confusing as heck.” And he said, “Great, you pick.” So I picked things that I knew I could do, and Jim, of course, was right there with me all the time.

Then we had a wonderful advisor in Francis Thompson, who at that time was an older filmmaker from New York who had done a lot of world’s fair films, hadn’t ever done IMAX, but he’d done triple-screen films and won an Academy Award with a film called To Be Alive!, and he advised us. Graeme Ferguson, as I mentioned, advised us, but we selected the different sequences, probably ended up with 12 sequences, each of which we felt that we could handle on our meager budget.

It was delightful that Conoco put up the money for the film as a public service. They wanted to be recognized in the bicentennial year, and they expected that the film was going to run for a year, and then of course today it’s still running and it’s going into its 50th year now. And so it’s one of those things that was one of those feel-good moments of my life and feel-good moments for the Air and Space Museum, Michael Collins, for everyone involved.

Competing views of technology (25:50)

Our film was the feel-good, be proud to be an American and be proud to be a human being, and we’re not messing up everything. There’s a lot that’s going right.

When rewatching it, I was reminded of the 1982 film Koyaanisqatsi by Godfrey Reggio, which also had a very famous scene of a 747 looming at the camera. While yours was a joyous scene, I think we’re supposed to take away an ominous message about technology in that film. That movie was not a celebration of flight or of technology. Have you wondered why, just six years after To Fly!, this other film came out and conveyed a very different message about technology and society?

I love Koyaanisqatsi, and in fact, we helped work on that. We did a lot of the aerial shooting for that.

I did not know that.

And Godfrey Reggio is an acquaintance, a friend. We tried to actually do a movie together for the new millennium, and that would’ve been pretty wild.

Certainly a hypnotic film, no doubt. Fantastic.

Yeah. But their thesis was, yeah, technology’s gotten beyond us. It’s kind of controlled us in some fashions. And with the time-lapse sequences and the basic frenetic aspects of life and war and things like that. And with no narration. That film lets the audience tell the story to themselves, guided by the visuals and the technique. Our film was absolutely 100 percent positive. The 747 that we had was the number one 747 ever built. Boeing owned it. I don’t think they’d started selling them, or they were just starting to use them. Everyone was amazed by the size of this airplane, and we got to bolt our IMAX camera on the bottom of it, and then it was such a thrill to take that big 747.

The pilot took off from Seattle and said, “Okay, now where do you want to go?” I said, “Well, I want to find clouds.” And he goes, “Well, there’s some clouds over next to Illinois. We could go there,” so we go two hours towards Illinois.
And I’m in a 737 that they loaned us with the IMAX camera in a brand new window that we stuck in the side of the 737, just absolutely clear as a sheet of glass, just a single pane, and the camera’s right up against that piece of plexiglass with the 40-millimeter lens, which is a 90-degree lens.

So I said, “We’ve got to fly the 737 really close to the 747 and through clouds so that the clouds are wisping through, and so the 747 is disappearing and then appearing and then disappearing and then appearing, and we have to do this right at sunset in puffy clouds, these big cumulus clouds.” And so they said, “We can do that, let’s go find it!” The two guys who were piloting were both military pilots, so they were used to flying in formation, and it was a delight. We shot roll, after roll, after roll, and some of those moments where that 747 comes out into light after being in the white of the cloud are just stunning. So we made the 747 look almost like a miniature plane, except for the shot from underneath where you see the big wheels coming up. So it was really cool, and I don’t know what it cost Boeing to do that, but hundreds of thousands, maybe.

Another public service.

But they got it back. Obviously it was a heroic moment in the film, and their beautiful plane, which went on to sell many, many copies and was their hero airplane for so many years.

Yeah, sure.

It was a fun deal. So in comparison to Koyaanisqatsi, our film was the exact opposite. Our film was the feel-good, be proud to be an American and be proud to be a human being, and we’re not messing up everything. There’s a lot that’s going right.

I feel like there’s a gap in what we get out of Hollywood, what we get out of the media. You don’t want just feel-good films. You don’t want just celebrations. You want the full range of our lives and of human experience, but, just as Koyaanisqatsi is about being out of balance, I think we’ve gotten out of balance. I just don’t see much out there that has the kind of aspirational message of To Fly! I’m not sure what you think. I feel like we could use more of that.

Yeah, I’m hopeful that I’m going to be able to make a movie called A Beautiful Life, which is all about the same thing that I was talking about, the freedom that the individual has here in America. I was hopeful to do it for the 250th anniversary, but I’m not going to get it done by that time next year. But I want to do that movie kind of as a musical celebration of almost a “family of man” sort of movie located around the world with various cultures and positive spirit. I’m an optimist, I’m a positive person. That’s the joy I get out of life. I suppose that’s why Jim and I were perfect to make To Fly! We infused beauty into everything that we tried to do.

On sale everywhere
The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

Please check out the website or Substack app for the latest Up Wing economic, business, and tech news contained in this new edition of the newsletter. Lots of great stuff!
Aug 22, 2025 • 33min

👶 Bracing for depopulation: My chat (+transcript) with 'After the Spike' coauthor Dean Spears

My fellow pro-growth/progress/abundance Up Wingers,

Global population growth is slowing, and it’s not showing any signs of recovery. To the environmentalists of the 1970s, this may have seemed like a movement in the right direction. The drawbacks to population decline, however, are severe and numerous, and they’re not all obvious.

Today on Faster, Please! — The Podcast, I talk with economist and demographer Dean Spears about the depopulation trend that is transcending cultural barriers and ushering in a new global reality. We discuss the costs to the economy and human progress, and the inherent value of more people.

Spears is an associate professor of economics at Princeton University where he studies demography and development. He is also the founding executive director of r.i.c.e., a nonprofit research organization seeking to uplift children in rural northern India. He is a co-author with Michael Geruso of After the Spike: Population, Progress, and the Case for People.

In This Episode

* Where we’re headed (1:32)
* Pumping the brakes (5:41)
* A pro-parenting culture (12:40)
* A place for AI (19:13)
* Preaching to the pro-natalist choir (23:40)
* Quantity and quality of life (28:48)

Below is a lightly edited transcript of our conversation.

Where we’re headed (1:32)

. . . two thirds of people now live in a country where the birth rate is below the two children per two adults level that would stabilize the population.

Pethokoukis: Who are you and your co-author trying to persuade, and what are you trying to persuade them of? Are you trying to persuade them that global depopulation is a real thing, that it’s a problem? Are you trying to persuade them to have more kids? Are you trying to persuade them to support a certain set of pro-child or pro-natalist policies?

Spears: We are trying to persuade quite a lot of people of two important things: One is that global depopulation is the most likely future — and what global depopulation means is that every decade, every generation, the world’s population will shrink. That’s the path that we’re on. We’re on that path because birth rates are low and falling almost everywhere. That’s one thing we’re trying to persuade people of, that fact, and we’re trying to persuade people to engage with a question of whether global depopulation is a future to welcome or whether we should want something else to happen. Should we let depopulation happen by default or could it be better to stabilize the global population at some appropriate level instead?

We fundamentally think that this is a question that a much broader section of society, of policy discourse, of academia should be talking about. We shouldn’t just be leaving this discussion to the population scientists, demographic experts, not only to the people who already are worried about, or talking about, low birth rates, but this is important enough and unprecedented enough that everybody should be engaging in this question. Whatever your ongoing values or commitments, there’s a place for you in this conversation.

Is it your impression that the general public is aware of this phenomenon? Or are they still stuck in the ’70s thinking that population is running amok and we’ll have 30 billion people on this planet, like was the scenario in the famous film Soylent Green? I feel like the people I know are sort of aware that this is happening. I don’t know what your experience is.

I think it’s changing fast. I think more and more people are aware that birth rates are falling.
I don’t think that people are broadly aware — because when you hear it in the news, you might hear that birth rates in the United States have fallen low or birth rates in South Korea have fallen low. I think what not everybody knows is that two thirds of people now live in a country where the birth rate is below the two children per two adults level that would stabilize the population.

I think people don’t know that the world’s birth rate has fallen from an average around five in 1950 to about 2.3 today, and that it’s still falling, and that people just haven’t engaged with the thought that there’s no special reason to expect it to stop and hold at two. But the same processes that have been bringing birth rates down will continue to bring them down, and people don’t know that there’s no real automatic stabilizer to expect it to come back up. Of the 26 countries that have had the lifetime birth rate fall below 1.9, none of them have had it go back up to two.

That’s a lot of facts that are not as widely known as they should be, but then there’s the implication of it: that if the world’s birth rate goes below two and stays there, we’re going to have depopulation generation after generation. I think for a lot of people, they’re still in the mindset that depopulation is almost conceptually impossible, that either we’re going to have population growth or something else like zero population growth like people might’ve talked about in the ’70s. But the idea that a growth rate of zero is just a number and that it’s not going to stop there, it’s going to go negative, I think that’s something that a lot of people just haven’t thought about.

Pumping the brakes (5:41)

We wrote this book because we hope that there will be an alternative to depopulation society will choose, but there’s no reason to expect or believe that it’s going to happen automatically.

You said there’s no automatic stabilizers — at first take, that sounds like we’re going to zero. Is there a point where the global population does hit a stability point?

No, that’s just the thing.

So we’re going to zero?

Well, “there’s no automatic stabilizer” isn’t the same thing as “we’re definitely going to zero.” It could be that society comes together and decides to support parenting, invest more in the next generation, invest more in parents and families, and do more to help people choose to be parents. We wrote this book because we hope that there will be an alternative to depopulation society will choose, but there’s no reason to expect or believe that it’s going to happen automatically. In no country where the birth rate has gone to two has it just magically stopped and held there forever.

I think a biologist might say that the desire to reproduce, that’s an evolved drive, and even if right now we’re choosing to have smaller families, that biological urge doesn’t vanish. We’ve had population, fertility rates, rise and fall throughout history — don’t you think that there is some sort of natural stabilizer?

We’ve had fluctuations throughout history, but those fluctuations have been around a pretty long and pretty widely-shared downward trend.
Americans might be mostly only now hearing about falling birth rates because the US was sort of anomalous amongst richer countries in having a relatively flat period from the 1970s to around 2010 or so; whereas birth rates were falling in other countries, they weren’t falling in the US in the same way. But they were falling in the US before then, they’re falling in the US since then, and when you plot it over the long history with other countries, it’s clear that, for the world as a whole, as long as we’ve had records, not just for decades but for centuries, we’ve seen birth rates be falling. It’s not just a new thing, it’s a very long-term trend.

It’s a very widely-shared trend because humans are unlike other animals in the important way that we make decisions. We have culture, we have rationality, we have irrationality, we have all of these. The reason the population grew is because we’ve learned how to keep ourselves and our children alive. We learned how to implement sanitation, implement antibiotics, implement vaccines, and so more of the children who were born survived even as the birth rate was falling all along. Other animals don’t do that. Other animals don’t invent sanitation systems and antibiotics, and so I think that we can’t just reason immediately from other animal populations to what’s going to happen to humans.

I think one can make a plausible case that, even if you think that this is a problem — and again, it’s a global problem, or a global phenomenon, advanced countries, less-advanced countries — it is a phenomenon of such sweep that if you’re going to say we need to stabilize or slow down, it would take a set of policies of equal sweep to counter it. Do those actually exist?

No. Nobody has a turnkey solution. There’s nothing shovel-ready here. In fact, it’s too early to be talking about policy solutions or “here’s my piece of legislation, here’s what the government should do” because we’re just not there yet, both in terms of the democratic process of people understanding the situation and there even being a consensus that stabilization, at some level, would be better than depopulation, nor are we there yet on having any sort of answer that we can honestly recommend as being tested and known to be something that will reliably stabilize the population.

I think the place to start is by having conversations like this one where we get people to engage with the evidence, and engage with the question, and just sort of move beyond a reflexive welcoming of depopulation by default and start thinking about, well, what are the costs of people and what are the benefits of people? Would we be better off in a future that isn’t depopulating over the long run?

The only concrete step I can think of us taking right now is adapting the social safety net to a new demographic reality. Beyond that, it seems like there might have to be a cultural shift of some kind, like a large-scale religious revival. Or maybe we all become so rich that we have more time on our hands and decide to have more kids. But do you think at some point someone will have a concrete solution to bring global fertility back up to 2.1 or 2.2?

Look at it like this: The UN projects that the peak will be about six decades from now in 2084.
Of course, I don’t have a crystal ball, I don’t know that it’s going to be 2084, but let’s take that six-decades timeline seriously because we’re not talking about something that’s going to happen next year or even next decade.

But six decades ago, people were aware that — or at least leading scientists and even some policymakers were aware that climate change was a challenge. The original computations by Arrhenius of the radiative forcing were long before that. You have the Johnson speech to Congress, you have Nixon and the EPA. People were talking about climate change as a challenge six decades ago, but if somebody had gotten on their equivalent of a podcast and said, “What we need to do is immediately get rid of the internal combustion engine,” they would’ve been rightly laughed out of the room because that would’ve been the wrong policy solution at that time. That would’ve been jumping to the wrong solution. Instead, what we needed to do was what we’ve done, which is the science, the research, the social change that we’re now at a place where emissions per person in the US have been falling for 20 years and we have technologies — wind, and solar, and batteries — that didn’t exist before because there have been decades of working on it.

So similarly, over the next six decades, let’s build the research, build the science, build the social movement, discover things we don’t know, more social science, more awareness, and future people will know more than you and I do about what might be constructive responses to this challenge, but only if we start talking about it now. It’s not a crisis to panic about and do the first thing that comes to mind. This is a call to be more thoughtful about the future.

A pro-parenting culture (12:40)

The world’s becoming more similar in this important way that the difference across countries and difference across societies is getting smaller as birth rates converge downward.

But to be clear, you would like people to have more kids.

I would like for us to get on a path where more people who want to be parents have the sort of support, and environment, and communities they need to be able to choose that. I would like people to be thinking about all of this when they make their family decisions. I’d like the rest of us to be thinking about this when we pitch in and do more to help us. I don’t think that anybody’s necessarily making the wrong decision for themselves if they look around and think that parenting is not for them or having more children is not for them, but I think we might all be making a mistake if we’re not doing more to support parents or to recognize the stake we have in the next generation.

But all those sorts of individual decisions that seem right for an individual or for a couple, combined, might turn into a societal decision.

Absolutely. I’m an economics professor. We call this “externalities,” where there are social benefits of something that are different from the private costs and benefits. If I decide that I want to drive and I contribute to traffic congestion, then that’s an externality. At least in principle, we understand what to do about that: You share the cost, you share the benefits, you help the people internalize the social decision.

It’s tied up in the fact that we have a society where some people we think of as doing care work and some people we think of as doing important work. So we’ve loaded all of these costs of making the next generation on people during the years of their parenting and especially on women and mothers.
It’s understandable that, from a strictly economic point of view, somebody looks at that and thinks, “The private costs are greater than the private benefits. I’m not going to do that.” It’s not my position to tell somebody that they’re wrong about that. What you do in a situation like that is share and lighten that burden. If there’s a social reason to solve traffic congestion, then you solve it with public policy over the long run. If the social benefits of there being a flourishing next generation are greater than people are finding in their own decision making, then we need to find the ways to invest in families, invest in parenting, lift and share those burdens so that people feel like they can choose to be parents.

I would think there’s a cultural component here. I am reminded of a book by Jonathan Last about this very issue in which he talks about Old Town Alexandria here in Virginia, how, if you go to Old Town, you can find lots of stores selling stuff for dogs, but if you want to buy a baby carriage, you can’t find anything.

Of course, that’s an equilibrium outcome, but go on.

If we see a young couple pushing a stroller down the street and inside they have a Chihuahua — as society, or you personally, would you see that and think, “That’s wrong. That seems like a young couple living in a nice area, they probably have plenty of dough, they can afford daycare, and yet they’re still not going to have a kid and they’re pushing a dog around in a stroller”? Should we view that as something’s gone wrong with our society?

My own research is about India. My book’s co-authored with Mike Geruso. He studies the United States more. I’m more of an expert on India.

Paul Ehrlich, of course, begins his book, The Population Bomb, in India.

Yes, I know. He starts with this feeling of being too crowded with too many people. I say in the book that I almost wonder if I know the exact spot where he has that experience. I think it’s where one of my favorite shops is for buying scales and measuring tape for measuring the health of children in Uttar Pradesh. But I digress about Paul Ehrlich.

India, where Paul Ehrlich was worried about overpopulation, is now a society with an average birth rate below two kids per two adults. Even in Uttar Pradesh, the big, disadvantaged, poor state where I do my research, the average young woman there says that they want an average of 1.9 children. This is a place where society and culture is pretty different from the United States. In the US, we’re very accustomed to this story of work and family conflict, and career conflicts, especially for women, and that’s probably very important in a lot of people’s lives. But that’s not what’s going on in India, where female labor force participation is pretty low. Or you hear questions about whether this is about the decline of religiosity, but India is a place where religion is still very important to a lot of people’s lives. Marriage is almost universal. Marriage happens early. People start their childbearing careers in their early twenties, and you still see people having an average below two kids. They start childbearing young and they end childbearing young.

Similarly, in Latin America, religiosity, at least as reported in surveys, remains pretty high, but Latin America is at an average of 1.8, and it’s not because people are delaying fertility until they’re too old to get pregnant.
You see a lot of people having permanent contraception surgery, tubal ligations.

And so this cultural story where people aren’t getting married, they’re starting too late, they’re putting careers first, it doesn’t match the worldwide diversity. These diverse societies we’re seeing are all converging towards low birth rates. The world’s becoming more similar in this important way that the difference across countries and difference across societies is getting smaller as birth rates converge downward. So I don’t think we can easily point towards any one cultural factor for this long-term and widely shared trend.

A place for AI (19:13)

If AI in the future is a complement to what humans produce . . . if AI is making us more productive, then it’s all the bigger loss to have fewer people.

At least from an economic perspective, I think you can make the case: fewer people, less strain on resources; if you’re worried about workers, AI-powered robots are going to be doing a lot of work; and if you’re worried about fewer scientists, the scientists we do have are going to have AI-powered research assistants.

Which makes the scientists more important. Many technologies over history have been complements to what humans do, not substitutes. If AI in the future is a complement to what humans produce — scientific research or just the learning by doing that people do whenever they’re engaging in an enterprise or trying to create something — if AI is making us more productive, then it’s all the bigger loss to have fewer people.

To me, the best of both worlds would be to have even more scientists plus AI. But isn’t the fear of too few people causing a labor shortage sort of offset by AI and robotics? Maybe we’ll have plenty of technology and capital to supply the workers we do have. If that’s not the worry, maybe the worry is that the human experience is simply worse when there are fewer children around.

You used the term “plenty of,” and I think that sort of assumes that there’s a “good enough,” and I want to push back on that because I think what matters is to continue to make progress towards higher living standards, towards poverty alleviation, towards longer, better, healthier, safer, richer lives. What matters is whether we’re making as much progress as we could towards an abundant, rich, safe, healthy future. I think we shouldn’t let ourselves sloppily accept a concept of “good enough.” If we’re not making the sort of progress that we could towards better lives, then that’s a loss, and that matters for people all around the world.

We’re better off for living in a world with other people. Other people are win-win: Their lives are good for them and their lives are good for you. Part of that, as you say, is people on the supply side of the economy, people having the ideas and the realizations that then can get shared over and over again. The fact that ideas are this non-depletable resource that don’t get used up but might never be discovered if there aren’t people to discover them. That’s one reason people are important on the supply side of the economy, but other people are also good for you on the demand side of the economy.

This is very surprising because people think that other people are eating your slice of the pie, and if there are more other people, there’s less for me. But you have to ask yourself, why does the pie exist in the first place? Why is it worth some baker’s while to bake a pie that I could get a slice of?
And that’s because there were enough people wanting slices of pie to make it worth paying the fixed costs of having a bakery and baking a whole pie.

In other words, you’re made better off when other people want and need the same things that you want and need because that makes it more likely for it to exist. If you have some sort of specialized medical need and need specialized care, you’re going to be more likely to find it in a city where there are more other people than in a less-populated rural place, and you’re going to be more likely to find it in a course of history where there have been more other people who have had the same medical need that you do, so that it’s been worthwhile for some sort of cure to exist. The goodness of other people for you isn’t just when they’re creating things, it’s also when they’re just needing the same things that you do.

And, of course, if you think that getting to live a good life is a good thing, that there’s something valuable about being around to have good experiences, then a world of more people having good experiences has more goodness in it than a world of fewer people having good experiences in it. That’s one thing that counts, and it’s one important consideration for why a stabilized future might be better than a depopulating future. Now, I don’t expect everyone to immediately agree with that, but I do think that the likelihood of depopulation should prompt us to ask that question.

Preaching to the pro-natalist choir (23:40)

If you are already persuaded listening to this, then go strike up a conversation with somebody.

Now, listening to what you just said, which I thought was fantastic, you’re a great explainer, that is wonderful stuff — but I couldn’t help but think, as you explained that, that you end up spending a lot of time with people who, because they read the New York Times, may understand that the ’70s population fears aren’t going to happen, that we’re not going to have a population of 30 billion, that we’re going to hit, I don’t know, 10 billion in the 2060s and then go down. And they think, “Well, that’s great.”

You have to spend a lot of time explaining to them about the potential downsides and why people are good, when, like, half the population in this country already gets it: “You say ‘depopulation,’ you had us at the word ‘depopulation.’” You have all these people who are on the right who already think that — a lot of people I know, they’re there.

Is your book an effective tool to build on that foundation of people who already think it’s an issue and are open to policy ideas? Does your book build on that or offer anything to those people?

I think that, even if this is something that people have thought about before, a lot of how people have thought about it is in terms of pension plans, the government’s budget, the age structure, the nearer-term balance of workers to retirees.
That seems to me that that would be a powerful team of evangelists — and I mean it in a nonreligious way — evangelists for your idea that population is declining and there are going to be some serious side effects.If you are already persuaded listening to this, then go strike up a conversation with somebody. That's what we want to have happen. I think minds are going to be changed in small batches on this one. So if you're somebody who already thinks this way, then I encourage you to go out there and start a conversation. I think not everybody, even people who think about population for a living — for example, one of the things that we engage with in the book is the philosophy of population ethics, or population in social welfare as economists might talk about it.There have been big debates there over should we care about average wellbeing? Should we care about total wellbeing? Part of what we're trying to say in the book is, one, we think that some of those debates have been misplaced or are asking what we don't think are the right questions, but also to draw people to what we can learn from thinking of where questions like this agree. Because this whole question of should we make the future better in total or make the better on average is sort of presuming this Ehrlich-style mindset that if the future is more populous, then it must be worse for each. But once you see that a future that's more populous is also more prosperous, it'd be better in total and better on average, then a lot of these debates might still have academic interest, but both ways of thinking about what would be a better future agree.So there are these pockets of people out there who have thought about this before, and part of what we're trying to do is bring them together in a unified conversation where we're talking about the climate modeling, we're talking about the economics, we're talking about the philosophy, we're talking about the importance of gender equity and reproductive freedom, and showing that you can think and care about all of these things and still think that a stabilized future might be better than depopulation.In the think tank world, the dream is to have an idea and then some presidential candidate adopts the idea and pushes it forward. There’s a decent chance that the 2028 Republican nominee is already really worried about this issue, maybe someone like JD Vance. Wouldn’t that be helpful for you?I've never spoken with JD Vance, but from my point of view, I would also be excited for India's population to stabilize and not depopulate. I don't see this as an “America First” issue because it isn't an America First issue. It's a worldwide, broadly-shared phenomenon. I think that no one country is going to be able to solve this all on its own because, if nothing else, people move, people immigrate, societies influence one another. I think it's really a broadly-shared issue.Quantity and quality of life (28:48)What I do feel confident about is that some stabilized size would be better than depopulation generation after generation, after generation, after generation, without any sort of leveling out, and I think that's the plan that we're on by default.Can you imagine an earth of 10 to 12 billion people at a sustained level being a great place to live, where everybody is doing far better than they are today, the poorest countries are doing better — can you imagine that scenario? 
Can you also imagine a scenario where we have a world of three to four billion, which is a way nicer place to live for everybody than it is today? Can both those scenarios happen?

I don’t see any reason to think that either of those couldn’t be an equilibrium, depending on all the various policy choices and all the various . . .

This is a very broad question.

Exactly. I think it’s way beyond the social science, economics, climate science we have right now to say “three billion is the optimal size, 10 billion is the optimal size, eight billion is the optimal size.” What I do feel confident about is that some stabilized size would be better than depopulation generation after generation, after generation, after generation, without any sort of leveling out, and I think that’s the plan that we’re on by default. That doesn’t mean it’s what’s going to happen, I hope it’s not what happens, and that’s sort of the point of the conversation here: to get more people to consider that.

But let’s say we were able to stabilize the population at 11 billion. That would be fine.

It could be, depending on what the people do.

But I’m talking about a world of 11 billion, and I’m talking about a world where the average person in India is as wealthy as, let’s say this is in the year 2080, 2090, and at minimum, the average person in India is as wealthy as the average American is today. So that’s a big huge jump in wealth and, of course, environmentalism.

And we make responsible environmental choices, whether that’s wind, or solar, or nuclear, or whatever, I’m not going to be prescriptive on that, but I don’t see any reason why not. My hope is that future people will know more about that question than I do. Ehrlich would’ve said that our present world of eight billion would be impossible, that we would’ve starved long before this, that England would’ve ceased to exist, which I think is a prediction in his book somewhere.

And there’s more food per person on every continent. Even in the couple decades that I’ve been going to India, children are taller than they used to be, on average. You can measure it, and maybe I’m fooling myself, but I feel like I can see it. Even as the world’s been growing more populous, people have been getting better off, poverty has been going down, the absolute number of people in extreme poverty has been going down, even as the world’s been getting more populous. As I say, emissions per person have been going down in a lot of places.

I don’t see any in-principle reason, if people make the right decisions, that we couldn’t have a sustainable, healthy, and good large sustained population. I’ve got two kids and they didn’t add to the hole in the ozone layer, which I would’ve heard about in school as a big problem in the ’80s. They didn’t add to acid rain. Why not? Because the hole in the ozone layer was confronted with the Montreal Protocol. The acid rain was confronted with the Clean Air Act. They don’t drive around in cars with leaded gasoline because in the ’70s, the gasoline was unleaded. Adding more people doesn’t have to make things worse. It depends on what happens.
Again, I hope future people will know more about this than I do, but I don’t see any in-principle reason why we couldn’t stabilize at a size larger than today and have it be a healthy, and sustainable, and flourishing society.

On sale everywhere
The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics

* Generative AI's Impact on Student Achievement and Implications for Worker Productivity - SSRN
* The Real China Model: Beijing’s Enduring Formula for Wealth and Power - FA
* What Matters More to the Stock Market? The Fed or Nvidia? - NYT
* AI Isn’t Really Stealing Jobs Yet. That Doesn’t Mean We’re Ready for It. - Barron's
* Trump’s Attacks on the Fed and BLS Threaten Key Source of Economic Strength - NYT
* A Stock Market Crash Foretold - PS
* The Macro Impact of AI on GDP - The Overshoot
* Powell Sends Strongest Signal Yet That Interest Rate Cuts Are Coming - NYT
* Big Announcements, Small Results: FDI Falls Yet Again - ITIF

▶ Business

* An MIT report that 95% of AI pilots fail spooked investors. But the reason why those pilots failed is what should make the C-suite anxious - Fortune
* Alexandr Wang is now leading Meta’s AI dream team. Will Mark Zuckerberg's big bet pay off? - Fortune
* Amazon is betting on agents to win the AI race - The Verge
* Intuit Earnings Beat Estimates as Company Focuses on Artificial Intelligence Growth Drivers - Barron's
* Will Tesla Robotaxis Kill Auto Insurers? Hardly. - Barron's
* Wall Street Is Too Complex to Be Left to Humans - Bberg Opinion
* Meta Freezes AI Hiring After Blockbuster Spending Spree - WSJ
* Trump Is Betting Big on Intel. Will the Chips Fall His Way? - Wired
* Trump Says Intel Has Agreed to Give the US 10% Equity Stake - Bberg

▶ Policy/Politics

* Poll shows California policy influencers want harsher social media laws than voters - Politico
* How Trump Will Decide Which Chips Act Companies Must Give Up Equity - WSJ
* This Democrat Thinks Voters Seeking Order Will Make or Break Elections - WSJ
* California Republicans trust tech companies as much as Trump on AI - Politico
* The Japanese city betting on immigrants to breathe life into its economy - FT

▶ AI/Digital

* AI Is Designing Bizarre New Physics Experiments That Actually Work - Wired
* Generative AI in Higher Education: Evidence from an Elite College - SSRN
* AI Unveils a Major Discovery in Ancient Microbes That Could Hold the Key to Next Generation Antibiotics - The Debrief
* A.I. May Be Just Kind of Ordinary - NYT Opinion
* Is the AI bubble about to pop? Sam Altman is prepared either way. - Ars
* China's DeepSeek quietly releases an open-source rival to GPT-5—optimized for Chinese chips and priced to undercut OpenAI - Fortune
* The world should prepare for the looming quantum era - FT
* Brace for a crash before the golden age of AI - FT
* How AI will change the browser wars - FT
* Can We Tell if ChatGPT is a Parasite? Studying Human-AI Symbiosis with Game Theory - Arxiv
* Apple Explores Using Google Gemini AI to Power Revamped Siri - Bberg
* The AI Doomers Are Getting Doomier - The Atlantic
* State of AI in Business 2025 - MIT NANDA
* Silicon Valley Is Drifting Out of Touch With the Rest of America - NYT Opinion
* What Workers Really Want from Artificial Intelligence - Stanford HAI

▶ Biotech/Health

* A 1990 Measles Outbreak Shows How the Disease Can Roar Back - NYT
* Corporate egg freezing won’t break the glass ceiling - FT
* How to Vaccinate the World - Asterisk
* COVID Revisionism Has Gone Too Far - MSN
* Securing America’s Pharmaceutical Innovation Edge - JAMA Forum

▶ Clean Energy/Climate

* Trump’s Global War on Decarbonization - PS
* Aalo Atomics secures funding to build its first reactor - WNN
* Trump’s nuclear policy favors startups, widening industry rifts - E&E
* How Electricity Got So Expensive - Heatmap
* Nuclear fusion gets a boost from a controversial debunked experiment - NS
* Google Wants You to Know the Environmental Cost of Quizzing Its AI - WSJ
* Trump Blamed Rising Electricity Prices on Renewables. It’s Not True. - Heatmap
* Trump's Cuts May Spell the End for America's Only Antarctic Research Ship - NYT
* How Bill McKibben Lost the Plot - The New Atlantis
* Does it make sense for America to keep subsidising a sinking city? - Economist

▶ Robotics/Drones/AVs

* I'm a cyclist. Will the arrival of robotaxis make my journeys safer? - NS
* Si chiplet–controlled 3D modular microrobots with smart communication in natural aqueous environments - Science

▶ Space/Transportation

* On the ground in Ukraine’s largest Starlink repair shop - MIT
* Trump can’t stop America from building cheap EVs - Vox
* SpaceX has built the machine to build the machine. But what about the machine? - Ars
* 'Invasion' Season 3 showrunner Simon Kinberg on creating ''War of the Worlds' meets 'Babel'' (exclusive) - Space

▶ Up Wing/Down Wing

* The era of the public apology is ending - Axios
* Warren Brodey, 101, Dies; a Visionary at the Dawn of the Information Age - NYT
* Reality is evil - Aeon
* The Case for Crazy Philanthropy - Palladium

▶ Substacks/Newsletters

* Claude Code is growing crazy fast, and it’s not just for writing code - AI Supremacy
* No, ‘the Economists’ Didn’t Botch Trump’s Tariffs - The Dispatch
* How Does the US Use Water? - Construction Physics
* A Climate-Related Financial Risk Boondoggle - The Ecomodernist
* What's up with the States? - Hyperdimensional

▶ Social Media

* On why AI won't take all the jobs - @Dan_Jeffries1
* On four nuclear reactors to be built in Amarillo, TX - @NuclearHazelnut
* On AI welfare and consciousness - @sebkrier
Aug 12, 2025 • 27min

⚛️ Our fission-powered future: My chat (+transcript) with nuclear scientist and author Tim Gregory

My fellow pro-growth/progress/abundance Up Wingers, Nuclear fission is a safe, powerful, and reliable means of generating nearly limitless clean energy to power the modern world. A few public safety scares and a lot of bad press over the past half-century have greatly delayed our nuclear future. But with climate change and energy-hungry AI making daily headlines, the time — finally — for a nuclear renaissance seems to have arrived. Today on Faster, Please! — The Podcast, I talk with Dr. Tim Gregory about the safety and efficacy of modern nuclear power, as well as the ambitious energy goals we should set for our society. Gregory is a nuclear scientist at the UK National Nuclear Laboratory. He is also a popular science broadcaster on radio and TV, and an author. His most recent book, Going Nuclear: How Atomic Energy Will Save the World, is out now. In This Episode* A false start for a nuclear future (1:29)* Motivators for a revival (7:20)* About nuclear waste . . . (12:41)* Not your mother’s reactors (17:25)* Commercial fusion, coming soon . . . ? (23:06)Below is a lightly edited transcript of our conversation. A false start for a nuclear future (1:29)The truth is that radiation, we're living in it all the time, it's completely inescapable because we're all living in a sea of background radiation.Pethokoukis: Why don't America, Europe, and Japan today get most of their power from nuclear fission, since that would've been a very reasonable prediction to make in 1965 or 1975, but it has not worked out that way? What's your best take on why it hasn't? Going back to the ’50s and ’60s, it looked like that was going to be the world that we currently live in. It was all to play for, and there were a few reasons why that didn't happen, but the main two were Three Mile Island and Chernobyl. It's a startling statistic that the US built more nuclear reactors in the five years leading up to Three Mile Island than it has built since. And similarly on this side of the Atlantic, Europe built more nuclear reactors in the five years leading up to Chernobyl than it has built since, which is just astounding, especially given that nobody died at Three Mile Island and nobody was even exposed to anything beyond the background radiation as a result of that nuclear accident. Chernobyl, of course, was far more consequential and far more serious than Three Mile Island. 30-odd people died in the immediate aftermath, mostly people who were working at the power station and the first responders, famously the firefighters who were exposed to massive amounts of radiation, and probably a couple of hundred people died in the affected population from thyroid cancer. These were people who were children and adolescents at the time of the accident. So although every death from Chernobyl was a tragedy because it was avoidable, they're not in proportion to the mythic reputation of the night in question. It certainly wasn't a reason to effectively end nuclear power expansion in Europe because of course we had to get that power from somewhere, and it mainly came from fossil fuels, which are not just a little bit more deadly than nuclear power, they’re orders of magnitude more deadly than nuclear power. When you add up all of the deaths from nuclear power and compare those deaths to the amount of electricity that we harvest from nuclear power, it's actually as safe as wind and solar, whereas fossil fuels kill hundreds or thousands of times more people per unit of power. 
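To put that deaths-per-unit-of-electricity comparison in rough numbers, here is a minimal back-of-the-envelope sketch. The mortality rates are approximate published estimates (deaths per terawatt-hour, including accidents and air pollution), used as illustrative assumptions rather than figures from the conversation:

```python
# Rough comparison of mortality per unit of electricity generated.
# The deaths-per-TWh figures are approximate published estimates
# (accidents plus air pollution) -- illustrative assumptions only.
DEATHS_PER_TWH = {
    "coal": 24.6,
    "oil": 18.4,
    "natural gas": 2.8,
    "nuclear": 0.03,
    "wind": 0.04,
    "solar": 0.02,
}

nuclear = DEATHS_PER_TWH["nuclear"]
for source, rate in sorted(DEATHS_PER_TWH.items(), key=lambda kv: kv[1]):
    print(f"{source:12s} {rate:6.2f} deaths/TWh  (~{rate / nuclear:,.0f}x nuclear)")
```

On numbers like these, nuclear sits alongside wind and solar, while coal and oil come out several hundred times deadlier per terawatt-hour.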
To answer your question, it's complicated and there are many answers, but the main two were Three Mile Island and Chernobyl. I wonder how things might have unfolded if those events hadn’t happened or if society had responded proportionally to the actual damage. Three Mile Island and Chernobyl are portrayed in documentaries and on TV as far deadlier than they really were, and they still loom large in the public imagination in a really unhelpful way. You see it online quite a lot, actually, the predicted death toll from Chernobyl, because, of course, there's no way of saying exactly which cases of cancer were caused by Chernobyl and which ones would've happened anyway. Sometimes you see estimates that are up in the tens of thousands, hundreds of thousands of deaths from Chernobyl. They are always based on a flawed scientific hypothesis called the linear no-threshold model that I go into in quite some detail in chapter eight of my book, which is all about the human health effects of exposure to radiation. This model is very contested in the literature. It's one of the most controversial areas of medical science, actually, the effects of radiation on the human body, and all of these massive numbers you see of the death toll from Chernobyl, they're all based on this really kind of clunky, flawed, contentious hypothesis. My reading of the literature is that there's very, very little physical evidence to support this particular hypothesis, but people take it and run. I don’t know if it would be too far to accuse people of pushing a certain idea of Chernobyl, but it almost certainly vastly, vastly overestimates the effects. I think a large part of the reason why this had such a massive impact on the public and politicians is this lingering sense of radiophobia that completely blights society. We've all seen it in the movies, in TV shows, even in music and computer games — radiation is constantly used as a tool to invoke fear and mistrust. It's this invisible, scentless, silent specter that's kind of there in the background: It means birth defects, it means cancers, it means ill health. We've all kind of grown up in this culture where the motif of radiation is bad news, it's dangerous, and that inevitably gets tied to people's sense of nuclear power. So when you get something like Three Mile Island, society's imagination and its preconceptions of radiation, it's just like a dry haystack waiting for a flint spark to land on it, and up it goes in flames and people's imaginations run away with them. The truth is that radiation, we're living in it all the time, it's completely inescapable because we're all living in a sea of background radiation. There's this amazing statistic that if you live within a couple of miles of a nuclear power station, the extra amount of radiation you're exposed to annually is about the same as eating a banana. Bananas are slightly radioactive because of the slight amount of potassium-40 that they naturally contain. Even in the wake of these nuclear accidents like Chernobyl, and more recently Fukushima, the amount of radiation that the public was exposed to barely registers and, in fact, is less than the background radiation in lots of places on the earth. Motivators for a revival (7:20)We have no idea what emerging technologies are on the horizon that will also require massive amounts of power, and that's exactly where nuclear can shine. You just suddenly reminded me of a story of when I was in college in the late 1980s, taking a class on the nuclear fuel cycle. 
You know it was an easy class because there was an ampersand in it. “Nuclear fuel cycle” would've been difficult. “Nuclear fuel cycle & the environment,” you knew it was not a difficult class.The man who taught it was a nuclear scientist and, at one point, he said that he would have no problem having a nuclear reactor in his backyard. This was post-Three Mile Island, post-Chernobyl, and the reaction among the students — they were just astounded that he would be willing to have this unbelievably dangerous facility in his backyard.We have this fear of nuclear power, and there's sort of an economic component, but now we're seeing what appears to be a nuclear renaissance. I don't think it's driven by fear of climate change, I think it's driven A) by fear that if you are afraid of climate change, just solar and wind aren't going to get you to where you want to be; and then B) we seem like we're going to need a lot of clean energy for all these AI data centers. So it really does seem to be a perfect storm after a half-century.And who knows what next. When I started writing Going Nuclear, the AI story hadn't broken yet, and so all of the electricity projections for our future demand, which, they range from doubling to tripling, we're going to need a lot of carbon-free electricity if we've got any hope of electrifying society whilst getting rid of fossil fuels. All of those estimates were underestimates because nobody saw AI coming.It's been very, very interesting just in the last six, 12 months seeing Big Tech in North America moving first on this. Google, Microsoft, Amazon, and Meta have all either invested or actually placed orders for small modular reactors specifically to power their AI data centers. In some ways, they've kind of led the charge on this. They've moved faster than most nation states, although it is encouraging, actually, here in the UK, just a couple of weeks ago, the government announced that our new nuclear power station is definitely going ahead down in Sizewell in Suffolk in the south of England. That's a 3.2 gigawatt nuclear reactor, it's absolutely massive. But it's been really, really encouraging to see Big Tech in the private sector in North America take the situation into their own hands. If anyone's real about electricity demands and how reliable you need it, it's Big Tech with these data centers.I always think, go back five, 10 years, talk of AI was only on the niche subreddits and techie podcasts where people were talking about it. It broke into the mainstream all of a sudden. Who knows what is going to happen in the next five or 10 years. We have no idea what emerging technologies are on the horizon that will also require massive amounts of power, and that's exactly where nuclear can shine.In the US, at least, I don’t think decarbonization alone is enough to win broad support for nuclear, since a big chunk of the country doesn’t think we actually need to do that. But I think that pairing it with the promise of rapid AI-driven economic growth creates a stronger case.I tried to appeal to a really broad church in Going Nuclear because I really, really do believe that whether you are completely preoccupied by climate change and environmental issues or you're completely preoccupied by economic growth, and raising living, standards and all of that kind of thing, all the monetary side of things, nuclear is for you because if you solve the energy problem, you solve both problems at once. 
You solve the economic problem and the environmental problem. There's this really interesting relationship between GDP per head — which is obviously incredibly important in economic terms — and energy consumption per head, and it's basically a straight-line relationship between the two. There are no rich countries that aren't also massive consumers of energy, so if you really, really care about the economy, you should really also be caring about energy consumption and providing energy abundance so people can go out and use that energy to create wealth and prosperity. Again, that's where nuclear comes in. You can use nuclear power to sate that massive energy demand that growing economies require. This podcast is very pro-wealth and prosperity, but I'll also say: if the nuclear dreams of the ’60s had come true, when the former Atomic Energy Commission was expecting there to be 1,000 nuclear reactors in this country by the year 2000, we're not having this conversation about climate change. It is amazing that what some people view as an existential crisis could have been prevented — by the United States and other western countries, at least — just by making a different political decision. We would be spending all of our time talking about something else, and how nice would that be? For sure. I'm sure there'd be other existential crises to worry about. But for sure, we wouldn't be talking about climate change with anywhere near the volume or the sense of urgency that we are now if we would've carried on with the nuclear expansion that really took off in the ’70s and the ’80s. It would be something that would be coming our way in a couple of centuries. About nuclear waste . . . (12:41). . . a 100 percent nuclear-powered life for about 80 years, their nuclear waste would barely fill a wine glass or a coffee cup. I don't know if you've ever seen the television show For All Mankind? I haven't. So many people have recommended it to me. It’s great. It’s an alt-history that looks at what if the Space Race had never stopped. As a result, we had a much more tech-enthusiastic society, which included being much more pro-nuclear. Anyway, imagine if you are on a plane talking to the person next to you, and the topic of your book comes up, and the person says hey, I like energy, wealth, prosperity, but what are you going to do about the nuclear waste? That almost exact situation has happened, but on a train rather than an airplane. One of the cool things about uranium is just how much energy you can get from a very small amount of it. If a typical person in a highly developed economy, say North America or Europe, produced all of their power over their entire lifetime from nuclear alone (so forget fossil fuels, forget wind and solar, a 100 percent nuclear-powered life for about 80 years), their nuclear waste would barely fill a wine glass or a coffee cup. You need a very small amount of uranium to power somebody's life, and the natural conclusion of that is you get a very small amount of waste for a lifetime of power. So in terms of the numbers, the amount of nuclear waste is just not that much of a problem. However, I don't want to just try and trivialize it out of existence with some cool pithy statistics and some cool back-of-the-envelope physics calculations because we still have to do something with the nuclear waste. This stuff is going to be radioactive for the best part of a million years. 
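As a rough sanity check on the wine-glass-or-coffee-cup claim, here is a hedged back-of-the-envelope sketch. It counts electricity only, and the per-capita consumption, thermal efficiency, and fuel burnup are assumptions typical of a light-water reactor, not the guest's figures:

```python
# Back-of-the-envelope estimate of lifetime spent fuel from an
# all-nuclear electricity supply. All inputs are assumptions.
LIFETIME_YEARS = 80
ELEC_PER_YEAR_KWH = 10_000        # assumed per-capita electricity use, developed economy
THERMAL_EFFICIENCY = 0.33         # typical light-water reactor
BURNUP_MWD_PER_TONNE = 45_000     # thermal megawatt-days per tonne of uranium fuel
UO2_DENSITY_G_PER_CM3 = 10.5

lifetime_mwh_electric = LIFETIME_YEARS * ELEC_PER_YEAR_KWH / 1_000
lifetime_mwd_thermal = lifetime_mwh_electric / THERMAL_EFFICIENCY / 24

spent_fuel_kg = lifetime_mwd_thermal / BURNUP_MWD_PER_TONNE * 1_000    # uranium in spent fuel
spent_fuel_cm3 = spent_fuel_kg * 1.13 * 1_000 / UO2_DENSITY_G_PER_CM3  # ~1.13 converts U to UO2 mass

print(f"Lifetime electricity: ~{lifetime_mwh_electric:.0f} MWh(e)")
print(f"Spent fuel: ~{spent_fuel_kg:.1f} kg, ~{spent_fuel_cm3:.0f} cm^3 (a large mug holds ~300 cm^3)")
```

Under these assumptions the spent fuel comes out to a couple of kilograms and a mug-sized volume, which is at least the right order of magnitude for the claim.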
Thankfully, it's quite an easy argument to make because good old Finland, which is one of the most nuclear nations on the planet as a share of nuclear in its grid, has solved this problem. It has implemented — and it's actually working now — the world's first and currently only geological repository for nuclear waste. Their idea is essentially to bury it in impermeable bedrock and leave it there because, as with all radioactive objects, nuclear waste becomes less radioactive over time. The idea is that, in a million years, Finland's nuclear waste won't be nuclear waste anymore, it will just be waste. A million years sounds like a really long time to our ears, but it's actually — It does. It sounds like a long time, but it is the blink of an eye, geologically. So to a geologist, a million years just comes and goes straight away. So it's really not that difficult to keep nuclear waste safe underground on those sorts of timescales. However — and this is the really cool thing, and this is one of the arguments that I make in my book — there are actually technologies that we can use to recycle nuclear waste. It turns out that when you pull uranium out of a reactor, once it's been burned for a couple of years in a reactor, 95 percent of the atoms are still usable. You can still use them to generate nuclear power. So by throwing away nuclear waste when it's been through a nuclear reactor once, we're actually squandering 95 percent of the material that we're throwing away. The theory is, this is sort of the technology behind breeder reactors? That's exactly right, yes. What about the plutonium? People are worried about the plutonium! People are worried about the plutonium, but in a breeder reactor, you get rid of the plutonium because you split it into fission products, and fission products are still radioactive, but they have much shorter half-lives than plutonium. So rather than being radioactive for, say, a million years, they're only radioactive, really, for a couple of centuries, maybe 1000 years, which is a very, very different situation when you think about long-term storage. I read so many papers and memos from the ’50s when these reactors were first being built and demonstrated, and they worked, by the way. They're actually quite easy to build; it just happened in a couple of years. Breeder reactors were really seen as the future of humanity's power demands. Forget traditional nuclear power stations that we all use at the moment, which are just kind of once-through and then you throw away 95 percent of the energy at the end of it. These breeder reactors were really, really seen as the future. They never came to fruition because we discovered lots of uranium around the globe, and so the supply of uranium went up around the time that the nuclear power expansion around the world kind of seized up, so the uranium demand dropped as the supply increased, so the demand for these breeder reactors kind of petered out and fizzled out. But if we're really, really serious about the medium-term future of humanity when it comes to energy, abundance, and prosperity, we need to be taking a second look at these breeder reactors because there's enough uranium and thorium in the ground around the world now to power the world for almost 1000 years. After that, we'll have something else. Maybe we'll have nuclear fusion. Well, I hope it doesn't take a thousand years for nuclear fusion. Yes, me too. Not your mother’s reactors (17:25)In 2005, France got 80 percent of its electricity from nuclear. 
They almost decarbonized their grid by accident before anybody cared about climate change, and that was during a time when their economy was absolutely booming. I don’t think most people are aware of how much innovation has taken place around nuclear in the past few years, or even the past few decades. It’s not just a climate change issue or that we need to power these data centers — the technology has vastly improved. There are newer, safer technologies, so we’re not talking about 1975-style reactors. Even if it were 1975-style reactors, that would be fine because they’re pretty good and they have an almost impeccable safety record, punctuated by a very small number of high-profile events such as Chernobyl and Fukushima. I'm not going to count Three Mile Island on that list because nobody died, but you know what I mean. But the modern nuclear reactors are amazing. The ones that are coming out of France, the EPRs, the European Pressurized Reactors, there are going to be two of those in the UK's new nuclear power station, and they've been designed to withstand an airplane flying into the side of them, so they're basically bomb-proof. As for these small modular reactors, that's getting people very excited, too. As their name suggests, they're small. How small is a reasonable question — the answer is as small as you want to go. These things are scalable, and I've seen designs for just one-megawatt reactors that could easily fit inside a shipping container. They could fit in the parking lots around the side of a data center, or in the basement even, all the way up to multi-hundred-megawatt reactors that could fit on a couple of tennis courts' worth of land. But it's really the modular part that's the most interesting thing. That's the ‘M,’ and that's never been done before. Which really gets to the economics of the SMRs. It really does. The idea is you could build upwards of 90 percent of these reactors on a factory line. We know from the history of industrialization that as soon as you start mass producing things, the unit cost just plummets and the timescales shrink. No one has achieved that yet, though. There's a lot of hype around small modular reactors, and so it's kind of important not to get complacent and really keep our eye on the ultimate goal, which is mass production and rapid mass deployment of nuclear power stations, crucially in the places where you need them the most, as well. We often think about just decarbonizing our electricity supply or decoupling our electricity supply from volatilities in the fossil fuel market, but it’s about more than electricity, as well. We need heat for things like making steel, making the ammonia that feeds most people on the planet, food and drinks factories, car manufacturers, plants that rely on steam. You need heat, and thankfully, the primary energy from a nuclear reactor is heat. The electricity is secondary. We have to put effort into making that. The heat just kind of happens. So there's this idea that we could use the surplus heat from nuclear reactors to power industrial processes that are very, very difficult to decarbonize. Small modular reactors would be perfect for that because you could nestle them into the industrial centers that need the heat close by. So honestly, it is really our imaginations that are the limit with these small modular reactors. They've opened a couple of nuclear reactors down in Georgia here. 
The second one was a lot cheaper and faster to build because they had already learned a bunch of lessons building that first one, and it really gets at sort of that repeatability where every single reactor doesn't have to be this one-off bespoke project. That is not how it works in the world of business. How you get cheaper things is by building things over and over, you get very good at building them, and then you're able to turn these things out at scale. That has not been the economic situation with nuclear reactors, but hopefully with small modular reactors, or even if we just start building a lot of big advanced reactors, we'll get those economies of scale and hopefully the economic issue will then take care of itself.For sure, and it is exactly the same here in the UK. The last reactor that we connected to the grid was in 1995. I was 18 months old. I don't even know if I was fluent in speaking at 18 months old. I was really, really young. Our newest nuclear power station, Hinkley Point C, which is going to come online in the next couple of years, was hideously expensive. The uncharitable view of that is that it's just a complete farce and is just a complete embarrassment, but honestly, you've got to think about it: 1995, the last nuclear reactor in the UK, it was going to take a long time, it was going to be expensive, basically doing it from scratch. We had no supply chain. We didn't really have a workforce that had ever built a nuclear reactor before, and with this new reactor that just got announced a couple of weeks ago, the projected price is 20 percent cheaper, and it is still too expensive, it's still more expensive than it should be, but you're exactly right.By tapping into those economies of scale, the cost per nuclear reactor will fall, and France did this in the ’70s and ’80s. Their nuclear program is so amazing. France is still the most nuclear nation on the planet as a share of its total electricity. In 2005, France got 80 percent of its electricity from nuclear. They almost decarbonized their grid by accident before anybody cared about climate change, and that was during a time when their economy was absolutely booming. By the way, still today, all of those reactors are still working and they pay less than the European Union average for that electricity, so this idea that nuclear makes your electricity expensive is simply not true. They built 55 nuclear reactors in 25 years, and they did them in parallel. It was just absolutely amazing. I would love to see a French-style nuclear rollout in all developed countries across the world. I think that would just be absolutely amazing.Commercial fusion, coming soon . . . ? (23:06)I think we're pretty good at doing things when we put our minds to it, but certainly not in the next couple of decades. But luckily, we already have a proven way of producing lots of energy, and that's with nuclear fission, in the meantime.What is your enthusiasm level or expectation about nuclear fusion? I can tell you that the Silicon Valley people I talk to are very positive. I know they're inherently very positive people, but they're very enthusiastic about the prospects over the next decade, if not sooner, of commercial fusion. How about you?It would be incredible. The last question that I was asked in my PhD interview 10 years ago was, “If you could solve one scientific or engineering problem, what would it be?” and my answer was nuclear fusion. And that would be the answer that I would give today. 
It just seems to me to be obviously the solution to the long-term energy needs of humanity. However, I'm less optimistic, perhaps, than the Silicon Valley crowd. The running joke, of course, is that it's always 40 years away and it recedes into the future at one year per year. So I would love to be proved wrong, but realistically — no one's even got it working in a prototype power station. That’s before we even think about commercializing it and deploying it at scale. I really, really think that we're decades away, maybe even something like a century. I'd be surprised if it took longer than a century, actually. I think we're pretty good at doing things when we put our minds to it, but certainly not in the next couple of decades. But luckily, we already have a proven way of producing lots of energy, and that's with nuclear fission, in the meantime. Don't go to California with that attitude. I can tell you that even when I go there and I talk about AI, if I say that AI will do anything less than improve economic growth by a factor of 100, they just about throw me out over there. Let me just finish up by asking you this: Earlier, we mentioned Three Mile Island and Chernobyl. How resilient do you think this nuclear renaissance is to an accident? Even if we take the rate of accidents over the last 70 years of nuclear power production and we maintain that same rate of accidents, if you like, it's still one of the safest things that our species does, and everyone talks about the death toll from nuclear power, but nobody talks about the lives that it's already saved because of the fossil fuels that it's displaced. Fossil fuels are so amazing in some ways, they're so convenient, they're so energy-dense, they've created the modern world as we all enjoy it in the developed world and as the developing world is heading towards it. But there are some really, really nasty consequences of fossil fuels, and whether or not you care about climate change, even the air pollution alone and the toll that that takes on human health is enough to want to phase them out. Nuclear power already is orders of magnitude safer than fossil fuels, and I read this really amazing paper that, globally, between something like the ’70s and the ’90s, nuclear power saved about two million lives because of the fossil fuels that it displaced. That's, again, orders of magnitude more than the lives that have been lost as a consequence of nuclear power, mostly because of Chernobyl and Fukushima. Even if the safety record of nuclear in the past stays the same and we forward-project that into the future, it's still a winning horse to bet on. In the UK, they've started up one new nuclear reactor in the past 30 years, right? How many would you guess will be started over the next 15 years? Four or five. Something like that, I think; although I don't know. Is that a significant number to you? It's not enough for my liking. I would like to see many, many more. Look at France. I know I keep going back to it, but it's such a brilliant example. If France hadn't done what they'd done in between the ’70s and the ’90s — 55 nuclear reactors in 25 years, all of which are still working — it would be a much more difficult case to make because there would be no historical precedent for it. So, maybe predictably, I wouldn't be satisfied with anything less than a French-scale nuclear rollout, let's put it that way. On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were PromisedMicro Reads▶ Economics* The U.S. 
Marches Toward State Capitalism With American Characteristics - WSJ* AI Spending Is Propping Up the Economy, Right? It’s Complicated. - Barron’s* Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle. - NYT* Sam Altman says Gen Z are the 'luckiest' kids in history thanks to AI, despite mounting job displacement dread - NYT* Lab-Grown Diamonds Are Testing the Power of Markets - Bberg Opinion* Why globalisation needs a leader: Hegemons, alignment, and trade - CEPR* The Rising Returns to R&D: Ideas Are not Getting Harder to Find - SSRN* An Assessment of China's Innovative Capacity - The Fed* Markets are so used to the TACO trade they didn't even blink when Trump extended a tariff delay with China - Fortune* Labor unions mobilize to challenge advance of algorithms in workplaces - Wapo* ChatGPT loves this bull market. Human investors are more cautious. - Axios* What is required for a post-growth model? - Arxiv* What Would It Take to Bring Back US Manufacturing? - Bridgewater▶ Business* An AI Replay of the Browser Wars, Bankrolled by Google - Bberg* Alexa Got an A.I. Brain Transplant. How Smart Is It Now? - NYT* Google and IBM believe first workable quantum computer is in sight - FT* Why does Jeff Bezos keep buying launches from Elon Musk? - Ars* Beijing demands Chinese tech giants justify purchases of Nvidia’s H20 chips - FT* An AI Replay of the Browser Wars, Bankrolled by Google - Bberg Opinion* Why Businesses Say Tariffs Have a Delayed Effect on Inflation - Richmond Fed* Lisa Su Runs AMD—and Is Out for Nvidia’s Blood - Wired* Forget the White House Sideshow. Intel Must Decide What It Wants to Be. - WSJ* With Billions at Risk, Nvidia CEO Buys His Way Out of the Trade Battle - WSJ* Donald Trump’s 100% tariff threat looms over chip sector despite relief for Apple - FT* Sam Altman challenges Elon Musk with plans for Neuralink rival - FT* Threads is nearing X's daily app users, new data shows - TechCrunch▶ Policy/Politics* Trump's China gamble - Axios* U.S. Government to Take Cut of Nvidia and AMD A.I. Chip Sales to China - NYT* A Guaranteed Annual Income Flop - WSJ Opinion* Big Tech’s next major political battle may already be brewing in your backyard - Politico* Trump order gives political appointees vast powers over research grants - Nature* China has its own concerns about Nvidia H20 chips - FT* How the US Could Lose the AI Arms Race to China - Bberg Opinion* America’s New AI Plan Is Great. There’s Just One Problem. - Bberg Opinion* Trump, Seeking Friendlier Economic Data, Names New Statistics Chief - NYT* Trump’s chief science adviser faces a storm of criticism: what's next? - Nature* Trump Is Squandering the Greatest Gift of the Manhattan Project - NYT Opinion▶ AI/Digital* Can OpenAI’s GPT-5 model live up to sky-high expectations? - FT* Google, Schmoogle: When to Ditch Web Search for Deep Research - WSJ* AI Won’t Kill Software. It Will Simply Give It New Life. - Barron's* Chatbot Conversations Never End. That’s a Problem for Autistic People. - WSJ* Volunteers fight to keep ‘AI slop’ off Wikipedia - Wapo* Trump’s Tariffs Won’t Solve U.S. Chip-Making Dilemma - WSJ* GenAI Misinformation, Trust, and News Consumption: Evidence from a Field Experiment - NBER* GPT-5s Are Alive: Basic Facts, Benchmarks and the Model Card - Don’t Worry About the Vase* What you may have missed about GPT-5 - MIT* Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online - NYT* 21 Ways People Are Using A.I. 
at Work - NYT* AI and Jobs: The Final Word (Until the Next One) - EIG* These workers don’t fear artificial intelligence. They’re getting degrees in it. - Wapo* AI Gossip - Arxiv* Meet the early-adopter judges using AI - MIT* The GPT-5 rollout has been a big mess - Ars* A Humanoid Social Robot as a Teaching Assistant in the Classroom - Arxiv* OpenAI Scrambles to Update GPT-5 After Users Revolt - Wired* Sam Altman and the whale - MIT* This is what happens when ChatGPT tries to write scripture - Vox* How AI could create the first one-person unicorn - Economist* AI Robs My Students of the Ability to Think - WSJ Opinion* Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning - Arxiv▶ Biotech/Health* Scientists Are Finally Making Progress Against Alzheimer’s - WSJ Opinion* The Dawn of a New Era in Alzheimer’s and Parkinson's Treatment - RealClearScience* RFK Jr. shifts $500 million from mRNA research to 'safer' vaccines. Do the data back that up? - Reason* How Older People Are Reaping Brain Benefits From New Tech - NYT* Did Disease Defeat Napoleon? - SciAm* Scientists Discover a Viral Cause of One of The World's Most Common Cancers - ScienceAlert* ‘A tipping point’: An update from the frontiers of Alzheimer’s disease research - Yale News* A new measure of health is revolutionising how we think about ageing - NS* First proof brain’s powerhouses drive – and can reverse – dementia symptoms - NA* The Problem Is With Men’s Sperm - NYT Opinion▶ Clean Energy/Climate* The Whole World Is Switching to EVs Faster Than You - Bberg Opinion* Misperceptions About Air Pollution: Implications for Willingness to Pay and Environmental Inequality - NBER* Texas prepares for war as invasion of flesh-eating flies appears imminent - Ars* Data Center Energy Demand Will Double Over the Next Five Years - Apollo Academy* Why Did Air Conditioning Adoption Accelerate Faster Than Predicted? Evidence from Mexico - NBER* Microwaving rocks could help mining operations pull CO2 out of the air - NS* Ford’s Model T Moment Isn’t About the Car - Heatmap* Five countries account for 71% of the world’s nuclear generation capacity - EIA* AI may need the power equivalent of 50 large nuclear plants - E&E▶ Space/Transportation* NASA plans to build a nuclear reactor on the Moon—a space lawyer explains why - Ars* Rocket Lab's Surprise Stock Move After Solid Earnings - Barron’s▶ Up Wing/Down Wing* James Lovell, the steady astronaut who brought Apollo 13 home safely, has died - Ars* Vaccine Misinformation Is a Symptom of a Dangerous Breakdown - NYT Opinion* We’re hardwired for negativity. That doesn’t mean we’re doomed to it. - Vox* To Study Viking Seafarers, He Took 26 Voyages in a Traditional Boat - NYT* End is near for the landline-based service that got America online in the ’90s - Wapo▶ Substacks/Newsletters* Who will actually profit from the AI boom? - Noahpinion* OpenAI GPT-5 One Unified System - AI Supremacy* Proportional representation is the solution to gerrymandering - Slow Boring* Why I Stopped Being a Climate Catastrophist - The Ecomodernist* How Many Jobs Depend on Exports? - Conversable Economist* ChatGPT Classic - Joshua Gans’ Newsletter* Is Air Travel Getting Worse? - Maximum Progress▶ Social Media* On AI Progress - @daniel_271828* On AI Usage - @emollick* On Generative AI and Student Learning - @jburnmurdoch Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. 
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Jul 31, 2025 • 23min

✨ AI and the future of R&D: My chat (+transcript) with McKinsey's Michael Chui

My fellow pro-growth/progress/abundance Up Wingers, The innovation landscape is facing a difficult paradox: Even as R&D investment has increased, productivity per dollar invested is in decline. In his recent co-authored paper, The next innovation revolution—powered by AI, Michael Chui explores AI as a possible solution to this dilemma. Today on Faster, Please! — The Podcast, Chui and I explore the vast potential for AI-augmented research and the challenges and opportunities that come with applying it to the real world. Chui is a senior fellow at QuantumBlack, McKinsey’s AI unit, where he leads McKinsey research in AI, automation, and the future of work. In This Episode* The R&D productivity problem (01:21)* The AI solution (6:13)* The business-adoption bottleneck (11:55)* The man-machine team (18:06)* Are we ready? (19:33)Below is a lightly edited transcript of our conversation. The R&D productivity problem (01:21)All the easy stuff, we already figured out. So the low-hanging fruit has been picked, things are getting harder and harder.Pethokoukis: Do we understand what explains this phenomenon where we seem to be doing lots of science, and we're spending lots of money on R&D, but the actual productivity of that R&D is declining? Do we have a good explanation for that? I don't know if we have just one good explanation. The folks that we both know have been working on both what the causes of this are, as well as what some of the potential solutions are, but I think it's a bit of a hidden problem. I don't think everyone understands that there are a set of people who have looked at this — quite notably Nick Bloom at Stanford, who published this somewhat famous paper that some people are familiar with. But it is surprising in some sense. At one level, it's amazing what science and engineering have been able to do. We continue to see these incredible advances, whether it's in AI, or biotechnology, or whatever; but also, what Nick and other researchers have discovered is that we are producing less for every dollar we spend in R&D. That's this little bit of a paradox, or this challenge, that we see. What some of the research we've been doing tries to understand is: Can AI contribute to bending those curves? . . . I'm a computer scientist by training. I love this idea of Moore's Law: Every couple of years you can double the number of transistors you can put on a chip, or whatever, for the same amount of money. There's something called “Eroom's Law,” which is Moore spelled backwards, and basically it said: For decades in the pharmaceutical industry, the number of compounds or drugs you would produce for every billion dollars of R&D would get cut in half every nine years. That's obviously moving in the wrong direction. That's a challenge I don't think everyone is aware of, but one that we need to address. I suppose, in a way, it does make sense that as we tackle harder problems, and we climb the tree of knowledge, that it's going to take more time, maybe more researchers, the researchers themselves may have to spend more time in school, so it may be a bit of a hidden problem, but it makes some intuitive sense to me. I think there's a way to think about it that way, which is: All the easy stuff, we already figured out. So the low-hanging fruit has been picked, things are getting harder and harder. It's amazing. You could look at some of the early papers in any field and they have a handful of authors, right? The DNA paper, three authors — although it probably should have included Rosalind Franklin . . . 
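As a brief aside, here is a tiny numeric illustration of the Eroom's Law figure mentioned above, output per R&D dollar halving roughly every nine years. The starting value of 1.0 is just a normalization, not real data:

```python
# Eroom's Law as stated above: R&D output per billion dollars
# halves roughly every nine years. Starting value is normalized to 1.0.
HALVING_PERIOD_YEARS = 9

def relative_rd_productivity(years_elapsed: float) -> float:
    """R&D output per dollar relative to the starting year."""
    return 0.5 ** (years_elapsed / HALVING_PERIOD_YEARS)

for years in (0, 9, 18, 27, 36, 45):
    print(f"after {years:2d} years: {relative_rd_productivity(years):.3f}x starting productivity")
```

After about 45 years of that trend, output per dollar is down to roughly 3 percent of where it started, which is why even a modest bend in the curve matters so much.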
Now you look at a physics paper or a computer science paper — the author list just goes on sometimes for pages. These problems are harder. They require more and more effort, whether it's people's talents, or whether it's computing power, or large-scale experiments, things are getting harder to do. I think there's ways in which that makes sense. Are there other ways in which we could improve processes? Probably, too.We could invest more in research, make it more efficient, and encourage more people to become researchers. To me, what’s more exciting than automating different customer service processes is accelerating scientific discovery. I think that’s what makes AI so compelling.That is exactly right. Now, by the way, I think we need to continue to invest in basic research and in science and engineering, I think that's absolutely important, but —That's worth noting, because I'm not sure everybody thinks that, so I'm glad you highlighted that.I don't think AI means that everything becomes cheaper and we don't need to invest in both human talent as well as in research. That's number one.Number two, as you said, we spend a lot of time, and appropriately so, talking about how AI can improve productivity, make things more efficient, do the things that we do already cheaper and faster. I think that's absolutely true. But we had the opportunity to look over history, and what has actually improved the human condition, what has been one of the things that has been necessary to improve the human condition over decades, and centuries, and millennia, is, in fact, discovering new ideas, having scientific breakthroughs, turning those scientific breakthroughs into engineering that turn into products and services, that do everything from expand our lifespans to be able to provide us with food, more energy. All those sorts of things require innovation, require R&D, and what we've discovered is the potential for AI, not only to make things more efficient, but to produce more innovation, more ideas that hopefully will lead to breakthroughs that help us all.The AI solution (6:13)I think that's one of the other potentials of using AI, that it could both absorb some of the experience that people have, as well as stretch the bounds of what might be possible.I've heard described as an “IMI,” it's an invention that makes more invention. It's an invention of a method of invention. That sounds great — how's it going to do that?There are a couple of ways. We looked at three different channels through which AI could improve this process of innovation and R&D. The first one is just increasing the volume, velocity, and variety of different candidates. One way you could think about innovation is you create a whole bunch of candidates and then you filter them down to the ones that might be most effective. Number one, you can just fill that funnel faster, better, and with greater variety. That's number one.The candidates could be a molecule, it could be a drug, it could be a new alloy, it could be lots of things.Absolutely, or a design for a physical product. One of the interesting things is, this quote-unquote “modern AI” — AI's been around for 70 years — is based on foundation models, these large artificial neural networks trained on huge amounts of data, and they produce unstructured outputs. In many cases, language, we talk about LLMs.The interesting thing is, you can train these foundation models not just to generate language, but you can generate a protein, or a drug candidate, as you were saying. 
You can imagine the prompt being, “Please produce 10 drug candidates that address this condition, but without the following side effects.” That’s not exactly how it works, but roughly speaking, that's the potential to generate these things, or generate an electrical circuit, or a design for an air foil or an airframe that has these characteristics. Being able to just generate those.The interesting thing is, not only can you generate them faster, but there's this idea that you can create more variety. We're usefully proud as humans about our creativity, but also, that judgment or that training that we have, that experience sometimes constrains it. The famous example was some folks created this machine called AlphaGo which was meant to compete against the world champion in this game called Go, a very complex strategic game. Famously, it beat the world champion, but one of the things it did is this famous Move 37, this move that everyone who was an expert at Go said, “That is nuts. Why would you possibly do that?” Because the machine was a little bit more unconstrained, actually came up with what you might describe as a creative idea. I think that's one of the other potentials of using AI, that it could both absorb some of the experience that people have, as well as stretch the bounds of what might be possible.So you come up with the design, and then a variety of options, and then AI can help model and test them.Exactly. So you generate a broader and more voluminous set of potential designs, candidates, whether it's molecules, or chemicals, or what have you. Now you need to narrow that down. Traditionally you would narrow it down either one, through physical testing — so put something into a wind tunnel or run it through the water if you're looking at a boat design, or something like that, or put it in an electromagnetic chamber and see how the antenna operates. You'd either test it physically, and then, of course, lots of people figured out how to use physics, mathematical equations, in order to create “digital twins.” So you have these long acronyms like CFD for computational fluid dynamics, basically a virtual wind tunnel, or what have you. Or you have finite element analysis, another way to model how a structure might perform, or computational electromagnetic modeling. All these ways that you can use physics to simulate things, and that's been terrific.But some of those models actually take hours, sometimes days, to run these models. It might be faster than building the physical prototype and then modeling it — again, sometimes you just wait until something breaks, you're doing failure testing. Then you could do that in a computer using these models. But sometimes they take a really long time, and one of the really interesting discoveries in “AI” is you can use that same neural network that we've used to simulate cognition or intelligence, but now you use it to simulate physical systems. So in some ways it's not AI, because you're not creating an artificial intelligence, you're creating an artificial wind tunnel. It's just a different way to model physics. Sometimes these problems get even more complicated . . . If you're trying to put an antenna on an airplane, you need to know how the airflow is going to go over it, but you need to know whether or not the radio frequency stuff works out too, all that RF stuff.So these multiphysics models, the complexity is even higher, and you can train these neural nets . . . even faster than these physics-based models. 
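To make the surrogate idea concrete, here is a minimal, hypothetical sketch: run an expensive simulation a limited number of times, fit a small neural network to the results, then screen thousands of candidate designs almost instantly. The toy "simulation" function and the scikit-learn model are assumptions for illustration, not anything from McKinsey's work or a specific engineering code:

```python
# Toy AI-surrogate sketch: learn a fast approximation of a slow physics model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def slow_simulation(angle_deg, mach):
    """Stand-in for an expensive run (e.g., a virtual wind tunnel)."""
    a = np.radians(angle_deg)
    return 0.1 + 1.2 * np.sin(a) ** 2 + 0.3 * mach * np.cos(a)

# 1) Build a modest training set from the expensive model.
X = np.column_stack([rng.uniform(0, 20, 500), rng.uniform(0.1, 0.9, 500)])
y = slow_simulation(X[:, 0], X[:, 1])

# 2) Train the surrogate on those input/output pairs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(X, y)

# 3) Screen many candidate designs in milliseconds instead of hours.
candidates = np.column_stack([rng.uniform(0, 20, 10_000), rng.uniform(0.1, 0.9, 10_000)])
best = candidates[np.argmin(surrogate.predict(candidates))]
print(f"Most promising candidate (per surrogate): angle={best[0]:.1f} deg, mach={best[1]:.2f}")
```

The same pattern applies whether the slow step is computational fluid dynamics, electromagnetics, or a weather model: the surrogate trades a little accuracy for orders-of-magnitude faster screening.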
So we have these things called AI surrogate models. They're sort of surrogates. It's two steps removed, in some ways, from actual physical testing . . . Literally we've seen models that can run in minutes rather than hours, or an hour rather than a few days. That can accelerate things. We see this in weather forecasting in a number of different ways in which this can happen. If you can generate more candidates and then test them faster, you can imagine the whole R&D process really accelerating. The business-adoption bottleneck (11:55)We know that companies are using AI surrogates, deep learning surrogates, already, but is it being applied in as many places as possible? No, it isn't. Does achieving your estimated productivity increases depend more on further technological advances or does it depend more on how companies adopt and implement the technology? Is the bottleneck still in the tech itself, or is it more about business adaptation? Mostly number two. The technology is going to continue to advance. As a technologist, I love all that stuff, but as usual, a lot of the challenges here are organizational challenges. We know that companies are using AI surrogates, deep learning surrogates, already, but is it being applied in as many places as possible? No, it isn't. A lot of these things are organizational. Does it match your strategy, for instance? Do you have the right talent and organization in place? Let me just give one very specific example. In a lot of R&D organizations we know, there's a separate organization for physical testing and a separate organization for simulations. Simulation, in many cases, is physics-based, but you add these deep-learning surrogates as well. That doesn't make sense at some level. I'm not saying physical testing goes away, but you need to figure out when you should physically test, when you should use which simulation methods, when you should use deep-learning surrogates or AI techniques, et cetera, and that's just one organizational difference that you could make if you were in an organization that was actually taking this whole testing regime seriously, where you're actually parsing out when the optimal amount of physical testing is versus simulation, et cetera. There's a number of things where that's true. Even before AI, historically, there was a gap between novel, new technologies, what they can do in lab settings, and then how they’re applied in real-world research or in business environments. That gap, I would guess, probably requires companies to rewire how they operate, which takes time. It is indeed, and it's funny that you use the word “rewiring.” My colleagues wrote a book entitled Rewired, which literally is about the different ways, together, that you need to, as you say, rewire or change the way an organization operates. Only one of those six chapters is around the tech stack. It's still absolutely important. You've got to get all that stuff right. But it is mostly all of the other things surrounding how you change the way an organization operates in order to bring the full value of this together to reach scale. We also talk about pilot purgatory: “We did this cool experiment . . .” but when is it good enough that the CFO talks about it in the quarterly earnings report? That requires the organization to change the way it operates. That's the learning we've seen all the time. We've been surveying thousands of executives on their use of AI for seven years now. 
Nearly 80 percent of organizations say they're regularly using AI someplace in the business, but in a separate survey, only one percent say they're mature in that usage. There's this giant gap between just using AI and then actually having the value be created. And by the way, organizations that are creating that value are accelerating their performance difference. If you have a much more productive R&D organization that churns out products that are successful in the market, you're going to be ahead of your competitors, and that's what we're seeing too.Is there a specific problem that comes up over and over again with companies, either in their implementation of AI, maybe they don't trust it, they may not know how to use it? What do you think is the problem?Unfortunately, I don't think there's just one thing. My colleagues who do this work on Rewired, for instance — you kind of have to do all those things. You do have to have the right talent and organization in place. You have to figure out scaling, for instance. You have to figure out change management. All of those things together are what underpins outsized performance, so all those things have to be done.So if companies are successful, what is the productivity impact you see? We're talking about basically the current technology level, give or take. We're not talking about human-level AI, superintelligence, we're talking about AI more or less as it exists today. Everybody wants to accelerate productivity: governments around the world, companies. So give me a feel for that.There are different measures of productivity, but here what we're talking about is basically: How many new products, successful products, can you put out in the market? Our modeling says, depending on your industry, you could double your productivity, in other words, of R&D. In other words, you could put out double the amount of products and services — new products and services — that you have been previously.Now, that's not true for every industry. By the way, the impact of that is different for different industries because for some industries you are dependent — In pharmaceuticals, the majority of your value comes from producing new products and services over time because eventually the patent runs out or whatever. There are other industries, we talk about science-based industries like chemicals, for instance. The new-product development process in chemicals is very, very close to the science of chemistry. So these levers that I just talked about — producing more candidates, being able to evaluate them more quickly, and all the other things that LLMs can do, in general, we could see potential doubling in the pace of which innovation happens.On the other hand, the chemicals industry — let's leave out specialty chemicals, but the commodity chemicals — they'll still produce ethylene, right? So to a certain extent, while the R&D process can be accelerated a great deal, the EBIT [Earnings Before Interest and Taxes] impact on the industry might be lower than it is for pharmaceuticals, for instance. But still, it's valuable. And then, again, if you're in specialty chem, it means a lot to you. So depending on where you sit in your position in the market, it can vary, but the potential is really high.The man-machine team (18:06)At least for the medium term, we're not going to be able to get rid of all the people. The people are going to be absolutely important to the process.Will future R&D look more like researchers augmented by AI or AI systems assisted by researchers? 
Who's the assistant in this equation? Who’s working for who?It's “all of the above” and it depends on how you decide to use these technologies, but we even write in our paper that we need to be thoughtful about where you put the human in the loop. Every study, the conditions matter, but there are lots of studies where you say, look, the combination of machines and humans — so AI and researchers — is the most powerful combination. Each brings their respective strengths to it, but the funny thing is that sometimes the human biases actually decrease the performance of the overall system, and so, oh, maybe we should just go with machines. At least for the medium term, we're not going to be able to get rid of all the people. The people are going to be absolutely important to the process.When is it that people either are necessary to the process or can be helpful? In many cases, it is around things like, when is it that you need to make a decision that's a safety-critical decision, a regulatory decision where you just have to have a person look at it? That's the sort of necessity argument for people in the loop. But also, there are things that machines just don't do well enough yet, and there's a little bit of that.Are we ready? (19:33). . . AI is one of those things that can produce potentially more of those ideas that can underpin, hopefully, an improved quality of life for us and our children.If we can get more productive R&D, and then businesses get better at incorporating this into their processes and they could potentially generate more products and services, do we have a government ready for that world of accelerated R&D? Can we handle that flow? My bias says probably not, but please correct me if I'm wrong.I think one of the interesting things is people talk about AI regulation. In many of these industries, the regulations already exist. We have regulations for what goes out in pharmaceuticals, for instance. We have regulations in the aviation industry, we have regulations in the automobile industry, and in many ways, AI in the R&D process doesn't change that — maybe it should, people talk about, can you actually accelerate the process of approving a drug, for instance, but that wasn't the thing that we studied. In some ways, those processes are applied now, already, so that's something that doesn't necessarily have to changeThat said, are some of these potential innovations gated by approval processes or clinical trials processes? Absolutely. In some of those cases, the clinical trials process gait is not necessarily a regulation, but we know there's a big problem just finding enough potential subjects in order to do clinical trials. That's not a regulatory problem, that's a problem of finding people who are good candidates for actually testing these drugs.So yes, in some cases, even if we were able to double the amount of candidates that can go through the funnel on a number of these things, there will be these exogenous issues that would constrain society's ability to bring these to market. 
So that just says, you squeeze the balloon here and it opens up there, but let's go solve each of these problems, and one of the problems that we said AI can help solve is increasing the number of things that you could potentially put into the market if it can get past the other necessities.
For a general public where so much of what they're hearing about AI tends to be about job loss, or whether companies are stealing copyrighted material, or huge advances that people talk about but that they're not seeing yet: What is your optimistic elevator pitch? You may be worried about some impacts of AI, but why are you excited by it?
By the way, I think all those things are really important: all of those concerns, how do we reskill the workforce, all those things, and we've done work on that as well. But the thing that I'm excited about is that we need innovation, we need new ideas, we need scientific advancements, and engineering that turns them into products in order for us to improve the human condition, whether it's living longer lives, or living a higher-quality life, whether it's having the energy to be able to support that in a way that doesn't cause other problems. All of those things, we need to have them, and what we've discovered is AI is one of those things that can produce potentially more of those ideas that can underpin, hopefully, an improved quality of life for us and our children.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics* The Tariffs Kicked In. The Sky Didn’t Fall. Were the Economists Wrong? - NYT Opinion* AI Disruption Is Coming for These 7 Jobs, Microsoft Says - Barron's* One Way to Ease the US Debt Crisis? Productivity - Bberg Opinion* So far, only one-third of Americans have ever used AI for work - Ars▶ Business* Meta and Microsoft Keep Their License to Spend - WSJ* Meta Pivots on AI Under the Cover of a Superb Quarter - Bberg Opinion* Will Mark Zuckerberg’s secret, multibillion-dollar AI plan win over Wall Street? - FT* The AI Company Capitalizing on Our Obsession With Excel - WSJ* $15 billion in NIH funding frozen, then thawed Tuesday in ongoing power war - Ars* Mark Zuckerberg promises you can trust him with superintelligent AI - The Verge* AI Finance App Ramp Is Valued at $22.5 Billion in Funding Round - WSJ▶ Policy/Politics* Trump’s Tariff Authority Is Tested in Court as Deadline on Trade Deals Looms - WSJ* China is betting on a real-world use of AI to challenge U.S. control - Wapo▶ AI/Digital* ‘Superintelligence’ Will Create a New Era of Empowerment, Mark Zuckerberg Says - NYT* How Exposed Are UK Jobs to Generative AI? Developing and Applying a Novel Task-Based Index - Arxiv* Mark Zuckerberg Details Meta’s Plan for Self-Improving, Superintelligent AI - Wired* A Catholic AI app promises answers for the faithful. Can it succeed? - Wapo* Power Hungry: How Ai Will Drive Energy Demand - SSRN* The two people shaping the future of OpenAI’s research - MIT* Task-based returns to generative AI: Evidence from a central bank - CEPR▶ Biotech/Health* How to detect consciousness in people, animals and maybe even AI - Nature* Why living in a volatile age may make our brains truly innovative - NS▶ Clean Energy/Climate* The US must return to its roots as a nation of doers - FT* How Trump Rocked EV Charging Startups - Heatmap* Countries Promise Trump to Buy U.S. Gas, and Leave the Details for Later - NYT* Startup begins work on novel US fusion power plant. Yes, fusion.
- E&E* Scientists Say New Government Climate Report Twists Their Work - Wired▶ Robotics/Drones/AVs* The grand challenges of learning medical robot autonomy - Science* Coal-Powered AI Robots Are a Dirty Fantasy - Bberg Opinion▶ Up Wing/Down Wing* A Revolutionary Reflection - WSJ Opinion* Why Did the Two Koreas Diverge? - SSRN* The best new science fiction books of August 2025 - NS* As measles spreads, old vaccination canards do too - FT
Jul 25, 2025 • 27min

📊 The US economy at midyear: My chat (+transcript) with economic analyst Joey Politano

Joey Politano, an economic analyst and author of the Apricitas Economics Substack, dives deep into the turbulent waters of the current U.S. economy. He discusses the unprecedented changes in trade policy, emphasizing how tariffs and immigration trends are reshaping the landscape. Politano also highlights the unpredictability of trade decisions and their chaotic impact on businesses. Additionally, he explores the potential of AI to propel productivity, while stressing the need for sound policies to harness technological advantages.
Jul 10, 2025 • 31min

🎇 An age of transformation: My chat (+transcript) with techno-futurist Peter Leyden

In this enlightening conversation, Peter Leyden, a futurist and technology expert, discusses the pivotal role of emerging technologies like AI, clean energy, and bioengineering. Leyden highlights America's need for leadership and a mindset shift to harness these changes. He addresses both the optimism and fears surrounding AI, contrasting Western pessimism with Asian hope. Additionally, he explores the revolutionary potential of bioengineering and the impact of demographic shifts on innovation, advocating for proactive strategies to shape a sustainable future.
Jun 27, 2025 • 36min

✨ 🧬 When AI meets biotechnology: My chat (+transcript) with techno-futurist Jamie Metzl

Jamie Metzl, a senior fellow at the Atlantic Council and author of 'Superconvergence,' dives into the intersection of AI and biotechnology. He discusses how these once-sci-fi technologies are reshaping our future and emphasizes the need for strategic oversight. Metzl explores societal fears of disruption, introduces the concept of 'newnimals,' and argues for a risk-tolerant approach to innovation. The conversation also highlights the importance of curiosity in fostering global advancements and the ethical implications of these technologies.
Jun 12, 2025 • 30min

🚀 NASA and the New Space Age: My chat (+transcript) with James Meigs

My fellow pro-growth/progress/abundance Up Wingers,America is embarking upon a New Space Age, with companies like SpaceX and Blue Origin ready to partner with NASA to take Americans to a new frontier — possibly as far as Mars. Lately, however, the world is witnessing uncertainty surrounding NASA leadership and even an odd feud between SpaceX boss Elon Musk and the White House. At a critical time for US space competition, let’s hope key players can stick the landing.Today on Faster, Please! — The Podcast, I chat with James Meigs about the SLS rocket, NASA reforms, and the evolving private sector landscape.Meigs is a senior fellow at the Manhattan Institute. He is a contributing editor of City Journal and writer of the Tech Commentary column at Commentary magazine. He is also the former editor of Popular Mechanics.Meigs is the author of a recent report from the Manhattan Institute, U.S. Space Policy: The Next Frontier.In This Episode* So long, Jared Isaacman (1:29)* Public sector priorities (5:36)* Supporting the space ecosystem (11:52)* A new role for NASA (17:27)* American space leadership (21:17)Below is a lightly edited transcript of our conversation. So long, Jared Isaacman (1:29)The withdrawal of Jared Isaacman . . . has really been met with total dismay in the space community. Everyone felt like he was the right kind of change agent for the agency that desperately needs reform, but not destruction.Pethokoukis: We're going to talk a lot about your great space policy report, which you wrote before the withdrawal of President Trump's NASA nominee, Jared Isaacman.What do you think of that? Does that change your conclusions? Good move, bad move? Just sort of your general thoughts apart from the surprising nature of it.Meigs: I worked sort of on and off for about a year on this report for the Manhattan Institute about recommendations for space policy, and it just came out a couple of months ago and already it's a different world. So much has happened. The withdrawal of Jared Isaacman — or the yanking of his nomination — has really been met with total dismay in the space community. Everyone felt like he was the right kind of change agent for the agency that desperately needs reform, but not destruction.Now, it remains to be seen what happens in terms of his replacement, but it certainly pulled the rug out from under the idea that NASA could be reformed and yet stay on track for some ambitious goals. I'm trying to be cautiously optimistic that some of these things will happen, but my sense is that the White House is not particularly interested in space.Interestingly, Musk wasn't really that involved in his role of DOGE and stuff. He didn't spend that much time on NASA. He wasn't micromanaging NASA policy, and I don't think Isaacman would've been just a mouthpiece for Musk either. He showed a sense of independence. So it remains to be seen, but my recommendations . . . and I share this with a lot of people advocating reform, is that NASA more or less needs to get out of the rocket-building business, and the Space Launch System, this big overpriced rocket they've been working on for years — we may need to fly it two more times to get us back to the moon, but after that, that thing should be retired. If there's a way to retire it sooner, that would be great. 
At more than $4 billion a launch, it's simply not affordable, and NASA will not be an agency that can routinely send people into space if we're relying on that white elephant.
To me, what was exciting about Isaacman was his genuine enthusiasm about space. It seemed like he understood that NASA needed reform and changes to the budget, but that the result would be an agency that still does big things. Is there a fear that his replacement won’t be interested in NASA creative destruction, just destruction?
We don't know for sure, but the budget that's been proposed is pretty draconian, cutting NASA's funding by about a quarter and recommending particularly heavy cuts in the science missions, which would require cutting short some existing missions that are underway and not moving ahead with other planned missions.
There is room for savings in some of these things. I advocate a more nimble approach to NASA's big science missions. Instead of sending one $4 billion rover to Mars every 20 years, once launch costs come down, how about we send ten little ones? If a couple of them don't make it, we could still be getting much more science done for the same price or less. So that's the kind of thing Isaacman was talking about, and that's the kind of thing that will be made possible as launch costs continue to fall, as you've written about, Jim. So it requires a new way of thinking at NASA. It requires a more entrepreneurial spirit, and it remains to be seen whether another administrator can bring that along. We were hoping that Isaacman would.
Public sector priorities (5:36)
Congress has never deviated from focusing more on keeping these projects alive than on whether these projects achieve their goals.
It seems to me that there are only two reasons, at this point, to be in favor of the SLS rocket. One: There’s a political pork jobs aspect. And the other is that it’s important to beat China to the moon, which the Artemis program is meant to do. Does that seem accurate?
Pretty much, yeah. You can be for beating China to the moon and still be against the SLS rocket; you kind of just grit your teeth and say, okay, we've got to fly it two more times because it would be hard to cobble together, in the timeframe available, a different approach — but not impossible. There are other heavy-lift rockets. Once you can refuel in orbit and do other things, there's a lot of ways to get a heavy payload into orbit. When I started my report, it looked like SLS was the only game in town, but that's really not the case. There are other options.
The Starship has to quit blowing up.
I would've loved to have seen the last couple of Starship missions be a little more successful. That's unfortunate. The pork part of SLS just can't be overstated. From the get-go, going way back to when the Space Shuttle was retired in 2011, and even before that, after the Columbia Space Shuttle disaster — that's the second disaster — there was a really big effort to figure out how to replace the Space Shuttle, what would come next.
There was a strong movement in Congress at that time to say, “Well, whatever you build, whatever you do, all the factories that are involved in working on the Space Shuttle, all of the huge workforces in NASA that work on the Space Shuttle, all of this manpower has to be retained.” And Congress talked a lot about keeping the experience, the expertise, the talent going.
I can see some legitimacy to that argument, but if you looked at the world that way, then you would always focus on keeping the jobs of the past viable instead of the jobs of the future: What are we going to do with the blacksmiths who shoe horses? If we lose all this technological capability of shoeing horses . . . we’d better not bring in all these cars! That's an exaggeration, but as a result, first they aimed to replace the Space Shuttle with a rocket called Constellation that would recycle some of the Shuttle components. And then eventually they realized that that was just too bloated, too expensive. That got canceled during the Obama administration and replaced with the Space Launch System, which was supposed to be cheaper, more efficient, and able to be built in a reasonable amount of time.
It wound up being just as bloated and also technologically backward. They're still keeping technology from the Shuttle era: the solid-fuel engines, which, as we recall from the first Shuttle disaster, were problematic, and the Shuttle main engine design as well. So when SLS flies with humans on board for the first time, supposedly next year, it'll be using technology that was designed before any of the astronauts were even born.
In this day and age, that's kind of mind-blowing, and it will retain these enormous workforces in these plants that happen to be located in states with powerful lawmakers. So there's an incredible incentive to just keep it all going, not to let things change, not to let anything be retired, and to keep that money flowing to contractors, to workers, and to individual states. Congress has never deviated from focusing more on keeping these projects alive than on whether these projects achieve their goals.
I've seen video of congressional hearings from 15 years ago, and the hostility toward the idea of there being a private-sector alternative to NASA now seems almost inexplicable, given that even some of these people were Republicans from Texas.
Seeing where we are now, it’s just amazing, because now that we have the private sector, we're seeing innovation, we're seeing the drop in launch costs, the reusability — just a completely different world than what existed 15, 16, 17 years ago.
I don't think people really realize how revolutionary NASA's commercial programs were. They really sort of snuck them in quietly at first, starting as far back as 2005, with a small program to help companies develop their own space transportation systems that could deliver cargo to the International Space Station.
SpaceX was initially not necessarily considered a leader in that. It was a little startup company nobody took very seriously, but they wound up doing the best job.
Then later they also led the race to be the first to deliver astronauts to the International Space Station, saved NASA billions of dollars, and helped launch this private-industry revolution in space that we're seeing today, which is really exciting.
It's easy to say, “Oh, NASA's just this old sclerotic bureaucracy,” and there's some truth to that, but NASA has always had a lot of innovative people, and a lot of the push to move to this commercial approach, where NASA essentially charters a rocket the way you would charter a fishing boat rather than trying to build and own its own equipment, came from them. That's the key distinction. You’ve got to give them credit for that, and you also have to give SpaceX enormous credit for endless technological innovation that has brought down these prices.
So I totally agree, it's inconceivable to think of trying to run NASA today without their commercial partners. Of course, we'd like to see more than just SpaceX in there. That's been a surprise to people. In a weird way, SpaceX's success is a problem, because you want an ecosystem of competitors that NASA can choose from, not just one dominant supplier.
Supporting the space ecosystem (11:52)
There's a reason that the private space industry is booming in the US much more than elsewhere in the world. But I think they could do better and I'd like to see reform there.
Other than the technical difficulty of the task, is there something government could be doing, or not doing, perhaps on the regulatory side, to encourage a bigger, more vibrant space ecosystem?
In my Manhattan Institute report, I recommend some changes; in particular, the FAA needs to continue reforming its launch regulations. They’re more restrictive and take longer than they should. I think they're making some progress. They recently authorized more launches of the experimental SpaceX Starship, but it shouldn't take months to go through the paperwork to authorize the launch of a new spacecraft.
I think the US is currently better than most countries in terms of allowing private space. There's a reason that the private space industry is booming in the US much more than elsewhere in the world. But I think they could do better and I'd like to see reform there.
I also think NASA needs to continue its efforts to work with a wide range of vendors in this commercial paradigm and accept that a lot of them might not pan out. We've seen a really neat NASA program to help a lot of different companies, a lot of them startups, try to build and land small rovers on the moon. Well, a lot of them have crashed.
Not an easy task, apparently.
No. When I used to be editor of Popular Mechanics magazine, one of the great things I got to do was hang out with Buzz Aldrin, and hearing Buzz Aldrin talk about landing on the moon — now, looking back, you realize just how insanely risky that was.
You see all these rovers designed today with all the modern technology failing to land a much smaller, lighter object safely on the moon, and you just think, “Wow, that was an incredible accomplishment.” And you have so much admiration for the guts of the guys who did it.
As they always say, space is hard, and I think NASA should keep working with commercial vendors to help them: give them some seed money, help them get started, pay them a set fee for the mission that you're asking for, but also build into your planning — just the way an entrepreneur would — that some product launches aren't going to work, some ideas are going to fail, sometimes you're going to have to start over. That's just part of the process, and if you're not spending ridiculous amounts of money, that's okay.
When we talk about vendors, who are we talking about? When we talk about this ecosystem as it currently exists, what do these companies do besides SpaceX?
The big one that everybody always mentions first, of course, is Blue Origin, Jeff Bezos's startup that's been around as long as SpaceX but just moved much more slowly. That's partly because when it first started up, it was more of a think tank to explore different ideas about space and less of a scrappy startup trying to just make money by launching satellites for paying customers as soon as possible. That was Musk's model. But they've finally launched. They've launched a bunch of suborbital flights; you've seen where they carry various celebrities and stuff up to the edge of space for a few minutes and they come right back down. That's been a chance for them to test out their engines, which have seemed solid and reliable, and they've finally done one mission with their New Glenn rocket. Like SpaceX, it's a reusable rocket which can launch pretty heavy payloads. Once that gets proven and they've had a few more launches under their belt, it should be an important part of this ecosystem.
But you've got other companies, you've got Stoke Aerospace, you've got Firefly . . . You've got a few companies that are in the launch business, so they want to compete with SpaceX to launch mostly satellites for paying customers, also cargo and payloads for governments. And then you have a lot of other companies that are doing various kinds of space services, and they're not necessarily going to try to be in the launch business per se. We don't need 40 different companies doing launches with different engines, different designs, different fuels, and stuff like that. Eight or 10 might be great, six might be great. We’ll see how the market sorts itself out.
But then if you look at the development of the auto industry, it started with probably hundreds of little shops hand-building cars, but by mid-century it had settled down to a few big companies through consolidation. And instead of hundreds of engine designs, by 1950 there were probably, in the US, I don't know, 12 engine designs or something like that. Stuff got standardized — we'll see the same thing happen in space — but you also saw an enormous ecosystem of companies building batteries, tires, transmissions, parts, wipers, all sorts of little things, and an industry to service the automobile. Now, rockets are a lot more centralized and high-tech, but you're going to see something like that in the space economy, and it's already happening.
A new role for NASA (17:27)
I think NASA should get more ambitious in deep-space flight, both crewed and uncrewed.
What do you think NASA should be doing?
We don't want them designing rockets anymore, so what should they do? What does that portfolio look like?
That's an excellent question. I think that we are in this pivotal time when, because of the success of SpaceX, and hopefully soon other vendors, they can relieve themselves of that responsibility to build their own rockets. That gets rid of a lot of the problems of Congress meddling to maximize pork flowing to their states and all of that kind of stuff. So that's a positive in itself.
Perhaps a bug rather than a feature for Congress.
Right, but it also means that technology will move much, much faster as private companies are innovating and competing with each other. That gives NASA an opportunity. What should they do with it? I think NASA should get more ambitious in deep-space flight, both crewed and uncrewed. Because it'll get much cheaper to get cargo into orbit, to get payload up there, as I said, they can launch more science missions. And then, when it comes to human missions, I like the overall plan of Artemis. The details were really pulled together during the first Trump administration, which had a really good space policy overall: return to the moon, set up a permanent or long-term habitation on the moon. The way NASA sketches it out, not all the burden is carried by NASA.
They envision — or did envision — a kind of ecosystem on the moon where you might have private vendors there providing services. You might have a company that mines ice and makes oxygen, and fuel, and water for the residents of these space stations. You might have somebody else building habitation that could be used by visiting scientists who are not NASA astronauts, but also used by NASA.
There's all this possibility to combine what NASA does with the private sector, and what NASA should always do is be focused on the stuff the private sector can't yet do. That would be the deep-space probes. That would be sending astronauts on the most daring, non-routine missions. As the private sector develops the ability to do some of those things, then NASA can move on to the next thing. That's one set of goals.
Another set of goals is to do the research into technologies, things that are hard for the private sector to undertake. In particular, things like new propulsion for deep-space travel. There are a couple of different designs for nuclear rocket engines that I think are really promising, super efficient. Sadly, under the current budget cuts that are proposed at NASA, that's one of the programs being cut. And if you really want to do deep-space travel routinely, chemical fuels aren't impossible, but they're not as feasible, because you’ve got to get all that heavy fuel — whatever it is, methane or whatever — up into orbit, or you’ve got to manufacture it on the moon or somewhere. The energy density of plutonium or uranium is just so much higher and it just allows you to do so much more with lighter weight.
So I'd like to see them research those kinds of things that no individual private company could really afford to do at this point, and then when the technology is more mature, hand it off to the private sector.American space leadership (21:17)Exploration's never been totally safe, and if people want to take risks on behalf of a spirit of adventure and on behalf of humanity at large, I say we let them.If things go well —reforms, funding, lower launch costs — what does America’s role in space look like in 10 to 15 years, and what’s your concern if things go a darker route, like cutting nuclear engine research you were just talking about?I'll sketch out the bright scenario. This is very up your alley, Jim.Yeah, I viewed this as a good thing, so you tell me what it is.In 15 years I would love to see a small permanent colony at the south pole of the moon where you can harvest ice from the craters and maybe you'd have some habitation there, maybe even a little bit of space tourism starting up. People turn up their nose at space tourism, but it's a great way to help fund really important research. Remember the Golden Age of Exploration, James Cook and Darwin, those expeditions were self-funded. They were funded by rich people. If rich people want to go to space, I say I'm all for it.So a little base on the moon, important research going on, we're learning how to have people live on a foreign body, NASA is gathering tons of information and training for the next goal, which I think is even more important: I do agree we should get people to Mars. I don't think we should bypass the moon to get to Mars, I don't think that's feasible, that's what Elon Musk keeps suggesting. I think it's too soon for that. We want to learn about how people handle living off-planet for a long period of time closer to home — and how to mine ice and how to do all these things — closer to home, three or four days away, not months and months away. If something goes wrong, they'll be a lot more accessible.But I'd like to see, by then, some Mars missions and maybe an attempt to start the first long-term habitation of Mars. I don't think we're going to see that in 10 years, but I think that's a great goal, and I don't think it's a goal that taxpayers should be expected to fund 100 percent. I think by then we should see even more partnerships where the private companies that really want to do this — and I'm looking at Elon Musk because he's been talking about it for 20 years — they should shoulder a lot of the costs of that. If they see a benefit in that, they should also bear some of the costs. So that's the bright scenario.Along with that, all kinds of stuff going on in low-earth orbit: manufacturing drugs, seeing if you can harness solar energy, private space stations, better communications, and a robust science program exploring deep space with unmanned spacecraft. I'd like to see all of that. I think that could be done for a reasonable amount of money with the proper planning.The darker scenario is that we've just had too much chaos and indecision in NASA for years. We think of NASA as being this agency of great exploration, but they've done very little for 20 years . . . I take that back — NASA's uncrewed space program has had a lot of successes. It's done some great stuff. But when it comes to manned space flight, it's pretty much just been the International Space Station, and I think we've gotten most of the benefit out of that. They're planning to retire that in 2030. So then what happens? 
After we retired the Space Shuttle, space practically went into a very low-growth period. We haven't had a human being outside of low-earth orbit since Apollo, and that's embarrassing, frankly. We should be much more ambitious.I'm afraid we're entering a period where, without strong leadership and without a strong focus on really grand goals, then Congress will reassert its desire to use NASA as a piggy bank for their states and districts and aerospace manufacturers will build the stuff they're asked to build, but nothing will move very quickly. That's the worst-case scenario. We'll see, but right now, with all of the kind of disorder in Washington, I think we are in a period where we should be concerned.Can America still call itself the world’s space leader if its role is mainly launching things into Earth orbit, with private companies running space stations for activities like drug testing or movie production if, meanwhile, China is building space stations and establishing a presence on the Moon? In that scenario, doesn’t it seem like China is the world’s leader in space?That's a real issue. China has a coherent nationalistic plan for space, and they are pursuing it, they're pouring a lot of resources into it, and they're making a lot of headway. As always, when China rolls out its new, cutting-edge technology, it usually looks a lot like something originally built in the US, and they're certainly following SpaceX's model as closely as they can in terms of reusable rockets right now.China wants to get to the moon. They see this as a space race the way the Soviets saw a space race. It's a battle for national prestige. One thing that worries me, is under the Artemis plan during the first Trump administration, there was also something called the Artemis Accords — it still exists — which is an international agreement among countries to A) join in where they can if they want, with various American initiatives. So we've got partners that we're planning to build different parts of the Artemis program, including a space station around the moon called Gateway, which actually isn't the greatest idea, but the European Space Agency and others were involved in helping build it.But also, all these countries, more than 50 countries have signed on to these aspirational goals of the Artemis Accords, which are: freedom of navigation, shared use of space, going for purposes of peaceful exploration, being transparent about what you're doing in space so that other countries can see it, avoiding generating more space junk, space debris, which is a huge problem with all the stuff we've got up there now, including a lot of old decrepit satellites and rocket bodies. So committing to not just leaving your upper-stage rocket bodies drifting around in space. A lot of different good goals, and the fact that all these countries wanted to join in on this shows America's preeminence. But if we back away, or become chaotic, or start disrespecting those allies who've signed on, they're going to look for another partner in space and China is going to roll out the red carpet for them.You get a phone call from SpaceX. They've made some great leap forwards. That Starship, it's ready to go to Mars. They're going to create a human habitation out there. They need a journalist. By the way, it's a one-way trip. Do you go?I don't go to Mars. I've got family here. That comes first for me. But I know some people want to do that, and I think that we should celebrate that. 
The space journalist Rand Simberg wrote a book years ago called Safe Is Not An Option — that we should not be too hung up on trying to make space exploration totally safe. Exploration's never been totally safe, and if people want to take risks on behalf of a spirit of adventure and on behalf of humanity at large, I say we let them. So maybe that first trip to Mars is a one-way trip, or at least a one-way for a couple of years until more flights become feasible and more back-and-forth return flights become something that can be done routinely. It doesn't really appeal to me, but it'll appeal to somebody, and I'm glad we have those kinds of people in our society.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were PromisedMicro Reads▶ Economics* Trump economy shows surprising resilience despite tariff impacts - Wapo* Supply Chains Become New Battleground in the Global Trade War - WSJ* This A.I. Company Wants to Take Your Job - NYT* The Mirage of Geoeconomics - PS* Japan urged to use gloomier population forecasts after plunge in births - FT* Europe’s nuclear fusion potential draws record investment round - FT▶ Business* How Disney’s AI lawsuit could shift the future of entertainment - Wapo* Meta plans big bet on AI’s secret ingredient: human brains - FT* Nvidia and Perplexity Team Up in European AI Push - WSJ* CRMArena-Pro: Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions - Arxiv* Fervo Snags $206 Million for Cape Station Geothermal - Heatmap* BYD launches cut-price EVs in Europe amid global price war - Semafor▶ Policy/Politics* The right refuses to take AI seriously - Vox* The Gig Economy Benefits Freelance Workers—Until Regulation Steps In - AEI* The war is on for Congress’ AI law ban - The Verge* Disney and Universal Sue AI Company Midjourney for Copyright Infringement - Wired* Big Tech Is Finally Losing - NYT Opinion* American Science's Culture Has Contributed to the Grave Threat It Now Faces - Real Clear Science▶ AI/Digital* New Apple study challenges whether AI models truly “reason” through problems - Ars* The problem of AI chatbots telling people what they want to hear - FT* With the launch of o3-pro, let’s talk about what AI “reasoning” actually does - Ars* ‘This is coming for everyone’: A new kind of AI bot takes over the web - Wapo* Europe’s AI computing shortage ‘will be resolved’ soon, says Nvidia chief - FT* We’re Not Ready for the AI Power Surge - Free Press▶ Biotech/Health* Pancreatic cancer vaccine eradicates trace of disease in early trials - New Atlas* World first: brain implant lets man speak with expression — and sing - Nature* The Alzheimer’s drug pipeline is healthier than you might think - The Economist▶ Clean Energy/Climate* Big Tech Cares About Clean Energy Tax Credits — But Maybe Not Enough - Heatmap* Nvidia ‘Climate in a Bottle’ Opens a View Into Earth’s Future. What Will We Do With It? - WSJ* Oil’s Lost Decade Is About to Be Repeated - Bberg Opinion* How the Pentagon Secretly Sparked America's Clean Energy Boom - The Debrief▶ Space/Transportation* Musk-Trump feud is a wake-up call on space - FT* Trump's 2026 budget cuts would force the world's most powerful solar telescope to close - Space▶ Up Wing/Down Wing* ‘Invasive Species’? Japan’s Growing Pains on Immigration - Bberg Opinion* Incredible Testimonies - Aeon* How and When Was the Wheel Invented? 
- Real Clear Science▶ Substacks/Newsletters* Trump's "beautiful" bill wrecks our energy future - Slow Boring* DOGE Looked Broken Before the Trump-Musk Breakup - The Dispatch* Steve Teles on abundance: prehistory, present, and future - The Permanent Problem* Is Macroeconomics a Mature Science? - Conversable Economist
May 8, 2025 • 39min

🤖 Superintelligence and national security: My chat (+transcript) with AI expert Dan Hendrycks

My fellow pro-growth/progress/abundance Up Wingers,As we seemingly grow closer to achieving artificial general intelligence — machines that are smarter than humans at basically everything — we might be incurring some serious geopolitical risks.In the paper Superintelligence Strategy, his joint project with former Google CEO Eric Schmidt and Alexandr Wang, Dan Hendrycks introduces the idea of Mutual Assured AI Malfunction: a system of deterrence where any state’s attempt at total AI dominance is sabotaged by its peers. From the abstract: Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in. Alongside this, states can increase their competitiveness by bolstering their economies and militaries through AI, and they can engage in nonproliferation to rogue actors to keep weaponizable AI capabilities out of their hands. Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead.Today on Faster, Please! — The Podcast, I talk with Hendrycks about the potential threats posed by superintelligent AI in the hands of state and rogue adversaries, and what a strong deterrence strategy might look like.Hendrycks is the executive director of the Center for AI Safety. He is an advisor to Elon Musk’s xAI and Scale AI, and is a prolific researcher and writer.In This Episode* Development of AI capabilities (1:34)* Strategically relevant capabilities (6:00)* Learning from the Cold War (16:12)* Race for strategic advantage (18:56)* Doomsday scenario (28:18)* Maximal progress, minimal risk (33:25)Below is a lightly edited transcript of our conversation. Development of AI capabilities (1:34). . . mostly the systems aren't that impressive currently. People use them to some extent, but I'd more emphasize the trajectory that we're on rather than the current capabilities.Pethokoukis: How would you compare your view of AI . . . as a powerful technology with economic, national security, and broader societal implications . . . today versus November of 2022 when OpenAI rolled out ChatGPT?Hendrycks: I think that the main difference now is that we have the reasoning paradigm. Back in 2022, GPT couldn't think for an extended period of time before answering and try out multiple different ways of dissolving a problem. The main new capability is its ability to handle more complicated reasoning and science, technology, engineering, mathematics sorts of tasks. It's a lot better at coding, it's a lot better at graduate school mathematics, and physics, and virology.An implication of that for national security is that AIs have some virology capabilities that they didn't before, and virology is dual-use that can be used for civilian applications and weaponization applications. That's a new concerning capability that they have, but I think, overall, the AI systems are still fairly similar in their capabilities profile. 
They're better in lots of different ways, but not substantially.
I think the next large shift is when they can be agents, when they can operate more autonomously, when they can book you flights reliably, make PowerPoints, play through long-form games for extended periods of time, and that seems like it's potentially on the horizon this year. It didn't seem like that two years ago. That's something that a lot of people are keeping an eye on and think could be arriving fairly soon. Overall, I think the capabilities profile is mostly the same except now it has some dual-use capabilities that they didn't have earlier, in particular virology capabilities.
To what extent are your national security concerns based on the capabilities of the technology as it is today versus where you think it will be in five years? This is also a way of me asking about the extent to which you view AGI as a useful framing device — so this is also a question about your timeline.
I think that mostly the systems aren't that impressive currently. People use them to some extent, but I'd more emphasize the trajectory that we're on rather than the current capabilities. They still can't do very interesting cyber offense, for instance. The virology capability is very recent. We just, I think maybe a week ago, put out a study with SecureBio from MIT where we had Harvard and MIT virology postdocs doing wet lab work, trying to work on viruses. So, “Here's a picture of my petri dish, I heated it to 37 degrees, what went wrong? Help me troubleshoot, help guide me through this step by step.” We were seeing that it was getting around the 95th percentile compared to those Harvard-MIT virology postdocs in their area of expertise. This is not a capability that the models had two years ago.
That is a national security concern, but I think most of the national security concerns where it's strategically relevant, where it can be used for more targeted weapons, where it affects the basis of a nation's power, I think that's something that happens in the next, say, two to five years. I think that's what we mostly need to be thinking about. I’m not particularly trying to raise the alarm saying that the AI systems right now are extremely scary in all these different ways because they're not even agential. They can't book flights yet.
Strategically relevant capabilities (6:00)
. . . when thinking about the future of AI . . . it's useful to think in terms of specific capabilities, strategically-relevant capabilities, as opposed to when is it truly intelligent . . .
So that two-to-five-year timeline — and you can debate whether this is a good way of thinking about it — is that a trajectory or timeline to something that could be called “human-level AI” — you can define that any way you want — and what are the capabilities that make AI potentially dangerous and a strategic player when thinking about national security?
I think having a monolithic term for AGI or for advanced AI systems is a little difficult, largely because there's been a consistently-moving goalpost. So right now people say, “AIs are dumb because they can't do this and that.” They can't play video games at the level of a teenager, they can't code for a day-long project, and things like that. Neither can my grandmother.
That doesn't mean that she doesn't have human-level intelligence; it's just that a lot of people don't have some of these capabilities.
I think when thinking about the future of AI, especially when thinking about national security, it's useful to think in terms of specific capabilities, strategically-relevant capabilities, as opposed to when is it truly intelligent or something like that. This is because the capabilities of AI systems are very jagged: they're good at some things and terrible at others. They can't fold clothes that reliably — most of the AI can't — and they're okay at driving in some cities but not others, but they can solve really difficult mathematics problems, they can write really long essays and provide pretty good legal analysis very rapidly, and they can also forecast geopolitical events better than most forecasters. It's a really weird capabilities profile.
When I'm thinking about national security from a malicious-use standpoint, I'm thinking about weapon capabilities, I'm thinking about cyber-offensive capabilities, which they don't yet have, but that's an important one to track, and, outside of malicious use, I'm thinking about what's their ability to do AI research and how much of that can they automate? Because if they can automate AI research, then you could just run 100,000 of these artificial AGI researchers to build the next generations of AGI, and that could get very explosive extremely quickly. You're moving from human-speed research to machine-speed research. They’re typing 100 times faster than people, they're running tons of experiments simultaneously. That could be quite explosive, and that's something that the founders of AI, like Alan Turing and others, pointed at as a really relevant capability, where you could have a potential loss-of-control type of event with this sort of runaway process of AIs building future generations of AIs quite rapidly.
So that's another capability. What fraction of AI research can they automate? For weaponization, I think if it gets extremely smart, able to do research in lots of other sorts of fields, then that would raise concerns about its ability to be used to disrupt the balance of power. For instance, if it can do research well, perhaps it could come up with a breakthrough that makes oceans more transparent so we can find where nuclear submarines are or find mobile launchers extremely reliably, or a breakthrough that drives down the cost of anti-ballistic missile systems by some orders of magnitude, which would disrupt having a secure second strike, and these would be very geopolitically salient. To do those things, though, that seems like a bundle of capabilities as opposed to a specific thing like cyber-offensive capabilities, but those are the things that I'm thinking about that can really disrupt the geopolitical landscape.
If we put them in a bucket called, to use your phrase, “strategically-relevant capabilities,” are we on a data- and computing-power-driven trajectory to those capabilities? Or do there need to be one or two key innovations before those relevant capabilities are possible?
It doesn't currently seem like we need some new big insights, in large part because the rate of improvement is pretty good. So if we look at their coding capabilities — there's a benchmark called SWE-bench Verified (SWE is software engineering). Given a set of coding tasks — and this benchmark was put together some years ago — the models are poised to get something like 90 percent on it this summer.
Right now they're in this 60 percent range. If we just extrapolate the trend line out some more months, then they'll be doing nine out of 10 of those software engineering tasks that were set some years ago. That doesn't mean that that's the entirety of software engineering. We'll still need coders. It's not 100 percent, obviously, but that suggests that the capability is still improving fairly rapidly in some of these domains. And likewise with their ability to play games that take 20-plus hours: a few months ago they couldn't — Pokémon, for instance, is something that kids play and that takes 20 hours or so to beat. The models from a few months ago couldn't beat the game. Now, the current models can beat the game, but it takes them a few hundred hours. It would not surprise me if in a few months they'll get it down to around human level, on the order of tens of hours, and then from there they'll be able to play harder and harder sorts of games that take longer periods of time, and I think that this would be indicative of higher general capabilities.
I think that there's a lot of steam in the current way that things are being done and I think that they've been trapped at the floor in their agent capabilities for a while, but I think we're starting to see the shift. I think that most people at the major AI companies would also think that agents are on the horizon and I don't think they were thinking that, myself included, a year ago. We were not seeing the signs that we're seeing now.
So what we're talking about is AIs having, to use your phrase, which I like, “strategically-relevant capabilities” on a timeline that is soon enough that we should be having the kinds of conversations and the kind of thinking that you put forward in Superintelligence [Strategy]. We should be thinking about that right now very seriously.
Yeah, it's very difficult to wrap one's head around because, unlike other domains, AI is much more general and broad in its impacts. So if one's thinking about nuclear strategy, you obviously need to think about bombs going off, and survivability, and second strike. The failure modes are: one state strikes the other, and then there's also, in the civilian applications, fissile material leaking or there being a nuclear power plant meltdown. That's the scenario space: there’s what states can do, and then there's also some of these civilian application issues.
Meanwhile, with AI, we've got much more than power plants melting down or bombs going off. We've got to think about how it transforms the economy, how it transforms people's private lives, the sort of issues with them being sentient. We've got to think about it potentially disrupting mutual assured destruction. We've got to think about the AIs themselves being threats. We've got to think about regulations for autonomous AI agents and who's accountable. We've got to think about this open-weight, closed-weight issue. We've got, I think, a larger host of issues that touch on all the important spheres of society. So it's not a very delimited problem, and I think it's a very large pill to swallow, this possibility that it will be not just strategically relevant but strategically decisive this decade. Consequently, thinking a little bit beforehand about it is useful.
Otherwise, if we just ignore it, I think reality will slap us across the face and AI will hit us like a truck, and then we're going, “Wow, I wish we had done something, had some more break-glass measures in place right now, but the cupboard is bare in terms of strategic options because we didn't do some prudent things a while ago, or we didn't even bother thinking about what those are.”
I keep thinking of the Situation Room in two years and they get news that China's doing some new big AI project, and it's fairly secretive, and then in the Situation Room they're thinking, “Okay, what do we know?” And the answer is nothing. We don't have really anybody on this. We're not collecting any information about this. We didn't have many concerted programs in the IC really tracking this, so we’re flying blind. I really don't want to be in that situation.
Learning from the Cold War (16:12)
. . . mutual assured destruction is an ugly reality that took decision-makers a long time to internalize, but that's just what the game theory showed would make the most sense.
As I'm sure you know, throughout the course of the Cold War, there was a considerable amount of time and money spent on thinking about these kinds of problems. I went to college just before the end of the Cold War and I took an undergraduate class on nuclear war theory. There was a lot of thinking. To what extent is that volume of research and analysis over the course of a half-century helpful for what you're trying to accomplish here?
I think it's very fortunate that, because of the Cold War, a lot of people started getting more of a sense of game theory: when it's rational to engage in conflict versus negotiate, how offense can provide a good defense, some of these counterintuitive things. I think mutual assured destruction is an ugly reality that took decision-makers a long time to internalize, but that's just what the game theory showed would make the most sense. Hopefully we'll do a lot better with AI because strategic thinking can be a lot more precise, and some of these things that are initially counterintuitive, if you reason through them, you go, actually no, this makes a lot of sense. We're trying to shape each other's intentions in this kind of complicated way. I think that makes us much better poised to address these geopolitical issues than last time.
I think of the Soviets, for instance, when talking about anti-ballistic missile systems. At one point, I forget who said that offense is immoral, defense is moral. So pointing these nuclear weapons at each other, this is the immoral thing. We need missile-defense systems. That's the moral option. It's just like, no, this is just going to eat up all of our budget. We're going to keep building these defense systems and it's not going to make us safer, we're just going to be spending more and more.
That was not intuitive. Offense does feel viscerally more mean, hostile, but that's what you want. That's what you want, to preserve strategic stability. I think that a lot of the thinking is helpful with that, and I think the education for appreciating the strategic dynamics is more in the water, it's more diffused across the decision-makers now, and I think that that's great.
Race for strategic advantage (18:56)
There is also a risk that China builds [AGI] first, so I think what we want to do in the US is build up the capabilities to surgically prevent them . . .
I was recently reviewing a scenario slash world-building exercise among technologists, economists, forecasting people, and they were looking at various scenarios assuming that we're able to, on a rather short timeline, develop what they termed AGI. And one of the scenarios was that the US gets there first . . . probably not by very long, but the US got there first. I don't know how far China was behind, but that gave us the capability to sort of dictate terms to China about what their foreign policy would be: You're going to leave Taiwan alone . . . So it gave us an amazing strategic advantage.
I'm sure there are a lot of American policymakers who would read that scenario and say, “That's the dream,” that we are able to accelerate progress, that we are able to get there first, we can dictate foreign policy terms to China, game over, we win. If I've read Superintelligence Strategy correctly, that scenario would play out in a far more complicated way than what I've just described.
I think so. I think any bid to be not just a unipolar force but to have a near-strategic monopoly on power, able to cause all other superpowers to capitulate in arbitrary ways, concerns the other superpower. There is also a risk that China builds it first, so I think what we want to do in the US is build up the capabilities to surgically prevent them. If they are near to or imminently going to gain a decisive advantage that would become durable and sustained over us, we want the ability to prevent that.
There's a variety of ways one can do things. There's the classic grayer ways like arson, and cutting wires in data centers, and things like that, or for power plants . . . There's cyber offense, and there's other sorts of kinetic sabotage, but we want it nice and surgical, and we want a good, credible threat so that we can deter that from happening and shape their intentions.
I think it will be difficult to limit their capabilities, their ability to build these powerful systems, but I think being able to shape their intentions is something that is more tractable. They will be building powerful AI systems, but if they are making an attempt at leapfrogging us in a way where we never catch up and we lose our standing, and they get AIs that could also potentially disrupt MAD, for instance, we want to be able to prevent that. That is an important strategic priority: developing a credible deterrent and saying there are some AI scenarios that are totally unacceptable to us and that we want to block them off through credible threats.
They'll do the same to us, as well, and they can do it more easily to us. They know what's going on at all of our AI companies, and this will not change because we have a double-digit percentage of the employees who are Chinese nationals, easily extortable, they have family back home, and the companies do not have good information security — that will probably not change because it would slow them down if they really tried to lock things down and move everybody to North Dakota or wherever to work in the middle of nowhere and have everything air-gapped. We are an open book to them and I think they can make very credible threats of sabotage to prevent that type of outcome.
If we are making a bid for dictating their foreign policy and all of this, if we're making a bid for a strategic monopoly on power, they will not sit idly by, they will not take kindly to that when they recognize the stakes. If the US were to do a $500 billion program to achieve this faster than them, that would not go unnoticed.
There's not a way of hiding that.But we are trying to achieve it faster than them.I would distinguish between trying to develop just generally more capable AI technologies and some of these strategically relevant capabilities or strategically relevant programs. Like if we get AI systems that are generally useful for healthcare and for . . . whatever your pet cause area, we can have that. That is different from applying the AI systems to rapidly build the next generation of AIs, and the next generation of that. Right now, OpenAI's got a few hundred AI researchers; imagine if you've got artificial researchers at that level, AGI-type researchers. You run 10,000 or 100,000 of them, they're operating around the clock at a hundred-x speed, and I think expecting a decade's worth of development compressed or telescoped into a year seems very plausible — not certain, but certainly a double-digit percent chance.China or Russia, for instance, would perceive that as, "This is really risky. They could get a huge leap from this because the rate of development will be so high that we could never catch up," and they could use their new gains to clobber us. Or, if they don't control it, then we're also dead, or lose our power. So if the US controls it, China would reason that, "Our survival is threatened and how we do things is threatened," and if they lose control of it, "Our survival is also threatened." Either way, provided that this automated AI research and development loop produces some extremely powerful AI systems, China would be fearing for their survival.It's not just China: India, the global south, all the other countries, if they're more attuned to this situation, would be very concerned. Russia as well. Russia doesn't have much hope of competing; they don't have $100 billion data centers, they're busy with Ukraine, and when they're finished with that, they may reassess it, but they're too many years behind. I think the best they can do is actually try and shape other states' intents rather than try to make a bid for outcompeting them.If we're thinking about deterrence and what you call Mutually Assured AI Malfunction [MAIM], there's a capability aspect, where we want to make sure that we would have the capability to check that kind of dash for dominance. But there's also a communication aspect, where both sides have to understand and trust what the other side is trying to do, which was a key part of classic Cold War deterrence. Is that happening?Information problems, yeah; if there's worse information, then that can lead to conflict. I think China doesn't really need to worry about their access to information about what's going on. I think the US will need to develop more of its capabilities to have more reliable signals abroad. But I think there are different ways of getting information and reducing misunderstandings, like confidence-building measures, all these sorts of things. I think that the unilateral one is just espionage, and then the multilateral one is verification mechanisms and building some of that institutional or international infrastructure.I think the first step in all of this is that states need to at least take matters into their own hands by building up these unilateral options, the unilateral option to prevent adversaries from making a dash for domination and also to know what's going on with each other's projects. I think that's what the US should focus on right now.
Later on, as the salience of AI increases, I think international discussions to increase strategic stability around this would more plausibly emerge. But if states aren't taking basic steps to defend themselves and protect their own security, then I don't think the international stuff makes that much sense. That's kind of out of order.Doomsday scenario (28:18)If our institutions wake up to this more and do some of the basic stuff . . . to prevent one state from dominating the other, I think that will make this go quite a bit better. . .I have in my notes here that you think there's an 80 percent chance that an AI arms race would result in a catastrophe that would kill most of humanity. Do I have that right?I think it's not necessarily just the race. Let's think of people's probabilities for this. There's a wide spectrum of probability. Elon, who I work with at xAI, a company I advise and which is his company, thinks it's generally on the order of 20 to 30 percent. Dario Amodei, the CEO of Anthropic, I think thinks it's around 20 percent, as well. Sam Altman, around 10 percent. I think it's more likely than not that this doesn't go that well for people, but there's a lot of tractability and a lot of volatility here.If our institutions wake up to this more and do some of the basic stuff of knowing what's going on and sharpening our ability to make credible, targeted threats to prevent one state from dominating the other, I think that will make this go quite a bit better. . . I think if we went back in time to the 1940s and were saying, "Do we think that this whole nuclear thing is going to turn out well in 50 years?" I think we actually got a little lucky. I mean the Cuban Missile Crisis itself was . . .There were a lot of bad moments in the ’60s. There were quite a few . . .I think it's more likely than not, but there's substantial tractability and it's important not to be fatalistic about it or just deny it's an issue. I think it's like, do we think AI will go well? I don't know, it depends on what our policy is. Right now, we're in the very early days and I'm still not noticing many of our institutions rising to the occasion in the way I think is warranted, but this could easily change in a few months with some larger event.Not to be science fictional or anything, but you talk about a catastrophe, are you talking about: AI creates some sort of biological weapon? Back-and-forth cyberattacks destroy all the electrical infrastructure for China and the United States, so all of a sudden we're back in the 1800s? Are you talking about some sort of more "Terminator"-like scenario, rogue AI? When you think about the kind of catastrophe that could be that dangerous to humanity, what do you think about?We have three risk sources: one is states, another is rogue actors like terrorists and pariah states, and then there are the AIs themselves. The AIs themselves are not relevant right now, but I think they could be quite capable of causing damage on their own in even a year or two. That's the space of threat actors; so yes, AI could in the future . . . I don't see anything that makes them logically not controllable. They're mostly controllable right now. Maybe it's one out of 100, or one out of 1,000, of the times you run these AI systems and deploy them in some sorts of environments [that] they do try breaking free.
That's a bit of a problem later on when they actually gain the capability to break free and when they are able to operate autonomously.There have been lots of studies on this and you can see this in OpenAI's reports whenever they release new models. It's like, "Oh, it's only a 0.1 percent chance of it trying to break free," but if you run a million of these AI agents, that's a lot of them that are going to be trying to break free. They're just not very capable currently. So I think that the AIs themselves are risky, and if you're having humanity going up against AIs that aren't controlled by anybody, or AIs that broke free, that could get quite dangerous if you also have, as we're seeing now, China and others building more of these humanoid robots in the next few years. This could make them concerning in that they could just by themselves create some sort of bioweapon. You don't even need human hands to do it, you can just instruct a robot to do it and disperse it. I think that's a pretty easy way to take out biological opposition, so to speak, in kind of an eccentric way.That's a concern. Rogue actors doing this themselves, reasoning that, "Oh, this bioweapon gives us a secure second strike," things like that would be a concern. Then, of course, there are states using this to make an attempt to crush the other state or to develop a technology that disables an adversary's secure second strike. I think these are real problems.Maximal progress, minimal risk (33:25)I think what we want to shoot for is [a world] where people have enough resources and the ability to just live their lives in ways as they self-determine . . .Let me finish with this: I want continuing AI progress such that we can cure all the major chronic diseases, that we can get commercial nuclear fusion, that we can get faster rockets, all the kinds of optimistic stuff, accelerate economic growth to a pace that we've never seen. I want all of that.Can I get all of that and also avoid the kinds of scenarios you're worried about without turning the optimistic AI project into something that arrives at the end of the century rather than midcentury? I'm just worried about slowing down all that progress.I think we can. In the Superintelligence Strategy, we have three parts to that: We have the deterrence part, which I'm speaking about here, and we have making sure that the capabilities aren't falling into the hands of rogue actors — and I think this isn't that difficult: good export controls and some basic safeguards, like we need to know who you are if we're going to be helping you manipulate viruses, things like that. That's easy to handle.Then on the competition aspect, there are many ways the US can make itself more competitive, like having more guaranteed supply chains for AI chips, so more manufacturing here or in allied states instead of all of it being in Taiwan. Currently, all the cutting-edge AI chips are made in Taiwan, so if there's a Taiwan invasion, the US loses in this AI race. They lose. This is double-digit probability. This is very foreseeable. So trying to robustify our manufacturing capabilities is quite essential; likewise for making robotics and drones.I think there are still many axes to compete in. I don't think it makes sense to try and compete in building a sort of superintelligence or one of these AIs that could potentially disrupt mutual assured destruction.
I don't think you want to be building those, but I think you can have your AIs for healthcare, you can have your AIs doing all the complicated math you want, and whatever, all this coding, and driving your vehicles, and folding your laundry. You can have all of that. I think it's definitely feasible.What we did in the Cold War with the prospect of nuclear weapons, we obviously got through it, and we had deterrence through mutual assured destruction. We had non-proliferation of fissile materials to lesser states and rogue actors, and we had containment of the Soviet Union. I think the Superintelligence Strategy is somewhat similar: You deter some of the most destabilizing AI projects, you make sure that some of these capabilities are not proliferating to random rogue actors, and you increase your competitiveness relative to China through things like incorporating AI into your military by, for instance, improving your ability to manufacture drones and improving your ability to reliably get your hands on AI chips even if there's a Taiwan conflict.I think that's the strategy, and this doesn't make us uncompetitive. We are still focusing on competitiveness, but this does put barriers around some of the threats that different states could pose to us and that rogue actors using AI could pose to us, while still shoring up economic security and positioning ourselves if AI becomes really relevant.I lied, I had one more short question: If we avoid the dire scenarios, what does the world look like in 2045?I would guess that it would be utterly transformed. I wouldn't expect people would be working then as much, hopefully. If you've controlled it well, there could be many ways of living, as there are now, and people would have resources to do so. It's not like there's one way of living — that seems bad because there are many different values to pursue. So letting people pursue their own values, so long as it doesn't destroy the system, and things like that, as we have today. It seems like an abstract version of the picture.People keep thinking, "Are we in zoos? Are AIs keeping us in zoos?" or something like that. It's like, no. Or like, "Are we just all in the Zuckerberg sort of virtual reality, AI friend thing?" It's like no, you can choose to do otherwise, as well. I think we want to preserve that ability.Good news: we won't have to fold laundry. Bad news: in zoos. There are many scenarios.I think what we want to shoot for is one where people have enough resources and the ability to just live their lives in ways as they self-determine, subject to not harming others in severe ways. But people tend to think there's some sort of forced dichotomy where it's going to be a WALL-E world where everybody has to live the same way, or everybody's in zoos, or everybody's just pleasured-out and drugged-up or something. These are forced choices. Some people do that, some people choose to have drugs, and we don't hear much from them, and others choose to flourish, and pursue projects, and raise children, and so on.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
* Is College Still Worth It? - Liberty Street Economics
* Scalable versus Productive Technologies - Fed in Print
▶ Business
* AI’s Threat to Google Just Got Real - WSJ
* AI Has Upended the Search Game. Marketers Are Scrambling to Catch Up. - WSJ
▶ Policy/Politics
* U.S. pushes nations facing tariffs to approve Musk’s Starlink, cables show - WaPo
* US scraps Biden-era rule that aimed to limit exports of AI chips - FT
* Singapore’s Vision for AI Safety Bridges the US-China Divide - Wired
* A ‘Trump Card Visa’ Is Already Showing Up in Immigration Forms - Wired
▶ AI/Digital
* AI agents: from co-pilot to autopilot - FT
* China’s AI Strategy: Adoption Over AGI - AEI
* How to build a better AI benchmark - MIT
* Introducing OpenAI for Countries - OpenAI
* Why humans are still much better than AI at forecasting the future - Vox
* Outperformed by AI: Time to Replace Your Analyst? Find Out Which GenAI Model Does It Best - SSRN
▶ Biotech/Health
* Scientists Hail This Medical Breakthrough. A Political Storm Could Cripple It. - NYT
* DARPA-Funded Research Develops Novel Technology to Combat Treatment-Resistant PTSD - The Debrief
▶ Clean Energy/Climate
* What's the carbon footprint of using ChatGPT? - Sustainability by Numbers
* OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation - Wired
▶ Robotics/AVs
* Jesse Levinson of Amazon Zoox: ‘The public has less patience for robotaxi mistakes’ - FT
▶ Space/Transportation
* NASA scrambles to cut ISS activity due to budget issues - Ars
* Statistically Speaking, We Should Have Heard from Aliens by Now - Universe Today
▶ Substacks/Newsletters
* Globalization did not hollow out the American middle class - Noahpinion
* The Banality of Blind Men - Risk & Progress
* Toys, Pencils, and Poverty at the Margins - The Dispatch
* Don’t Bet the Future on Winning an AI Arms Race - AI Prospects
* Why Is the US Economy Surging Ahead of the UK? - Conversable Economist
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
May 2, 2025 • 26min

🗽 America's immigration edge: My chat (+transcript) with policy expert Alex Nowrasteh

My fellow pro-growth/progress/abundance Up Wingers,With the rise of American populist nationalism has come the rise of nativism: a belief in the concept of “heritage Americans” and a deep distrust of immigration. Today on Faster, Please! — The Podcast, I talk with Alex Nowrasteh about the ideology beneath this severe skepticism, as well as what Americans lose economically if we shut our doors to both low- and high-skilled immigrants.Nowrasteh is the vice president for economic and social policy studies at the Cato Institute. He writes his own Substack with David Bier, and is the co-author of Wretched Refuse? The Political Economy of Immigration and Institutions.Read more of Nowrasteh’s work on immigration, nationalism, and other research.In This Episode* Illegal immigration (1:16)* Rise of xenophobia (3:48)* Psychology of immigration skeptics (9:20)* The future American workforce (14:04)* Population decline and assimilation (17:35)Below is a lightly edited transcript of our conversation. Illegal immigration (1:16)The system that I would favor is one that allows a substantially larger number of people at every skill level to come into this country legally, to work, to live, and to become Americans . . . because this country demands their labor and there's no way for them to come legally.Pethokoukis: Will you, in a very short period of time, give me a sense of the situation at the southern border of the United States of America in terms of immigration, how that has evolved from Trump 1, to Biden, to now? Is it possible to give me a concise summary of that?Nowrasteh: From Obama through Trump 1, the border apprehension numbers were pretty reasonable; you were talking about somewhere between 400,000 and 800,000 per year. Then came Covid, which crashed those numbers down to basically nothing by April of 2020.After that, the numbers progressively rose. They were higher in December of 2020 than they had been in any other December going back over 25 years. Then Biden takes office, the numbers shoot through the roof. We're talking about 170,000 to 250,000, sometimes 300,000 a month until January or so of 2024, when those numbers started coming down precipitously. December of 2024, they're at 40,000 or so, 45,000. January 2025, Trump comes in, they go down again. First full month of Trump's administration in February, they're about 8,000, the lowest numbers without a pandemic in a very long time.What's the right number?That's a hard question to answer. In an ideal world where costs and benefits didn't matter, I think the ideal number is zero. But the question is how do you get to that ideal number, right? Is it by having an insane amount of enforcement of existing laws, where you basically end up brutalizing people to an incredible extent? Or is it practically zero because we let people come in lawfully to work in this country? The system that I would favor is one that allows a substantially larger number of people at every skill level to come into this country legally, to work, to live, and to become Americans, and that would bring that number down to about what it is now, or even lower, every month, because the reason people come illegally is because this country demands their labor and there's no way for them to come legally.Rise of xenophobia (3:48). . . 
I just don't think the economic argument is what moves people on this topic.As I've understood it, and maybe I understand it wrong, this issue has developed so that at first the concern, and it still is the concern, was with illegal, undocumented immigrants. And then it seems to me the argument became, "Well, we don't want those, and then we also really don't want low-skill immigrants either." And now it seems, and maybe you have a different perspective, that it's, "Well, we don't really want those high-skill immigrants either."You gave me the current state of illegal immigration at the southern border. What is the current state of the argument among people who want less, perhaps even no immigration in this country?The state of the argument is actually what you described. When I started working on this topic about 15 years ago, I never thought I would've heard people come out against the H-1B visa, or against high-skilled immigrants, or against foreign entrepreneurs. But you saw this over Christmas actually, December of 2024. You saw this basically online "H-1B-gate" where Vivek Ramaswamy and Elon Musk were saying H-1Bs are great. I think Musk had tweeted, "over my dead body we're going to cut the H-1B," right? And you see this groundswell of conservatives and Republicans — not all of them, by any means — come out and say, "We don't even want these guys. We don't want these skilled immigrants," using a whole range of arguments. Almost none of them economic, by the way; all culture, all voting habits, all stereotypes, a lot of them pretty nasty, in my opinion.So there is this sense where some people just don't want immigrants. The first time I think I encountered this in writing from a person who was prominent was Ann Coulter, Jeff Sessions when he was senator, and these types of people around 2015, in a big way, and it seems to have become much more prominent than I ever thought it would be.Is it that they don't understand the economic argument or they just don't care about that argument?They don't care about it. I have come to the realization — this makes me sad because I'm an economist by training — but I just don't think the economic argument is what moves people on this topic. I don't think it's what they care about. I don't think it animates . . . It animates me as a pro-immigration person, I think it animates you, right?It does, yeah, it sure does.It does not animate the people who are opposed to it. I think it is a cultural argument, it is a crime element, it is a threat element, it is a, "This makes us less American somehow" weird, fuzzy-feeling argument.Would it matter if the immigrants were all coming from Germany, France, and Norway?Maybe for a handful of them, but generally no, I don't think so. I think the idea that America is special, is different, is some kind of unique nation that ethnically, or in other ways, cannot be pierced or contaminated by foreigners — I think it's just like an "Ew, foreigners," type of sentiment that people have. A base xenophobia that a lot of people have, combined with a very reasonable fear and dislike of chaos. When people see chaos on the border, they hate it.I hate chaos on the border. My answer is to get rid of the chaos by letting people come in legally, because once you legalize a market, you can actually regulate it. You can't regulate an illegal market. 
But I think other people see chaos, they have this sort of purity conception of America that's just fanciful, in my opinion, and they just don't want foreigners, and the chaos prompts them and makes that sentiment even more powerful.To what extent is it fear that all these immigrants will eventually vote for things you don't want? Or in this case, they're all going to become Democrats, so Republicans don't want them.That's definitely part of it. I think that's more of an elite Republican fear, or an elite sort of nativist or conservative fear, than it is amongst the people online who are yelling at me all the time or yelling at Elon Musk. I think that resonates a lot more in this city and in online conservative publications, I think that resonates much more. I don't think it's borne out by the facts, and people who say this will also loudly trumpet how Hispanics now basically split their vote in the 2024 election. David Shor, who is a progressive analyst of electoral politics, said he thinks that Trump actually won the naturalized immigrant vote, which is probably the first time a Republican has won the naturalized immigrant vote since the 19th century.The immediate question is, will that kind of thing resonate into a changing opinion among folks on the right if they feel like they can win these voters?I don't think so because I think it's about deeper issues than that. I think it's a real feelings-, values-based issue.Psychology of immigration skeptics (9:20)When people feel like they don't have control of something in their country or their government doesn't have control of something, they become anti- whatever is the source of that chaos, even the legal versions of it.Has this been there for a long time? Was it exacerbated for some reason? Was it exacerbated by the financial crisis and the slow economy afterward? The only time I remember hearing about people using the idea of "heritage Americans" was elite people whose great-great-grandparents came over on the Mayflower and they thought they were better than everybody else, they were elites, they were these kinds of Boston Brahmins. So I was aware of the concept from that, but I've never heard — and I hear it now — about people who were not part of the original Mayflower wave, or Pilgrims, thinking of themselves as "heritage Americans" because their parents came over in the 1850s or the 1880s, but now they're "heritage." That idea to me seems new.I hadn't heard of it until just a few years ago, frankly, at all. I racked my brain about this because I used to have a lot of affinity for the Republican Party, just to be frank. And I'm from California, and I'm in my 40s, so I remember Prop 187 in 1994, when the state had a big campaign about illegal immigration enforcement and welfare, and it really changed the state's voting patterns to be much more Democratic, eventually.Then I saw the Republican Party under George W. Bush, and John McCain, and all these other guys who were pro-Republican, but always in California the Republicans were very skeptical of immigration across the board, but I didn't really see that spread. Then I saw it go to Arizona in 2010, 2009, 2008, around there. I saw it go to South Carolina, Mississippi, some of these places, and then all of a sudden with Trump, it went everywhere.So I racked my brain thinking, did I miss something? Was there always something there and I was just too myopic to view it, or I wasn't in those circles, or I wanted to convince myself that it wasn't there? 
And I really think that it was always there to some small extent, but Trump is the most brilliant political entrepreneur of our lifetime and probably of our country's history, and he took over this party from the outside and convinced people to be nativists. Because what he was saying, the words — not that different from what Scott Walker was saying about immigration. It was not that different from what Mike Huckabee was saying about immigration. It wasn't that different from Santorum. But he said it or sold it in a way that just worked, I guess. That maybe absolves me of some responsibility or maybe allows me to say that I didn't miss anything, but I do think that that largely explains it.And how does it explain that, and you may not have an answer. I can sort of understand the visceral concern about chaos at the border or people coming here illegally. But then to take it to the point that we don't even want AI engineers to come to this country from India, or, "I'm really angry that someone from a foreign country is taking my kid's spot at Harvard." That, to me, seems almost inexplicable.It's not the fact of the chaos, but it's the perception of the chaos, because when Trump came in in 2015, the border crossing numbers were really low. They were in the 300,000s, low 400,000s, but he talked about it like it was millions, and he created this perception of just insane, outrageous chaos.There's research in political psychology about the locus of control. When people feel like they don't have control of something in their country or their government doesn't have control of something, they become anti- whatever is the source of that chaos, even the legal versions of it. In some way, it's an understandable human reaction, but in some ways it is so destructive. But, like you said, it spreads to AI engineers from China because it's like all immigration, and it's so bad, and it's so destructive, and that is the best explanation that I've seen out there about that.The future American workforce (14:04)What we notice in the economics of immigration, when we do these types of studies and we take a look at the wage impacts, is that we've got basically no effect on the wages of native-born Americans.I write a lot about, hopefully, this technological wave that we're going to be experiencing, and then I also write a little about immigration. The question I get is, if we're going to be worried about the jobs of the future being taken over by software or by robots, if we really think that's going to happen, shouldn't we really be thinking very hard about the kinds of people we let enter into this country, even legally, and their ability to function in that kind of economy?I think we need to think about what is the best mechanism to select people to come here that the economy needs. What you described . . . assumes an amount of knowledge, and foresight, and, frankly, the incentive to make a wise decision in the hands of bureaucrats and politicians that they just do not have and that they will never have about what matters most and who can pick the best in the market.You can say STEM degrees only. I only want people who have STEM degrees from colleges that, on some global ranking, are in the top 500 universities. You could say that. That would be one way of selecting.They could try to centrally plan it like that. . .You're saying "centrally planned" because you know that's going to get a reaction out of me, but go ahead.I do. 
The thing is, there are all different types of ways to have an immigration system, and there's going to be a little bit of planning in any immigration system. But I think the one that will work best is the one that allows the market to have the widest possible choice. We don't know how automation is going to turn out.There's this thing called Moravec's paradox in a lot of AI writing, which is the idea that you'll probably be able to automate a lot of high-skill jobs more easily than you will be able to automate, say, somebody who's a maid, or a nanny, or a nurse, or a plumber, just because the real world is harder than . . . You and I type, and talk, and do math. That's probably easier to do. So maybe the optimal thing to do would be to increase immigration for low-skilled people because all the jobs in the future are going to be low-skilled anyway, because we're going to be able to automate all the high-skilled jobs.Though you could say then that that would take away jobs from the natives.You could say that, of course. What we notice in the economics of immigration, when we do these types of studies and we take a look at the wage impacts, is that we've got basically no effect on the wages of native-born Americans. If we were to have a situation where, let's say, massive amounts of jobs disappear in entire sectors of the economy, vanished, automated . . . well, that just means that we're going to have more opportunities and specialization, division of labor, where there are going to be a lot more lower-skilled and mid-skill jobs, just because there's such a much larger and more productive side of the economy.There's going to be so much more profit in these other ones that we're going to have a bigger economy, in the same way that when agriculture basically shrank as a massive section of the workforce, those people got other jobs that were more productive, and it was great. I think we could maybe see that again, and I hope we do. I don't want to have to work anymore.Population decline and assimilation (17:35). . . if the whole world is going to have population decline in 20, 30, 50 years, we're going to have to deal with that at some point, but I'd rather deal with that problem with a population of 600 million Americans than a population of 350 million Americans.The scenario — and this was highlighted to me by one of our scholars who looks a lot at demographics and population growth — his theory is that all the population-decline estimates, the shrinkage and slowing-down estimates, from the United Nations are way too optimistic, and that population will begin to level off much faster. Whatever the UN's low or worst-case scenario is, if you want to put a qualifier on it like that, it's probably like that. And a lot of policymakers are underestimating the decline in fertility rates, and eventually everyone's going to figure that out. And there'll be a mad global scramble for population — for people.There are going to be tons of labor shortages and you're going to want people, and there's going to be this scramble, and not every country is going to be as good at it. If people want to immigrate, everything else equal, they're probably going to want to go to the United States as opposed to — not to smear another country — I don't know, Argentina or something. We have this great ability to accept people to come here and for them to succeed and build companies. Maybe that company is a bodega, maybe that company is a technology company. 
So we're at this moment where we have this great natural advantage, but it seems like we're utterly rejecting it.We are not just rejecting it, we are turning it from a positive into a big negative. You have these students who are being apprehended and having their visas canceled because of a fishing license violation six years ago. People who are skilled science students studying in the United States, who could go on to be founders of big companies or just high-skilled workers, and we're saying, "Nope, can't do it, sorry." We're kicking people out for reasons of speech — speech that I often don't like, by the way, but it doesn't matter, because I believe in it on principle. It's important.We already see it showing up in plummeting tourism numbers to the United States, and I think we're going to see it in student visa numbers shortly. And student visas are the first step on that long chain of being able to be a high-skilled immigrant one day. So we are really doing long-term damage.On the population stuff, I completely agree, and if the whole world is going to have population decline in 20, 30, 50 years, we're going to have to deal with that at some point, but I'd rather deal with that problem with a population of 600 million Americans than a population of 350 million Americans.What is your general take on the notion of assimilation? Is that a problem? Should we be doing more to make sure people are successful here? How do you think about that?I do think assimilation is important. I don't think it's a problem. When I talk about assimilation, I use it in the way that Jacob Vigdor does. Jake is a professor, a University of Washington economist, and he says assimilation is when an immigrant or their kids are indistinguishable from long-settled Americans on the measurements of family size, civic participation, income, education, and language. Basically it takes three generations. That is, the first generation are the immigrants, the second are their kids, the third are their grandkids, on average.Some, much faster. Like my Indian neighbors, who are more than assimilated in the first generation. They do better than native-born Americans on most of those measures. For some lower-skilled Hispanic or East African immigrants, it takes three, three and a half, sometimes four generations to do that well, but it's going very well.We do not have the cultural issues that some countries in Europe have. To some extent those problems are overblown in Europe, but they do exist, and they exist to a greater extent than they do here. Part of that is because we have birthright citizenship. People who are born in this country are citizens; they don't feel like they're an illegal underclass because they're not. They feel totally accepted because, legally, they are, and we have an ethos in this country because we don't have an ethnic identification of being American like they do in places like Germany or Norway. I have family members in Norway who are half Iranian and they're not really considered to be Norwegian, culturally. Here it's the opposite. If I were to go say I'm not an American, people would be offended. There, if you say, "Oh, I'm Norwegian," they'll correct you and be like, "No, you're not Norwegian, you're something else."We have this great secret sauce born of our culture, born of our lack of an ethnic Americanness. It doesn't matter what ethnicity, race, or religion you are; anybody can be American. 
And we have done it so well and we just don't have these issues, and I don't think, as a result, we should do more, because I'm worried about the government breaking it.Based on what you just said, at a gut level, how do you feel when someone uses the phrase "heritage Americans," and they hate the idea of America as an idea, and they think to be an American you need to have been here for a long time? That whole way of looking at it — do you get it, or do you at some level [think], I am not a psychologist, I do not understand it?A way to make sense of it [is] by swapping out the word "American" in their sentence and replacing it with "Frenchman," or "German," or "Russian," or "Japanese," or some other nationality from a nation-state where the identity is bound up with ethnicity. That's the way that I make sense of it, and I think this is a concept that just does not work in the United States; it cannot work. Maybe that's the most nationalistic I am, but I think that that's just a fundamentally foreign idea that could never work in the United States. It sounds more at home in Europe and other places. That's what strikes me.As I finish up, I know you have all kinds of ideas to improve the American immigration system, which we will try to link to, but instead of me asking you to give me your five-point plan for perfection, I'm going to ask you: How does this turn around? What is the scenario in which we become more accepting again of immigrants, perhaps the way we were 30 years ago?That really is the $64,000 question. The idea that I have floated — which probably won't work, but at least gets people to pause — is that the entitlement programs are going insolvent, and I have pitched this to my grandmother-in-law, who is a very nice woman, a Republican who is skeptical of immigration but who is worried about Social Security going bankrupt. I say, "Well, there is one way to increase the solvency of this program for 30 or 40 years." And she said, "What's that?" and I say, "Let in 100 million immigrants between the ages of 20 and 30." And it gives her pause. I think if that idea can give her pause, then maybe it has a shot when this country seriously starts to grapple with the insolvency of entitlement programs, which is looming.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
Please check out the website or Substack app for the latest Up Wing economic, business, and tech news contained in this new edition of the newsletter. Lots of great stuff!
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
