
Faster, Please! — The Podcast
🤖 Superintelligence and national security: My chat (+transcript) with AI expert Dan Hendrycks
My fellow pro-growth/progress/abundance Up Wingers,
As we seemingly grow closer to achieving artificial general intelligence — machines that are smarter than humans at basically everything — we might be incurring some serious geopolitical risks.
In the paper Superintelligence Strategy, his joint project with former Google CEO Eric Schmidt and Alexandr Wang, Dan Hendrycks introduces the idea of Mutual Assured AI Malfunction: a system of deterrence where any state’s attempt at total AI dominance is sabotaged by its peers. From the abstract:
Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in. Alongside this, states can increase their competitiveness by bolstering their economies and militaries through AI, and they can engage in nonproliferation to rogue actors to keep weaponizable AI capabilities out of their hands. Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead.
Today on Faster, Please! — The Podcast, I talk with Hendrycks about the potential threats posed by superintelligent AI in the hands of state and rogue adversaries, and what a strong deterrence strategy might look like.
Hendrycks is the executive director of the Center for AI Safety. He is an advisor to Elon Musk’s xAI and Scale AI, and is a prolific researcher and writer.
In This Episode
* Development of AI capabilities (1:34)
* Strategically relevant capabilities (6:00)
* Learning from the Cold War (16:12)
* Race for strategic advantage (18:56)
* Doomsday scenario (28:18)
* Maximal progress, minimal risk (33:25)
Below is a lightly edited transcript of our conversation.
Development of AI capabilities (1:34)
. . . mostly the systems aren't that impressive currently. People use them to some extent, but I'd more emphasize the trajectory that we're on rather than the current capabilities.
Pethokoukis: How would you compare your view of AI . . . as a powerful technology with economic, national security, and broader societal implications . . . today versus November of 2022 when OpenAI rolled out ChatGPT?
Hendrycks: I think that the main difference now is that we have the reasoning paradigm. Back in 2022, GPT couldn't think for an extended period of time before answering and try out multiple different ways of solving a problem. The main new capability is its ability to handle more complicated reasoning and science, technology, engineering, and mathematics sorts of tasks. It's a lot better at coding, it's a lot better at graduate school mathematics, and physics, and virology.
An implication of that for national security is that AIs have some virology capabilities that they didn't before, and virology is dual-use: it can be used for civilian applications and for weaponization. That's a new, concerning capability, but I think, overall, the AI systems are still fairly similar in their capabilities profile. They're better in lots of different ways, but not substantially.
I think the next large shift is when they can be agents, when they can operate more autonomously, when they can book you flights reliably, make PowerPoints, play through long-form games for extended periods of time, and that seems like it's potentially on the horizon this year. It didn't seem like that two years ago. That's something that a lot of people are keeping an eye on and think could be arriving fairly soon. Overall, I think the capabilities profile is mostly the same except now it has some dual-use capabilities that they didn't have earlier, in particular virology capabilities.
To what extent are your national security concerns based on the capabilities of the technology as it is today versus where you think it will be in five years? This is also a way of me asking about the extent that you view AGI as a useful framing device — so this is also a question about your timeline.
I think that mostly the systems aren't that impressive currently. People use them to some extent, but I'd more emphasize the trajectory that we're on rather than the current capabilities. They still can't do very interesting cyber offense, for instance. The virology capability is very recent. We just, I think maybe a week ago, put out a study with SecureBio from MIT where we had Harvard and MIT virology postdocs doing wet lab tasks, trying to work on viruses. So, “Here's a picture of my petri dish, I heated it to 37 degrees, what went wrong? Help me troubleshoot, guide me through this step by step.” We were seeing that it was getting around the 95th percentile compared to those Harvard and MIT virology postdocs in their area of expertise. This is not a capability that the models had two years ago.
That is a national security concern, but I think most of the national security concerns where it's strategically relevant, where it can be used for more targeted weapons, where it affects the basis of a nation's power, I think that's something that happens in the next, say, two to five years. I think that's what we mostly need to be thinking about. I’m not particularly trying to raise the alarm saying that the AI systems right now are extremely scary in all these different ways because they're not even agential. They can't book flights yet.
Strategically relevant capabilities (6:00)
. . . when thinking about the future of AI . . . it's useful to think in terms of specific capabilities, strategically-relevant capabilities, as opposed to when is it truly intelligent . . .
So that two-to-five-year timeline — and you can debate whether this is a good way of thinking about it — is that a trajectory or timeline to something that could be called “human-level AI” — you can define that any way you want — and what are the capabilities that make AI potentially dangerous and a strategic player when thinking about national security?
I think having a monolithic term for AGI or for advanced AI systems is a little difficult, largely because there's been a consistently-moving goalpost. So right now people say, “AIs are dumb because they can't do this and that.” They can't play video games at the level of a teenager, they can't code for a day-long project, and things like that. Neither can my grandmother. That doesn't mean that she's not human-level intelligence, it's just a lot of people don't have some of these capabilities.
I think when thinking about the future of AI, especially when thinking about national security, it's useful to think in terms of specific capabilities, strategically-relevant capabilities, as opposed to when is it truly intelligent or something like that. This is because the capabilities of AI systems are very jagged: they're good at some things and terrible at others. They can't fold clothes that reliably — most of the AIs can't — and they're okay at driving in some cities but not others, but they can solve really difficult mathematics problems, they can write really long essays and provide pretty good legal analysis very rapidly, and they can also forecast geopolitical events better than most forecasters. It's a really weird capabilities profile.
When I'm thinking about national security from a malicious-use standpoint, I'm thinking about weapon capabilities, I'm thinking about cyber-offensive capabilities, which they don't yet have, but that's an important one to track, and, outside of malicious use, I'm thinking about what's their ability to do AI research and how much of that can they automate? Because if they can automate AI research, then you could just run 100,000 of these artificial AGI researchers to build the next generations of AGI, and that could get very explosive extremely quickly. You're moving from human-speed research to machine-speed research. They're typing 100 times faster than people, they're running tons of experiments simultaneously. That could be quite explosive, and that's something that the founders of AI, like Alan Turing and others, pointed at as a really relevant capability: you could have a potential loss-of-control type of event with this sort of runaway process of AIs building future generations of AIs quite rapidly.
So that's another capability: what fraction of AI research can they automate? For weaponization, I think if it gets extremely smart, able to do research in lots of other sorts of fields, then that would raise concerns about its ability to be used to disrupt the balance of power. For instance, if it can do research well, perhaps it could come up with a breakthrough that makes oceans more transparent so we can find where nuclear submarines are or find the mobile launchers extremely reliably, or a breakthrough in driving down the cost by some orders of magnitude of anti-ballistic missile systems, which would disrupt having a secure second strike, and these would be very geopolitically salient. To do those things, though, that seems like a bundle of capabilities as opposed to a specific thing like cyber-offensive capabilities, but those are the things that I'm thinking about that can really disrupt the geopolitical landscape.
If we put them in a bucket called, to use your phrase, “strategically-relevant capabilities,” are we on a data- and computing-power-driven trajectory to those capabilities? Or do there need to be one or two key innovations before those relevant capabilities are possible?
It doesn't currently seem like we need some new big insights, in large part because the rate of improvement is pretty good. So if we look at their coding capabilities, there's a benchmark called SWE-bench Verified (SWE is software engineering). Given a set of coding tasks — and this benchmark was put together some years ago — the models are poised to get something like 90 percent on it this summer. Right now they're in the 60 percent range. If we just extrapolate the trend line out some more months, then they'll be doing nine out of 10 of those software engineering tasks that were set some years ago. That doesn't mean that that's the entirety of software engineering. We still need coders. It's not 100 percent, obviously, but that suggests that the capability is still improving fairly rapidly in some of these domains. And likewise with their ability to play games that take 20-plus hours: a few months ago they couldn't. Pokémon, for instance, is something that kids play and that takes 20 hours or so to beat. The models from a few months ago couldn't beat the game. Now, the current models can beat the game, but it takes them a few hundred hours. It would not surprise me if in a few months they'll get it down to around human-level, on the order of tens of hours, and then from there they'll be able to play harder and harder sorts of games that take longer periods of time, and I think that this would be indicative of higher general capabilities.
I think that there's a lot of steam in the current way that things are being done and I think that they've been trapped at the floor in their agent capabilities for a while, but I think we're starting to see the shift. I think that most people at the major AI companies would also think that agents are on the horizon and I don't think they were thinking that, myself included, a year ago. We were not seeing the signs that we're seeing now.
So what we're talking about is AIs having, to use your phrase, which I like, “strategically-relevant capabilities” on a timeline that is soon enough that we should be having the kinds of conversations and the kind of thinking that you put forward in Superintelligence [Strategy]. We should be thinking about that right now very seriously.
Yeah, it's very difficult to wrap one's head around because, unlike other domains, AI is much more general and broad in its impacts. So if one's thinking about nuclear strategy, you obviously need to think about bombs going off, and survivability, and second strike. The failure modes are: one state strikes the other, and then there's also, in the civilian applications, fissile material leaking or there being a nuclear power plant meltdown. That's the scenario space, there’s what states can do and then there's also some of these civilian application issues.
Meanwhile, with AI, we've got much more than power plants melting down or bombs going off. We've got to think about how it transforms the economy, how it transforms people's private life, the sort of issues with them being sentient. We've got to think about it potentially disrupting mutual assured destruction. We've got to think about the AIs themselves being threats. We've got to think about regulations for autonomous AI agents and who's accountable. We've got to think about this open-weight, closed-weight issue. We've got, I think, a larger host of issues that touch on all the important spheres of society. So it's not a very delimited problem and I think it's a very large pill to swallow, this possibility that it will be not just strategically relevant but strategically decisive this decade.
Consequently, thinking a little bit about it beforehand is useful. Otherwise, if we just ignore it, I think reality will slap us across the face and AI will hit us like a truck, and then we're going, “Wow, I wish we had done something, had some more break-glass measures in place right now, but the cupboard is bare in terms of strategic options because we didn't do some prudent things a while ago, or we didn't even bother thinking about what those are.”
I keep thinking of the Situation Room in two years: they get news that China's doing some new big AI project, and it's fairly secretive, and then in the Situation Room they're thinking, “Okay, what do we know?” And the answer is nothing. We don't really have anybody on this. We're not collecting any information about this. We didn't have many concerted programs in the IC really tracking this, so we're flying blind. I really don't want to be in that situation.
Learning from the Cold War (16:12)
. . . mutual assured destruction is an ugly reality that took decision-makers a long time to internalize, but that's just what the game theory showed would make the most sense.
As I'm sure you know, throughout the course of the Cold War, there was a considerable amount of time and money spent on thinking about these kinds of problems. I went to college just before the end of the Cold War and I took an undergraduate class on nuclear war theory. There was a lot of thinking. To what extent does that volume of research and analysis over the course of a half-century, to what extent is that helpful for what you're trying to accomplish here?
I think it's very fortunate that, because of the Cold War, a lot of people started getting more of a sense of game theory and of when it's rational to engage in conflict versus negotiate, and how offense can provide a good defense, some of these counterintuitive things. I think mutual assured destruction is an ugly reality that took decision-makers a long time to internalize, but that's just what the game theory showed would make the most sense. Hopefully we'll do a lot better with AI because strategic thinking can be a lot more precise, and some of these things that are initially counterintuitive, if you reason through them, you go, actually no, this makes a lot of sense. We're trying to shape each other's intentions in this kind of complicated way. I think that makes us much better poised to address these geopolitical issues than last time.
I think of the Soviets, for instance, when talking about anti-ballistic missile systems. At one point someone said, I forget who, that offense is immoral and defense is moral. So pointing these nuclear weapons at each other, this is the immoral thing; we need missile-defense systems, that's the moral option. It's just like, no, this is just going to eat up all of our budget. We're going to keep building these defense systems and it's not going to make us safer, we're just going to be spending more and more.
That was not intuitive. Offense does feel viscerally more mean, more hostile, but that's what you want. That's what you want to preserve strategic stability. I think that a lot of the thinking is helpful with that, and I think the education for appreciating the strategic dynamics is more in the water, it's more diffused across decision-makers now, and I think that that's great.
Race for strategic advantage (18:56)
There is also a risk that China builds [AGI] first, so I think what we want to do in the US is build up the capabilities to surgically prevent them . . .
I was recently reviewing a scenario slash world-building exercise among technologists, economists, forecasting people, and they were looking at various scenarios assuming that we're able to, on a rather short timeline, develop what they termed AGI. And one of the scenarios was that the US gets there first . . . probably not by very long, but the US got there first. I don't know how far China was behind, but that gave us the capability to sort of dictate terms to China about what their foreign policy would be: You're going to leave Taiwan alone . . . So it gave us an amazing strategic advantage.
I'm sure there are a lot of American policymakers who would read that scenario and say, “That's the dream,” that we are able to accelerate progress, that we are able to get there first, we can dictate foreign policy terms to China, game over, we win. If I've read Superintelligence [Strategy] correctly, that scenario would play out in a far more complicated way than what I've just described.
I think so. I think any bid for being not just a unipolar force, but for having a near-strategic-monopoly on power and the ability to cause all other superpowers to capitulate in arbitrary ways, concerns the other superpower. There is also a risk that China builds it first, so I think what we want to do in the US is build up the capabilities to surgically prevent them: if they are near or imminently going to gain a decisive advantage that would become durable and sustained over us, we want the ability to prevent that.
There's a variety of ways one can do things. There's the classic grayer ways like arson, and cutting wires in data centers, and things like that, or for power plants . . . There's cyber offense, and there's other sorts of kinetic sabotage, but we want it nice and surgical and having a good, credible threat so that we can deter that from happening and shaping their intentions.
I think it will be difficult to limit their capabilities, their ability to build these powerful systems, but I think being able to shape their intentions is something that is more tractable. They will be building powerful AI systems, but if they are making an attempt at leapfrogging us in a way that we never catch up and lose our standing, and they get AIs that could also potentially disrupt MAD, for instance, we want to be able to prevent that. An important strategic priority is developing a credible deterrent and saying there are some AI scenarios that are totally unacceptable to us and we want to block them off through credible threats.
They'll do the same to us as well, and they can do it more easily to us. They know what's going on at all of our AI companies, and this will not change, because we have a double-digit percentage of employees who are Chinese nationals, easily extortable since they have family back home, and the companies do not have good information security. That will probably not change, because it would slow them down if they really tried to lock things down and move everybody to North Dakota or wherever to work in the middle of nowhere and have everything air-gapped. We are an open book to them, and I think they can make very credible threats of sabotage to prevent that type of outcome.
If we are making a bid for dictating their foreign policy and all of this, if we're making a bid for a strategic monopoly on power, they will not sit idly by, they will not take kindly to that when they recognize the stakes. If the US were to do a $500 billion program to achieve this faster than them, that would not go unnoticed. There's not a way of hiding that.
But we are trying to achieve it faster than them.
I would distinguish between trying to develop generally more capable AI technologies and some of these strategically relevant capabilities or strategically relevant programs. If we get AI systems that are generally useful for healthcare and for . . . whatever your pet cause area, we can have that. That is different from applying the AI systems to rapidly build the next generation of AIs, and the next generation after that. Right now, OpenAI’s got a few hundred AI researchers; imagine if you've got artificial researchers at that level, AGI-type researchers. You run 10,000 or 100,000 of them, they're operating around the clock at a hundred-X speed. Expecting a decade's worth of development compressed or telescoped into a year seems very plausible — not certain, but certainly a double-digit percent chance.
China or Russia, for instance, would perceive that as, “This is really risky. They could get a huge leap from this because the rate of development will be so high that we could never catch up,” and they could use their new gains to clobber us. Or, if they don't control it, then we're also dead, or lose our power. So if the US controls it, China would reason that, “Our survival is threatened and how we do things is threatened,” and if the US loses control of it, “Our survival is also threatened.” Either way, provided that this automated AI research and development loop produces some extremely powerful AI systems, China would be fearing for their survival.
It's not just China: India, the global south, all the other countries, if they're more attuned to this situation, would be very concerned. Russia as well. Russia doesn't have much hope of competing; they don't have $100 billion data centers, they're busy with Ukraine, and when they're finished with that, they may reassess, but they're too many years behind. I think the best they can do is actually try to shape other states' intents rather than try to make a bid for outcompeting them.
If we're thinking about deterrence and what you call Mutual Assured AI Malfunction [MAIM], there's a capability aspect: we want to make sure we would have the capability to check that kind of dash for dominance. But there's also a communication aspect where both sides have to understand and trust what the other side is trying to do, which was a key part of classic Cold War deterrence. Is that happening?
Information problems, yeah; if there's worse information, then that can lead to conflict. I think China doesn't really need to worry about their access to information about what's going on. I think the US will need to develop more of its capabilities to have more reliable signals abroad. But there are different ways of getting information and reducing misunderstandings, like confidence-building measures, all these sorts of things. I think that the unilateral one is just espionage, and then the multilateral one is verification mechanisms and building some of that institutional or international infrastructure.
I think the first step in all of this is that states need to at least take matters into their own hands by building up these unilateral options, the unilateral option to prevent adversaries from making a dash for domination and also to know what's going on with each other's projects. I think that's what the US should focus on right now. Later on, as the salience of AI increases, I think international discussions to increase strategic stability around this would be more plausible to emerge. But if they're not taking basic steps to defend themselves and protect their own security, then I don't think the international stuff makes that much sense. That's kind of out of order.
Doomsday scenario (28:18)
If our institutions wake up to this more and do some of the basic stuff . . . to prevent another state dominating the other, I think that will make this go quite a bit better. . .
I have in my notes here that you think there's an 80 percent chance that an AI arms race would result in a catastrophe that would kill most of humanity. Do I have that right?
I think it's not necessarily just the race. Let's think of people's probabilities for this. There's a wide spectrum of probability. Elon, who I work with at xAI, a company of his that I advise, thinks it's generally on the order of 20 to 30 percent. Dario Amodei, the CEO of Anthropic, I think thinks it's around 20 percent as well. Sam Altman, around 10 percent. I think it's more likely than not that this doesn't go that well for people, but there's a lot of tractability and a lot of volatility here.
If our institutions wake up to this more and do some of the basic stuff of knowing what's going on and sharpening the ability to make credible, targeted threats to prevent another state dominating the other, I think that will make this go quite a bit better. . . I think if we went back in time to the 1940s and were asking, “Do we think that this whole nuclear thing is going to turn out well in 50 years?” I think we actually got a little lucky. I mean the Cuban Missile Crisis itself was . . .
There were a lot of bad moments in the ’60s. There were quite a few . . .
I think it's more likely than not, but there's substantial tractability and it's important not to be fatalistic about it or just deny it's an issue. I think it's like, do we think AI will go well? I don't know, it depends on what our policy is. Right now, we're in the very early days and I'm still not seeing many of our institutions rising to the occasion in the way I think is warranted, but this could easily change in a few months with some larger event.
Not to be science fictional or anything, but you talk about a catastrophe. Are you talking about: AI creates some sort of biological weapon? Back-and-forth cyberattacks destroy all the electrical infrastructure for China and the United States, so all of a sudden we're back in the 1800s? Are you talking about some sort of more “Terminator”-like scenario, rogue AI? When you think about the kind of catastrophe that could be that dangerous to humanity, what do you think about?
We have three risk sources: one is states, another is rogue actors like terrorists and pariah states, and then there are the AIs themselves. The AIs themselves are not relevant right now, but I think they could be quite capable of causing damage on their own in even a year or two. That's the space of threat actors; so yes, AI could in the future . . . I don't see anything that makes them logically not controllable. They're mostly controllable right now. Maybe it's one out of 100, one out of 1,000 of the times you run these AI systems and deploy them in some sort of environment [that] they do try breaking free. That's a bit of a problem later on when they actually gain the capability to break free and when they are able to operate autonomously.
There have been lots of studies on this and you can see this in OpenAI’s reports whenever they release new models. It's like, “Oh, it's only a 0.1 percent chance of it trying to break free,” but if you run a million of these AI agents, that's a lot of them that are going to be trying to break free. They're just not very capable currently. So I think that the AIs themselves are risky, and if you're having humanity going up against AIs that aren't controlled by anybody, or AIs that broke free, that could get quite dangerous, especially if you also have, as we're seeing now, China and others building more of these humanoid robots in the next few years. This could make them concerning in that they could, just by themselves, create some sort of bioweapon. You don't even need human hands to do it, you can just instruct a robot to do it and disperse it. I think that's a pretty easy way to take out biological opposition, so to speak, in kind of an eccentric way.
That's a concern. Rogue actors themselves doing this, them reasoning that, “Oh, this bioweapon gives us a secure second strike,” things like that would be a concern from rogue actors. Then, of course, states using this to make an attempt to crush the other state or develop a technology that disables an adversary’s secure second strike. I think these are real problems.
Maximal progress, minimal risk (33:25)
I think what we want to shoot for is [a world] where people have enough resources and the ability to just live their lives in ways they self-determine . . .
Let me finish with this: I want continuing AI progress such that we can cure all the major chronic diseases, that we can get commercial nuclear fusion, that we can get faster rockets, all the kinds of optimistic stuff, accelerate economic growth to a pace that we've never seen. I want all of that.
Can I get all of that and also avoid the kinds of scenarios you're worried about without turning the optimistic AI project into something that arrives at the end of the century, rather than arrives midcentury? I’m just worried about slowing down all that progress.
I think we can. In the Superintelligence Strategy, we have three parts to that: We have the deterrence part, which I've been speaking about here, and we have making sure that the capabilities aren't falling into the hands of rogue actors — and I think this isn't that difficult: good export controls, plus some basic safeguards like needing to know who you are if we're going to be helping you manipulate viruses, things like that. That's easy to handle.
Then on the competition aspect, there are many ways the US can make itself more competitive, like having more guaranteed supply chains for AI chips, so more manufacturing here or in allied states instead of all of it being in Taiwan. Currently, all the cutting-edge AI chips are made in Taiwan, so if there's a Taiwan invasion, the US loses this AI race. They lose. This is a double-digit probability. This is very foreseeable. So trying to robustify our manufacturing capabilities is quite essential; likewise for making robotics and drones.
I think there are still many axes on which to compete. I don't think it makes sense to try to compete in building a sort of superintelligence, one of these potentially mutual-assured-destruction-disrupting AIs. I don't think you want to be building those, but I think you can have your AIs for healthcare, you can have your AIs doing all the complicated math you want, and whatever, all this coding, and driving your vehicles, and folding your laundry. You can have all of that. I think it's definitely feasible.
What we did in the Cold War with the prospect of nuclear weapons, we obviously got through it, and we had deterrence through mutual assured destruction. We had non-proliferation of fissile materials to lesser states and rogue actors, and we had containment of the Soviet Union. I think the Superintelligence Strategy is somewhat similar: If you deter some of the most destabilizing AI projects, you make sure that some of these capabilities are not proliferating to random rogue actors, and you increase your competitiveness relative to China through things like incorporating AI into your military by, for instance, improving your ability to manufacture drones and improving your ability to reliably get your hands on AI chips even if there's a Taiwan conflict.
I think that's the strategy and this doesn't make us uncompetitive. We are still focusing on competitiveness, but this does put barriers around some of the threats that different states could pose to us and that rogue actors using AI could pose to us while still shoring up economic security and positioning ourselves if AI becomes really relevant.
I lied, I had one more short question: If we avoid the dire scenarios, what does the world look like in 2045?
I would guess that it would be utterly transformed. I wouldn't expect people would be working as much then, hopefully. If you've controlled it well, there could be many ways of living, as there are now, and people would have the resources to do so. It's not like there's one way of living — that seems bad, because there are many different values to pursue. So letting people pursue their own values, so long as it doesn't destroy the system, and things like that, as we have today. That seems like an abstract version of the picture.
People keep thinking, “Are we in zoos? Are AIs keeping us in zoos?” or something like that. It's like, no. Or like, “Are we just all in the Zuckerberg sort of virtual reality, AI friend thing?” It's like no, you can choose to do otherwise, as well. I think we want to preserve that ability.
Good news: we won't have to fold laundry. Bad news: we're in zoos. There are many scenarios.
I think what we want to shoot for is one where people have enough resources and the ability to just live their lives in ways they self-determine, subject to not harming others in severe ways. But people tend to think there's some sort of forced dichotomy: it's going to be a WALL-E world where everybody has to live the same way, or everybody's in zoos, or everybody's just pleasured-out and drugged-up or something. Those are forced choices. Some people do that, some people choose to have drugs, and we don't hear much from them, and others choose to flourish, and pursue projects, and raise children, and so on.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
* Is College Still Worth It? - Liberty Street Economics
* Scalable versus Productive Technologies - Fed in Print
▶ Business
* AI’s Threat to Google Just Got Real - WSJ
* AI Has Upended the Search Game. Marketers Are Scrambling to Catch Up. - WSJ
▶ Policy/Politics
* U.S. pushes nations facing tariffs to approve Musk’s Starlink, cables show - WaPo
* US scraps Biden-era rule that aimed to limit exports of AI chips - FT
* Singapore’s Vision for AI Safety Bridges the US-China Divide - Wired
* A ‘Trump Card Visa’ Is Already Showing Up in Immigration Forms - Wired
▶ AI/Digital
* AI agents: from co-pilot to autopilot - FT
* China’s AI Strategy: Adoption Over AGI - AEI
* How to build a better AI benchmark - MIT
* Introducing OpenAI for Countries - OpenAI
* Why humans are still much better than AI at forecasting the future - Vox
* Outperformed by AI: Time to Replace Your Analyst? Find Out Which GenAI Model Does It Best - SSRN
▶ Biotech/Health
* Scientists Hail This Medical Breakthrough. A Political Storm Could Cripple It. - NYT
* DARPA-Funded Research Develops Novel Technology to Combat Treatment-Resistant PTSD - The Debrief
▶ Clean Energy/Climate
* What's the carbon footprint of using ChatGPT? - Sustainability by Numbers
* OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation - Wired
▶ Robotics/AVs
* Jesse Levinson of Amazon Zoox: ‘The public has less patience for robotaxi mistakes’ - FT
▶ Space/Transportation
* NASA scrambles to cut ISS activity due to budget issues - Ars
* Statistically Speaking, We Should Have Heard from Aliens by Now - Universe Today
▶ Substacks/Newsletters
* Globalization did not hollow out the American middle class - Noahpinion
* The Banality of Blind Men - Risk & Progress
* Toys, Pencils, and Poverty at the Margins - The Dispatch
* Don’t Bet the Future on Winning an AI Arms Race - AI Prospects
* Why Is the US Economy Surging Ahead of the UK? - Conversable Economist