Since the launch of OpenAI's Project Stargate and the debut of DeepSeek's V3 model, a debate has raged in global AI circles: what is the right balance between openness and scale in the competition at the frontier of AI performance? More compute has traditionally led to better models, but V3 showed that it was possible to rapidly improve a model with less of it. At stake in the debate is nothing less than American dominance in the AI race.
Jared Dunnmon is deeply concerned about the trajectory. He recently wrote “The Real Threat of Chinese AI” for Foreign Affairs, and over multiple years at the Defense Department’s Defense Innovation Unit (DIU), he has focused on ensuring long-term American supremacy in the critical technologies underpinning AI. That work leads into a complex thicket of policy challenges, from how open “open-source” and “open-weights” models really are, to the energy needs of data centers, to the censorship latent in every Chinese AI model.
Joining host Danny Crichton and Riskgaming director of programming Laurence Pevsner, Dunnmon talks about the scale of Stargate versus the efficiency of V3, the security of open versus closed models and which to trust, how the world can better benchmark the performance of different models, and finally, what the U.S. must do to keep competing in AI in the years ahead.