AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
China is making significant progress on AI systems, including large language models approaching those in the US. Models such as Baidu's Ernie Bot are nearing GPT-4 levels. And while China trails in frontier base model development, it is highly competitive in computer vision, with companies like SenseTime producing advanced surveillance software.
There is a complex race dynamic between the US and China in AI development. While US national security circles focus on AI accountability and ethics, Chinese engineers and the government are driven by the goal of winning the tech race. However, the rivalry is better understood as competition between companies than between nations, and the priority should be preventing reckless development and ensuring safety in AI innovation.
Global collaboration is crucial to prevent reckless AI development and catastrophic risks. Strategies include establishing dominance to avoid a race dynamic, coordinating with other countries to align perceptions of AI risk, and emphasising AI benefit sharing to build a collective effort towards safer AI development. Verification mechanisms and credible signals are essential for promoting trust and cooperation.
Competition in AI development poses risks of reckless advancement, particularly in military applications, where states may be tempted to deploy unreliable autonomous weapons for military advantage. This prisoner's dilemma dynamic highlights the need for cooperation to prevent unintended conflict escalation and to ensure the responsible development of AI technologies.
China is utilizing AI to strengthen authoritarian tendencies and national security, including military power and potential gray zone tactics. US policymakers are concerned about China using advanced AI for disinformation and offensive cyber capabilities, potentially undermining democratic processes.
China has invested billions in its semiconductor industry, focusing on legacy chips because of their lower costs and closer supply chains. It faces challenges in catching up with Western leading-edge chips needed for AI applications, however, due to restrictions on advanced lithography machines and materials.
US export controls restrict China from obtaining advanced AI chips like the Nvidia A100 and H100, impacting China's ability to scale AI training clusters. China resorts to acquiring cut-down versions, but continues efforts to build domestic AI processors using 7-nanometer technology.
In the short term, China faces limitations in scaling AI with domestic chips due to production capacity constraints. Its long-term prospects depend on overcoming technological challenges and trade-offs to achieve competitive chip manufacturing.
Countries like the US, the UK, Japan, and Singapore have AI safety institutes to evaluate and ensure model safety while funding AI safety research. Encouraging China to establish a similar organisation could facilitate collaboration with other countries to harmonise regulations and advance international governance of AI systems. By creating an AI safety coordination authority under a body like the Ministry of Industry and Information Technology, China could solidify its role in setting AI safety regulations and contribute to unified international standards.
Promoting China's inclusion in expanded White House voluntary commitments could improve AI safety globally. This involves creating systems for model evaluation, fostering independent evaluators, establishing trust and safety risk-sharing channels between companies and governments, and investing in advanced safety research. A diplomatic exchange framework could encourage mutual learning and cooperation between countries on proactive safety measures in AI development.
"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang
In today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.
Links to learn more, highlights, video, and full transcript.
They cover:
Chapters:
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore