Introduction
Several developments over the past few months should cause you to re-evaluate what you are doing. These include:
Taken together, these are enough to render many existing AI governance strategies obsolete (and probably some technical safety strategies too). There's a good chance we're entering crunch time, and that should absolutely affect your theory of change and what you plan to work on.
In this piece I try to give a quick summary of these developments and think through the broader implications these have for AI safety. At the end of the piece I give some quick initial thoughts on how these developments affect what safety-concerned folks should be prioritizing. These are early days and I expect many of [...]
---
Outline:
(00:06) Introduction
(01:21) Implications of recent developments
(01:25) Updates toward short timelines
(04:26) The Trump Presidency
(07:34) The o1 paradigm
(09:23) DeepSeek
(12:08) Stargate/AI data center spending
(13:11) Increased internal deployment
(15:43) Absence of AI x-risk/safety considerations in mainstream AI discourse
(17:13) Implications for strategic priorities
(17:18) Broader implications for US-China competition
(19:33) What seems less likely to work?
(20:56) What should people concerned about AI safety do now?
(24:01) Acknowledgements
The original text contained 13 footnotes which were omitted from this narration.
---
First published:
January 28th, 2025
Narrated by TYPE III AUDIO.