If we worry too much about AI safety, will this make us "lose the race with China"?
(Here, "AI safety" means long-term concerns about alignment and hostile superintelligence, as opposed to "AI ethics" concerns like bias or intellectual property.)
Everything has tradeoffs, and regulation versus progress is a familiar one: the more important you think AI will be, the more important it is that the free world get it first. If you believe in superintelligence, the technological singularity, and so on, then you think AI is maximally important, and this question ought to weigh heavily on you.
But when you look at the question concretely, it becomes clear that any slowdown from safety work is too small to matter: so small that even its sign is uncertain.
https://www.astralcodexten.com/p/why-ai-safety-wont-make-america-lose