In my view, we ought to make a massive investment in mitigating existential risk from AI. I think there's maybe a bit of a trend towards ignoring how little agency people have over things like great power conflict. It would be great if we could address that too, but I just don't see it as something that's tractable in the short term.