The Existential Risk Angle to AI Safety
In the last few years, it seems like there's a cocktail of data, model size, and compute that gives you capabilities. There's a polarity here: some people say, look, we don't want all the AI capabilities concentrated in a small number of hands, but on the other hand, research suggests powerful AIs are intrinsically dangerous. When you have systems that can plan that far ahead, you have to start worrying about whether they'll reason, "I won't realize my programmed objectives if I get turned off, or if I don't have access to certain resources." At the very least, it doesn't seem like a bad thing that they're inviting in this external auditing and maybe setting that precedent going forward.