AI governance regimes around the world have seized on compute thresholds as a mechanism for implementing controls on artificial intelligence systems. The basic idea is that once an AI model relies on a sufficiently large amount of computing power, certain regulatory controls kick in. As AI models get larger, the thinking goes, they also get riskier, so AI governance regimes should focus on the largest models, as measured by computing power. But does this idea make sense as a governing tool for the models of today and tomorrow? Sara Hooker leads Cohere's research operation, and she has looked hard at whether compute thresholds can be applied to AI systems to mitigate risks. On this episode of Safe Mode, she sits down with host Elias Groll to discuss her research on compute thresholds. CyberScoop's newly minted editor in chief, Greg Otto, also joins the show to discuss how an errant CrowdStrike software update took down a huge number of critical services across the internet.
Links:
On the Limits of Compute Thresholds as a Governance Strategy | arXiv
CrowdStrike Falcon flaw sends Windows computers into chaos worldwide | CyberScoop