80,000 Hours Podcast

#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

Jan 31, 2025
Nova DasSarma, a computer scientist at Anthropic and co-founder of Hofvarpnir Studios, dives into the critical realm of information security in AI. She discusses the immense financial stakes in AI development and the vulnerabilities inherent in training models. The conversation touches on recent high-profile breaches, like Nvidia's, and the significant security challenges posed by advanced technologies. DasSarma emphasizes the importance of collaboration in improving security protocols and of ensuring safe AI alignment amid evolving threats.
INSIGHT

AI Model Theft Risk

  • AI models are compact enough to exfiltrate, sit on internet-connected servers, and are often poorly secured, which together create a real theft risk.
  • This is a serious challenge, especially for expensively trained machine learning models.
ANECDOTE

Zero-Click iMessage Vulnerability

  • Nova DasSarma recounts a zero-click iMessage vulnerability involving the JBIG2 compression algorithm.
  • Attackers built a virtual computer out of the algorithm's own logical operations to deliver their payload, highlighting how difficult such exploits are to defend against.
ADVICE

Data Cold Storage

  • Limit easily accessible information by encrypting and storing valuable data offline.
  • Consider cold storage solutions like Amazon Glacier or AWS Snowball, and set alarms for unauthorized access.
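The cold-storage advice above can be sketched as an S3 lifecycle rule that automatically transitions objects to the Glacier storage class after a set period; this is a minimal illustration, and the rule ID and `weights/` prefix are hypothetical:

```json
{
  "Rules": [
    {
      "ID": "archive-model-weights",
      "Filter": { "Prefix": "weights/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

A rule like this can be applied with `aws s3api put-bucket-lifecycle-configuration`; pairing it with access logging or CloudTrail alerts covers the "alarms for unauthorized access" half of the advice.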