
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
Future of Life Institute Podcast
00:00
Navigating Open-Source AI Risks
This chapter examines the potential misuse of open-source machine learning models, emphasizing the need for accountability amid concerns about bioweapons. It also explores the balance between safety and access, comparing how effectively vulnerabilities can be exploited through human interaction versus by defeating traditional technical defenses.