
 The Inside View
 [Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview)
 May 17, 2024

 Adam Gleave of FAR AI and Nathan Labenz discuss vulnerabilities in GPT-4's APIs, accidental jailbreaking during fine-tuning, malicious code generation, the risks of private email discovery, ethical dilemmas around AI vulnerability disclosure, and navigating the ethical landscape of open source models. They also explore exploiting vulnerabilities in superhuman Go AIs, challenges with GPT-4, and the transformative potential of AI.
 Chapters 
 Introduction 
 00:00 • 2min 
 Exploring the Mission of Far AI and AI Safety Research 
 02:26 • 3min 
 Vulnerabilities in GPT-4 APIs and Malicious Code Generation 
 05:12 • 22min 
 Risks of Private Email Discovery Using GPT-4 
 27:12 • 9min 
 Ethical Considerations in Disclosing Vulnerabilities in AI Models 
 36:39 • 19min 
 Navigating the Ethical Landscape of Open Source Models 
 55:36 • 7min 
 Exploiting Vulnerabilities in Superhuman Go Playing AIs 
 01:02:19 • 29min 
 Navigating the GPT-4 Landscape 
 01:30:49 • 6min 
 Discussion on Adversarial Robustness and AI Career Opportunities 
 01:36:29 • 3min 
 Exploring the Transformative Potential of GPT-4 
 01:39:49 • 25min 
 The Evolution of AI Leaders 
 02:04:51 • 11min 

