
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
Future of Life Institute Podcast
Navigating Ethical Dilemmas in AI Safety Research
The chapter examines the challenges and debates around using advanced AI models in safety research, including concerns about corruptibility and the efficacy of models claimed to be safe by design. It considers what it means to scale AI systems responsibly, and whether pausing model development can ensure safety without hampering progress. The conversation also contrasts safety efforts across machine intelligence research institutes and explores how individuals within AGI corporations react to the idea of pausing AI development.