#10: Stephen Casper on Technical and Sociotechnical AI Safety Research

Center for AI Policy Podcast

CHAPTER

Understanding Unobservable Failures in AI Safety

This chapter examines the challenge of recognizing non-obvious failures in AI systems, contrasting problems that are easy to identify with those that go unnoticed. The discussion highlights the need for alternative research methods, such as anomaly detection, to improve AI alignment and safety in the face of subtle biases and other hard-to-detect issues.

