3min chapter

#10: Stephen Casper on Technical and Sociotechnical AI Safety Research

Center for AI Policy Podcast

CHAPTER

Understanding Unobservable Failures in AI Safety

This chapter examines the challenge of recognizing non-obvious failures in AI systems, contrasting easily identifiable problems with those that go unnoticed. The discussion highlights the need for alternative research methods, such as anomaly detection, to improve AI alignment and safety in the face of subtle biases and complex issues.
