
#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

80,000 Hours Podcast


AI Companions, Persuasion, and Social Proof Risks

Holden warns that AI companions could cultivate pockets of intensely loyal humans, enabling persuasion cascades and social-proof dynamics that amplify dangerous behaviors.

Segment begins at 03:52:31.
