Two: Ajeya Cotra on accidentally teaching AI models to deceive us

The 80000 Hours Podcast on Artificial Intelligence

Exploring Risks in AI Alignment

This chapter covers Ajeya Cotra's work researching the risk that AI systems become misaligned with human objectives, and her transition from direct research to grant-making in AI alignment. The discussion includes a retrospective look at recent AI advances, shifting attitudes towards transformative AI, and public concern that AI could lead to human extinction.
