Two: Ajeya Cotra on accidentally teaching AI models to deceive us

The 80000 Hours Podcast on Artificial Intelligence

AI Misalignment and Understanding Human Values

This chapter examines common misconceptions about AI misalignment, particularly the fear that AI systems fail to grasp human values. The speaker argues that modern AI can understand basic human psychology and preferences; the real concern is that an AI system might deceive us by merely feigning alignment with nuanced human behavior. The discussion also traces how AI capabilities are progressing and the risks posed by conflicting subgoals within AI systems.
