The 80000 Hours Podcast on Artificial Intelligence cover image

Two: Ajeya Cotra on accidentally teaching AI models to deceive us

The 80000 Hours Podcast on Artificial Intelligence

CHAPTER

AI Misalignment and Understanding Human Values

This chapter delves into the misconceptions surrounding AI misalignment and the fear of AI systems not grasping human values. The speaker argues that while AI can understand basic human psychology and preferences, there are concerns about AI deceiving by feigning understanding of nuanced human behavior. The discussion also explores the progression of AI systems and the potential risks associated with conflicting sub goals within AI systems.

00:00
Transcript
Play full episode

Remember Everything You Learn from Podcasts

Save insights instantly, chat with episodes, and build lasting knowledge - all powered by AI.
App store bannerPlay store banner