#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours Podcast

CHAPTER

Navigating AI Alignment Challenges

This chapter explores the complexities of aligning artificial intelligence with human values, discussing the orthogonality thesis and the idea that a system's intelligence is independent of its goals. It examines the challenges of developing AI systems, particularly robots, and highlights the risks of misalignment in their objectives. The conversation stresses the need to refine alignment techniques as AI capabilities advance, shedding light on the potential unintended consequences of deploying misaligned intelligent systems.
