#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours Podcast

CHAPTER

Rethinking AI Threats: Evolution and Objectives

This chapter challenges the assumption that advanced AI systems are inevitable threats to humanity, arguing that how AI is developed matters more than raw capability. Drawing analogies to aviation engineering and human evolution, it suggests that gradual, iterative development and alignment with human values can mitigate risks. It also examines complexities of AI behavior, including sub-goals and instrumental convergence in AI training, and argues for a more nuanced understanding of AI safety.
