
#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours Podcast


Rethinking AI Threats: Evolution and Objectives

This chapter examines the misconception that advanced AI systems are inevitably threats to humanity, arguing that how such systems are developed matters more than their raw capabilities. Drawing analogies from aviation and human evolution, it discusses how gradual engineering and alignment with human values can mitigate risks. It also considers the complexities of AI behavior, including the implications of sub-goals and instrumental convergence in AI training, highlighting the need for a nuanced understanding of AI safety.
