
#80 – Stuart Russell on why our approach to AI is broken and how to fix it

80,000 Hours Podcast


AI, Morality, and Human Preferences

This chapter discusses the moral responsibilities involved in designing AI systems that respect both human and non-human preferences. It examines the philosophical and ethical challenges of building the welfare of animals and other sentient beings into technology, and questions whether an objective moral truth is attainable in a diverse universe. It also highlights the need for a deeper understanding of collective decision-making and of the roles humans will play in an increasingly automated world.

