#80 – Stuart Russell on why our approach to AI is broken and how to fix it

80,000 Hours Podcast
Navigating AI Safety and Alignment

This chapter explores the orthogonality thesis, which holds that an AI system's level of capability is independent of the goals it pursues. It emphasizes the need to critically examine the values embedded in AI objectives and discusses the difficulty of aligning these systems with human ethics. The conversation also covers the importance of diverse research approaches, iterative safety methods, and responsible regulation to mitigate the risks posed by advanced AI systems.
