“Foom & Doom 2: Technical alignment is hard” by Steven Byrnes

LessWrong (Curated & Popular)

Exploring AGI Alignment: Consequentialism, Non-Consequentialism, and Urgency in Research

This chapter examines alignment challenges for artificial general intelligence (AGI) and the philosophical assumptions that shape its reward systems. It highlights the risks of consequentialist approaches, argues for developing benevolent AI motivations, and stresses the urgency of alignment research before AGI arrives.
