“Foom & Doom 2: Technical alignment is hard” by Steven Byrnes

LessWrong (Curated & Popular)

Intro

This episode explores the challenges of aligning future artificial general intelligence (AGI) with human values, emphasizing how such systems would differ from current large language models. It discusses the implications of AGI's reliance on reinforcement learning and offers a critical yet hopeful view on how alignment problems might be addressed.
