"What I mean by "alignment is in large part about making cognition aimable at all"" by Nate Soares

LessWrong (Curated & Popular)

The Diamond Maximizer Problem

I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles. I would also guess that the mental architecture ultimately ends up cleanly factored, albeit not in a way that creates a single point of failure, goal-wise. By default, though, the first minds humanity makes will be a terrible spaghetti-code mess.

This was an audio version of "What I mean by 'alignment is in large part about making cognition aimable at all'" by Nate Soares, published on 31 January 2023. The reading was by Perrin Walker and produced by TYPE III AUDIO.
