“Eliezer’s Lost Alignment Articles / The Arbital Sequence” by Ruby
Feb 20, 2025
Dive into the treasure trove of AI alignment insights from Eliezer Yudkowsky and others that was overlooked on the Arbital platform. Learn about key concepts such as instrumental convergence and corrigibility, alongside lesser-known ideas that challenge conventional understanding. The discussion also highlights the high-quality mathematical guides that are now more accessible than ever. It's a rich retrospective that reaffirms the relevance of these pivotal articles for today's thinkers.
Podcast summary created with Snipd AI
Quick takeaways
The curation of Eliezer Yudkowsky's and Nate Soares' articles on AI alignment and mathematics enhances their visibility and accessibility for a wider audience.
Key concepts like instrumental convergence and corrigibility, alongside lesser-known topics, provide critical insights into the challenges of AI alignment.
Deep dives
Importance of AI Alignment Content
The high-quality articles on AI alignment and mathematics written by notable figures like Eliezer Yudkowsky and Nate Soares have not received the attention they deserve, largely because of the limited reach of the Arbital platform. These writings explore critical alignment concepts, such as instrumental convergence and corrigibility, offering deep insights that are essential for understanding the field. Lesser-known topics, such as epistemic and instrumental efficiency, are also covered, broadening the perspective on AI alignment challenges. Compiling and publishing this content on LessWrong gives it greater visibility and accessibility, ensuring that these valuable ideas reach a wider audience.
Organization of the Collected Articles
The articles have been organized into tiers based on their accessibility and engagement level, making it easier for readers to navigate the content. Tier 1 features essential readings expected to give readers a good experience, while Tier 2 highlights high-quality but less accessible content. The curation involved selecting the most valuable articles and ordering them according to feedback from test readers, ensuring a thoughtful presentation of the material. Including mathematical topics alongside the AI alignment content creates a comprehensive resource for those interested in these interconnected fields.
Episode notes
Note: this is a static copy of this wiki page. We are also publishing it as a post to ensure visibility.
Circa 2015-2017, a lot of high-quality content was written on Arbital by Eliezer Yudkowsky, Nate Soares, Paul Christiano, and others. Perhaps because the platform didn't take off, most of this content has not been as widely read as warranted by its quality. Fortunately, it has now been imported into LessWrong.
Most of the content written was either about AI alignment or math[1]. The Bayes Guide and Logarithm Guide are likely some of the best mathematical educational material online. Amongst the AI Alignment content are detailed and evocative explanations of alignment ideas: some well known, such as instrumental convergence and corrigibility, some lesser known like epistemic/instrumental efficiency, and some misunderstood like pivotal act.
The Sequence
The articles collected here were originally published as wiki pages with no set [...]
---
Outline:
(01:01) The Sequence
(01:23) Tier 1
(01:32) Tier 2
The original text contained 3 footnotes which were omitted from this narration.