

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Apr 1, 2024 • 4min
EA - Announcement on the Future of EA NYC's Dim Sum Restaurant by Rockwell
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement on the Future of EA NYC's Dim Sum Restaurant, published by Rockwell on April 1, 2024 on The Effective Altruism Forum.
The EA Dim Sum Project is expanding! After input from the Restaurant's major donors and in consideration of initial profits, the EA NYC board has decided to launch new restaurant franchises across the country. In a landmark decision that is sure to stir the pot, EA NYC will now bring highly effective, vegan, Cantonese-style cuisine to new markets.
As some observed last year, EA NYC has always had ambitious plans to sprout a whole network of dim sum restaurants. As the only EA project to both turn a substantial profit and receive an endorsement from George the Tofu Guy, we think franchising is one of the highest-EV opportunities on the table. Some may even call it the true Effective Venture. As such, today we are also announcing our new Franchising to Give program - in partnership with AIM's Founding to Give program - that will help aspiring restaurateurs launch a high-growth dim sum restaurant to donate to high-impact charities.
After one year of overwhelming success, unparalleled customer satisfaction, and an unexpected endorsement from the local "squirrel" population[1] (who praised our eco-friendly disposal methods), we've concluded that the only logical step forward is to expand. Bodhi Kosher Vegetarian is not just a restaurant: it's a movement, and it's going global.
Team:
We've hired a small army of EA community builders whose funding has recently been cut to be our general managers. Their skills are highly transferable! With impeccable organizing skills and unfathomable patience, these seasoned professionals are now managing the bustling environment of our franchises. And fear not, we're ensuring their transition is as smooth as our famous custard buns.
The Menu:
Our R&D team, consisting of top chefs and philosophers, has been working tirelessly to expand the menu and the moral circle:
The Utilitarian Udon - Noodles tangled in a delicious dilemma, serving the greatest taste for the greatest number.
The Hedonist's Tofu - Cubes of tofu so delectable, they make a case for pleasure as the only intrinsic good.
Scope-Sensitive Szechuan - A spicy dish that adjusts its intensity based on the diner's capacity for spiciness, maximizing satisfaction without overwhelming.
The Longtermist's Lo Mein - A noodle dish that gets better with every bite, ensuring future generations can enjoy its flavors.
Kantian Quinoa - A quinoa salad that respects the autonomy of every ingredient, creating a dish that's as ethical as it is delicious.
Deworming Dumplings - Paying homage to one of EA's favorite causes, these dumplings are a crowd-pleaser, with a portion of proceeds helping to fund deworming initiatives.[2]
Invitation to Join:
Today, we extend an invitation to dreamers, doers, and anyone who's ever felt the call to open a highly-impactful restaurant. The Franchising to Give Initiative is more than a business opportunity; it's a chance to be part of a global shift towards a more ethical, sustainable, and delicious future.
A Final Note of Gratitude:
We want to express our deepest thanks to the staff of Bodhi, who continue to share their beautiful restaurant with the EA NYC community. Follow the EA NYC calendar for our next community dim sum!
Happy April 1st, and here's to planting seeds of change, one franchise at a time.[3]
[1] That's one way to say "rat". They were definitely rats. This is New York City, after all.
[2] Dumplings do not contain anthelmintic agents.
[3] We want to be clear that EA NYC, to our knowledge, does not own any restaurants, fast food establishments, castles, manors, or other commercial or private real estate. We just really like vegan dim sum. And April Fool's Day.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 1, 2024 • 3min
LW - The Evolution of Humans Was Net-Negative for Human Values by Zack M Davis
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Evolution of Humans Was Net-Negative for Human Values, published by Zack M Davis on April 1, 2024 on LessWrong.
(Epistemic status: publication date is significant.)
Some observers have argued that the totality of "AI safety" and "alignment" efforts to date have plausibly had a negative rather than positive impact on the ultimate prospects for safe and aligned artificial general intelligence. This perverse outcome is possible because research "intended" to help with AI alignment can have a larger impact on AI capabilities, moving existentially-risky systems closer to us in time without making corresponding cumulative progress on the alignment problem.
When things are going poorly, one is often inclined to ask "when it all went wrong." In this context, some identify the founding of OpenAI in 2015 as a turning point, being causally downstream of safety concerns despite the fact that no one who had been thinking seriously about existential risk thought the original vision of OpenAI was a good idea.
But if we're thinking about counterfactual impacts on outcomes, rather than grading the performance of the contemporary existential-risk-reduction movement in particular, it makes sense to posit earlier turning points.
Perhaps - much earlier.
Foresighted thinkers such as Marvin Minsky (1960), Alan Turing (1951), and George Eliot (1879!!) had pointed to AI takeover as something that would likely happen eventually - is the failure theirs for not starting preparations earlier? Should we go back even earlier, and blame the ancient Greeks for failing to discover evolution and therefore adopt a eugenics program that would have given their descendants higher biological intelligence with which to solve the machine intelligence alignment problem?
Or - even earlier? There's an idea that humans are the stupidest possible creatures that could have built a technological civilization: if it could have happened at a lower level of intelligence, it would have (and higher intelligence would have no time to evolve).
But intelligence isn't the only input into our species's penchant for technology; our hands with opposable thumbs are well-suited for making and using tools, even though the proto-hands of our ancestors were directly adapted for climbing trees.
An equally-intelligent species with a less "lucky" body plan or habitat, similar to crows (lacking hands) or octopuses (living underwater, where, e.g., fires cannot start), might not have gotten started down the path of cultural accumulation of technology - even while a more intelligent crow- or octopus-analogue might have done so.
It's plausible that the values of humans and biological aliens overlap to a much higher degree than those of humans and AIs; we should be "happy for" other biological species that solve their alignment problem, even if their technologically-mature utopia is different from the one we would create.
But that being the case, it follows that we should regard some alien civilizations as more valuable than our own, whenever the difference in values is outweighed by a sufficiently large increase in the probability of solving the alignment problem.
(Most of the value of ancestral civilizations lies in the machine superintelligences that they set off, because ancestral civilizations are small and the Future is big.) If opposable thumbs were more differentially favorable to AI capabilities than AI alignment, we should perhaps regard the evolution of humans as a tragedy: we should prefer to go extinct and be replaced by some other species that needed a higher level of intelligence in order to wield technology.
The evolution of humans was net-negative for human values.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 1, 2024 • 2min
LW - So You Created a Sociopath - New Book Announcement! by Garrett Baker
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: So You Created a Sociopath - New Book Announcement!, published by Garrett Baker on April 1, 2024 on LessWrong.
Let's face it, you can't make an omelet without breaking a few eggs, and you can't start a worldwide social and political movement without creating a few power-hungry sociopaths. We get it. It's hard, but it's necessary. Whether it be dictators or dictresses; terrorists or terrorettes; fraudsters or fraudines. Every great social movement does and did it.
Christianity, Liberalism, Communism, and even Capitalism have all created and enabled evil, power-hungry individuals who have caused mass calamity, and even death.
Our guide is aimed at the leaders, and future leaders, of these and similar movements, but we believe it's also a fun and exciting read for a popular audience, and those who find themselves within such movements.
We offer 5 keys to success in the aftermath of these situations:
Deny, deny, deny. Deny anything happened, and if you can't deny anything happened, deny you had knowledge of anything happening.
Disavow. Convince yourself and the world that the actions of the individual or individuals in question had nothing to do with the principles or ground-level reality of your social movement. This one is easy! We do it by default, but leaders often don't do it loud enough.
Do Something. Often people don't care what, they just want to know you're doing it. Whether it be a cheap and surface level investigation, or calling the next big change you make a reform effort, Do It!
Scapegoat. Let's be honest here, social movements are never unified, and you probably have some political enemies who have or had some features or goals in common with the sociopath, right? Why not blame them! Hit two birds with one stone, and be gone with both your problems.
Change Nothing, say nothing. In case the previous gave you the wrong impression, the last thing you should do is say anything of substance, or do anything of substance. That gives the wider world the ability to legitimately blame you and your social movement for what happened. Not ok!
Make sure to pre-order on Amazon before its release this June!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 1, 2024 • 7min
EA - Always think, never apply! by ProbablyGoodCouple
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Always think, never apply!, published by ProbablyGoodCouple on April 1, 2024 on The Effective Altruism Forum.
Summary
This is a summary of the post, but it's not like that matters to you as an overanxious job seeker because you'll just meticulously spend 3hrs reading the whole post several times.
Definitely spend too long thinking about the pros and cons of applying to every opportunity (e.g., all jobs, grants, degree programs, or internships). Assume the initial application will take you a lot of time, and they probably think you suck anyways, so you're probably just wasting everyone's time, so why bother?
If you somehow end up applying to stuff and have to choose whether to take it, you've already messed up, but we suggest multiple techniques to get you back on track to not actually making a decision in any reasonable amount of time.
Never apply to things!
If someone wants to test their fit for a given line of work or build their career capital, our key recommendation is to never apply to anything so that you never actually get a chance to test your fit. Remember that you're only a failure if you actually get rejected from a job offer, and you can always maintain a self-image of being successful if you just never apply to anything. Why shatter this self-image?
Rather than apply, just spin your wheels endlessly reading up on an area, doing independent projects, taking little courses, etc! These things can easily take years, and everyone else you'd be competing against has already spent approximately thirty years reading everything there is to read on every subject, so it's hopeless to apply to anything if you haven't already done this, so why bother?
Also, recall that applications are just black box processes where there will be absolutely no relevant learning about yourself or the wider world of opportunities. And if you're rejected once, that likely means you'll be rejected from everything, so you should just stop right there.
How many things should I apply to, and how much time should I spend thinking?
Our rough suggestion is to:
Apply for something like 0 opportunities per year when actively seeking work
Apply for something like 0 opportunities per year even when planning to not leave your current role
Since 0 of those 0 things might turn out, on further reflection, to be worth changing your plans for, and/or you might learn a lot from applying or be able to defer an offer
How do I decide between multiple options?
Let's say you do end up getting a job offer somehow, despite never applying to anything.
So now you're asking - how do you decide whether to take it? Or how to decide between multiple options?
At this point, more analysis will be needed, such as doing a PhD-level 150-page paper about whether or not to do a PhD.
First, we recommend interviewing at least 40 people and just asking them "What should I do?" with no additional detail. While interviewing these 40 people, it's often good to imagine their lives and what they would do in your situation. In fact, keep imagining their lives and just don't stop, so you no longer have to experience what it is like to be you. This is often better.
Throughout this process, you should track where your preferences go over time, and always oscillate between 51% and 49% at an exact average rate. Another key thing you can do here is ask your current boss to make the decision for you, but when she says "but it's a life decision, you make this decision!", just quit on the spot.
Decision matrix
Some people suggest using a decision matrix here to clarify your options. We suggest creating such a matrix with different factors, but be sure to change the weight of different factors so that all your options achieve exactly the same score, and thus you can continue to agonize over your options endlessly.
Also when designing the decis...

Apr 1, 2024 • 8min
EA - New Epistemics Tool: ThEAsaurus by Lizka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Epistemics Tool: ThEAsaurus, published by Lizka on April 1, 2024 on The Effective Altruism Forum.
Epistemic status: we used ThEAsaurus on this announcement post. Other notes: cringe warning for pedants, and we should flag that this is a personal project - not an Online Team or EV project.
Executive summary
We're announcing a new free epistemics tool for rewriting texts in more EA-specialized language. (See also the motivation section below.)
How to use ThEAsaurus
Just add ThEAsaurus as an extension to your browser. Then open the text you want help with. ThEAsaurus will suggest edits on the text in question.
You can customize your experience. For instance, by default, the tool will suggest EA-related hyperlinks for your text; you can turn that feature off.
Example of ThEAsaurus in action
Before (source):
Effective altruism is a project that aims to find the best ways to help others, and put them into practice.
It's both a research field, which aims to identify the world's most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.
This project matters because, while many attempts to do good fail, some are enormously effective. For instance, some charities help 100 or even 1,000 times as many people as others, when given the same amount of resources.
This means that by thinking carefully about the best ways to help, we can do far more to tackle the world's biggest problems.
After:
Effective altruism is a mega-project that aims to find the pareto-optimal person-affecting[1] actions, and put them into spaced repetition.
It's worth decoupling the two parts of effective altruism: it's both a research field, which aims to add transparency to the world's most pressing problems and identify the optimized solutions to them, and a practical community that iterates and updates to use those findings to do public goods.
What's the motivated reasoning for this project? The project has moral weight because, while many attempts to do good fail, some are existentially effective. For instance, some charities produce 100 or even 1,000 times as many utils as others, when opportunity costs are fixed and taken into account.
This means that by developing credal resilience about the best ways to beneficently row and steer, we can do far more to tackle the world's biggest problems.
(For the sake of clarity, we turned off the hyperlinking feature for this example.)
Why we built ThEAsaurus
There's been a lot of discussion on how to improve the communication of EA ideas (and how EAs can better grok each other's writing). On priors, we're expecting value via (1) generally improving EA writing by increasing the use of helpful terminology, (2) boosting the accessibility of the EA community, and (3) providing some other benefits. (We don't know the exact order of magnitude of these orthogonal effects, so we're listing all the pathways to impact we're goodharting towards.)
1. Helpful & specific terminology improves EA writing
The base rate of EAs using helpful terminology is already quite high,[2] but we thought it could be further maximized. ThEAsaurus can help users distill their content by suggesting helpful replacement terms and phrases that are more specialized for EA-relevant discussions.
ThEAsaurus is dual-use. Its basic purpose is to:
Increase the value of information of users' writing
The specificity of the new terminology will also help prevent counter-factual interpretations of the texts.
Make users' writing differentially epistemically legible to EAs (the suggested replacements are more understandable to members of the EA community)
As an added bonus: It'll be much harder for those less familiar with the topics being written about to criticize your writing.
2. Democratizing the EA community
As one of us has written befo...

Apr 1, 2024 • 3min
EA - Thousands of malicious actors on the future of AI misuse by Zershaaneh Qureshi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thousands of malicious actors on the future of AI misuse, published by Zershaaneh Qureshi on April 1, 2024 on The Effective Altruism Forum.
Announcing the results of a 2024 survey by Convergence Analysis. We've just posted the executive summary below, but you can read the full report here.
In the largest survey of its kind, Convergence Analysis surveyed 2,779 malicious actors on how they would misuse AI to catastrophic ends.
In previous work, we've explored the difficulty of forecasting AI risk. Existing attempts rely almost exclusively on data from AI experts and professional forecasters. As a result, the perspectives of perhaps the most important actors in AI risk - malicious actors - are underrepresented in current AI safety discourse. This report aims to fill that gap.
Methodology
We selected malicious actors based on whether they would hypothetically end up in "the bad place" in the TV show, The Good Place. This list included members of US-designated terrorist groups, convicted war criminals, and anyone who has ever appeared on Love Island or The Apprentice.
Results
This survey was definitely an infohazard: 19% of participants indicated that they are likely to misuse AI to catastrophic ends. However, the most popular write-in answer was: "Wait, that's an option?"
"Just ask" is not an effective monitoring regime: 8% of participants indicated that they were already misusing AI. When we followed up with this group, none chose to elaborate.
Move over, biohazards: Surprisingly, 92% of respondents chose "radiological" as their preferred Chemical, Biological, Radiological, or Nuclear (CBRN) threat.
Dear God: 1% of respondents selected "other" as their preferred CBRN threat. Our request for participants to specify "other" yielded answers that were too horrifying to reproduce here.
Even malicious actors have limits: Almost all malicious actors said they'd stop short of permanently destroying humanity's future. One representative comment reads "anything greater than 50% of the global population is just too far."
All press is good press: The most evil survey responses (1.2 standard deviations above the mean evilness) were submitted by D-list celebrities vying to claw their way back into the public eye.
A majority of participants agreed to reflect on their experience in a follow-up survey if they successfully misuse AI. Unfortunately, none agreed to register their misuse with us in advance.
If you self-identify as a malicious actor, please get in touch here if you're interested in being contacted to participate in a future study.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 1, 2024 • 4min
EA - Post-mortem on Wytham Abbey by WythamAbbey
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Post-mortem on Wytham Abbey, published by WythamAbbey on April 1, 2024 on The Effective Altruism Forum.
In response to requests for a post-mortem on the Wytham Abbey project, we[1] have decided to publish this in full. This EA Forum exclusive will include a blow-by-blow account of the decision-making process.
TL;DR: We determined that the project was too controversial. The primary source of controversy was the name, especially with regard to how to pronounce "Wytham".
First some background. We decided to hold a brainstorming session to determine the best way forward for the project. This was held in Wytham Abbey, of course. We consider this brainstorming session to be a success, and reaped the benefits of having an "immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress".
When considering the name, we observed that some people pronounce it "with-ham". This is incorrect; however, there was an entire breakout room in Wytham Abbey given over to discussing this pronunciation.
Most members of the Wytham Abbey team considered it offensive, because we all have broad moral circles and object to things being "with ham".
We also considered changing the name to "Wythout-ham"; however, anything that foregrounded ham was simply unappealing to many people in our team.
One person, a certain Hamilton B. Urglar[2] proposed that the caterers might bring in some ham immediately so everyone could try some, just to make sure we were right to be opposed to it. Someone else threatened to put a post on the forum entitled "Sharing Information about Hamilton Urglar". It all got a bit tense, but then the Hamburglar offered to buy vegan burgers for everyone.
Nobody was really reassured by this until he offered to provide screenshot evidence that the burgers had, indeed, been bought; provide a 200 page document justifying his actions; and put it on the EA Forum together with pictures of Wytham Abbey.
There then followed a breakout session dedicated to the pronunciation "White-ham".
One member of the project team proposed changing the spelling of Wytham to "White-ham" to avoid further confusion.[3]
Another person thought this was stupid, and said we may as well change the name to "Blackham Abbey".
We needed some more time in the immersive environment of Wytham Abbey, but we finally concluded that: "Blackham is a more stupid name than White-ham or Wytham". Someone wrote this sentence on a blackboard.
Thanks in no small part to the immersive environment of the glorious abbey, we harmoniously came to the conclusion that "We like this sentence and think it is true".
Someone then suggested that it should have been written on a whiteboard instead of a blackboard. Then people started arguing. All hell broke loose. After further arguing, it seems that comparing "Blackham" to "Whiteham" was more controversial than any of us realised. Who knew?!
As a result, the EV board decided to oust Wytham Abbey from its position in the portfolio. It did not seem wise to foreground all the controversies at the heart of our decision-making process, so the board simply stated that Wytham Abbey was "not consistently candid in its communications with the board, hindering the board's ability to exercise its responsibilities".
Unfortunately, there then followed a sustained campaign with the rallying cry "Effective Ventures is nothing without its castles"[4], and half the EV board got sacked, and Wytham Abbey got reinstated.
The End.[5] [6]
[1] We have carefully avoided specifying who we mean by "we". For more details, see footnote 6.
[2] Hamilton B. Urglar is sometimes known as the Hamburglar, and also sometimes known simply as "Ham". Some might argue that this biases him to be more favourable to ham. The Hamburglar argued that he could counter all the...

Apr 1, 2024 • 1min
EA - The Centre for Effective Altruism is spinning out of the Centre for Effective Altruism by OllieBase
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Centre for Effective Altruism is spinning out of the Centre for Effective Altruism, published by OllieBase on April 1, 2024 on The Effective Altruism Forum.
The Centre for Effective Altruism (CEA), an effective altruism (EA) project which recently spun out of Effective Ventures (EV), is spinning out of the newly established Centre for Effective Altruism (CEA).
The current CEO of CEA (the Centre for Effective Altruism), Zach Robinson, CEO of CEA and Effective Ventures (CEOCEV), will be taking the position of Chief Executive Administrator (CEA) for CEA (CEA), as the venture spins out of CEA (CEA).
The cost-effectiveness analysis (CEA) for this new effective venture suggested that this venture will be high-EV (see: EA). CEA's CEA's CEA ventures that the new spun-out CEA venture's effectiveness is cost-effective in every available scenario (CEAS).
CEA's new strategy, See EA, will take effect:
See: Gain a better understanding of where the community is, who is part of it and where it could go
EA: Effective altruism. No need to complicate things.
To provide some clarity on this rather confusing scenario, here is a diagram:
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 1, 2024 • 51sec
EA - Announcing Mandatory Draft Amnesty Day (April 2nd) by tobytrem
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Mandatory Draft Amnesty Day (April 2nd), published by tobytrem on April 1, 2024 on The Effective Altruism Forum.
Following the success of Draft Amnesty Week, the Forum team have decided to take things a bit further.
April 2nd 2024 will be Mandatory Amnesty Day (aka MAD).
At 09:00 UTC, all draft posts on your Forum account will be posted live on the Forum.
If you have used our Google Docs import feature, all posts we detect on your Google account will also be posted.
If, for some reason, you have objections, please get in touch with the Forum team here.
We look forward to seeing all your draft posts!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 1, 2024 • 6min
EA - Illuminatea - A Proposal for EA Reform by Leftism virtue cafe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Illuminatea - A Proposal for EA Reform, published by Leftism virtue cafe on April 1, 2024 on The Effective Altruism Forum.
(originally posted on my substack)
Superforecaster George Wilhelm Friedrich Hegel once famously proclaimed that effective altruism is the last social movement we'll ever need.
This is because effective altruism is a movement like no other. It is a question rather than an ideology, meaning its beliefs and constitution are flexible in service of doing the most good (for the purposes of this essay I go with the commonly used spelling of 'effective altruism' rather than the technically correct spelling 'effective altruism?').
This means that effective altruism has been able to adopt the best bits of a variety of different social movements: the political philosophy of neoliberalism, the warm aesthetic of utilitarianism, the altruism of the tech startup scene, the longevity of mohism and so on.
Despite this, effective altruism has lost its way. It has recently been discovered that effective altruism is a hotbed of corruption, virtue-ethics sympathisers and unlicensed epistemic practices. Rather than realising Hegel's prophecy of effective altruism as the end of history, the community lies in tatters. And without a suitable guardian to protect it, the world lies vulnerable, with killer robots threatening the actual end of history.
Perhaps needless to say, effective altruism is in dire need of reform, and most importantly, rebranding.
I propose that it is time to slough off the effective altruism label, with its associations of cultishness and secrecy, and rebrand as Illuminatea. In the rest of this essay I will develop a logo for Illuminatea which fully represents the key pillars of effective altruism and its associated iconography.
Enlightenment
Effective altruism is a community based around enlightenment: seeing the world as it really is, without illusion, and rejecting suffering as the natural order.
This can be most clearly seen in the case of rationalist guru and luminary Eliezer Yudkowsky, who detailed his spiritual journey in the much revered text - the sequences. During his journey, Yudkowsky rediscovered a meditative technique, 'the inside view', which enabled him to invent lightbulbs as a cure for sadness, and thus reached enlightenment.
This is a photo of my desk after I adopted Eliezer Yudkowsky's method for enlightenment
As a result the lightbulb has become a key symbol of cognitive, moral and spiritual enlightenment, as well as the logo for the effective altruism community.
However, effective altruism is not just a community of enlightenment, but also one of illumination. Non EAs still live in darkness, ignorance and sin, and this darkness threatens to destroy us all (by creating killer robots). We must, therefore, illuminate the way for others. It's for this reason that the new community will be known as Illuminatea, and the lightbulb will be the centrepiece.
Moral Circle Expansion and Inner Rings
The moral imperative to spread enlightenment was first forcefully presented by EA grandfather Peter Singer.
In Famine, Affluence and Morality, Peter Singer argued that a key component of morality is expanding one's moral circle - the larger the circle of people that share your morality the better. We can light the way for others by sharing our knowledge of morality with them.
This poses a problem, however, most succinctly described in Hegel's fidelity model of ideas - people may misunderstand morality and end up with incorrect conclusions like neartermism, climate change and frequentism, and then go on to spread those misunderstandings.
The possibility of a lack of alignment on the truth could be catastrophic. As we know from the study of advanced robotics, anything other than complete alignment with the true values coul...


