

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

May 5, 2024 • 2min
EA - Pandemic apathy by Matthew Rendall
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic apathy, published by Matthew Rendall on May 5, 2024 on The Effective Altruism Forum.
An article in Vox yesterday by Kelsey Piper notes that after suffering through the whole Covid pandemic, policymakers and publics now seem remarkably unconcerned to prevent another one. 'Repeated efforts to get a serious pandemic prevention program through [the US] Congress', she writes, 'have fizzled.' Writing from Britain, I'm not aware of more serious efforts to prevent a repetition over here.
That seems surprising. Both governments and citizens notoriously neglect many catastrophic threats, sometimes because they've never yet materialised (thermonuclear war; misaligned superintelligence), sometimes because they creep up on us slowly (climate change, biodiversity loss), sometimes because it's been a while since the last disaster and memories fade. After an earthquake or a hundred-year flood, more people take out insurance against such disasters; over time, memories fade and take-up declines.
None of these mechanisms plausibly explains apathy toward pandemic risk. If anything, you'd think people would exaggerate the threat, as they did the threat of terrorism after 9/11. It's recent and - in contrast to 9/11 - it's something we all personally experienced.
What's going on? Cass Sunstein argues that 9/11 prompted a stronger response than global heating in part because people could put a face on a specific villain - Osama bin Laden. Sunstein maintains that this heightens not only outrage but also fear. Covid is like global heating rather than al-Qaeda in this respect.
While that could be part of it, my hunch is that at least two other factors are playing a role. First, tracking down and killing terrorists was exciting. Improving ventilation systems or monitoring disease transmission between farmworkers and cows is not. It's a bit like trying to get six-year-olds interested in patent infringements. This prompts the worry that we might fail to address some threats because their solutions are too boring to think about.
Second, maybe Covid is a bit like Brexit. That issue dominated British politics for so long that even those of us who would like to see Britain rejoin the EU are rather loth to reopen it. Similarly, most of us would rather think about anything else than the pandemic. Unfortunately, that's a recipe for repeating it.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 4, 2024 • 11min
EA - S-Risks: Fates Worse Than Extinction by A.G.G. Liu
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: S-Risks: Fates Worse Than Extinction, published by A.G.G. Liu on May 4, 2024 on The Effective Altruism Forum.
Cross-posted from LessWrong
In this Rational Animations video, we discuss s-risks (risks from astronomical suffering), which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and that there are ways to lower that chance.
The script for this video was a winning submission to the Rational Animations Script Writing contest (https://forum.effectivealtruism.org/posts/p8aMnG67pzYWxFj5r/rational-animations-script-writing-contest). The first author of this post, Allen Liu, was the primary script writer with the second author (Writer) and other members of the Rational Animations writing team giving significant feedback. Outside reviewers, including authors of several of the cited sources, provided input as well.
Production credits are at the end of the video. You can find the script of the video below.
Is there anything worse than humanity being driven extinct? When considering the long term future, we often come across the concept of "existential risks" or "x-risks": dangers that could effectively end humanity's future with all its potential. But these are not the worst possible dangers that we could face. Risks of astronomical suffering, or "s-risks", hold even worse outcomes than extinction, such as the creation of an incredibly large number of beings suffering terribly.
Some researchers argue that taking action today to avoid these most extreme dangers may turn out to be crucial for the future of the universe.
Before we dive into s-risks, let's make sure we understand risks in general. As Swedish philosopher Nick Bostrom explains in his 2013 paper "Existential Risk Prevention as Global Priority",[1] one way of categorizing risks is to classify them according to their "scope" and their "severity". A risk's "scope" refers to how large a population the risk affects, while its "severity" refers to how much that population is affected.
To use Bostrom's examples, a car crash may be fatal to the victim themselves and devastating to their friends and family, but not even noticed by most of the world. So the scope of the car crash is small, though its severity is high for those few people. Conversely, some tragedies could have a wide scope but be comparatively less severe.
If a famous painting were destroyed in a fire, it could negatively affect millions or billions of people in the present and future who would have wanted to see that painting in person, but the impact on those people's lives would be much smaller.
In his paper, Bostrom analyzes risks which have both a wide scope and an extreme severity, including so-called "existential risks" or "x-risks". Human extinction would be such a risk: affecting the lives of everyone who would have otherwise existed from that point on and forever preventing all the joy, value and fulfillment they ever could have produced or experienced.
Some other such risks might include humanity's scientific and moral progress permanently stalling or reversing, or us squandering some resource that could have helped us immensely in the future.
S-risk researchers take Bostrom's categories a step further. If x-risks are catastrophic because they affect everyone who would otherwise exist and prevent all their value from being realized, then an even more harmful type of risk would be one that affects more beings than would otherwise exist and that makes their lives worse than non-existence: in other words, a risk with an even broader scope and even higher severity than a typical existential risk, or a fate worse than extinction.
David Althaus and Lukas Gloor, in their article from 2016 titled "Reducing Risks of Astronomical Suffering: A Neglected Priority"...

May 4, 2024 • 2min
LW - Introducing AI-Powered Audiobooks of Rational Fiction Classics by Askwho
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing AI-Powered Audiobooks of Rational Fiction Classics, published by Askwho on May 4, 2024 on LessWrong.
(ElevenLabs reading of this post:)
I'm excited to share a project I've been working on that I think many in the LessWrong community will appreciate - converting some rational fiction into high-quality audiobooks using cutting-edge AI voice technology from ElevenLabs, under the name "Askwho Casts AI".
The keystone of this project is an audiobook version of Planecrash (AKA Project Lawful), the epic glowfic authored by Eliezer Yudkowsky and Lintamande. Given the scope and scale of this work, with its large cast of characters, I'm using ElevenLabs to give each character their own distinct voice. Producing this audiobook version of the story has been a labor of love, and I hope that if anyone has bounced off the text before, this might be a more accessible version.
Alongside Planecrash, I'm also working on audiobook versions of two other rational fiction favorites:
Luminosity by Alicorn (to be followed by its sequel Radiance)
Animorphs: The Reckoning by Duncan Sabien
I'm also putting out a feed where I convert any articles I find interesting, a lot of which are in the Rat Sphere.
My goal with this project is to make some of my personal favorite rational stories more accessible by allowing people to enjoy them in audiobook format. I know how powerful these stories can be, and I want to help bring them to a wider audience and to make them easier for existing fans to re-experience.
I wanted to share this here on LessWrong to connect with others who might find value in these audiobooks. If you're a fan of any of these stories, I'd love to get your thoughts and feedback! And if you know other aspiring rationalists who might enjoy them, please help spread the word.
What other classic works of rational fiction would you love to see converted into AI audiobooks?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 4, 2024 • 34min
LW - Now THIS is forecasting: understanding Epoch's Direct Approach by Elliot Mckernon
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Now THIS is forecasting: understanding Epoch's Direct Approach, published by Elliot Mckernon on May 4, 2024 on LessWrong.
Happy May the 4th from Convergence Analysis! Cross-posted on the EA Forum.
As part of Convergence Analysis's scenario research, we've been looking into how AI organisations, experts, and forecasters make predictions about the future of AI. In February 2023, the AI research institute Epoch published a report in which its authors use neural scaling laws to make quantitative predictions about when AI will reach human-level performance and become transformative. The report has a corresponding blog post, an interactive model, and a Python notebook.
We found this approach really interesting, but also hard to understand intuitively. While trying to follow how the authors derive a forecast from their assumptions, we wrote a breakdown that may be useful to others thinking about AI timelines and forecasting.
In what follows, we set out our interpretation of Epoch's 'Direct Approach' to forecasting the arrival of transformative AI (TAI). We're eager to see how closely our understanding of this matches others'. We've also fiddled with Epoch's interactive model and include some findings on its sensitivity to plausible changes in parameters.
The Epoch team recently attempted to replicate DeepMind's influential Chinchilla scaling law, an important quantitative input to Epoch's forecasting model, but found inconsistencies in DeepMind's presented data. We'll summarise these findings and explore how an improved model might affect Epoch's forecasting results.
This is where the fun begins (the assumptions)
The goal of Epoch's Direct Approach is to quantitatively predict the progress of AI capabilities.
The approach is 'direct' in the sense that it uses observed scaling laws and empirical measurements to directly predict performance improvements as computing power increases. This stands in contrast to indirect techniques, which instead seek to estimate a proxy for performance. A notable example is Ajeya Cotra's Biological Anchors model, which approximates AI performance improvements by appealing to analogies between AIs and human brains.
Both of these approaches are discussed and compared, along with expert surveys and other forecasting models, in Zershaaneh Qureshi's recent post, Timelines to Transformative AI: an investigation.
In their blog post, Epoch summarises the Direct Approach as follows:
The Direct Approach is our name for the idea of forecasting AI timelines by directly extrapolating and interpreting the loss of machine learning models as described by scaling laws.
Let's start with scaling laws. Generally, these are just numerical relationships between two quantities, but in machine learning they specifically refer to the various relationships between a model's size, the amount of data it was trained with, its cost of training, and its performance.
These relationships seem to fit simple mathematical trends, and so we can use them to make predictions: if we make the model twice as big - give it twice as much 'compute' - how much will its performance improve? Does the answer change if we use less training data? And so on.
If we combine these relationships with projections of how much compute AI developers will have access to at certain times in the future, we can build a model which predicts when AI will cross certain performance thresholds. Epoch, like Convergence, is interested in when we'll see the emergence of transformative AI (TAI): AI powerful enough to revolutionise our society at a scale comparable to the agricultural and industrial revolutions.
To understand why Convergence is especially interested in that milestone, see our recent post 'Transformative AI and Scenario Planning for AI X-risk'.
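As a rough illustration of the mechanics, here is a minimal sketch (not Epoch's actual model, which is far more careful): it plugs a Chinchilla-style scaling law into a hypothetical compute-growth projection and reads off when a chosen loss threshold is first crossed. The coefficients are the commonly cited Hoffmann et al. (2022) fits; the compute trajectory and the threshold are made up purely for illustration.

```python
# Minimal sketch (not Epoch's model): a Chinchilla-style scaling law,
# L(N, D) = E + A/N^alpha + B/D^beta, combined with a hypothetical compute
# projection, to see when a chosen loss threshold is first crossed.
# Coefficients are the commonly cited Hoffmann et al. (2022) fits; the compute
# trajectory and threshold below are illustrative assumptions only.
import math

E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(params: float, tokens: float) -> float:
    """Predicted training loss for a model with `params` parameters trained on `tokens` tokens."""
    return E + A / params**ALPHA + B / tokens**BETA

def compute_optimal_loss(flops: float) -> float:
    """Loss at a given budget, assuming FLOPs ~ 6*N*D and (as a rough
    simplification) scaling parameters and tokens together, N ~ D."""
    n = math.sqrt(flops / 6)
    return loss(n, flops / (6 * n))

# Hypothetical projection: training compute doubles every year from 1e25 FLOP
# in 2024; ask when the predicted loss first dips below an arbitrary threshold.
THRESHOLD = 1.8
for year in range(2024, 2060):
    flops = 1e25 * 2 ** (year - 2024)
    if compute_optimal_loss(flops) < THRESHOLD:
        print(f"Threshold first crossed around {year}")
        break
```

Epoch's real model treats these inputs with far more care than a point-estimate sketch like this, but it conveys the basic shape of the calculation.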
Specifically, Epoch uses an empirically measured scaling ...

May 4, 2024 • 13min
EA - Animal Welfare is now enshrined in the Belgian Constitution by Bob Jacobs
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Welfare is now enshrined in the Belgian Constitution, published by Bob Jacobs on May 4, 2024 on The Effective Altruism Forum.
A while back, I wrote a quicktake about how the Belgian Senate voted to enshrine animal welfare in the Constitution.
It's been a journey. I work for GAIA, a Belgian animal advocacy group that for years has tried to get animal welfare added to the constitution. Today we were present as a supermajority of the senate came out in favor of our proposed constitutional amendment. [...]
It's a very good day for Belgian animals but I do want to note that:
1. This does not mean an effective shutdown of the meat industry, merely that all future pro-animal welfare laws and lawsuits will have an easier time. And,
2. It still needs to pass the Chamber of Representatives.
If there's interest, I will make a full post about it once it passes the Chamber.
It is now my great pleasure to announce to you that a supermajority of the Chamber also voted in favor of enshrining animal welfare in the Constitution. Article 7a of the Belgian Constitution now reads as follows:
In the exercise of their respective powers, the Federal State, the Communities and the Regions shall ensure the protection and welfare of animals as sentient beings.
This inclusion of animals as sentient beings is notable as it represents the fourth major revision of the Constitution in favor of individual rights. Previous revisions have addressed universal suffrage, gender equality, and the rights of people with disabilities.
TL;DR: The significance of this inclusion extends beyond symbolic value. It will have tangible effects on animal protection in Belgium:
1. Fundamental Value: Animal welfare is now recognized as a fundamental value of Belgian society. In cases where a constitutional right conflicts with animal protection, the latter will hold greater legal weight and must be seriously considered. For example, this recognition may facilitate the implementation of a country-wide ban on slaughter without anesthesia, as both freedom of religion and animal welfare are now constitutionally protected.
2. Legislative Guidance: The inclusion of animal welfare will encourage legislative and executive bodies to prioritize laws aimed at improving animal protection while rejecting those that may undermine it. Regressive measures serving certain interests (e.g. purely financial interests) will face increased scrutiny as they are weighed against the constitutional protection of animal welfare.
3. Legal Precedent: In legal cases involving animals, whether criminal or civil, judges will be influenced by the values enshrined in the Constitution. This awareness may lead to greater consideration of animal interests in judicial decisions.
Legal importance
In the hierarchy of Belgian legal norms, the Constitution is at the very top. This means that lower regulations (the laws of the federal and regional parliament(s), the regulations of local governments and executive orders) must comply with the Constitution.
If different rights must be weighed against one another, the one that is enshrined in the Constitution is deemed more important. Previously, religious freedom was in the Constitution and animal welfare was not, meaning the former carried more weight. Article 19 of the Constitution merely states that the exercise of worship is free unless crimes (criminal violations of law) are committed in the course of that exercise.
There have been many attempts to ban unanesthetized slaughter; in some regions they were successful, in others not, but in all cases they led to fierce legal debate and lengthy proceedings. Enshrining animal welfare in the Constitution will finally ensure a full victory for the animals.
(The exercise of other fundamental rights besides religious freedom can also have a negative impact on ani...

May 4, 2024 • 9min
LW - My hour of memoryless lucidity by Eric Neyman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My hour of memoryless lucidity, published by Eric Neyman on May 4, 2024 on LessWrong.
Yesterday, I had a coronectomy: the top halves of my bottom wisdom teeth were surgically removed. It was my first time being sedated, and I didn't know what to expect. While I was unconscious during the surgery, the hour after surgery turned out to be a fascinating experience, because I was completely lucid but had almost zero short-term memory.
My girlfriend, who had kindly agreed to accompany me to the surgery, was with me during that hour. And so - apparently against the advice of the nurses - I spent that whole hour talking to her and asking her questions.
The biggest reason I find my experience fascinating is that it has mostly answered a question that I've had about myself for quite a long time: how deterministic am I?
In computer science, we say that an algorithm is deterministic if it's not random: if it always behaves the same way when it's in the same state. In this case, my "state" was my environment (lying drugged on a bed with my IV in and my girlfriend sitting next to me) plus the contents of my memory.
Normally, I don't ask the same question over and over again because the contents of my memory change when I ask the question the first time: after I get an answer, the answer is in my memory, so I don't need to ask the question again. But for that hour, the information I processed came in one ear and out the other in a matter of minutes.
And so it was a natural test of whether my memory is the only thing keeping me from saying the same things on loop forever, or whether I'm more random/spontaneous than that.[1]
And as it turns out, I'm pretty deterministic! According to my girlfriend, I spent a lot of that hour cycling between the same few questions on loop: "How did the surgery go?" (it went well), "Did they just do a coronectomy or did they take out my whole teeth?" (just a coronectomy), "Is my IV still in?" (yes), "how long was the surgery?" (an hour and a half), "what time is it?", and "how long have you been here?".
(The length of that cycle is also interesting, because it gives an estimate of how long I was able to retain memories for - apparently about two minutes.)
(Toward the end of that hour, I remember asking, "I know I've already asked this twice, but did they just do a coronectomy?" The answer: "Actually, you've asked that much more than twice, and yes, it was just a coronectomy.")
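In code, that framing looks something like the toy sketch below (illustrative only, not from the original post): output as a pure function of state, with the state never updating because answers fade from memory within minutes.

```python
# Toy sketch (illustrative only): a deterministic agent's output depends only
# on its state = environment + memory. If answers never stick in memory, the
# state never changes, so the same question comes out every cycle.

def ask_question(environment: str, memory: list[str]) -> str:
    """Deterministic: the same (environment, memory) always yields the same question."""
    if "the surgery went well" not in memory:
        return "How did the surgery go?"
    return "What time is it?"

environment = "lying on a bed, IV in, girlfriend nearby"
memory: list[str] = []

for _ in range(3):
    print(ask_question(environment, memory))
    # Normally the answer would be appended to memory here, changing the
    # state; with roughly two-minute retention, it effectively never is.
```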
Those weren't my only questions, though. About five minutes into that hour, I apparently asked my girlfriend for two 2-digit numbers to multiply, to check how cognitively impaired I was. She gave me 27*69, and said that I had no trouble doing the multiplication in the obvious way (27*7*10 - 27), except that I kept having to ask her to remind me what the numbers were.
Interestingly, I asked her for two 2-digit numbers again toward the end of that hour, having no memory that I had already done this. She told me that she had already given me two numbers, and asked whether I wanted the same numbers again. I said yes (so I could compare my performance). The second time, I was able to do the multiplication pretty quickly without needing to ask for the numbers to be repeated.
Also, about 20 minutes into the hour, I asked my girlfriend to give me the letters to that day's New York Times Spelling Bee, which is a puzzle where you're given seven letters and try to form words using the letters. (The letters were W, A, M, O, R, T, and Y.) I found the pangram - the word that uses every letter at least once[2] - in about 30 seconds, which is about average for me, except that yesterday I was holding the letters in my head instead of looking at them on a screen.
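For the curious, the pangram condition is easy to state in code. A quick illustrative sketch (the real puzzle also requires a designated centre letter in every word, which isn't modelled here since the post doesn't say which letter it was):

```python
# Illustrative sketch of the Spelling Bee rules described above: words may only
# use the day's seven letters, and a pangram must use all of them at least once.
# (The real puzzle's centre-letter requirement is not modelled here.)
LETTERS = set("WAMORTY")

def is_valid(word: str) -> bool:
    return len(word) >= 4 and set(word.upper()) <= LETTERS

def is_pangram(word: str) -> bool:
    return is_valid(word) and set(word.upper()) >= LETTERS

print(is_pangram("MOTORWAY"))  # True: a pangram for these seven letters
```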
I also got most of the way to the "genius" rank - a little better than I normally do - and my girlfriend got us the rest of the way ther...

May 4, 2024 • 2min
LW - Apply to ESPR & PAIR, Rationality and AI Camps for Ages 16-21 by Anna Gajdova
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to ESPR & PAIR, Rationality and AI Camps for Ages 16-21, published by Anna Gajdova on May 4, 2024 on LessWrong.
TLDR - Apply now to ESPR and PAIR. ESPR welcomes students between 16-19 years. PAIR is for students between 16-21 years.
The FABRIC team is running two immersive summer workshops for mathematically talented students this year.
The Program on AI and Reasoning (PAIR) is for students with an interest in artificial intelligence, cognition, and minds in general.
We will study how current AI systems work, mathematical theories about human minds, and how the two relate. Alumni of previous PAIR described the content as a blend of AI, mathematics and introspection, but also highlighted that a large part of the experience is informal conversations or small group activities. See the curriculum details.
For students who are 16-21 years old
July 29th - August 8th in Somerset, United Kingdom
The European Summer Program on Rationality (ESPR) is for students with a desire to understand themselves and the world, and an interest in applied rationality.
The curriculum covers a wide range of topics, from game theory, cryptography, and mathematical logic, to AI, styles of communication, and cognitive science. The goal of the program is to help students hone rigorous, quantitative skills as they acquire a toolbox of useful concepts and practical techniques applicable in all walks of life. See the content details.
For students who are 16-19 years old
August 15th - August 25th in Oxford, United Kingdom
We encourage all LessWrong readers interested in these topics who are within the respective age windows to apply!
Both programs are free for accepted students, and travel scholarships are available. Apply to both camps here. The application deadline is Sunday, May 19th.
If you know people within the age window who might enjoy these camps, please send them the link to the FABRIC website, which has an overview of all our camps.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 3, 2024 • 6min
LW - "AI Safety for Fleshy Humans" an AI Safety explainer by Nicky Case by habryka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "AI Safety for Fleshy Humans" an AI Safety explainer by Nicky Case, published by habryka on May 3, 2024 on LessWrong.
Nicky Case, of "The Evolution of Trust" and "We Become What We Behold" fame (two quite popular online explainers/mini-games) has written an intro explainer to AI Safety! It looks pretty good to me, though just the first part is out, which isn't super in-depth. I particularly appreciate Nicky clearly thinking about the topic themselves, and I kind of like some of their "logic vs. intuition" frame, even though I think that aspect is less core to my model of how things will go.
It's clear that a lot of love has gone into this, and I think having more intro-level explainers for AI-risk stuff is quite valuable.
===
The AI debate is actually 100 debates in a trenchcoat.
Will artificial intelligence (AI) help us cure all disease, and build a post-scarcity world full of flourishing lives? Or will AI help tyrants surveil and manipulate us further? Are the main risks of AI from accidents, abuse by bad actors, or a rogue AI itself becoming a bad actor? Is this all just hype? Why can AI imitate any artist's style in a minute, yet get confused drawing more than 3 objects? Why is it hard to make AI robustly serve humane values, or robustly serve any goal? What if an AI learns to be more humane than us? What if an AI learns humanity's inhumanity, our prejudices and cruelty? Are we headed for utopia, dystopia, extinction, a fate worse than extinction, or - the most shocking outcome of all - nothing changes? Also: will an AI take my job?
...and many more questions.
Alas, to understand AI with nuance, we must understand lots of technical detail... but that detail is scattered across hundreds of articles, buried six-feet-deep in jargon.
So, I present to you:
This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety* - explained in a friendly, accessible, and slightly opinionated way!
(* Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is no consensus on what these phrases do & don't mean, so I'm just using "AI Safety" as a catch-all.)
This series will also have comics starring a Robot Catboy Maid. Like so:
[...]
The Core Ideas of AI & AI Safety
In my opinion, the main problems in AI and AI Safety come down to two core conflicts:
Note: What "Logic" and "Intuition" are will be explained more rigorously in Part One. For now: Logic is step-by-step cognition, like solving math problems. Intuition is all-at-once recognition, like seeing if a picture is of a cat. "Intuition and Logic" roughly map onto "System 1 and 2" from cognitive science.[1]1[2]2 ( hover over these footnotes! they expand!)
As you can tell by the "scare" "quotes" on "versus", these divisions ain't really so divided after all...
Here's how these conflicts repeat over this 3-part series:
Part 1: The past, present, and possible futures
Skipping over a lot of detail, the history of AI is a tale of Logic vs Intuition:
Before 2000: AI was all logic, no intuition.
This was why, in 1997, AI could beat the world champion at chess... yet no AIs could reliably recognize cats in pictures.[3]
(Safety concern: Without intuition, AI can't understand common sense or humane values. Thus, AI might achieve goals in logically-correct but undesirable ways.)
After 2000: AI could do "intuition", but had very poor logic.
This is why generative AIs (as of current writing, May 2024) can dream up whole landscapes in any artist's style... yet get confused drawing more than 3 objects. (Click this text! It also expands!)
(Safety concern: Without logic, we can't verify what's happening in an AI's "intuition". That intuition could be biased, subtly-but-dangerously wrong, or fail bizarrely in new scenarios.)
Current Day: We still don't know how to unify logic & i...

May 3, 2024 • 14min
EA - My Lament to EA by kta
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Lament to EA, published by kta on May 3, 2024 on The Effective Altruism Forum.
I am dealing with repetitive strain injury and don't foresee being able to really respond to any comments (I'm surprised with myself that I wrote all of this without twitching forearms lol!)
I'm a little hesitant to post this, but I thought I should be vulnerable. Honestly, I'm relieved that I finally get to share my voice. I know some people may want me to discuss this privately - but that might not be helpful to me, as I know (from personal and indirect experience) that there have been attempts to silence some community issues in EA by the very people who were meant to help.
And to be honest, the fear of criticizing EA is something I have disliked about EA - I've been behind the scenes enough to know that despite being well-intentioned, criticizing EA (especially openly) can privately get you excluded from opportunities and circles, often even silently. This is an internal battle I've had with EA for a while (years). Still, I thought by sharing my experiences I can add to the ongoing discourse in the community.
Appreciation and disillusionment
I want to start by saying I have many lovely friends and colleagues in the movement whom I deeply respect. You know who you are. :) My thoughts here are not generalized to the whole movement itself - just some problems I feel most have failed to recognize enough, stemming from specific experiences. I think more effort should be made to address these issues, or at least to consider them as the movement is built.
I joined EA in university (five years ago), thrilled to see an actual movement work on problems I thought were important in a way that I thought was important. I dove in, thinking I finally found the group of people I so wanted to find since grade school - a bunch of cool, intelligent, kind, altruistic nerds and geeks! And for a while, it was good. I met my ex-partner there (which was good for a while) and some good friends.
I'm happy thinking I made an impact over the past few years and learned so much about myself and how to be more mature and intelligent. I also have a lot of gratitude for this movement for teaching me so much and for shaping who I am today.
However, throughout the years, I became more disillusioned and saddened due to systemic issues within the movement - how it was structured in a way that allowed for a lot of negative things to happen, despite how much people really brainstormed and tried for it not to. I've experienced many degrading things I wish on no one directly because of EA. (Some of these experiences I mainly wish to keep private out of respect for some.)
Despite my efforts to enter the community and work hard, I burned out, physically, professionally, and personally. And it's taken such a toll on me that for a while I did not fully recognize who I was anymore. I definitely think a lot of me was consumed by hurt and negativity, and I'm working on that.
I've actually distanced myself from my local group for the longest time because I felt a select few of them (not all!) were toxic and mean - I mainly stayed there to protect and support someone, but unfortunately was betrayed multiple times by them, and I wish I had left earlier. (I send them love and light now.)
So, while I've had many great, eye-opening experiences and have made many amazing friends through EA, I don't think my positive feelings are enough anymore for me to fully stay in it for a while. Instead, I will focus on my specific cause area and research field. (I acknowledge it might tie into EA sometimes and I accept that). This has not been an easy realization, nor one reached hastily, but after considerable reflection on the negative impacts these issues have had on my well-being.
The following are non-exhaustive.
Specific challenges
When it has been uncomfort...

May 3, 2024 • 48min
LW - Key takeaways from our EA and alignment research surveys by Cameron Berg
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Key takeaways from our EA and alignment research surveys, published by Cameron Berg on May 3, 2024 on LessWrong.
Many thanks to Spencer Greenberg, Lucius Caviola, Josh Lewis, John Bargh, Ben Pace, Diogo de Lucena, and Philip Gubbins for their valuable ideas and feedback at each stage of this project - as well as the ~375 EAs + alignment researchers who provided the data that made this project possible.
Background
Last month, AE Studio launched two surveys: one for alignment researchers, and another for the broader EA community.
We got some surprisingly interesting results, and we're excited to share them here.
We set out to better explore and compare various population-level dynamics within and across both groups. We examined everything from demographics and personality traits to community views on specific EA/alignment-related topics. We took on this project because it seemed to be largely unexplored and rife with potentially-very-high-value insights. In this post, we'll present what we think are the most important findings from this project.
Meanwhile, we're also sharing and publicly releasing a tool we built for analyzing both datasets. The tool has some handy features, including customizable filtering of the datasets, distribution comparisons within and across the datasets, automatic classification/regression experiments, LLM-powered custom queries, and more. We're excited for the wider community to use the tool to explore these questions further in whatever manner they desire.
There are many open questions we haven't tackled here related to the current psychological and intellectual make-up of both communities that we hope others will leverage the dataset to explore further.
(Note: if you want to see all results, navigate to the tool, select the analysis type of interest, and click 'Select All.' If you have additional questions not covered by the existing analyses, the GPT-4 integration at the bottom of the page should ideally help answer them. The code running the tool and the raw anonymized data are both also publicly available.)
We incentivized participation by offering to donate $40 per eligible[1] respondent - strong participation in both surveys enabled us to donate over $10,000 to AI safety orgs as well as a number of different high-impact organizations (see here[2] for the exact breakdown across the two surveys). Thanks again to all of those who participated in both surveys!
Three miscellaneous points on the goals and structure of this post before diving in:
1. Our goal here is to share the most impactful takeaways rather than simply regurgitating every conceivable result. This is largely why we are also releasing the data analysis tool, where anyone interested can explore the dataset and the results at whatever level of detail they please.
2. This post collectively represents what we at AE found to be the most relevant and interesting findings from these experiments. We sorted the TL;DR below by perceived importance of findings. We are personally excited about pursuing neglected approaches to alignment, but we have attempted to be as deliberate as possible throughout this write-up in striking the balance between presenting the results as straightforwardly as possible and sharing our views about implications of certain results where we thought it was appropriate.
3. This project was descriptive and exploratory in nature. Our goal was to cast a wide psychometric net in order to get a broad sense of the psychological and intellectual make-up of both communities. We used standard frequentist statistical analyses to probe for significance where appropriate, but we definitely still think it is important for ourselves and others to perform follow-up experiments to those presented here with a more tightly controlled scope to replicate and further sharpen t...


