

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

May 30, 2024 • 4min
LW - Value Claims (In Particular) Are Usually Bullshit by johnswentworth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Value Claims (In Particular) Are Usually Bullshit, published by johnswentworth on May 30, 2024 on LessWrong.
Epistemic status: mental model which I have found picks out bullshit surprisingly well.
Idea 1: Parasitic memes tend to be value-claims, as opposed to belief-claims
By "parasitic memes" I mean memes whose main function is to copy themselves - as opposed to, say, actually provide value to a human in some way (so that the human then passes it on). Scott's old
Toxoplasma of Rage post is a central example; "share to support X" is another.
Insofar as a meme is centered on a factual claim, the claim gets entangled with lots of other facts about the world; it's the phenomenon of Entangled Truths, Contagious Lies. So unless the meme tries to knock out a person's entire epistemic foundation, there's a strong feedback signal pushing against it if it makes a false factual claim. (Of course some meme complexes do try to knock out a person's entire epistemic foundation, but those tend to be "big" memes like religions or ideologies, not the bulk of day-to-day memes.)
But the Entangled Truths phenomenon is epistemic; it does not apply nearly so strongly to values. If a meme claims that, say, it is especially virtuous to eat yellow cherries from Switzerland... well, that claim is not so easily falsified by a web of connected truths.
Furthermore, value claims always come with a natural memetic driver: if X is highly virtuous/valuable/healthy/good/etc, and this fact is not already widely known, then it's highly virtuous and prosocial of me to tell other people how virtuous/valuable/healthy/good X is, and vice-versa if X is highly dangerous/bad/unhealthy/evil/etc.
Idea 2: Transposons are ~half of human DNA
There are sequences of DNA whose sole function is to copy and reinsert themselves back into the genome. They're called transposons. If you're like me, when you first hear about transposons, you're like "huh that's pretty cool", but you don't expect it to be, like, a particularly common or central phenomenon of biology.
Well, it turns out that something like half of the human genome consists of dead transposons. Kinda makes sense, if you think about it.
Now suppose we carry that fact over, by analogy, to memes. What does that imply?
Put Those Two Together...
… and the natural guess is that value claims in particular are mostly parasitic memes. They survive not by promoting our terminal values, but by people thinking it's good and prosocial to tell others about the goodness/badness of X.
I personally came to this model from the other direction. I've read a lot of papers on aging. Whenever I mention this fact in a room with more than ~5 people, somebody inevitably asks "so what diet/exercise/supplements/lifestyle changes should I make to stay healthier?". In other words, they're asking for value-claims. And I noticed that the papers, blog posts, commenters, etc, who were most full of shit were ~always exactly the ones which answered that question.
To a first approximation, if you want true information about the science of aging, far and away the best thing you can do is specifically look for sources which do not make claims about diet or exercise or supplements or other lifestyle changes being good/bad for you. Look for papers which just investigate particular gears, like "does FoxO mediate the chronic inflammation of arthritis?" or "what's the distribution of mutations in mitochondria of senescent cells?".
… and when I tried to put a name on the cluster of crap claims which weren't investigating gears, I eventually landed on the model above: value claims in general are dominated by memetic parasites.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 30, 2024 • 5min
LW - The Pearly Gates by lsusr
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Pearly Gates, published by lsusr on May 30, 2024 on LessWrong.
St. Peter stood at a podium before the Gates of Heaven. The gates were gold, built on a foundation of clouds. A line of people curved and wound across the clouds, beyond what would be a horizon if this plane of existence were positively curved. Instead, they just trailed away into Infinity, away from the golden wall securing Heaven.
The worthy would enter eternal paradise. The unforgiven would burn in Hell for just as long. Infinite judgment for finite lives.
"Next please," said St. Peter.
The foremost man stepped forward. He had freckles and brilliant orange hair.
"Tell me about yourself," said St. Peter.
"Me name's Seamus O'Malley, sure, and I was - or still am, begorrah - an Irish Catholic," said Seamus.
"How did you die?" said St. Peter.
"Jaysus, I went and blew meself to bits tryin' to cobble together an auld explosive to give those English occupiers a proper boot, so I did," said Seamus.
"You were a good Catholic," said St. Peter, "You're in."
Seamus entered the Pearly Gates with his head held high.
"Next please," said St. Peter.
A Floridian woman stepped forward.
"My name is Megan Roberts. I worked as a nurse. I couldn't bear to tell people their family members were going to die. I poisoned them so they would die when a less empathetic nurse was on watch," said the nurse.
"That's a grave sin," said St. Peter.
"But it's okay because I'm a Christian. Protestant," said Megan.
"Did you go to church?" said St. Peter.
"Mostly just Christmas and Easter," said Megan, "But moments before I died, I asked Jesus for forgiveness. That means my sins are wiped away, right?"
"You're in," said St. Peter.
"Next please," said St. Peter.
A skinny woman stepped forward.
"My name is Amanda Miller. I'm an Atheist. I've never attended church or prayed to God. I was dead certain there was no God until I found myself in the queue on these clouds. Even right now, I'm skeptical this isn't a hallucination," said Amanda.
"Were you a good person?" asked St. Peter.
"Eh," said Amanda, "I donated a paltry 5% of my income to efficient public health measures, resulting in approximately 1,000 QALYs."
"As punishment for your sins, I condemn you to an eternity of Christians telling you 'I told you so'," said St Peter, "You're in."
"Next please," said St. Peter.
A bald man with a flat face stepped forward.
"My name is Oskar Schindler. I was a Nazi," said Oskar.
"Metaphorical Nazi or Neo-Nazi?" asked St Peter.
"I am from Hildesheim, Germany. I was a card-carrying member of the Nazi Party from 1935 until 1945," said Oskar.
"Were you complicit in the war or just a passive bystander?" asked St. Peter.
"I was a war profiteer. I ran a factory that employed Jewish slave labor to manufacture munitions in Occupied Poland," said Oskar.
"Why would you do such a thing?" asked St. Peter.
"The Holocaust," said Oskar, "Nobody deserves that. Every Jew I bought was one fewer Jew in the death camps. Overall, I estimate I saved 1,200 Jews from the gas chambers."
St. Peter waited, as if to say go on.
"I hired as many workers as I could. I made up excuses to hire extra workers. I bent and broke every rule that got in my way. When that didn't work, I bought black market goods to bribe government officials. I wish I could have done more, but we do what we can with the limited power we have," said Oskar, "Do you understand?"
St. Peter glanced furtively at the angels guarding the Gates of Heaven. He leaned forward, stared daggers into Oskar's eyes and whispered, "I think I understand you perfectly."
"Next please," said St. Peter.
A skinny Indian man stepped forward.
"My name is Siddhartha Gautama. I was a prince. I was born into a life of luxury. I abandoned my duties to my kingdom and to my people," said Siddhartha.
St. Peter read from his scroll. "It says ...

May 30, 2024 • 1h 17min
AF - AXRP Episode 32 - Understanding Agency with Jan Kulveit by DanielFilan
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 32 - Understanding Agency with Jan Kulveit, published by DanielFilan on May 30, 2024 on The AI Alignment Forum.
YouTube link
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group.
Topics we discuss:
What is active inference?
Preferences in active inference
Action vs perception in active inference
Feedback loops
Active inference vs LLMs
Hierarchical agency
The Alignment of Complex Systems group
Daniel Filan: Hello, everybody. This episode, I'll be speaking with Jan Kulveit. Jan is the co-founder and principal investigator of the Alignment of Complex Systems Research Group, where he works on mathematically understanding complex systems composed of both humans and AIs. Previously, he was a research fellow at the Future of Humanity Institute focused on macrostrategy, AI alignment, and existential risk.
For links to what we're discussing you can check the description of this episode and you can read the transcript at axrp.net. Okay. Well Jan, welcome to the podcast.
Jan Kulveit: Yeah, thanks for the invitation.
What is active inference?
Daniel Filan: I'd like to start off with this paper that you've published in December of this last year. It was called "Predictive Minds: Large Language Models as Atypical Active Inference Agents." Can you tell me roughly what was that paper about? What's it doing?
Jan Kulveit: The basic idea is: there's active inference as a field originating in neuroscience, started by people like Karl Friston, and it's very ambitious. The active inference folks claim roughly that they have a super general theory of agency in living systems and so on. And there are LLMs, which are not living systems, but they're pretty smart. So we're looking into how close the models actually are.
Also, it was in part motivated by… If you look at, for example, the 'simulators' series or frame by Janus and these people on sites like the Alignment Forum, there's this idea that LLMs are something like simulators - or there is another frame on this, that LLMs are predictive systems.
And I think this terminology… a lot of what's going on there is basically reinventing stuff which was previously described in active inference or predictive processing, which is another term for minds which are broadly trying to predict their sensory inputs.
And it seems like there is a lot of similarity; actually, a lot of what was invented in the alignment community seems to be basically the same concepts, just given different names. So noticing the similarity, the actual question is: to what extent are current LLMs similar, and in what ways are they different? And the main insight of the paper is… the main difference is: current LLMs lack the fast feedback loop between action and perception.
So if I now change the position of my hand, what I see immediately changes. You can think about [it with] this metaphor, or if you look at how the systems are similar: you could look at base model training of LLMs as some sort of strange edge case of an active inference or predictive processing system, one which is just receiving sensory inputs, where the sensory inputs are tokens, but which is not acting, not changing the data it receives.
And then the model is trained, and it maybe changes a bit in instruct fine-tuning, but ultimately when the model is deployed, we claim that you can think about the interactions of the model with users as actions, because what the model outputs ultimately can change stuff in the world. People will post it on the internet or take actions based on what the LLM is saying.
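[Editor's sketch, not from the paper or the interview: a minimal toy contrast between a passive predictor, which only receives a stream of observations (akin to base-model training on tokens), and an agent whose actions feed back into what it observes next. All names here (World, passive_prediction, active_loop) are hypothetical placeholders, not anything defined by the authors.]
```python
# Illustrative sketch only: contrasts a passive predictor (observations arrive
# regardless of its outputs) with an agent whose actions change what it sees
# next (the fast action-perception loop discussed above). Names are hypothetical.
import random

class World:
    """Toy environment: a drifting number the agent can nudge."""
    def __init__(self):
        self.state = 0.0

    def observe(self) -> float:
        self.state += random.gauss(0, 0.1)   # external drift
        return self.state

    def apply(self, action: float) -> None:
        self.state += action                 # the agent's action changes the world

def passive_prediction(world: World, steps: int = 5) -> float:
    """Base-model-style regime: inputs stream in; outputs touch nothing."""
    prediction = 0.0
    for _ in range(steps):
        obs = world.observe()
        prediction = 0.9 * prediction + 0.1 * obs   # update beliefs only
    return prediction

def active_loop(world: World, target: float = 1.0, steps: int = 5) -> float:
    """Active-inference-style regime: each action immediately alters what is seen next."""
    for _ in range(steps):
        obs = world.observe()
        action = 0.5 * (target - obs)        # act to pull observations toward the target
        world.apply(action)                  # feedback from system to world
    return world.observe()

print(passive_prediction(World()))
print(active_loop(World()))
```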
So the arrow from the system to the world, changing the world, exists, but th...

May 30, 2024 • 22min
LW - Thoughts on SB-1047 by ryan greenblatt
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on SB-1047, published by ryan greenblatt on May 30, 2024 on LessWrong.
In this post, I'll discuss my current understanding of SB-1047, what I think should change about the bill, and what I think about the bill overall (with and without my suggested changes).
Overall, SB-1047 seems pretty good and reasonable. However, I think my suggested changes could substantially improve the bill and there are some key unknowns about how implementation of the bill will go in practice.
The opinions expressed in this post are my own and do not express the views or opinions of my employer.
[This post is the product of about 4 hours of work of reading the bill, writing this post, and editing it. So, I might be missing some stuff.]
[Thanks to various people for commenting.]
My current understanding
(My understanding is based on a combination of reading the bill, reading various summaries of the bill, and getting pushback from commenters.)
The bill places requirements on "covered models" while not putting requirements on other (noncovered) models and allowing for limited duty exemptions even if the model is covered. The intention of the bill is to just place requirements on models which have the potential to cause massive harm (in the absence of sufficient safeguards). However, for various reasons, targeting this precisely to just put requirements on models which could cause massive harm is non-trivial.
(The bill refers to "models which could cause massive harm" as "models with a hazardous capability".)
In my opinion, the bar for causing massive harm defined by the bill is somewhat too low, though it doesn't seem like a terrible choice to me. I'll discuss this more later.
The bill uses two mechanisms to try and improve targeting:
1. Flop threshold: If a model is trained with <10^26 flop and is not expected to match the performance of >10^26 flop models as of 2024, it is not covered. (The "performance as of 2024" criterion is intended to let the bill handle algorithmic improvements.)
2. Limited duty exemption: A developer can claim a limited duty exemption if they determine that a model does not have the capability to cause massive harm. If the developer does this, they must submit paperwork to the Frontier Model Division (a division created by the bill) explaining their reasoning.
From my understanding, if either the model isn't covered (1) or you claim a limited duty exemption (2), the bill doesn't impose any requirements or obligations.
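As a rough illustration, here is a sketch of how I read that coverage logic. This paraphrases the post's summary, not the bill's actual legal text, and the function and parameter names are hypothetical.
```python
# Sketch of the coverage logic as described above (a paraphrase of the post,
# not the bill's legal text; all names are hypothetical).

FLOP_THRESHOLD = 1e26

def bill_imposes_requirements(training_flop: float,
                              expected_to_match_2024_threshold_performance: bool,
                              developer_determines_no_hazardous_capability: bool) -> bool:
    # 1. Flop threshold: below 10^26 flop and not expected to match
    #    >10^26-flop (2024) performance -> not a covered model.
    covered = (training_flop >= FLOP_THRESHOLD
               or expected_to_match_2024_threshold_performance)
    if not covered:
        return False

    # 2. Limited duty exemption: the developer determines the model can't cause
    #    massive harm and files paperwork with the Frontier Model Division.
    if developer_determines_no_hazardous_capability:
        return False  # exemption claimed; no further requirements

    # Covered and no exemption: the bill's requirements apply.
    return True

# Example: a model above the threshold, with and without a claimed exemption.
print(bill_imposes_requirements(2e26, True, True))   # False
print(bill_imposes_requirements(2e26, True, False))  # True
```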
I think limited duty exemptions are likely to be doing a lot of work here: it seems likely to me that the next generation of models immediately above this FLOP threshold (e.g. GPT-5) won't actually have hazardous capabilities, so the bill ideally shouldn't cover them. The hope with the limited duty exemption is to avoid covering these models.
So you shouldn't think of limited duty exemptions as some sort of unimportant edge case: models with limited duty exemptions likely won't be that "limited" in how often they occur in practice!
In this section, I'm focusing on my read on what seems to be the intended enforcement of the bill. It's of course possible that the actual enforcement will differ substantially!
The core dynamics of the bill are best exhibited with a flowchart.
(Note: I edited the flowchart to separate the noncovered node from the exemption node.)
Here's this explained in more detail:
1. So you want to train a non-derivative model and you haven't yet started training. The bill imposes various requirements on the training of covered models that don't have limited duty exemptions, so we need to determine whether this model will be covered.
2. Is it >10^26 flop or could you reasonably expect it to match >10^26 flop performance (as of models in 2024)? If so, it's covered.
3. If it's covered, you might be able to claim a limited ...

May 29, 2024 • 15min
LW - Finding Backward Chaining Circuits in Transformers Trained on Tree Search by abhayesian
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Finding Backward Chaining Circuits in Transformers Trained on Tree Search, published by abhayesian on May 29, 2024 on LessWrong.
This post is a summary of our paper A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task (ACL 2024). While we wrote and released the paper a couple of months ago, we have done a bad job promoting it so far. As a result, we're writing up a summary of our results here to reinvigorate interest in our work and hopefully find some collaborators for follow-up projects.
If you're interested in the results we describe in this post, please see the paper for more details.
TL;DR - We train transformer models to find the path from the root of a tree to a given leaf (given an edge list of the tree). We use standard techniques from mechanistic interpretability to figure out how our model performs this task. We found circuits that involve backward chaining - the first layer attends to the goal and each successive layer attends to the parent of the output of the previous layer, thus allowing the model to climb up the tree one node at a time.
However, this algorithm would only find the correct path in graphs where the distance from the starting node to the goal is less than or equal to the number of layers in the model. To solve harder problem instances, the model performs a similar backward chaining procedure at insignificant tokens (which we call register tokens). Random nodes are chosen to serve as subgoals and the model backward chains from all of them in parallel.
In the final layers of the model, information from the register tokens is merged into the model's main backward chaining procedure, allowing it to deduce the correct path to the goal when the distance is greater than the number of layers. In summary, we find a parallelized backward chaining algorithm in our models that allows them to efficiently navigate towards goals in a tree graph.
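For intuition, here is a minimal reference implementation of the backward chaining idea itself (my own sketch, not the paper's code): start at the goal, repeatedly look up parents until the root is reached, and reverse to get the root-to-goal path. In the trained model, each parent lookup corresponds to one layer of attention, which is why the plain procedure only reaches goals within depth equal to the number of layers, and why register tokens are needed as subgoals beyond that.
```python
# Minimal sketch (not the paper's code) of backward chaining on a tree:
# walk parent pointers from the goal up to the root, then reverse.

def backward_chain(edges: list[tuple[int, int]], root: int, goal: int) -> list[int]:
    """edges is a list of (parent, child) pairs defining the tree."""
    parent = {child: par for par, child in edges}
    path = [goal]
    while path[-1] != root:
        path.append(parent[path[-1]])  # one "layer" of backward chaining per step
    return list(reversed(path))        # root -> ... -> goal

# Example tree:  0 -> 1 -> 3,  0 -> 2 -> 4
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]
print(backward_chain(edges, root=0, goal=4))  # [0, 2, 4]
```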
Motivation & The Task
Many people here have conjectured about what kinds of mechanisms inside future superhuman systems might allow them to perform a wide range of tasks efficiently. John Wentworth coined the term general-purpose search to group several hypothesized mechanisms that share a couple of core properties. Others have proposed projects around how to search for search inside neural networks.
While general-purpose search is still relatively vague and undefined, we can study how language models perform simpler and better-understood versions of search. Graph search, the task of finding the shortest path between two nodes, has been a cornerstone of algorithmic research for decades, is among the first topics covered by virtually every CS course (BFS/DFS/Dijkstra), and serves as the basis for planning algorithms in GOFAI systems. Our project revolves around understanding how transformer language models perform graph search at a mechanistic level.
While we initially tried to understand how models find paths over any directed graph, we eventually restricted our focus specifically to trees. We trained a small GPT2-style transformer model (6 layers, 1 attention head per layer) to perform this task. The two figures below describe how we generate our dataset, and tokenize the examples.
It is important to note that this task cannot be solved trivially. To correctly predict the next node in the path, the model must know the entire path ahead of time. The model must figure out the entire path in a single forward pass. This is not the case for a bunch of other tasks proposed in the literature on evaluating the reasoning capabilities of language models (see Saparov & He (2023) for instance). As a result of this difficulty, we can expect to find much more interesting mechanisms in our models.
We train our model on a dataset of 150,000 randomly generated trees. The model achieves an ac...

May 29, 2024 • 3min
EA - Announcing the Launch of the National Observatory on Insect Farming in France - ONEI by Corentin D. Biteau
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the National Observatory on Insect Farming in France - ONEI, published by Corentin D. Biteau on May 29, 2024 on The Effective Altruism Forum.
We are excited to announce the launch of ONEI, an organisation dedicated to informing decision-making about insect farming in France.
The context
France is a leader in insect farming, with two of the largest companies in the sector operating there. The industry has grown immensely in recent years, attracting more than a billion dollars in investment worldwide, with the number of insects farmed each year rising from 1 trillion to an estimated 10-30 trillion over 5 years. The sector is expected to grow even further in the future.
While discussions on the topic often revolve around insects as food, farmed insects are primarily intended to be used as feed for other farmed animals like fish or chickens or as pet food.
Insect farming has been presented as a potential solution to environmental challenges linked to conventional livestock farming. France is currently supporting the industry with funding and research.
However, several recent studies call into question these promises of sustainability. For instance, rebound effects could lead to increased meat consumption and the associated impacts if insects provide a new source of animal feed. Moreover, while insects were promised to contribute to a circular economy by using food waste, persistent economic and regulatory challenges prevent this, with most farms feeding insects with high-quality feeds already in use elsewhere.
Our role
I am the first author of several new papers produced in collaboration with the Insect Institute on the environmental impacts of insect farming. This work, covering environmental sustainability, economic competitiveness, barriers to the use of food waste, limits to the research, and consumer acceptability, is currently available in the form of academic preprints and highlights several challenges.
ONEI intends to share evidence-based information on the impact of insect farming on the environment and society, a role no actor is currently filling in France. Our first task, currently underway, is translating our findings into French. We plan to work with policymakers, journalists, and investors. Much of our work will revolve around policy to ensure that future decisions are based on solid evidence.
How you can contribute
French speakers can subscribe to our newsletter and share our articles when they are published.
If you have contacts who might be interested in data on the sustainability of the sector (in French or English), please share them with us. This includes policymakers, institutions, journalists, investors or researchers.
If you're interested in this topic, we are looking for volunteers! We have some skilled tasks available for non-French speakers (graphic design, communication) and others that require speaking French (proofreading, identifying relevant contacts to share our reports with). I can also redirect you to relevant English-speaking charities that might have other roles in this sector.
You can DM me, and I will also be at EAG London and EAGx Utrecht - feel free to reach out!
You can contact us here or via email (contact@onei-insectes.org) for any questions or remarks.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 29, 2024 • 12min
LW - MIRI 2024 Communications Strategy by Gretta Duleba
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI 2024 Communications Strategy, published by Gretta Duleba on May 29, 2024 on LessWrong.
As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy.
The Objective: Shut it Down[1]
Our objective is to convince major powers to shut down the development of frontier AI systems worldwide before it is too late. We believe that nothing less than this will prevent future misaligned smarter-than-human AI systems from destroying humanity. Persuading governments worldwide to take sufficiently drastic action will not be easy, but we believe this is the most viable path.
Policymakers deal mostly in compromise: they form coalitions by giving a little here to gain a little somewhere else. We are concerned that most legislation intended to keep humanity alive will go through the usual political processes and be ground down into ineffective compromises.
The only way we think we will get strong enough legislation is if policymakers actually get it, if they actually come to understand that building misaligned smarter-than-human systems will kill everyone, including their children. They will pass strong enough laws and enforce them if and only if they come to understand this central truth.
Meanwhile, the clock is ticking. AI labs continue to invest in developing and training more powerful systems. We do not seem to be close to getting the sweeping legislation we need. So while we lay the groundwork for helping humanity to wake up, we also have a less dramatic request. We ask that governments and AI labs install the "off-switch"[2] so that if, on some future day, they decide to shut it all down, they will be able to do so.
We want humanity to wake up and take AI x-risk seriously. We do not want to shift the Overton window, we want to shatter it.
Theory of Change
Now I'll get into the details of how we'll go about achieving our objective, and why we believe this is the way to do it. The facets I'll consider are:
Audience: To whom are we speaking?
Message and tone: How do we sound when we speak?
Channels: How do we reach our audience?
Artifacts: What, concretely, are we planning to produce?
Audience
The main audience we want to reach is policymakers - the people in a position to enact the sweeping regulation and policy we want - and their staff.
However, narrowly targeting policymakers is expensive and probably insufficient. Some of them lack the background to be able to verify or even reason deeply about our claims. We must also reach at least some of the people policymakers turn to for advice. We are hopeful about reaching a subset of policy advisors who have the skill of thinking clearly and carefully about risk, particularly those with experience in national security.
While we would love to reach the broader class of bureaucratically-legible "AI experts," we don't expect to convince a supermajority of that class, nor do we think this is a requirement.
We also need to reach the general public. Policymakers, especially elected ones, want to please their constituents, and the more the general public calls for regulation, the more likely that regulation becomes. Even if the specific measures we want are not universally popular, we think it helps a lot to have them in play, in the Overton window.
Most of the content we produce for these three audiences will be fairly basic, 101-level material. However, we don't want to abandon our efforts to reach deeply technical people as well. They are our biggest advocates, most deeply persuaded, most likely to convince others, and least likely to be swayed by charismatic campaigns in the opposite direction. And more importantly, discussions with very tech...

May 29, 2024 • 19min
LW - Response to nostalgebraist: proudly waving my moral-antirealist battle flag by Steven Byrnes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Response to nostalgebraist: proudly waving my moral-antirealist battle flag, published by Steven Byrnes on May 29, 2024 on LessWrong.
@nostalgebraist has recently posted yet another thought-provoking post, this one on how we should feel about AI ruling a long-term posthuman future. [Previous discussion of this same post on lesswrong.] His post touches on some of the themes of Joe Carlsmith's "Otherness and Control in the Age of AI" series - a series which I enthusiastically recommend - but nostalgebraist takes those ideas much further, in a way that makes me want to push back.
Nostalgebraist's post is casual, trying to reify and respond to a "doomer" vibe, rather than responding to specific arguments by specific people. Now, I happen to self-identify as a "doomer" sometimes. (Is calling myself a "doomer" bad epistemics and bad PR? Eh, I guess. But also: it sounds cool.) But I too have plenty of disagreements with others in the "doomer" camp (cf: "Rationalist (n.) Someone who disagrees with Eliezer Yudkowsky".). Maybe nostalgebraist and I have common ground? I dunno.
Be that as it may, here are some responses to certain points he brings up.
1. The "notkilleveryoneism" pitch is not about longtermism, and that's fine
Nostalgebraist is mostly focusing on longtermist considerations, and I'll mostly do that too here. But on our way there, in the lead-in, nostalgebraist does pause to make a point about the term "notkilleveryoneism":
They call their position "notkilleveryoneism," to distinguish that position from other worries about AI which don't touch on the we're-all-gonna-die thing. And who on earth would want to be a not-notkilleveryoneist?
But they do not mean, by these regular-Joe words, the things that a regular Joe would mean by them.
We are, in fact, all going to die. Probably, eventually. AI or no AI.
In a hundred years, if not fifty. By old age, if nothing else. You know what I mean…
OK, my understanding was:
(1) we doomers are unhappy about the possibility of AI killing all humans because we're concerned that the resulting long-term AI future would be a future we don't want; and
(2) we doomers are also unhappy about the possibility of AI killing all humans because we are human and we don't want to get murdered by AIs. And also, some of us have children with dreams of growing up and having kids of their own and being a famous inventor or oh wait actually I'd rather work for Nintendo on their Zelda team or hmm wait does Nintendo hire famous inventors? …And all these lovely aspirations again would require not getting murdered by AIs.
If we think of the "notkilleveryoneism" term as part of a communication and outreach strategy, then it's a strategy that appeals to Average Joe's desire to not be murdered by AIs, and not to Average Joe's desires about the long-term future.
And that's fine! Average Joe has every right to not be murdered, and honestly it's a safe bet that Average Joe doesn't have carefully-considered coherent opinions about the long-term future anyway.
Sometimes there's more than one reason to want a problem to be solved, and you can lead with the more intuitive one. I don't think anyone is being disingenuous here (although see comment).
1.1 …But now let's get back to the longtermist stuff
Anyway, that was kinda a digression from the longtermist stuff which forms the main subject of nostalgebraist's post.
Suppose AI takes over, wipes out humanity, and colonizes the galaxy in a posthuman future. He and I agree that it's at least conceivable that this long-term posthuman future would be a bad future, e.g. if the AI was a paperclip maximizer. And he and I agree that it's also possible that it would be a good future, e.g. if there is a future full of life and love and beauty and adventure throughout the cosmos. Which will it be? Let's dive into that discus...

May 29, 2024 • 22min
EA - Introducing Ansh: A Charity Entrepreneurship Incubated Charity by Supriya
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Ansh: A Charity Entrepreneurship Incubated Charity, published by Supriya on May 29, 2024 on The Effective Altruism Forum.
Executive Summary
Ansh, a 1-year-old Charity Entrepreneurship incubated charity, has been delivering an evidence-based, scientifically proven intervention called Kangaroo Care to low birth weight and premature babies in 2 government hospitals in India since January 2024. Ansh estimates that their programs are saving, on average, 4 lives a month per facility and a total of 98 lives per year. The cost of one life saved is approximately $2077 (current costs, not a potential estimate).
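As a quick sanity check on how those headline figures fit together (using only the numbers stated above; the rough annual cost is my own naive product of the two stated figures, not a budget Ansh reports):
```python
# Back-of-the-envelope check that the stated figures are mutually consistent.
# Uses only the numbers quoted above; derived quantities are my own inference.

lives_per_month_per_facility = 4
facilities = 2                      # the two government hospitals currently served
months = 12

lives_per_year = lives_per_month_per_facility * facilities * months
print(lives_per_year)               # 96, close to the ~98 lives/year Ansh estimates

cost_per_life = 2077                # current cost per life saved, as stated
naive_annual_cost = 98 * cost_per_life
print(naive_annual_cost)            # ~$203,500 -- a naive product, not a reported budget
```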
Ansh is now replicating the programs in two additional hospitals, doubling their impact before the end of this year.
According to the World Health Organization (WHO), neonatal conditions[1] are among the top 10 causes of death worldwide[2]. This makes neonatal mortality one of the largest-scale causes of suffering and death today. In 2022, 2.3 million babies died in the first 28 days of life (i.e. the newborn/neonatal period) (World Health Organisation, 2024). Let's compare that number to one of EA's other top cause areas.
In 2022, 608,000 people died of malaria, which is only about 26.4% of the number of deaths from neonatal conditions. However, we have a cost-effective, scalable model for preventing malaria-caused death (e.g., with AMF and Malaria Consortium). Unfortunately, there has been no equivalently cost-effective and scalable model for preventing neonatal mortality.
In this post, we will introduce Ansh, a 1-year-old Charity Entrepreneurship incubated charity that is working towards building tractable, scalable solutions to neonatal mortality in low- and middle-income countries (LMICs). 81% of neonatal deaths happen in low and Low-Middle SDI countries. The disparities in mortality rates between low and high-resource contexts suggest that most neonatal deaths are preventable.
In the sections below, we will first introduce Ansh and its mission statement, share our results thus far, and then introduce some of our plans for how to increase our reach and impact over the next few years. We are very excited to share the work we've done so far with the EA community, and to hear your constructive feedback on how we can make our non-profit even more impactful!
I. The Problem and Solution
More than half of all neonatal deaths occur within the first three days after birth (Dol J, 2021) and over 75% in the first week of life (WHO, 2024), making it imperative to reach babies as soon after birth as possible. Moreover, low birth weight (LBW)[3] is considered the number one mortality risk factor for children under 5.
In fact, according to the Global Burden of Disease, around 89% of all newborn deaths in India (the country where about 22% of all newborn deaths in the world occur) happen to LBW and preterm newborns. Further, 81% of all newborn deaths occur in Low or Low-Middle SDI countries (Global Burden of Disease Collaborative Network, 2019).
Hence, the most effective path toward reducing neonatal mortality rates globally lies in developing interventions aimed at helping LBW babies during their first week of life in LMIC contexts.
Thankfully, such an intervention exists: Kangaroo Care. Kangaroo Care (KC) needs neither fancy equipment nor expensive technology - the methods of KC are both simple and highly effective, especially for LBW newborns. KC requires early, continuous, and prolonged skin-to-skin contact between the mother (or another caregiver) and the baby for about 8 hours of contact per day, paired with exclusive breastfeeding and close monitoring of the baby.
This is often assisted with a cloth binder between the LBW newborn and the caregiver (preferably the mother), to allow for mobility. Estimates from the 2016 Cochrane review suggest that KC can reduce LBW neonates' chance of (i) ...

May 29, 2024 • 6min
EA - The US Presidential Election is Tractable, Very Important, and Urgent by kuhanj
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The US Presidential Election is Tractable, Very Important, and Urgent, published by kuhanj on May 29, 2024 on The Effective Altruism Forum.
Disclaimer: To avoid harmful polarization of important topics, this post is written in a non-partisan manner (in accordance with forum guidelines), and I'd encourage comments to be written similarly.
US Presidential Elections are surprisingly tractable
1. US presidential elections are often extremely close.
1. Biden won the last election by 42,918 combined votes in three swing states. Trump won the election before that by 77,744 votes. 537 votes in Florida decided the 2000 election.
2. There's a good chance the 2024 election will be very close too.
1. Trump leads national polling by around 1%, and polls are tighter than they were in the last two elections. If polls were perfectly accurate (which, of course, they aren't), the tipping point state would be Pennsylvania or Michigan, which are currently at +1-2% for Trump.
3. There is still low-hanging fruit. Estimates of the cost for top RCT-tested interventions to generate a net swing-state vote this election range from a few hundred to several thousand dollars per vote. Top non-RCT-able interventions are likely even better. Many potentially useful strategies have not been sufficiently explored. Some examples:
1. Mobilizing US citizens abroad (who vote at a ~10x lower rate than citizens in the country), or swing-state university students (perhaps through a walk-out-of-classes-to-the-polls demonstration).
2. There is no easily-searchable resource on how to best contribute to the election. (Look up the best ways to contribute to the election online - the answers are not very helpful.)
3. Anecdotally, people with little political background have been able to generate many ideas that haven't been tried and were received positively by experts.
4. Many top organizations in the space are only a few years old, which suggests they have room to grow and that more opportunities haven't been picked.
5. Incentives push talent away from political work:
1. Jobs in political campaigns are cyclical/temporary, very demanding, poorly compensated, and offer uncertain career capital (i.e. low rewards for working on losing campaigns).
2. How many of your most talented friends work in electoral politics?
6. The election is more tractable than a lot of other work: Feedback loops are more measurable and concrete, and the theory of change fairly straightforward. Many other efforts that significant resources have gone into have little positive impact to show for them (though of course ex-ante a lot of these efforts seemed very reasonable to prioritize) - e.g. efforts around OpenAI, longtermist branding, certain AI safety research directions, and more.
Much more important than other elections
This election seems unusually important for several reasons:
There's arguably a decent chance that very critical decisions about transformative AI will be made in 2025-2028. The role of governments might be especially important for AI if other prominent (state and lab) actors cannot be trusted. Biden's administration issued a landmark executive order on AI in October 2023. Trump has vowed to repeal it on Day One.
Compared to other governments, the US government is unusually influential. The US government spent over $6 trillion in the 2023 fiscal year, and makes key decisions involving billions of dollars each year for issues like global development, animal welfare, climate change, and international conflicts.
Critics argue that Trump and his allies are unique in their response to the 2020 election, plans to fill the government with tens of thousands of vetted loyalists, and in how people who have worked with Trump have described him. On the other side, Biden's critics point to his age (81 years, four years older...


