

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes
Mentioned books

Aug 17, 2024 • 27min
EA - Sharing the AI Windfall: A Strategic Approach to International Benefit-Sharing by michel
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sharing the AI Windfall: A Strategic Approach to International Benefit-Sharing, published by michel on August 17, 2024 on The Effective Altruism Forum.
Summary
If AI progress continues on its current trajectory, the developers of advanced AI systems - and the governments who house those developers - will accrue tremendous wealth and technological power.
In this post, I consider how and why the US government[1] may want to internationally share some of the benefits accrued from advanced AI - such as financial benefits or monitored access to private cutting-edge AI models. Building on prior work that discusses international benefit-sharing primarily through a global welfare or equality lens, I examine how strategic benefit-sharing could unlock international agreements that help all agreeing states and bolster global security.
Two "use cases" for strategic benefit-sharing in international AI governance:
1. Incentivizing states to join a coalition on safe AI development
2. Securing the US' lead in advanced AI development to allow for more safety work
I also highlight an important, albeit fuzzy, distinction between benefit-sharing and power-sharing:
Benefit-sharing: Sharing AI-derived benefits that don't significantly alter power dynamics between the recipient and the provider.
Power-sharing: Sharing AI-derived benefits that significantly empower the recipient actor, thereby changing the relative power between the provider and recipient.
I identify four main clusters of benefits, but these categories overlap and some benefits don't fit neatly into any category: Financial and resource-based benefits; Frontier AI benefits; National security benefits; and 'Seats at the table'.
I conclude with two key considerations with respect to benefit-sharing:
Credibility will be a key challenge for benefit-sharing and power-sharing agreements (See more)
Benefit sharing strategies should account for potential risks. (See more)
Introduction
Advanced AI systems could empower their developers - and the governments who supervise those developers - with enormous benefits. For example, advanced AI systems[2] could give rise to tremendous wealth, breakthrough medical technologies, and decisive national security benefits.
This post examines how these benefits could be shared internationally. In particular, I examine how and why the US government (henceforth USG) may want to strategically share some of the benefits accrued from advanced AI to further its own interests, other states' interests, and ultimately bring about a safer world.
Past work[3] on international benefit-sharing has primarily focused on sharing benefits to address widespread job displacement and to promote welfare and equality globally. I support such altruistic benefit-sharing to remedy the uptick in global power and income inequality that AI could drive. But I want to expand discussions of international benefit-sharing to include sharing benefits as a tool for positive-sum trades.
By offering AI-derived benefits - such as economic aid, monitored frontier AI model access, or security assurances[4] - the USG could enable commitments that are in the interest of all parties at the table and promote global security. For example, the US could provide allied states with monitored access to private, cutting-edge frontier AI models. In exchange, allied states could take steps domestically to prevent the proliferation of weaponized AI systems.
(I discuss other ideas on how benefit-sharing could be used in international AI governance below).
There is precedent for US-led strategic benefit-sharing. Consider the Marshall Plan. Post-WW2, the USG helped Western Europe rapidly recover by providing benefits like financial aid and modern technologies, which in turn strengthened the US' key strategic alliances, created mutually-beneficial markets for US goods and... 

Aug 17, 2024 • 11min
LW - Release: Optimal Weave (P1): A Prototype Cohabitive Game by mako yass
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Release: Optimal Weave (P1): A Prototype Cohabitive Game, published by mako yass on August 17, 2024 on LessWrong.
I recently argued for Cohabitive Games, games that are designed for practicing negotiation, or for developing intuitions for applied cooperative bargaining, which is one of the names of preference aggregation.
I offered the assets for P1, my prototype, to whoever was interested. About 20 people asked for them. I told most of them that I wanted to polish the assets up a little bit first, and said that it would be done in about a month. But a little bit of polish turned into a lot, and other things got in the way, and a month turned into ten. I'm genuinely a little bit dismayed about how long it took, however:
P1, now Optimal Weave 0.1, is a lot nicer now.
I think it hangs together pretty well.
You can get it here: Optimal Weave 0.1. (I'm not taking any profit, and will only start to if it runs away and starts selling a lot. (release day sale. get in quick.))
There's also a print and play version if you want it sooner, free, and don't mind playing with paper. (Honestly, if you want to help to progress the genre, you've got to learn to enjoy playing on paper so that you can iterate faster.)
There's also a version that only includes the crisp, flippable land tiles, which you can then supplement with the ability, event and desire cards from the printed version and whatever playing pieces you have at home.
(Note: the version you'll receive uses round tiles instead of hex tiles. They don't look as cool, but I anticipate that round tiles will be a little easier to lay out, and easier to flip without nudging their neighbors around; the game involves a lot of tile flipping.
Earlier on, I had some hope that I could support both square and hexagonal layouts, which is easier with round tiles, though everything's balanced around the higher adjacency of the hexagonal layout, so I doubt that'll end up working.)
I'll be contacting everyone who reached out imminently (I have a list).
What Happened
Further learnings occurred. I should report them.
I refined the game a fair bit
There were a lot of abilities I didn't like, and a lot of abilities I did like but hadn't taken the time to draw up, so I went through the hundred or so variations of the initial mechanics, cut the lame ones, added more ideas, then balanced everything so that no land-type would get stuck without any conversion pathways.
I also added an event deck, which serves several functions. It simplifies keeping track of when the game ends, and it removes the up-front brain crunch of dropping players straight into a novel political network of interacting powers by spreading ability reveals out over the first couple of turns.
The event deck also lightly randomizes the exact timing of the end of the game, which introduces a nice bit of tension, hopefully spares people from feeling a need to precisely calculate the timing of their plans, and possibly averts some occurrences of defection by backwards induction. (The contract system is supposed to deal with that kind of thing, but for reasons that are beyond me, people seem to want to try playing without the contract system, so this might be important here.)
I found a nicer low-volume prototyping service
The Game Crafter. Their prices are better, uploads are easier to automate, and they offer thick card tile pieces, as well as various figures. I think some of those figures are pretty funky little guys. Those are the ones you'll be receiving.
Their print alignment is also pretty bad. They warn you, so I'm not mad or anything, I just find it baffling. It's especially bad with the cardboard tile pieces. Oh well, everything looks good enough, I'm happy overall, although I do notice that no big popular board game that I've ever heard of decided to publish through this se... 

Aug 17, 2024 • 6min
LW - Rewilding the Gut VS the Autoimmune Epidemic by GGD
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rewilding the Gut VS the Autoimmune Epidemic, published by GGD on August 17, 2024 on LessWrong.
Did you know that an autoimmune disease now affects one in five Americans? Of these, 80% are women; autoimmune disease is the fifth leading cause of death in women under 65. Most autoimmune diseases are being diagnosed in increasing numbers, but it is still unclear how much of this increase is due to higher awareness, medical literacy, or better diagnostic techniques.
The effects of autoimmune diseases can be devastating. As a person's immune system attacks their body instead of microbes or cancerous cells, they can experience chronic fatigue, chronic pain, drug dependency, depression, and social isolation. These symptoms annihilate mental health, wreck promising careers, destroy lives, and, often, financially ruin families. For too many, these illnesses result in early death.
The Missing Friends
The Old Friends Hypothesis suggests that early and regular exposure to harmless microorganisms - "old friends" present throughout human evolution and recognized by the human immune system - trains the immune system to react appropriately to threats.
Here's the problem: some very specific microbes that we've coevolved with - for potentially millions of years (considering monkeys emerged 35 million years ago) - are no longer in our gut. Those are literally gone; they cannot be replaced by eating fiber, kefir, or any probiotics on the market. And this might be the reason the immune system of many people is dysfunctional - it's missing organisms it's been interacting with for a very long time.
We know this precisely because we've been able to compare the microbiomes of industrialized populations with those of modern hunter-gatherers, who possibly have the microbiome closest to that of early humans. The Hadza, a protected indigenous hunter-gatherer ethnic group in Tanzania, are known for having the highest gut diversity of any population, with an average of 730 species of gut microbes per person.
The average Californian gut microbiome contained just 277 species, and the Nepali microbiome fell in between. Ultra-deep sequencing of the Hadza microbiome reveals 91,662 genomes of bacteria, archaea, bacteriophages, and eukaryotes, 44% of which are absent from existing unified datasets. The researchers identified 124 gut-resident species absent or vanishing in industrialized populations.
As seen in the above picture, there is, at the same time, reduced diversity and an entirely different type of microbial ecology in our gut. Sonnenburg & Sonnenburg list five major reasons that could mediate this change: (1) consumption of highly processed foods; (2) high rates of antibiotic administration; (3) birth via cesarean section and use of baby formula; (4) sanitation of the living environment; and (5) reduced physical contact with animals and soil.
The question that interests me here is, of course: what would happen if we were to reintroduce these "lost" microbes into our bodies? Well, we have no idea! Worse, as of 2024, aside from Symbiome, which focuses on skin, I am not aware of any initiative by a private company or group to develop a treatment based on the ancestral microbiome. Most previous projects seem to have been shelved.
However, private, secret initiatives may be happening at the moment; after all, harvesting the microbiome of hunter-gatherers is not without controversies.
Still, given what we've learned about the microbiome in the last decades and its connection to just about every biological process, I do believe rewilding the gut might be the most effective public health initiative of the 21st century:
The cost and suffering caused by autoimmune conditions are massive and will increase in the future. Chronic conditions use most of the resources of public health care systems.
Rewilding the... 

Aug 17, 2024 • 7min
LW - Rationalists are missing a core piece for agent-like structure (energy vs information overload) by tailcalled
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalists are missing a core piece for agent-like structure (energy vs information overload), published by tailcalled on August 17, 2024 on LessWrong.
The agent-like structure problem is a question about how agents in the world are structured. I think rationalists generally have an intuition that the answer looks something like the following:
We assume the world follows some evolution law, e.g. maybe deterministically like x_{n+1} = f(x_n), or maybe something stochastic. The intuition being that these are fairly general models of the world, so they should be able to capture whatever there is to capture. x here has some geometric structure, and we want to talk about areas of this geometric structure where there are agents.
An agent is characterized by a Markov blanket in the world that has informational input/output channels for the agent to get information to observe the world and send out information to act on it, intuitively because input/output channels are the most general way to model a relationship between two systems, and to embed one system within another we need a Markov blanket.
The agent uses something resembling a Bayesian model to process the input, intuitively because the simplest explanation that predicts the observed facts is the best one, yielding the minimal map that can answer any query you could have about the world.
And then the agent uses something resembling argmax to make a decision for the output given the input, since endless coherence theorems prove this to be optimal.
Possibly there's something like an internal market that combines several decision-making interests (modelling incomplete preferences) or several world-models (modelling incomplete world-models).
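The structure sketched above - input channel, Bayesian update over hidden states, argmax over expected utility - can be illustrated with a toy example. This sketch is my own illustration, not anything from the post: a two-state world, a noisy observation channel, and invented probabilities and utilities.

```python
# Toy "rationalist agent" loop: Bayesian belief update + argmax decision.
# All states, likelihoods, and utilities are invented for illustration.

STATES = [0, 1]                       # hidden world states
ACTIONS = ["left", "right"]

def likelihood(obs, state):
    # P(obs | state): the observation matches the state 80% of the time
    return 0.8 if obs == state else 0.2

def bayes_update(prior, obs):
    # posterior ∝ likelihood × prior, then normalize
    post = {s: likelihood(obs, s) * prior[s] for s in STATES}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def utility(action, state):
    # reward for matching the action to the hidden state
    return 1.0 if (action == "left") == (state == 0) else 0.0

def argmax_action(belief):
    # expected utility of each action under the current belief
    eu = {a: sum(belief[s] * utility(a, s) for s in STATES) for a in ACTIONS}
    return max(eu, key=eu.get)

belief = {0: 0.5, 1: 0.5}             # uniform prior
for obs in [1, 1, 0, 1]:              # stream of noisy observations
    belief = bayes_update(belief, obs)
action = argmax_action(belief)        # mostly saw 1, so picks "right"
```

The point of the sketch is only to make the claimed structure concrete; the post's argument is that real agents are probably not organized this way.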
There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.). I think rationalists mostly feel comfortable with that because:
x_{n+1} = f(x_n) is flexible enough to accommodate worlds that contain energy (even if it also accommodates other kinds of worlds where "energy" doesn't make sense)
80% of the body's energy goes to muscles, organs, etc., so if you think of the brain as an agent and the body as a mech that gets piloted by the brain (so the Markov blanket for humans would be something like the blood-brain barrier rather than the skin), you can mostly think of energy as something that is going on out in the universe, with little relevance for the agent's decision-making.
I've come to think of this as "the computationalist worldview" because functional input/output relationships are the thing that is described very well with computations, whereas laws like conservation of energy are extremely arbitrary from a computationalist point of view. (This should be obvious if you've ever tried writing a simulation of physics, as naive implementations often lead to energy exploding.)
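The parenthetical claim about energy exploding is easy to demonstrate. The sketch below is my own illustration, not from the post: it integrates a simple harmonic oscillator (x'' = -x) two ways. Naive explicit Euler multiplies the energy by (1 + dt^2) every step, while the semi-implicit (symplectic) Euler variant keeps it bounded. All constants are arbitrary.

```python
# Why naive physics simulations "explode": explicit vs symplectic Euler
# on a harmonic oscillator, where energy should be exactly conserved.

DT, STEPS = 0.1, 1000

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x   # kinetic + potential

def explicit_euler(x, v):
    for _ in range(STEPS):
        x, v = x + DT * v, v - DT * x  # both updates use old values
    return energy(x, v)

def symplectic_euler(x, v):
    for _ in range(STEPS):
        v = v - DT * x                 # update velocity first...
        x = x + DT * v                 # ...then position with the new v
    return energy(x, v)

e0 = energy(1.0, 0.0)                  # initial energy = 0.5
e_explicit = explicit_euler(1.0, 0.0)      # grows by (1 + DT**2) per step
e_symplectic = symplectic_euler(1.0, 0.0)  # oscillates near 0.5
```

After 1000 steps the explicit-Euler energy has grown by roughly 1.01^1000 (about 20,000x), while the symplectic version stays within a few percent of the true value - conservation of energy is an extra constraint the bare update rule x_{n+1} = f(x_n) knows nothing about.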
Radical computationalism is killed by information overload
Under the most radical forms of computationalism, the "ideal" prior is something that can range over all conceivable computations. The traditional answer to this is Solomonoff induction, but it is not computationally tractable because it has to process all observed information in every conceivable way.
Recently with the success of deep learning and the bitter lesson and the Bayesian interpretations of deep double descent and all that, I think computationalists have switched to viewing the ideal prior as something like a huge deep neural network, which learns representations of the world and functional relationships which can be used by some sort of decision-making process.
Briefly, the issue with these sorts of models is that they work by trying to capture all the information that is reasonably non-independent of other information (for instance, the information in a picture that is relevant for predicting ... 

Aug 17, 2024 • 9min
LW - Please support this blog (with money) by Elizabeth
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please support this blog (with money), published by Elizabeth on August 17, 2024 on LessWrong.
Short version
It has always felt like a gift that people read things I write, and I enjoyed reciprocating that by sharing my writing for free. Until recently I had enough income to pull that off and still cover material needs. That is no longer true, and while it is not an urgent problem, I would like to solve it while that is still true. However it's important to me to keep this blog free. To square this, I'm asking for your support.
If you value this blog and can comfortably do so, please consider sending some of that value back my way via Patreon or Paypal.
Long version
I love my work and this blog. Clients paying me to learn interesting things that they immediately use to help people would be amazing on its own, but then grantmakers pay me to do more speculative EV projects of my choosing. Then I get to publish my work and help more people. I'm incredibly fortunate to do this, and at a wage that supports a good lifestyle in a reasonable number of hours.
Unfortunately, I can't work a reasonable number of hours. It's been years since I could work a full work-week, but until recently I could still work enough. That stopped being true this winter (short version: furniture grew mold leading to a devastating health cascade). I spent months barely able to work while firehosing money at the medical system (ballpark total cost: $100,000, plus an intense desire to own my own home).
I'm doing much better now, although not fully healed, and God willing this was a one-time thing; but 10 years ago I needed a year off work for dental surgery, and I'll consider myself lucky if it takes 10 years before my next "one-off" multi-month health problem. My limited hours in good times aren't enough to make up for the total absence of hours in bad times.
Disability insurance refused to sell to me at any price back when I had an excellent work attendance record and very minor medical issues, so I assume they'd use a flamethrower if I tried now. This has me interested in finding ways to decouple my income from my working hours.
The obvious thing to do is monetize this blog. Except I hate gating zero-marginal-cost content, especially health content. It keeps out the people who most need the help. It's also hard to match price to value - any given post could be a waste of your time or a life changing intervention, and you may not know which for years. So I don't want to do any kind of paywall. Instead, I'm asking for voluntary contributions, based on your evaluation of the value my work has provided you.
That way you can match price to value, no one gets locked out of content they need, and I can invest in this blog while staying financially secure.
How to give me money
If you'd like to make a one-time payment, you can paypal me at acesounderglass@gmail.com. This will get me caught up on this winter's lost income and medical expenses. If your generosity exceeds Paypal's petty limitations or you just prefer a different format, email me at that same address and we'll work something out.
If you would like to give ongoing support, you can join my Patreon. This will let me spend more time on my top-choice project and invest more in this blog on an ongoing basis. I've recently added real rewards to my patreon, including a discord, live calls for each post, and for the most discerning of patrons, all the plankton you can eat. For the moment these are experimental, and you can help me out by telling me what you'd like to see.
If you're already a Patreon patron - first of all, thank you. Second, take a look at the new tiers. I'm not allowed to switch you to reward tiers even if you're already giving enough to qualify for them, and the $1 tier has been removed.
If you would like to give one-time support to spe... 

Aug 17, 2024 • 2min
EA - Open Philanthropy is hiring people to… help hire more people! by maura
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring people to… help hire more people!, published by maura on August 17, 2024 on The Effective Altruism Forum.
Open Philanthropy needs more recruiters. We'd love to see you apply, even if you've never worked in hiring before.
"Recruiting" is sometimes used to narrowly refer to headhunting or outreach. At Open Phil, though, "recruiting" includes everything related to the hiring process. Our recruiting team manages the systems and people that take us from applications to offers. We design evaluations, interview candidates, manage stakeholders, etc.[1]
We're looking for
An operations mindset. Recruiting is project management, first and foremost; we want people who can reliably juggle lots of balls without dropping them.
Interpersonal skills. We want clear communicators with good people judgment.
Interest in Open Phil's mission. This is an intentionally broad definition - see below!
What you don't need
Prior recruiting experience. We'll teach you!
To be well-networked or highly immersed in EA. You should be familiar with the areas Open Phil works in (such as global health and wellbeing and global catastrophic risks), but if you're wondering "Am I EA enough for this?", you almost certainly are.
The job application will be posted to OP's website in coming weeks, but isn't there yet as of this post; we're starting with targeted outreach to high-context audiences (you!) before expanding our search to broader channels.
If this role isn't for you but might be for someone in your network, please send them our way - we offer a reward if you counterfactually refer someone we end up hiring.
1. ^
The OP recruiting team also does headhunting and outreach, though, and we're open to hiring more folks to help with that work, too! If that sounds exciting to you, please apply to the current recruiter posting and mention an interest in outreach work in the "anything else" field.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org 

Aug 17, 2024 • 22min
EA - Report: The Broken State of Animal Advocacy in Universities by Dr Faraz Harsini
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report: The Broken State of Animal Advocacy in Universities, published by Dr Faraz Harsini on August 17, 2024 on The Effective Altruism Forum.
The following report is also available at the website of Allied Scholars for Animal Protection.
Background
This study examined the state of animal advocacy and number of events held by vegan and animal advocacy student groups at the top 100 U.S. universities. Our central research questions were:
How many events were held to educate students, especially non-vegans, on veganism and animal welfare?
Why is animal advocacy ineffective in universities? Why is animal advocacy unsustainable in universities?
How many advocacy events were held recently by university student groups?
What can we learn from other student organizations in building a sustainable movement?
We also conducted a qualitative survey of several influential animal advocacy professionals regarding their views on the state of campus animal advocacy.
Thanks to the following individuals for their contributions and insights:
Dr. Courtney Dillard, Mercy for Animals
Nicole Rawling, JD, Material Innovation Initiative
Aidan Kanyoku, Pax Fauna
Zoe Rosenberg, Happy Hen Animal Sanctuary
Sebastian Quaade, Climate Refarm
Kenzie Bushman, Better Food Foundation
Jenna Holakovsky, Farm Sanctuary
Key Findings
1. Most college animal advocacy organizations are inactive, indicating a lack of sustainability. While 76% of top colleges once had active animal rights organizations, only 16% of colleges have an active group, and fewer than 10% of colleges had groups that held more than a single event in the first five months of 2023.
2. Most active organizations held only one or two events per semester. Of the 16 universities with active groups, just four held more than a couple of events, and just two held more than three events between January and May of 2023.
3. The top 10 universities are similarly inactive and are even less effective. Of these, 70% saw no animal rights activism, and none held more than two events in Spring 2023.
4. The time is now for a united front in college animal rights activism. Even our nation's top universities have been unable to host consistent, effective, enduring animal rights activism or animal advocacy events on their campuses. In fact, the vast majority host none at all. We have no reason to believe this will improve on its own.
Method Overview
We searched college animal rights activist events on Google, Instagram, and Facebook. If a university's student group advertised at least one event in the most recent semester at the time of this study (Spring 2023, ranging from January 1st to May 31st, 2023) it was considered active. Otherwise, it was considered inactive.
We defined an animal rights event as any organized action intended to educate non-activists about farm or laboratory animal exploitation and help them take corrective action, e.g. go vegan or reduce consumption of animal products. This includes movie nights, discussion sessions, seminar speakers, outreach to mobilize currently non-activist vegans, etc.
We excluded events that only addressed the environment, healthy living, plant-based cooking, etc. Our reasoning is that if animal exploitation does not come up at all, it is not animal advocacy.
We excluded vegan socials, meetings targeted at existing activists, and events only focused on companion animals, e.g. dogs and cats. We contend that these do not contribute directly to animal liberation.[1]
We excluded law school organizations and events because these are usually not available to the entire student body and typically serve a niche community in graduate schools.
The list of top 100 universities is from U.S. News' Best National University Rankings 2022-2023 report.
Results
How Many Advocacy Groups Are Active?
First, we checked which of the top 100 universities h... 

Aug 17, 2024 • 11min
LW - Principled Satisficing To Avoid Goodhart by JenniferRM
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Principled Satisficing To Avoid Goodhart, published by JenniferRM on August 17, 2024 on LessWrong.
There's an admirable LW post with (currently) zero net upvotes titled Goodhart's Law and Emotions where a relatively new user re-invents concepts related to super-stimuli. In the comments, noggin-scratcher explains in more detail:
The technical meaning is a stimulus that produces a stronger response than the stimulus for which that response originally evolved.
So for example a candy bar having a carefully engineered combination of sugar, fat, salt, and flavour in proportions that make it more appetising than any naturally occurring food. Or outrage-baiting infotainment "news" capturing attention more effectively than anything that one villager could have said to another about important recent events.
In my opinion, there's a danger that arises when applying the dictum to know thyself, where one can do this so successfully that one begins to perceive the logical structure of the parts of oneself that generate subjectively accessible emotional feedback signals.
In the face of this, you face a sort of a choice: (1) optimize these to get more hedons AT ALL as a coherent intrinsic good, or (2) something else which is not that.
In general, for myself, when I was younger and possibly more foolish than I am now, I decided that I was going to be explicitly NOT A HEDONIST.
What I meant by this has changed over time, but I haven't given up on it.
In a single paragraph, I might "shoot from the hip" and say that when you are "not a hedonist (and are satisficing in ways that you hope avoid Goodhart)" it doesn't necessarily mean that you throw away joy; it just means that WHEN you "put on your scientist hat", and try to take baby steps, and incrementally modify your quickly-deployable habits, to make them more robust and give you better outcomes, you treat joy as a measurement, rather than a desideratum.
You treat subjective joy like some third-party scientist (one who might have a collaborator who filled their spreadsheet with fake data because they want a Nature paper, and who is still defending its accuracy at a cocktail party in an ego-invested way) saying "the thing the joy is about is good for you and you should get more of it".
When I first played around with this approach I found that it worked to think of myself as abstractly-wanting to explore "conscious optimization of all the things" via methods that try to only pay attention to the plausible semantic understandings (inside of the feeling generating submodules?) that could plausibly have existed back when the hedonic apparatus inside my head was being constructed.
(Evolution is pretty dumb, so these semantic understandings were likely to be quite coarse. Cultural evolution is also pretty dumb, and often actively inimical to virtue and freedom and happiness and lovingkindness and wisdom, so those semantic understandings also might be worth some amount of mistrust.)
Then, given a model of a modeling process that built a feeling in my head, I wanted to try to figure out what things in the world that that modeling process might have been pointing to, and think about the relatively universal instrumental utility concerns that arise proximate to the things that the hedonic subsystem reacts to. Then maybe just... optimize those things in instrumentally reasonable ways?
This would predictably "leave hedons on the table"!
But it would predictably stay aligned with my hedonic subsystems (at least for a while, at least for small amounts of optimization pressure) in cases where maybe I was going totally off the rails because "my theory of what I should optimize for" had deep and profound flaws.
Like suppose I reasoned (and to be clear, this is silly, and the error is there on purpose):
1. Making glucose feel good is simply a w... 

Aug 17, 2024 • 20min
EA - #196 - The edge cases of sentience and why they matter (Jonathan Birch on The 80,000 Hours Podcast) by 80000 Hours
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #196 - The edge cases of sentience and why they matter (Jonathan Birch on The 80,000 Hours Podcast), published by 80000 Hours on August 17, 2024 on The Effective Altruism Forum.
We just published an interview: Jonathan Birch on the edge cases of sentience and why they matter. Listen on Spotify, watch on YouTube, or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed.
People don't need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous.
Jonathan Birch
In today's episode, host Luisa Rodriguez speaks to Dr Jonathan Birch - philosophy professor at the London School of Economics - about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)
They cover:
Candidates for sentience - such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
Humanity's history of acting as if we're sure that such beings are incapable of having subjective experiences - and why Jonathan thinks that that certainty is completely unjustified.
Chilling tales about overconfident policies that probably caused significant suffering for decades.
How policymakers can act ethically given real uncertainty.
Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds as sentient as the biological versions.
How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
Why Jonathan is so excited about citizens' assemblies.
Jonathan's conversation with the Dalai Lama about whether insects are sentient.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Highlights
The history of neonatal surgery without anaesthetic
Jonathan Birch: It's another case I found unbelievable: in the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. There was a public campaign led by someone called Jill Lawson, whose baby son had been operated on in this way and had died.
And at the same time, evidence was being gathered to bear on the questions by some pretty courageous scientists, I would say. They got very heavily attacked for doing this work, but they knew evidence was needed to change clinical practice. And they showed that, if this protocol is done, there were massive stress responses in the baby, massive stress responses that reduce the chances of survival and lead to long-term developmental damage.
So as soon as they looked for evidence, the evidence showed that this practice was completely indefensible and then the clinical practice was changed.
So, in a way, people don't need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it... 

Aug 17, 2024 • 1min
LW - How unusual is the fact that there is no AI monopoly? by Viliam
 Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How unusual is the fact that there is no AI monopoly?, published by Viliam on August 17, 2024 on LessWrong.
I may be completely confused about this, but my model of technological breakthroughs in history was basically this: a few guys independently connect the dots leading to a new invention, for example the telephone, at approximately the same time. One of them runs to the patent office a little faster than the others, and he gets the patent first.
Now he gets to be forever known as the inventor of the telephone, and the rest of them are out of luck: if they ever try to sell their own inventions, they will probably get sued into bankruptcy.
Today, we have a few different companies selling AIs (LLMs). What is different this time?
Is my model of the history wrong?
Is the current legal situation with patents different?
Are LLMs somehow fundamentally different from all other previous inventions?
Is it the fact that anyone can immediately publish every tiny partial step in online journals, which makes it virtually impossible for any individual or institution to legally acquire the credit -- and the monopoly -- for the entire invention?
Or something else...?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org 