

The Nonlinear Library: LessWrong
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Aug 19, 2024 • 7min
LW - Liability regimes for AI by Ege Erdil
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Liability regimes for AI, published by Ege Erdil on August 19, 2024 on LessWrong.
For many products, we face a choice of who to hold liable for harms that would not have occurred if not for the existence of the product. For instance, if a person uses a gun in a school shooting that kills a dozen people, there are many legal persons who in principle could be held liable for the harm:
1. The shooter themselves, for obvious reasons.
2. The shop that sold the shooter the weapon.
3. The company that designs and manufactures the weapon.
Which one of these is the best? I'll offer a brief and elementary economic analysis of how this decision should be made in this post.
The important concepts from economic theory to understand here are Coasean bargaining and the problem of the judgment-proof defendant.
Coasean bargaining
Let's start with Coasean bargaining: in short, this idea says that regardless of who the legal system decides to hold liable for a harm, the involved parties can, under certain conditions, divide the liability among themselves however they wish by contracting and reach an economically efficient outcome.
Under these conditions and assuming no transaction costs, it doesn't matter who the government decides to hold liable for a harm; it's the market that will ultimately decide how the liability burden is divided up.
For instance, if we decide to hold shops liable for selling guns to people who go on to use the guns in acts of violence, the shops could demand that prospective buyers purchase insurance against the risk of committing a criminal act. The insurance companies could then analyze who is more or less likely to engage in such an act of violence and adjust premiums accordingly, or even refuse coverage altogether to e.g. people with previous criminal records. This would make guns less accessible overall (because there's a background risk of anyone committing a violent act using a gun) and also differentially less accessible to those seen as more likely to become violent criminals. In other words, we don't lose the ability to deter individuals by deciding to impose the liability on other actors in the chain, because they can simply find ways of passing on the cost.
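To see the pass-through mechanically, here is a minimal numeric sketch (every probability and dollar figure is hypothetical): an actuarially fair insurer charges each buyer their own expected harm, so liability placed on the shop lands back on buyers in proportion to their riskiness.

```python
HARM = 10_000_000          # hypothetical damages from one act of violence, in $

# assumed annual probabilities that a buyer of each type misuses the gun
buyers = {"low risk": 1e-7, "high risk": 1e-4}

for label, p in buyers.items():
    premium = p * HARM     # actuarially fair premium = expected liability
    print(f"{label}: fair annual premium = ${premium:,.2f}")
# -> low risk:  fair annual premium = $1.00
# -> high risk: fair annual premium = $1,000.00
```

Guns stay cheap for buyers judged unlikely to offend and get priced up sharply for risky buyers, which is exactly the deterrence-by-proxy described above.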
The judgment-proof defendant
However, what if we imagine imposing the liability on individuals instead? We might naively think that there's nothing wrong, because anyone who used a gun in a violent act would be required to pay compensation to the victims which in principle could be set high enough to deter offenses even by wealthy people.
However, the problem we run into in this case is that most school shooters have little in the way of assets, and certainly not enough to compensate the victims and the rest of the world for all the harm that they have caused. In other words, they are judgment-proof: the best we can do when we catch them is put them in jail or execute them. In these cases, Coasean bargaining breaks down.
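A toy calculation (numbers invented for illustration) makes the cap concrete: however large the formal judgment, the collectible amount is bounded by the defendant's assets.

```python
# why a judgment-proof defendant can't be deterred by damages alone
HARM   = 10_000_000   # hypothetical damages owed, in $
ASSETS = 20_000       # hypothetical collectible wealth of the defendant, in $

recoverable = min(HARM, ASSETS)
print(f"recoverable: ${recoverable:,} ({recoverable / HARM:.2%} of the harm)")
# -> recoverable: $20,000 (0.20% of the harm)
```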
We can try to recover something like the previous solution by mandating such people buy civil or criminal insurance by law, so that they are no longer judgment-proof because the insurance company has big coffers to pay out large settlements if necessary, and also the incentive to turn away people who seem like risky customers. However, law is not magic, and someone who refuses to follow this law would still in the end be judgment-proof.
We can see this in the following example: suppose that the shooter doesn't legally purchase the gun from the shop but steals it instead. Given that the shop will not be held liable for anything, it's only in their interest to invest in security for ordinary business reasons; they have no incentive to take additional precautions beyond what makes sense for e.g. laptop stores.
Because the shooter obtains the gun illegally, they can then go and carry out...

Aug 18, 2024 • 2min
LW - What is "True Love"? by johnswentworth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is "True Love"?, published by johnswentworth on August 18, 2024 on LessWrong.
Meta: I recently entered the dating market, so naturally I have lots of random thoughts on the subject which you all get to suffer through for a while. Your usual diet of dry math and agency theory will resume shortly.
Obviously the phrase "true love" has been so thoroughly overdone in so much fiction as to lose all substantive meaning. That's what happens when we leave important conceptual work to would-be poets. We're here to reclaim the term, because there's a useful concept which is very naturally described by the words "true" and "love".
You know that thing where, when you're smitten by someone, they seem more awesome than they really are? Your brain plays up all the great things about them, and plays down all the bad things, and makes up stories about how great they are in other ways too? And then you get even more smitten by them? All that perceived-wonderfulness makes your attraction a steady state? That's part of normal being-in-love.
… and there's something "false" about it. Like, in some sense, you're in love with an imaginary person, not the real person in front of you. You're in love with this construct in your head whose merits are greater and shortcomings more minor than the real person who triggered the cascade in your heart.
But what if you can see the target of your affection with clear eyes and level head, without the pleasant tint of limerence skewing your perception, and still feel a similar level of love? What if they are actually that good a fit to you, not just in your head but in real life? Well, the obvious name for that would be "true love": love which is not built on a map-territory mismatch, but rather on perception of your loved one as they really are.
And that does actually seem like a pretty good fit for at least some of the poetry on the subject: loving your partner as they truly are, flaws and all, blah blah blah.
Alas, "false" love can still feel like "true" love from the inside as it's happening. To tell it's happening, you'd need to either be really good at keeping a level head, rely on feedback from other people you trust, or just wait until the honeymoon stage passes and find out in hindsight.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 18, 2024 • 4min
LW - I didn't have to avoid you; I was just insecure by Chipmonk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I didn't have to avoid you; I was just insecure, published by Chipmonk on August 18, 2024 on LessWrong.
I don't usually post stories on LessWrong so I'm curious to see how this is received.
The first time we spoke, you asked me some questions that felt really invasive. I didn't want that to happen again, so I avoided you the entire following year.
So when you said "Hi" at a party and suggested catching up, I hesitated. But curiosity won out.
You still asked probing questions like "Why did you quit your job?" and "What did you think of your manager? I hear they don't have a great social reputation."
These weren't questions I wanted to answer. But this time, something was different. Not you - me.
In the past, I would have felt forced to answer your questions. But I'm sure you can remember how I responded when we spoke again: "Mm, I don't want to answer that question", "I don't want to gossip", and even a cheeky, "No comment :)"
It didn't even take effort; that surprised me.
And nothing bad happened! We just spoke about other things.
I realized that I was protecting myself from you with physical distance.
But instead I could protect myself from you with "No."
So simple…
Too simple?
Why didn't I think of that before??
Oh, I know why: When I first met you, I was extremely afraid of expressing disapproval of other people.
I didn't know it consciously. It was quite deeply suppressed. But the pattern fits the data.
It seems that I was so afraid of this, that when you asked me those questions when we met for the first time, the thought didn't even cross my mind that I could decline to answer.
If I declined a question, I unconsciously predicted you might get mad, and that would make me feel terrible about myself.
So that's why I didn't say "No" to your questions when you first met me. And that's why I avoided you so bluntly with physical distance. (Although, I also avoided everyone during that year for similar reasons.)
Why am I telling you all of this? You helped me grow. These days, it takes very little effort - and sometimes none at all - to reject others' requests and generally do what I want. I'm much more emotionally secure now.
Also, I noticed a shift in how I perceived you. Once I realized I didn't have to avoid you, I began noticing qualities I admire. Your passion for your work. Your precise and careful reasoning. I want to learn from these traits. And now that I don't have to avoid you anymore, I can :)
Addendum: Beliefs I have
Emotional security is the absence of insecurities
In my model, emotional security is achieved by the absence of emotional insecurities - i.e., I had unconscious predictions like, "If something bad outside of my control happens, then I'm not going to be able to feel okay." But it seems I unlearned most of mine. I don't encounter situations that make me anxious in that way anymore, and I can't imagine any new ones either. Rejecting others (and being rejected by others, same thing) has ceased to carry much unnecessary emotional weight.
(The one exception I can think of is if I was afraid that someone was going to physically harm me. But that's rare.)
It's about present predictions, not past trauma
One might wonder, "What happened to you? What trauma caused your inability to say 'No'?" But that's all irrelevant. All that matters is that I had that unconscious prediction in that present moment.
Thanks to Stag Lynn, Kaj Sotala, Damon Sasi, Brian Toomey, Epistea Residency, CFAR, Anna Salamon, Alex Zhu, and Nolan Kent for mentorship and financial support.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 18, 2024 • 3min
LW - You're a Space Wizard, Luke by lsusr
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You're a Space Wizard, Luke, published by lsusr on August 18, 2024 on LessWrong.
"My father didn't fight in the wars. He was a navigator on a spice freighter," said Luke.
"That's what your uncle told you. Basic security protocol. You don't tell a young child sensitive information about your participation in an ongoing civil war. Which reminds me. I have here something for you. Your father wanted you to have this, when you were old enough. But your uncle wouldn't allow it. Quite sensibly, in my opinion. He feared what you might do with it," said Obi-Wan.
Obi-Wan dug through a chest and withdrew a textured metal cylinder with an activator button. "What is it?" asked Luke.
"Your father's lightsaber. This is a weapon of a Jedi Knight. Not as nonsensical or random as a blaster," said Obi-Wan.
"Does that imply…?" asked Luke.
"You're a space wizard, Luke."
Luke activated the lightsaber, making sure to keep the emitter pointed away from his face.
"An elegant weapon, from a more civilized age," said Obi-Wan.
"I'm confused," said Luke, "Just how old is this thing? Bringing a melee weapon to a blaster fight sounds like suicide. Can I block blaster bolts with it?"
"Of course not," said Obi-Wan, "I mean, it's theoretically possible. But I advise against it. The reflexes necessary to do so reliably are beyond the limits of human biology."
"Does it have magic powers then?" said Luke, "That's how the story is supposed to go when the wise old mentor gives a rod-shaped weapon to the young hero. I wonder how old that is. Did our simian ancestors tell stories about magic sticks?"
Obi-Wan leaned forward, as if he was about to share the most important secret in the universe.
"When you activate this lightsaber…" Obi-Wan said.
Yes. Luke leaned forward until his nose almost touched Obi-Wan's.
"…everything around you will follow the laws of physics," Obi-Wan finished.
Some narrative instinct deep in his brainstem caused Luke to gasp. Then disappointment washed over his face as his frontal cortex processed the literal meaning of what Obi-Wan was saying.
"But everything ALWAYS follows the laws of physics," objected Luke, "That's not even a law of science. It's a tautology. Physics is DEFINED as the laws by which everything follows."
Obi-Wan smiled. "In our deterministic Universe, a thorough understanding of physics is true power."
"You're wrong, you crazy old man," said Luke, "Science alone is not sufficient to kill Vader and overthrow the Empire. I need a fleet. I need allies. I need industrial capacity."
"You need a heroic narrative," said Obi-Wan.
"Can I at least have a blaster?" asked Luke.
"No," said Obi-Wan.
"Why not?" asked Luke.
"Because when you're ready, you won't need one," said Obi-Wan.
Luke rolled his eyes. "How did my father die?" asked Luke.
"Vader killed him," said Obi-Wan.
"I bet if my father had a ranged weapon instead of this glorified stick he might've killed Vader instead," said Luke.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 17, 2024 • 11min
LW - Release: Optimal Weave (P1): A Prototype Cohabitive Game by mako yass
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Release: Optimal Weave (P1): A Prototype Cohabitive Game, published by mako yass on August 17, 2024 on LessWrong.
I recently argued for Cohabitive Games, games that are designed for practicing negotiation, or for developing intuitions for applied cooperative bargaining, which is one of the names for preference aggregation.
I offered the assets for P1, my prototype, to whoever was interested. About 20 people asked for them. I told most of them that I wanted to polish the assets up a little bit first, and said that it would be done in about a month. But a little bit of polish turned into a lot, and other things got in the way, and a month turned into ten. I'm genuinely a little bit dismayed about how long it took, however:
P1, now Optimal Weave 0.1, is a lot nicer now.
I think it hangs together pretty well.
You can get it here: Optimal Weave 0.1. (I'm not taking any profit, and will only start to if it runs away and starts selling a lot. (release day sale. get in quick.))
There's also a print and play version if you want it sooner, free, and don't mind playing with paper. (Honestly, if you want to help to progress the genre, you've got to learn to enjoy playing on paper so that you can iterate faster.)
There's also a version that only includes the crisp, flippable land tiles, which you can then supplement with the ability, event and desire cards from the printed version and whatever playing pieces you have at home.
(Note: the version you'll receive uses round tiles instead of hex tiles. They don't look as cool, but I anticipate that round tiles will be a little easier to lay out, and easier to flip without nudging their neighbors around (the game involves a lot of tile flipping). Earlier on, I had some hope that I could support both square layout and hexagonal layout, which is easier with round tiles, though everything's balanced around the higher adjacency of the hexagonal layout, so I doubt that'll end up working.)
I'll be contacting everyone who reached out imminently (I have a list).
What Happened
Further learnings occurred. I should report them.
I refined the game a fair bit
There were a lot of abilities I didn't like, and a lot of abilities I did like but hadn't taken the time to draw up, so I went through the hundred or so variations of the initial mechanics, cut the lame ones, added more ideas, then balanced everything so that no land-type would get stuck without any conversion pathways.
I also added an event deck, which serves several functions. It simplifies keeping track of when the game ends, and it removes the up-front brain crunch of dropping players straight into a novel political network of interacting powers by spreading ability reveals out over the first couple of turns.
The event deck also lightly randomizes the exact timing of the end of the game, which introduces a nice bit of tension, hopefully spares people from feeling a need to precisely calculate the timing of their plans, and possibly averts some occurrences of defection by backwards induction. (The contract system is supposed to deal with that kind of thing, but people seem, for reasons that are beyond me, to want to try playing without the contract system, so this might be important here.)
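The backwards-induction point is standard game theory, and a small worked check may help (textbook prisoner's-dilemma payoffs, not values from the game): with a known final round, defection unravels all the way back to the first round, but if each round continues with some probability delta, grim-trigger cooperation is stable whenever delta clears a simple threshold.

```python
# Hypothetical one-shot payoffs: T = temptation to defect, R = mutual
# cooperation, P = mutual defection (standard prisoner's dilemma ordering).
T, R, P = 5, 3, 1

# Cooperating forever is worth R / (1 - delta); defecting once and then
# being punished forever is worth T + delta * P / (1 - delta).
# Cooperation is stable iff R/(1-delta) >= T + delta*P/(1-delta),
# which rearranges to the threshold below.
delta_star = (T - R) / (T - P)
print(f"cooperation is sustainable iff P(another round) >= {delta_star}")  # 0.5
```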
I found a nicer low-volume prototyping service
The Game Crafter. Their prices are better, uploads are easier to automate, and they offer thick card tile pieces, as well as various figures. I think some of those figures are pretty funky little guys. Those are the ones you'll be receiving.
Their print alignment is also pretty bad. They warn you, so I'm not mad or anything, I just find it baffling. It's especially bad with the cardboard tile pieces. Oh well, everything looks good enough, I'm happy overall, although I do notice that no big popular board game that I've ever heard of decided to publish through this se...

Aug 17, 2024 • 6min
LW - Rewilding the Gut VS the Autoimmune Epidemic by GGD
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rewilding the Gut VS the Autoimmune Epidemic, published by GGD on August 17, 2024 on LessWrong.
Did you know that an autoimmune disease now affects one in five Americans? Of these, 80% are women; autoimmune disease is the fifth leading cause of death in women under 65. Most autoimmune diseases are being diagnosed in increasing numbers, but it is still unclear how much of this increase is due to higher awareness, greater medical literacy, or better diagnostic techniques.
The effects of autoimmune diseases can be devastating. As a person's immune system attacks their body instead of microbes or cancerous cells, they can experience chronic fatigue, chronic pain, drug dependency, depression, and social isolation. These symptoms annihilate mental health, wreck promising careers, destroy lives, and, often, financially ruin families. For too many, these illnesses result in early death.
The Missing Friends
The Old Friends Hypothesis suggests that early and regular exposure to harmless microorganisms - "old friends" present throughout human evolution and recognized by the human immune system - trains the immune system to react appropriately to threats.
Here's the problem: some very specific microbes that we've coevolved with - for potentially millions of years (considering monkeys emerged 35 million years ago) - are no longer in our gut. Those are literally gone; they cannot be replaced by eating fiber, kefir, or any probiotics on the market. And this might be the reason the immune system of many people is dysfunctional - it's missing organisms it's been interacting with for a very long time.
We know this precisely because we've been able to compare the microbiome of industrialized populations with modern hunter-gatherers, who possibly have the microbiome closest to early humans. The Hadza, a protected hunter-gatherer Tanzanian indigenous ethnic group, are known for having the highest gut diversity of any population. The Hadza had an average of 730 species of gut microbes per person.
The average Californian gut microbiome contained just 277 species, and the Nepali microbiome fell in between. Ultra-deep sequencing of the Hadza microbiome reveals 91,662 genomes of bacteria, archaea, bacteriophages, and eukaryotes, 44% of which are absent from existing unified datasets. The researchers identified 124 gut-resident species absent or vanishing in industrialized populations.
Comparing these populations shows, at the same time, reduced diversity and an entirely different type of microbial ecology in our gut. Sonnenburg & Sonnenburg list five major reasons that could mediate this change: (1) consumption of highly processed foods; (2) high rates of antibiotic administration; (3) birth via cesarean section and use of baby formula; (4) sanitation of the living environment; and (5) reduced physical contact with animals and soil.
The question that interests me here is, of course: what would happen if we were to reintroduce these "lost" microbes into our bodies? Well, we have no idea! Worse, as of 2024, aside from Symbiome, which focuses on skin, I am not aware of any initiative by a private company or group to develop a treatment based on the ancestral microbiome. Most previous projects seem to have been shelved.
However, private, secret initiatives may be happening at the moment; after all, harvesting the microbiome of hunter-gatherers is not without controversy.
However, given what we've learned about the microbiome in the last decades and its connection to just about every biological process, I do believe rewilding the gut might be the most effective public health initiative of the 21st century:
The cost and suffering caused by autoimmune conditions are massive and will increase in the future. Chronic conditions use most of the resources of public health care systems.
Rewilding the...

Aug 17, 2024 • 7min
LW - Rationalists are missing a core piece for agent-like structure (energy vs information overload) by tailcalled
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalists are missing a core piece for agent-like structure (energy vs information overload), published by tailcalled on August 17, 2024 on LessWrong.
The agent-like structure problem is a question about how agents in the world are structured. I think rationalists generally have an intuition that the answer looks something like the following:
We assume the world follows some evolution law, e.g. maybe deterministically like $x_{n+1} = f(x_n)$, or maybe something stochastic. The intuition being that these are fairly general models of the world, so they should be able to capture whatever there is to capture. $x$ here has some geometric structure, and we want to talk about areas of this geometric structure where there are agents.
An agent is characterized by a Markov blanket in the world that has informational input/output channels for the agent to get information to observe the world and send out information to act on it, intuitively because input/output channels are the most general way to model a relationship between two systems, and to embed one system within another we need a Markov blanket.
The agent uses something resembling a Bayesian model to process the input, intuitively because the simplest explanation that predicts the observed facts is the best one, yielding the minimal map that can answer any query you could have about the world.
And then the agent uses something resembling argmax to make a decision for the output given the input, since endless coherence theorems prove this to be optimal.
Possibly there's something like an internal market that combines several decision-making interests (modelling incomplete preferences) or several world-models (modelling incomplete world-models).
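Putting those intuitions together in runnable form, here is a deliberately minimal toy sketch (every detail invented for illustration, not from the post): a deterministic evolution law $x_{n+1} = f(x_n)$ on a few states, a Bayesian belief updated from noisy observations through an input channel, and an argmax over expected utility for the output.

```python
import numpy as np

rng = np.random.default_rng(0)
STATES, ACTIONS = 4, 2
f = lambda x: (x + 1) % STATES                  # deterministic evolution law

def likelihood(obs, state):
    # noisy input channel: the true state is observed with probability 0.8
    return 0.8 if obs == state else 0.2 / (STATES - 1)

utility = rng.normal(size=(ACTIONS, STATES))    # payoff of each action per state

belief = np.full(STATES, 1 / STATES)            # uniform Bayesian prior
x = 0                                           # true (hidden) world state
for step in range(5):
    obs = x if rng.random() < 0.8 else int(rng.integers(STATES))
    # Bayesian update on the observation
    belief *= np.array([likelihood(obs, s) for s in range(STATES)])
    belief /= belief.sum()
    # argmax expected utility under the current belief
    action = int(np.argmax(utility @ belief))
    print(f"step {step}: obs={obs}, action={action}, belief={belief.round(2)}")
    # the world evolves, and the belief is pushed through the known dynamics
    x = f(x)
    belief = belief[[(s - 1) % STATES for s in range(STATES)]]
```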
There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.). I think rationalists mostly feel comfortable with that because:
$x_{n+1} = f(x_n)$ is flexible enough to accommodate worlds that contain energy (even if it also accommodates other kinds of worlds where "energy" doesn't make sense)
80% of the body's energy goes to muscles, organs, etc., so if you think of the brain as an agent and the body as a mech that gets piloted by the brain (so the Markov blanket for humans would be something like the blood-brain barrier rather than the skin), you can mostly think of energy as something that is going on out in the universe, with little relevance for the agent's decision-making.
I've come to think of this as "the computationalist worldview" because functional input/output relationships are the thing that is described very well with computations, whereas laws like conservation of energy are extremely arbitrary from a computationalist point of view. (This should be obvious if you've ever tried writing a simulation of physics, as naive implementations often lead to energy exploding.)
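To illustrate that parenthetical, here is a minimal sketch (a standard textbook example, not from the post): naive explicit Euler integration of a frictionless harmonic oscillator steadily injects energy, while the equally cheap symplectic Euler variant keeps it bounded.

```python
def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x   # kinetic + potential, should stay 0.5

dt, steps = 0.1, 1000

# naive explicit Euler: energy grows by a factor (1 + dt^2) every step
x, v = 1.0, 0.0
for _ in range(steps):
    x, v = x + dt * v, v - dt * x
print(f"explicit Euler energy:   {energy(x, v):.1f}")   # explodes to ~1e4

# symplectic Euler: update v first, then x with the new v
x, v = 1.0, 0.0
for _ in range(steps):
    v = v - dt * x
    x = x + dt * v
print(f"symplectic Euler energy: {energy(x, v):.3f}")   # stays near 0.5
```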
Radical computationalism is killed by information overload
Under the most radical forms of computationalism, the "ideal" prior is something that can range over all conceivable computations. The traditional answer to this is Solomonoff induction, but it is not computationally tractable because it has to process all observed information in every conceivable way.
Recently with the success of deep learning and the bitter lesson and the Bayesian interpretations of deep double descent and all that, I think computationalists have switched to viewing the ideal prior as something like a huge deep neural network, which learns representations of the world and functional relationships which can be used by some sort of decision-making process.
Briefly, the issue with these sorts of models is that they work by trying to capture all the information that is reasonably non-independent of other information (for instance, the information in a picture that is relevant for predicting ...

Aug 17, 2024 • 9min
LW - Please support this blog (with money) by Elizabeth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please support this blog (with money), published by Elizabeth on August 17, 2024 on LessWrong.
Short version
It has always felt like a gift that people read things I write, and I enjoyed reciprocating that by sharing my writing for free. Until recently I had enough income to pull that off and still cover material needs. That is no longer true, and while it is not an urgent problem, I would like to solve it while that is still true. However it's important to me to keep this blog free. To square this, I'm asking for your support.
If you value this blog and can comfortably do so, please consider sending some of that value back my way via Patreon or Paypal.
Long version
I love my work and this blog. Clients paying me to learn interesting things that they immediately use to help people would be amazing on its own, but then grantmakers pay me to do more speculative EV projects of my choosing. Then I get to publish my work and help more people. I'm incredibly fortunate to do this, and at a wage that supports a good lifestyle in a reasonable number of hours.
Unfortunately, I can't work a reasonable number of hours. It's been years since I could work a full work-week, but until recently I could still work enough. That stopped being true this winter (short version: furniture grew mold leading to a devastating health cascade). I spent months barely able to work while firehosing money at the medical system (ballpark total cost: $100,000, plus an intense desire to own my own home).
I'm doing much better now, although not fully healed, and God willing this was a one-time thing. But 10 years ago I needed a year off work for dental surgery, and I'll consider myself lucky if it takes 10 years before my next "one off" multi-month health problem. My limited hours in good times aren't enough to make up for the total absence of hours in bad times.
Disability insurance refused to sell to me at any price back when I had an excellent work attendance record and very minor medical issues, so I assume they'd use a flamethrower if I tried now. This has me interested in finding ways to decouple my income from my working hours.
The obvious thing to do is monetize this blog. Except I hate gating zero-marginal-cost content, especially health content. It keeps out the people who most need the help. It's also hard to match price to value - any given post could be a waste of your time or a life changing intervention, and you may not know which for years. So I don't want to do any kind of paywall. Instead, I'm asking for voluntary contributions, based on your evaluation of the value my work has provided you.
That way you can match price to value, no one gets locked out of content they need, and I can invest in this blog while staying financially secure.
How to give me money
If you'd like to make a one-time payment, you can paypal me at acesounderglass@gmail.com. This will get me caught up on this winter's lost income and medical expenses. If your generosity exceeds Paypal's petty limitations or you just prefer a different format, email me at that same address and we'll work something out.
If you would like to give ongoing support, you can join my Patreon. This will let me spend more time on my top-choice project and invest more in this blog on an ongoing basis. I've recently added real rewards to my Patreon, including a Discord, live calls for each post, and, for the most discerning of patrons, all the plankton you can eat. For the moment these are experimental, and you can help me out by telling me what you'd like to see.
If you're already a Patreon patron - first of all, thank you. Second, take a look at the new tiers. I'm not allowed to switch you to reward tiers even if you're already giving enough to qualify for them, and the $1 tier has been removed.
If you would like to give one-time support to spe...

Aug 17, 2024 • 11min
LW - Principled Satisficing To Avoid Goodhart by JenniferRM
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Principled Satisficing To Avoid Goodhart, published by JenniferRM on August 17, 2024 on LessWrong.
There's an admirable LW post with (currently) zero net upvotes titled Goodhart's Law and Emotions where a relatively new user re-invents concepts related to super-stimuli. In the comments, noggin-scratcher explains in more detail:
The technical meaning is a stimulus that produces a stronger response than the stimulus for which that response originally evolved.
So for example a candy bar having a carefully engineered combination of sugar, fat, salt, and flavour in proportions that make it more appetising than any naturally occurring food. Or outrage-baiting infotainment "news" capturing attention more effectively than anything that one villager could have said to another about important recent events.
In my opinion, there's a danger that arises when applying the dictum to know thyself, where one can do this so successfully that one begins to perceive the logical structure of the parts of oneself that generate subjectively accessible emotional feedback signals.
In the face of this, you have a sort of choice: (1) optimize these to get more hedons AT ALL as a coherent intrinsic good, or (2) something else which is not that.
In general, for myself, when I was younger and possibly more foolish than I am now, I decided that I was going to be explicitly NOT A HEDONIST.
What I meant by this has changed over time, but I haven't given up on it.
In a single paragraph, I might "shoot from the hip" and say that when you are "not a hedonist (and are satisficing in ways that you hope avoid Goodhart)" it doesn't necessarily mean that you throw away joy; it just means that WHEN you "put on your scientist hat", and try to take baby steps, and incrementally modify your quickly-deployable habits to make them more robust and give you better outcomes, you treat joy as a measurement, rather than a desideratum.
You treat subjective joy like some third-party scientist (who might have a collaborator who filled their spreadsheet with fake data because they want a Nature paper, data whose accuracy they are still defending, at a cocktail party, in an ego-invested way) saying "the thing the joy is about is good for you and you should get more of it".
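The "measurement, not target" distinction can be made concrete with a tiny simulation. A hedged sketch (all distributions invented for illustration): when a signal is a noisy, heavy-tailed proxy for what you care about, hard argmaxing the proxy mostly selects measurement error, while satisficing (accepting anything above a "good enough" bar) keeps much of the real value.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

true_value = rng.normal(0, 1, N)          # what we actually care about
noise = rng.standard_t(df=2, size=N)      # heavy-tailed measurement error
proxy = true_value + noise                # the "joy" signal we can observe

# Optimizer: take the single option with the highest proxy score.
argmax_pick = np.argmax(proxy)

# Satisficer: accept any option clearing a "good enough" threshold.
threshold = np.quantile(proxy, 0.90)
satisficed = true_value[proxy >= threshold]

print(f"true value of proxy-argmax:     {true_value[argmax_pick]:.2f}")
print(f"mean true value of satisficers: {satisficed.mean():.2f}")
# With heavy-tailed noise the argmax pick is usually a noise outlier
# (true value near zero), while the satisficing pool retains real value.
```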
When I first played around with this approach I found that it worked to think of myself as abstractly-wanting to explore "conscious optimization of all the things" via methods that try to only pay attention to the plausible semantic understandings (inside of the feeling generating submodules?) that could plausibly have existed back when the hedonic apparatus inside my head was being constructed.
(Evolution is pretty dumb, so these semantic understandings were likely to be quite coarse. Cultural evolution is also pretty dumb, and often actively inimical to virtue and freedom and happiness and lovingkindness and wisdom, so those semantic understandings also might be worth some amount of mistrust.)
Then, given a model of the modeling process that built a feeling in my head, I wanted to try to figure out what things in the world that modeling process might have been pointing to, and think about the relatively universal instrumental utility concerns that arise proximate to the things that the hedonic subsystem reacts to. Then maybe just... optimize those things in instrumentally reasonable ways?
This would predictably "leave hedons on the table"!
But it would predictably stay aligned with my hedonic subsystems (at least for a while, at least for small amounts of optimization pressure) in cases where maybe I was going totally off the rails because "my theory of what I should optimize for" had deep and profound flaws.
Like suppose I reasoned (and to be clear, this is silly, and the error is there on purpose):
1. Making glucose feel good is simply a w...

Aug 17, 2024 • 1min
LW - How unusual is the fact that there is no AI monopoly? by Viliam
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How unusual is the fact that there is no AI monopoly?, published by Viliam on August 17, 2024 on LessWrong.
I may be completely confused about this, but my model of technological breakthroughs in history was basically this: A few guys independently connect the dots leading to a new invention, for example the telephone, approximately at the same time. One of them runs to the patent office a little faster than the others, and he gets the patent first.
Now he gets to be forever known as the inventor of the telephone, and the rest of them are screwed; if they ever try to sell their own inventions, they will probably get sued to bankruptcy.
Today, we have a few different companies selling AIs (LLMs). What is different this time?
Is my model of the history wrong?
Is the current legal situation with patents different?
Are LLMs somehow fundamentally different from all other previous inventions?
Is it the fact that anyone can immediately publish every tiny partial step in online journals, which makes it virtually impossible for any individual or institution to legally acquire the credit -- and the monopoly -- for the entire invention?
Or something else...?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org