The Nonlinear Library: LessWrong

The Nonlinear Fund
May 30, 2024 • 25min

LW - Awakening by lsusr

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Awakening, published by lsusr on May 30, 2024 on LessWrong. This is the story of my personal experience with Buddhism (so far). First Experiences My first experience with Buddhism was in my high school's World Religions class. For homework, I had to visit a religious institution. I was getting bad grades, so I asked if I could get extra credit for visiting two and my teacher said yes. I picked an Amida Buddhist church and a Tibetan Buddhist meditation center. I took off my shoes at the entrance to the Tibetan Buddhist meditation center. It was like nothing I had ever seen before in real life. There were no chairs. Cushions were on the floor instead. The walls were covered in murals. There were no instructions. People just sat down and meditated. After that there was some walking meditation. I didn't know anything about meditation so I instead listened to the birds and the breeze out of an open window. Little did I know that this is similar to the Daoist practices that would later form the foundation of my practice. The Amida Buddhist church felt like a fantasy novelist from a Protestant Christian background wanted to invent a throwaway religion in the laziest way possible so he just put three giant Buddha statues on the altar and called it a day. The priest told a story about his beautiful stained glass artifact. A young child asked if he could have the pretty thing. The priest, endeavoring to teach non-attachment, said yes. Then the priest asked for it back. The child said no, thereby teaching the priest about non-attachment. Lol. It would be ten years until I returned to Buddhism. Initial Search It is only after you have lost everything that you are free to do anything. Things were bad. I had dumped six years of my life into a failed startup. I had allowed myself to be gaslit (nothing to do with the startup; my co-founders are great people) for even longer than that. I believed (incorrectly) that I had an STD. I had lost most of my friends. I was living in a basement infested with mice. I slept poorly because my mattress was so broken I could feel the individual metal bedframe bars cut into my back. And that's just the stuff I'm comfortable writing about. I was looking for truth and salvation. This is about when I discovered LessWrong. LessWrong addressed the truth problem. I still needed salvation. On top of all this, I had chronic anxiety. I was anxious all the time. I had always been anxious all the time. What was different is that this time I was paying attention. Tim Ferriss recommends the book Don't Feed the Monkey Mind: How to Stop the Cycle of Anxiety, Fear, and Worry by Jennifer Shannon (Licensed Marriage and Family Therapist), so I read it. The book has lots of good advice. At the end, there's a small segment about how meditation might trump everything else in the book put together, but science doesn't really understand it (yet) and its side-effects are unknown [to science]. Eldritch mind-altering practices beyond the domain of science? Sign me up! [Cue ominous music.] I read The Art of Happiness: A Handbook for Living by the Dalai Lama. The Dalai Lama's approach to happiness felt obviously true, yet it was a framework nobody had ever told me about. The basic idea is that if you think and behave lovingly and ethically then you will be happy.
He included instructions for basic metta (compassion) meditation. Here's how it works: 1. You focus on your feelings of compassion for your closest family and pets. 2. Then you focus on your feelings of compassion for your closest friends. 3. Then less-close friends. 4. Then acquaintances. 5. Then enemies. That's the introductory version. At the advanced level, you can skip all these bootstrapping steps and jump straight to activating compassion itself. The first time I tried the Dalai Lama's metta instructions, it felt so...
May 30, 2024 • 6min

LW - US Presidential Election: Tractability, Importance, and Urgency by kuhanj

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Presidential Election: Tractability, Importance, and Urgency, published by kuhanj on May 30, 2024 on LessWrong. Disclaimer: To avoid harmful polarization of important topics, this post is written in a non-partisan manner, and I'd encourage comments to be written with this in mind. US presidential elections are surprisingly tractable 1. US presidential elections are often extremely close. 1. Biden won the last election by 42,918 combined votes in three swing states. Trump won the election before that by 77,744 votes. 537 votes in Florida decided the 2000 election. 2. There's a good chance the 2024 election will be very close too. 1. Trump leads national polling by around 1%, and polls are tighter than they were the last two elections. If polls were perfectly accurate (which of course, they aren't), the tipping point state would be Pennsylvania or Michigan, which are currently at +1-2% for Trump. 3. There is still low-hanging fruit. Estimates of how cost-effectively top RCT-tested interventions can generate net swing-state votes this election range from a few hundred to several thousand dollars per vote. Top non-RCT-able interventions are likely even better. Many potentially useful strategies have not been sufficiently explored. Some examples: 1. mobilizing US citizens abroad (who vote at a ~10x lower rate than citizens in the country), or swing-state university students (perhaps through a walk-out-of-classes-to-the-polls demonstration). 2. There is no easily-searchable resource on how to best contribute to the election. (Look up the best ways to contribute to the election online - the answers are not very helpful.) 3. Anecdotally, people with little political background have been able to generate many ideas that haven't been tried and were received positively by experts. 4. Many top organizations in the space are only a few years old, which suggests they have room to grow and that more opportunities haven't been picked. 5. Incentives push talent away from political work: 1. Jobs in political campaigns are cyclical/temporary, very demanding, poorly compensated, and offer uncertain career capital (i.e. low rewards for working on losing campaigns). 2. How many of your most talented friends work in electoral politics? 6. The election is more tractable than a lot of other work: Feedback loops are more measurable and concrete, and the theory of change is fairly straightforward. Many other efforts that significant resources have gone into have little positive impact to show for them (though of course ex-ante a lot of these efforts seemed very reasonable to prioritize) - e.g. efforts around OpenAI, longtermist branding, certain AI safety research directions, and more. Much more important than other elections This election seems unusually important for several reasons (though people always say this): There's arguably a decent chance that very critical decisions about transformative AI will be made in 2025-2028. The role of governments might be especially important for AI if other prominent (state and lab) actors cannot be trusted. Biden's administration issued a landmark executive order on AI in October 2023. Trump has vowed to repeal it on Day One. Compared to other governments, the US government is unusually influential.
The US government spent over $6 trillion in the 2023 fiscal year, and makes key decisions involving billions of dollars each year for issues like global development, animal welfare, climate change, and international conflicts. Critics argue that Trump and his allies are unique in their response to the 2020 election, plans to fill the government with tens of thousands of vetted loyalists, and in how people who have worked with Trump have described him. On the other side, Biden's critics point to his age (81 years, four years older than Trump), his respo...
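A rough back-of-the-envelope illustration (my own, not from the post) of what the quoted cost-per-net-vote range implies at the scale of the 2020 margin cited above:

margin_votes = 42_918             # Biden's combined 2020 margin across the three tipping-point states
cost_per_net_vote = [300, 3_000]  # "a few hundred to several thousand dollars per vote"

for cost in cost_per_net_vote:
    total = margin_votes * cost
    print(f"At ${cost:,} per net vote, shifting {margin_votes:,} votes costs about ${total / 1e6:.0f} million")

Printed out, that is roughly $13 million at the optimistic end of the range and roughly $129 million at the pessimistic end.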
May 30, 2024 • 4min

LW - Value Claims (In Particular) Are Usually Bullshit by johnswentworth

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Value Claims (In Particular) Are Usually Bullshit, published by johnswentworth on May 30, 2024 on LessWrong. Epistemic status: mental model which I have found picks out bullshit surprisingly well. Idea 1: Parasitic memes tend to be value-claims, as opposed to belief-claims By "parasitic memes" I mean memes whose main function is to copy themselves - as opposed to, say, actually provide value to a human in some way (so that the human then passes it on). Scott's old Toxoplasma of Rage post is a central example; "share to support X" is another. Insofar as a meme is centered on a factual claim, the claim gets entangled with lots of other facts about the world; it's the phenomenon of Entangled Truths, Contagious Lies. So unless the meme tries to knock out a person's entire epistemic foundation, there's a strong feedback signal pushing against it if it makes a false factual claim. (Of course some meme complexes do try to knock out a person's entire epistemic foundation, but those tend to be "big" memes like religions or ideologies, not the bulk of day-to-day memes.) But the Entangled Truths phenomenon is epistemic; it does not apply nearly so strongly to values. If a meme claims that, say, it is especially virtuous to eat yellow cherries from Switzerland... well, that claim is not so easily falsified by a web of connected truths. Furthermore, value claims always come with a natural memetic driver: if X is highly virtuous/valuable/healthy/good/etc, and this fact is not already widely known, then it's highly virtuous and prosocial of me to tell other people how virtuous/valuable/healthy/good X is, and vice-versa if X is highly dangerous/bad/unhealthy/evil/etc. Idea 2: Transposons are ~half of human DNA There are sequences of DNA whose sole function is to copy and reinsert themselves back into the genome. They're called transposons. If you're like me, when you first hear about transposons, you're like "huh that's pretty cool", but you don't expect it to be, like, a particularly common or central phenomenon of biology. Well, it turns out that something like half of the human genome consists of dead transposons. Kinda makes sense, if you think about it. Now suppose we carry that fact over, by analogy, to memes. What does that imply? Put Those Two Together... … and the natural guess is that value claims in particular are mostly parasitic memes. They survive not by promoting our terminal values, but by people thinking it's good and prosocial to tell others about the goodness/badness of X. I personally came to this model from the other direction. I've read a lot of papers on aging. Whenever I mention this fact in a room with more than ~5 people, somebody inevitably asks "so what diet/exercise/supplements/lifestyle changes should I make to stay healthier?". In other words, they're asking for value-claims. And I noticed that the papers, blog posts, commenters, etc, who were most full of shit were ~always exactly the ones which answered that question. To a first approximation, if you want true information about the science of aging, far and away the best thing you can do is specifically look for sources which do not make claims about diet or exercise or supplements or other lifestyle changes being good/bad for you.
Look for papers which just investigate particular gears, like "does FoxO mediate the chronic inflammation of arthritis?" or "what's the distribution of mutations in mitochondria of senescent cells?". … and when I tried to put a name on the cluster of crap claims which weren't investigating gears, I eventually landed on the model above: value claims in general are dominated by memetic parasites. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
May 30, 2024 • 5min

LW - The Pearly Gates by lsusr

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Pearly Gates, published by lsusr on May 30, 2024 on LessWrong. St. Peter stood at a podium before the Gates of Heaven. The gates were gold, built on a foundation of clouds. A line of people curved and wound across the clouds, beyond what would be a horizon if this plane of existence was positively-curved. Instead, they just trailed away into Infinity, away from the golden wall securing Heaven. The worthy would enter eternal paradise. The unforgiven would burn in Hell for just as long. Infinite judgment for finite lives. "Next please," said St. Peter. The foremost man stepped forward. He had freckles and brilliant orange hair. "Tell me about yourself," said St. Peter. "Me name's Seamus O'Malley, sure, and I was - or still am, begorrah - an Irish Catholic," said Seamus. "How did you die?" said St. Peter. "Jaysus, I went and blew meself to bits tryin' to cobble together an auld explosive to give those English occupiers a proper boot, so I did," said Seamus. "You were a good Catholic," said St. Peter, "You're in." Seamus entered the Pearly Gates with his head held high. "Next please," said St. Peter. A Floridian woman stepped forward. "My name is Megan Roberts. I worked as a nurse. I couldn't bear to tell people their family members were going to die. I poisoned them so they would die when a less empathetic nurse was on watch," said the nurse. "That's a grave sin," said St. Peter. "But it's okay because I'm a Christian. Protestant," said Megan. "Did you go to church?" said St. Peter. "Mostly just Christmas and Easter," said Megan, "But moments before I died, I asked Jesus for forgiveness. That means my sins are wiped away, right?" "You're in," said St. Peter. "Next please," said St. Peter. A skinny woman stepped forward. "My name is Amanda Miller. I'm an Atheist. I've never attended church or prayed to God. I was dead certain there was no God until I found myself in the queue on these clouds. Even right now, I'm skeptical this isn't a hallucination," said Amanda. "Were you a good person?" asked St. Peter. "Eh," said Amanda, "I donated a paltry 5% of my income to efficient public health measures, resulting in approximately 1,000 QALYs." "As punishment for your sins, I condemn you to an eternity of Christians telling you 'I told you so'," said St. Peter, "You're in." "Next please," said St. Peter. A bald man with a flat face stepped forward. "My name is Oskar Schindler. I was a Nazi," said Oskar. "Metaphorical Nazi or Neo-Nazi?" asked St. Peter. "I am from Hildesheim, Germany. I was a card-carrying member of the Nazi Party from 1935 until 1945," said Oskar. "Were you complicit in the war or just a passive bystander?" asked St. Peter. "I was a war profiteer. I ran a factory that employed Jewish slave labor to manufacture munitions in Occupied Poland," said Oskar. "Why would you do such a thing?" asked St. Peter. "The Holocaust," said Oskar, "Nobody deserves that. Every Jew I bought was one fewer Jew in the death camps. Overall, I estimate I saved 1,200 Jews from the gas chambers." St. Peter waited, as if to say go on. "I hired as many workers as I could. I made up excuses to hire extra workers. I bent and broke every rule that got in my way. When that didn't work, I bought black market goods to bribe government officials.
I wish I could have done more, but we do what we can with the limited power we have," said Oskar, "Do you understand?" St. Peter glanced furtively at the angels guarding the Gates of Heaven. He leaned forward, stared daggers into Oskar's eyes and whispered, "I think I understand you perfectly." "Next please," said St. Peter. A skinny Indian man stepped forward. "My name is Siddhartha Gautama. I was a prince. I was born into a life of luxury. I abandoned my duties to my kingdom and to my people," said Siddhartha. St. Peter read from his scroll. "It says ...
May 30, 2024 • 22min

LW - Thoughts on SB-1047 by ryan greenblatt

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on SB-1047, published by ryan greenblatt on May 30, 2024 on LessWrong. In this post, I'll discuss my current understanding of SB-1047, what I think should change about the bill, and what I think about the bill overall (with and without my suggested changes). Overall, SB-1047 seems pretty good and reasonable. However, I think my suggested changes could substantially improve the bill and there are some key unknowns about how implementation of the bill will go in practice. The opinions expressed in this post are my own and do not express the views or opinions of my employer. [This post is the product of about 4 hours of work of reading the bill, writing this post, and editing it. So, I might be missing some stuff.] [Thanks to various people for commenting.] My current understanding (My understanding is based on a combination of reading the bill, reading various summaries of the bill, and getting pushback from commenters.) The bill places requirements on "covered models" while not putting requirements on other (noncovered) models and allowing for limited duty exemptions even if the model is covered. The intention of the bill is to just place requirements on models which have the potential to cause massive harm (in the absence of sufficient safeguards). However, for various reasons, targeting this precisely to just put requirements on models which could cause massive harm is non-trivial. (The bill refers to "models which could cause massive harm" as "models with a hazardous capability".) In my opinion, I think the bar for causing massive harm defined by the bill is somewhat too low, though it doesn't seem like a terrible choice to me. I'll discuss this more later. The bill uses two mechanisms to try and improve targeting: 1. Flop threshold: If a model is trained with <10^26 flop and it is not expected to match >10^26 flop performance as of models in 2024, it is not covered. (>10^26 flop performance as of 2024 is intended to allow the bill to handle algorithmic improvements.) 2. Limited duty exemption: A developer can claim a limited duty exemption if they determine that a model does not have the capability to cause massive harm. If the developer does this, they must submit paperwork to the Frontier Model Division (a division created by the bill) explaining their reasoning. From my understanding, if either the model isn't covered (1) or you claim a limited duty exemption (2), the bill doesn't impose any requirements or obligations. I think limited duty exemptions are likely to be doing a lot of work here: it seems likely to me that the next generation of models immediately above this FLOP threshold (e.g. GPT-5) won't actually have hazardous capabilities, so the bill ideally shouldn't cover them. The hope with the limited duty exemption is to avoid covering these models. So you shouldn't think of limited duty exemptions as some sort of unimportant edge case: models with limited duty exemptions likely won't be that "limited" in how often they occur in practice! In this section, I'm focusing on my read on what seems to be the intended enforcement of the bill. It's of course possible that the actual enforcement will differ substantially! The core dynamics of the bill are best exhibited with a flowchart. (Note: I edited the flowchart to separate the noncovered node from the exemption node.)
Here's this explained in more detail: 1. So you want to train a non-derivative model and you haven't yet started training. The bill imposes various requirements on the training of covered models that don't have limited duty exemptions, so we need to determine whether this model will be covered. 2. Is it >10^26 flop or could you reasonably expect it to match >10^26 flop performance (as of models in 2024)? If so, it's covered. 3. If it's covered, you might be able to claim a limited ...
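As a reading aid, here is a minimal Python sketch of the coverage logic described above, under the post's reading of the bill. The function names and boolean inputs are hypothetical simplifications introduced for illustration; the bill text itself is the authority.

FLOP_THRESHOLD = 1e26

def is_covered(training_flop: float, matches_2024_frontier_performance: bool) -> bool:
    # Covered if trained with more than 10^26 flop, or reasonably expected to
    # match the performance of >10^26-flop models as of 2024.
    return training_flop > FLOP_THRESHOLD or matches_2024_frontier_performance

def requirements_apply(training_flop: float,
                       matches_2024_frontier_performance: bool,
                       claims_limited_duty_exemption: bool) -> bool:
    # Requirements fall only on covered models without a limited duty exemption.
    # Claiming the exemption means determining the model lacks hazardous
    # capabilities and filing paperwork with the Frontier Model Division.
    if not is_covered(training_flop, matches_2024_frontier_performance):
        return False  # noncovered: the bill imposes no obligations
    if claims_limited_duty_exemption:
        return False  # covered but exempt: paperwork filed, no further requirements
    return True       # covered and non-exempt: training requirements apply

For example, requirements_apply(5e25, False, False) returns False, matching the noncovered branch of the flowchart, while a covered model with no exemption claimed returns True.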
May 29, 2024 • 15min

LW - Finding Backward Chaining Circuits in Transformers Trained on Tree Search by abhayesian

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Finding Backward Chaining Circuits in Transformers Trained on Tree Search, published by abhayesian on May 29, 2024 on LessWrong. This post is a summary of our paper A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task (ACL 2024). While we wrote and released the paper a couple of months ago, we have done a bad job promoting it so far. As a result, we're writing up a summary of our results here to reinvigorate interest in our work and hopefully find some collaborators for follow-up projects. If you're interested in the results we describe in this post, please see the paper for more details. TL;DR - We train transformer models to find the path from the root of a tree to a given leaf (given an edge list of the tree). We use standard techniques from mechanistic interpretability to figure out how our model performs this task. We found circuits that involve backward chaining - the first layer attends to the goal and each successive layer attends to the parent of the output of the previous layer, thus allowing the model to climb up the tree one node at a time. However, this algorithm would only find the correct path in graphs where the distance from the starting node to the goal is less than or equal to the number of layers in the model. To solve harder problem instances, the model performs a similar backward chaining procedure at insignificant tokens (which we call register tokens). Random nodes are chosen to serve as subgoals and the model backward chains from all of them in parallel. In the final layers of the model, information from the register tokens is merged into the model's main backward chaining procedure, allowing it to deduce the correct path to the goal when the distance is greater than the number of layers. In summary, we find a parallelized backward chaining algorithm in our models that allows them to efficiently navigate towards goals in a tree graph. Motivation & The Task Many people here have conjectured about what kinds of mechanisms inside future superhuman systems might allow them to perform a wide range of tasks efficiently. John Wentworth coined the term general-purpose search to group several hypothesized mechanisms that share a couple of core properties. Others have proposed projects around how to search for search inside neural networks. While general-purpose search is still relatively vague and undefined, we can study how language models perform simpler and better-understood versions of search. Graph search, the task of finding the shortest path between two nodes, has been the cornerstone of algorithmic research for decades, is among the first topics covered by virtually every CS course (BFS/DFS/Dijkstra), and serves as the basis for planning algorithms in GOFAI systems. Our project revolves around understanding how transformer language models perform graph search at a mechanistic level. While we initially tried to understand how models find paths over any directed graph, we eventually restricted our focus specifically to trees. We trained a small GPT2-style transformer model (6 layers, 1 attention head per layer) to perform this task. The two figures below describe how we generate our dataset, and tokenize the examples. It is important to note that this task cannot be solved trivially.
To correctly predict the next node in the path, the model must know the entire path ahead of time. The model must figure out the entire path in a single forward pass. This is not the case for a bunch of other tasks proposed in the literature on evaluating the reasoning capabilities of language models (see Saparov & He (2023) for instance). As a result of this difficulty, we can expect to find much more interesting mechanisms in our models. We train our model on a dataset of 150,000 randomly generated trees. The model achieves an ac...
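To make the setup concrete, here is a minimal sketch (my own reconstruction, not the authors' released code; the serialization format is invented for illustration and does not match the paper's tokenizer) of generating one training example: a random tree serialized as an edge list, a goal leaf, and the root-to-goal path the model has to produce.

import random

def random_tree(n_nodes: int) -> dict[int, int]:
    # Child -> parent map for a random tree rooted at node 0.
    return {child: random.randrange(child) for child in range(1, n_nodes)}

def root_to_goal_path(parent: dict[int, int], goal: int) -> list[int]:
    # Backward-chain from the goal up to the root, then reverse.
    path = [goal]
    while path[-1] != 0:
        path.append(parent[path[-1]])
    return path[::-1]

def make_example(n_nodes: int = 16) -> str:
    parent = random_tree(n_nodes)
    leaves = sorted(set(parent) - set(parent.values()))  # nodes with no children
    goal = random.choice(leaves)
    edges = " ".join(f"{p}>{c}" for c, p in parent.items())
    path = " ".join(map(str, root_to_goal_path(parent, goal)))
    # Edge list and goal form the prompt; the path after ":" is the target.
    return f"{edges} | goal={goal} : {path}"

print(make_example())

Note that the helper itself finds the path by walking parent pointers upward from the goal - the same backward-chaining direction as the circuits described above, except that the trained model must do it with a fixed number of layers in a single forward pass.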
May 29, 2024 • 12min

LW - MIRI 2024 Communications Strategy by Gretta Duleba

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI 2024 Communications Strategy, published by Gretta Duleba on May 29, 2024 on LessWrong. As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy. The Objective: Shut it Down[1] Our objective is to convince major powers to shut down the development of frontier AI systems worldwide before it is too late. We believe that nothing less than this will prevent future misaligned smarter-than-human AI systems from destroying humanity. Persuading governments worldwide to take sufficiently drastic action will not be easy, but we believe this is the most viable path. Policymakers deal mostly in compromise: they form coalitions by giving a little here to gain a little somewhere else. We are concerned that most legislation intended to keep humanity alive will go through the usual political processes and be ground down into ineffective compromises. The only way we think we will get strong enough legislation is if policymakers actually get it, if they actually come to understand that building misaligned smarter-than-human systems will kill everyone, including their children. They will pass strong enough laws and enforce them if and only if they come to understand this central truth. Meanwhile, the clock is ticking. AI labs continue to invest in developing and training more powerful systems. We do not seem to be close to getting the sweeping legislation we need. So while we lay the groundwork for helping humanity to wake up, we also have a less dramatic request. We ask that governments and AI labs install the "off-switch"[2] so that if, on some future day, they decide to shut it all down, they will be able to do so. We want humanity to wake up and take AI x-risk seriously. We do not want to shift the Overton window, we want to shatter it. Theory of Change Now I'll get into the details of how we'll go about achieving our objective, and why we believe this is the way to do it. The facets I'll consider are: Audience: To whom are we speaking? Message and tone: How do we sound when we speak? Channels: How do we reach our audience? Artifacts: What, concretely, are we planning to produce? Audience The main audience we want to reach is policymakers - the people in a position to enact the sweeping regulation and policy we want - and their staff. However, narrowly targeting policymakers is expensive and probably insufficient. Some of them lack the background to be able to verify or even reason deeply about our claims. We must also reach at least some of the people policymakers turn to for advice. We are hopeful about reaching a subset of policy advisors who have the skill of thinking clearly and carefully about risk, particularly those with experience in national security. While we would love to reach the broader class of bureaucratically-legible "AI experts," we don't expect to convince a supermajority of that class, nor do we think this is a requirement. We also need to reach the general public. Policymakers, especially elected ones, want to please their constituents, and the more the general public calls for regulation, the more likely that regulation becomes.
Even if the specific measures we want are not universally popular, we think it helps a lot to have them in play, in the Overton window. Most of the content we produce for these three audiences will be fairly basic, 101-level material. However, we don't want to abandon our efforts to reach deeply technical people as well. They are our biggest advocates, most deeply persuaded, most likely to convince others, and least likely to be swayed by charismatic campaigns in the opposite direction. And more importantly, discussions with very tech...
May 29, 2024 • 19min

LW - Response to nostalgebraist: proudly waving my moral-antirealist battle flag by Steven Byrnes

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Response to nostalgebraist: proudly waving my moral-antirealist battle flag, published by Steven Byrnes on May 29, 2024 on LessWrong. @nostalgebraist has recently posted yet another thought-provoking post, this one on how we should feel about AI ruling a long-term posthuman future. [Previous discussion of this same post on lesswrong.] His post touches on some of the themes of Joe Carlsmith's "Otherness and Control in the Age of AI" series - a series which I enthusiastically recommend - but nostalgebraist takes those ideas much further, in a way that makes me want to push back. Nostalgebraist's post is casual, trying to reify and respond to a "doomer" vibe, rather than responding to specific arguments by specific people. Now, I happen to self-identify as a "doomer" sometimes. (Is calling myself a "doomer" bad epistemics and bad PR? Eh, I guess. But also: it sounds cool.) But I too have plenty of disagreements with others in the "doomer" camp (cf: "Rationalist (n.) Someone who disagrees with Eliezer Yudkowsky".). Maybe nostalgebraist and I have common ground? I dunno. Be that as it may, here are some responses to certain points he brings up. 1. The "notkilleveryoneism" pitch is not about longtermism, and that's fine Nostalgebraist is mostly focusing on longtermist considerations, and I'll mostly do that too here. But on our way there, in the lead-in, nostalgebraist does pause to make a point about the term "notkilleveryoneism": They call their position "notkilleveryoneism," to distinguish that position from other worries about AI which don't touch on the we're-all-gonna-die thing. And who on earth would want to be a not-notkilleveryoneist? But they do not mean, by these regular-Joe words, the things that a regular Joe would mean by them. We are, in fact, all going to die. Probably, eventually. AI or no AI. In a hundred years, if not fifty. By old age, if nothing else. You know what I mean.… OK, my understanding was: (1) we doomers are unhappy about the possibility of AI killing all humans because we're concerned that the resulting long-term AI future would be a future we don't want; and (2) we doomers are also unhappy about the possibility of AI killing all humans because we are human and we don't want to get murdered by AIs. And also, some of us have children with dreams of growing up and having kids of their own and being a famous inventor or oh wait actually I'd rather work for Nintendo on their Zelda team or hmm wait does Nintendo hire famous inventors? …And all these lovely aspirations again would require not getting murdered by AIs. If we think of the "notkilleveryoneism" term as part of a communication and outreach strategy, then it's a strategy that appeals to Average Joe's desire to not be murdered by AIs, and not to Average Joe's desires about the long-term future. And that's fine! Average Joe has every right to not be murdered, and honestly it's a safe bet that Average Joe doesn't have carefully-considered coherent opinions about the long-term future anyway. Sometimes there's more than one reason to want a problem to be solved, and you can lead with the more intuitive one. I don't think anyone is being disingenuous here (although see comment).
1.1 …But now let's get back to the longtermist stuff Anyway, that was kinda a digression from the longtermist stuff which forms the main subject of nostalgebraist's post. Suppose AI takes over, wipes out humanity, and colonizes the galaxy in a posthuman future. He and I agree that it's at least conceivable that this long-term posthuman future would be a bad future, e.g. if the AI was a paperclip maximizer. And he and I agree that it's also possible that it would be a good future, e.g. if there is a future full of life and love and beauty and adventure throughout the cosmos. Which will it be? Let's dive into that discus...
May 28, 2024 • 5min

LW - Being against involuntary death and being open to change are compatible by Andy McKenzie

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Being against involuntary death and being open to change are compatible, published by Andy McKenzie on May 28, 2024 on LessWrong. In a new post, Nostalgebraist argues that "AI doomerism has its roots in anti-deathist transhumanism", representing a break from the normal human expectation of mortality and generational change. They argue that traditionally, each generation has accepted that they will die but that the human race as a whole will continue evolving in ways they cannot fully imagine or control. Nostalgebraist argues that the "anti-deathist" view, however, anticipates a future where "we are all gonna die" is no longer true -- a future where the current generation doesn't have to die or cede control of the future to their descendants. Nostalgebraist sees this desire to "strangle posterity" and "freeze time in place" by making one's own generation immortal as contrary to human values, which have always involved an ongoing process of change and progress from generation to generation. This argument reminds me of Elon Musk's common refrain on the topic: "The problem is when people get old, they don't change their minds, they just die. So, if you want to have progress in society, you got to make sure that, you know, people need to die, because they get old, they don't change their mind." Musk's argument is certainly different and I don't want to equate the two. I'm just bringing this up because I wouldn't bother responding to Nostalgebraist unless this was a common type of argument. In this post, I'm going to dig into Nostalgebraist's anti-anti-deathism argument a little bit more. I believe it is simply empirically mistaken. Key inaccuracies include: 1: The idea that people in past "generations" universally expected to die is wrong. Nope. Belief in life after death or even physical immortality has been common across many cultures and time periods. Quantitatively, large percentages of the world today believe in life after death. In many regions, this belief was also much more common in the past, when religiosity was higher. Ancient Egypt, historical Christendom, etc. 2: The notion that future humans would be so radically different from us that replacing humans with any form of AIs would be equivalent is ridiculous. This is just not close to my experience when I read historical texts. Many authors seem to have extremely relatable views and perspectives. To take the topical example of anti-deathism, among secular authors, read, for example, Francis Bacon, Benjamin Franklin, or John Hunter. I am very skeptical that everyone from the past would feel so inalienably out of place in our society today, once they had time (and they would have plenty of time) to get acquainted with new norms and technologies. We still have basically the same DNA, gametes, and in utero environments. 3: It is not the case that death is required for cultural evolution. People change their minds all the time. Cultural evolution happens all the time within people's lifespans. Cf: views on gay marriage, the civil rights movement, environmentalism, climate change mitigation, etc. This is especially the case because in the future we will likely develop treatments for the decline in neuroplasticity that can (but does not necessarily always) occur in a subset of older people.
Adjusting for (a) the statistical decline of neuroplasticity in aging and (b) contingent aspects of the structure of our societies (which are very much up for change, e.g. the traditional education/career timeline), one might even call death and cultural evolution "orthogonal". 4: No, our children are not AIs. Our children are human beings. Every generation dies, and bequeaths the world to posterity. To its children, biological or otherwise. To its students, its protégés. ... In which one will never have to make peace with the tho...
May 28, 2024 • 4min

LW - Hardshipification by Jonathan Moregård

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hardshipification, published by Jonathan Moregård on May 28, 2024 on LessWrong. When I got cancer, all of my acquaintances turned into automatons. Everyone I had zero-to-low degrees of social contact with started reaching out, saying the exact same thing: "If you need to talk to someone, I'm here for you". No matter how tenuous the connection, people pledged their emotional support - including my father's wife's mother, who I met a few hours every other Christmas. It was only a bit of testicle cancer - what's the big deal? No Swedish person had died from it for 20 years, and the risk of metastasis was below 1%. I settled in for a few months of suck - surgical ball removal and chemotherapy. My friends, who knew me well, opted to support me with dark humour. When I told my satanist roommate that I had a ball tumour, he offered to "pop" it for me - it works for pimples, right? To me, this response was pure gold, much better than being met with shallow displays of performative pity. None of the acquaintances asked me what I wanted. They didn't ask me how I felt. They all settled for a socially appropriate script, chasing me like a horde of vaguely condescending zombies. A Difference in Value Judgements Here's my best guess at the origins of their pity: 1. A person hears that I have a case of the ball cancer 2. This makes the person concerned - cancer is Very Bad, and if you have it you are a victim - sorry, a future survivor. 3. The person feels a social obligation to be there for me "in my moment of weakness", and offer support in a way that is supposed to be as non-intrusive as possible. Being a Stoic, I rejected the assumption in step #2 as an invalid value judgement. The tumor in my ball didn't mean I was in hardship. The itch after chemotherapy sucked ball(s), and my nausea made it impossible to enjoy the mountains of chocolate people gifted. These hardships were mild, in the grander scheme of things. I consciously didn't turn them into a Traumatic Event, something Very Bad, or any such nonsense. I had fun by ridiculing the entire situation, waiting it out while asking the doctors questions like: Can identical twin brothers transmit testicle cancer through sodomy? Can I keep my surgically removed ball? (For storing in a jar of formaldehyde) Does hair loss from chemotherapy proceed in the same stages as male pattern baldness? Hardshipification I was greatly annoyed at the people who made a Big Deal out of the situation, "inventing" a hardship out of a situation that merely sucked. Other people's pity didn't in any way reflect on my personal experience. I didn't play along and ended up saying things like: "Thanks, but I have friends I can talk to if I need it". Nowadays, I might have handled it more gracefully - but part of me is glad I didn't. It's not up to the person with cancer to handle other people's reactions. I find pity and "hardshipification" detestable - adding culturally anchored value judgements to a situation that's already tricky to navigate. This extends beyond cancer, applying to things like rape, racism, death of loved ones, breakups and similar. It's impossible to know how someone reacts to things like this. Some of them might have culturally appropriate reaction patterns, while others might feel very different things. Some people don't feel sad over their recently dead grandma.
Maybe grandma was a bitch - you never know. Assuming that they feel sad puts a burden on them - an expectation that they must relate to. They might judge themselves for not feeling sad, dealing with cognitive dissonance while tidying up grandma's affairs. I have a friend who got raped, was annoyed and did some breathing exercises to calm down. Convincing her that it was a Big Deal isn't necessarily a good idea - sometimes people face culturally loaded events without being damaged. A ...
