The Nonlinear Library

The Nonlinear Fund
Apr 16, 2024 • 27min

LW - My experience using financial commitments to overcome akrasia by William Howard

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My experience using financial commitments to overcome akrasia, published by William Howard on April 16, 2024 on LessWrong.

About a year ago I decided to try using one of those apps where you tie your goals to some kind of financial penalty. The specific one I tried is Forfeit, which I liked the look of because it's relatively simple: you set single tasks, which you have to verify you have completed with a photo. I'm generally pretty sceptical of productivity systems, tools for thought, mindset shifts, life hacks and so on. But this one I have found to be shockingly effective; it has been about the biggest positive change to my life that I can remember. I feel like the category of things which benefit from careful planning and execution over time has completely opened up to me, whereas previously success at things like this was largely down to the luck of being in the right mood for long enough. It's too soon to tell whether the effect will fade out eventually, but I have been doing this for ~10 months now[1], so I think I'm past the stage of being excited by a new system and can in good conscience recommend this kind of commitment mechanism as a way of overcoming akrasia.

The rest of this post consists of some thoughts on what I think makes a good akrasia-overcoming approach in general, having now found one that works (see hindsight bias), and then advice on how to use this specific app effectively. This is aimed as a personal reflections post rather than a fact post.

Thoughts on what makes a good anti-akrasia approach

I don't want to lean too much on first-principles arguments for what should work and what shouldn't, because I was myself surprised by how well setting medium-sized financial penalties worked for me.
I think it's worth explaining some of my thinking, though, because the advice in the next section probably won't work as well for you if you think very differently.

1. Behaviour change ("habit formation") depends on punishment and reward, in addition to repetition

A lot of advice about forming habits focuses on the repetition aspect, but I think positive and negative feedback are much more important. One way to see this is to think of all the various admin things that you put off or have to really remind yourself to do, like taking the bins out. You have probably done these hundreds or thousands of times in your life, many more times than any advice would recommend for forming a habit. But they are boring or unpleasant every time, so you have to layer other stuff (like reminders) on top to make yourself actually do them. Equally, you can take heroin once or twice, and after that you won't need any reminder to take it.

I tend to think a fairly naively applied version of the ideas from operant conditioning is correct when it comes to changing behaviour. When a certain behaviour has a good outcome, relative to what the outcome otherwise would have been, you will want to do it more. When it has a bad outcome, you will want to do it less. This is a fairly lawyerly way of saying it, to include, for example, doing something quite aversive to avoid something very aversive, or doing something that feels bad but has some positive identity-affirming connotation for you (like working out). Often, though, it just boils down to whether you feel good or bad while doing it. The way repetition fits into this is that more examples of positive (negative) outcomes are more evidence that something is good (bad), and so repetition reinforces (or anti-reinforces) the behaviour more strongly but doesn't change the sign.
A forwards-looking consequence of this framing is that by repeating an action that feels bad you are actually anti-reinforcing it, incurring a debt that will make it more and more aversive until you stop doing it. A backwards-looking consequence is that if the prospect of doing...
Apr 16, 2024 • 7min

EA - Conferences are great for scientific entrepreneurs by JP Addison

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conferences are great for scientific entrepreneurs, published by JP Addison on April 16, 2024 on The Effective Altruism Forum.

By Jacob Trefethen

Scientific conferences are great even if you're an outsider to the field. That's common advice for students, and there are useful guides written on how to get the most out of conferences you're new to. But I suspect people at all career stages could benefit from hearing the advice again - people with adjacent research experience in industry or academia, or who have started organisations before, or who have an inkling something exciting lurks around the corner. In other words, I wish people reminded me to go to conferences as an outsider more often. I seem to re-learn their value with wonder at every one I attend. So here is a post for you, future Jacob.

Often scientific fields host annual conferences that have been running for a decade, or many decades. People grumble about the schedule and the snacks. Jetlag is a daze. The hotels nearby are expensive and the bedroom ceilings are low. Whatever you do, you are always missing something - talks, meeting people you could have emailed ahead of time, bottomless mingling. If you haven't attended previous years of the conference, you feel like a foreigner in a land where everyone else is old friends. For first-timers and old-timers alike, the days are exhausting. That's all a sign it's working! (Apart from the bedroom ceilings; those are just bad.)

Conferences are dense informational and social experiences. One half-hour presentation may contain data from two years of experiments, or from a clinical trial involving three thousand participants. The next presentation may be so cool you decide to change what you're working on.
You may meet someone you go on to write a paper with, or who wants to hire you in three years, or who you end up collaborating with for decades. (Collaboration can take many forms, and I should disclose my existential bias here. My mother and father met at a mathematics conference in Texas.)

To the best of my understanding, I only have one life to live, but if you gave me more I'd spend some of them looping through these nine steps:

1. Scrounge together a plane ticket and discount entry to a conference on a scientific topic I'm interested in. Bonus points if it's in a city I've never been to. (Check with the conference organisers whether they have travel grants available.)

2. Attend a day of presentations and write down at least two questions for the presenters whose talks I found most interesting.

3. Wander through the poster sessions and try to come up with one question for anyone whose poster title looks interesting.

4. Work up the nerve to approach the presenters. (One and a half beers is often my trick, but you may have your own.)

5. Tell them I liked their talk or poster, and ask them the first question. See where it goes.

6. Collapse in bed and pat myself on the back.

7. The week after, sit down for a few hours with some of the papers of the people I talked to who I have the best feeling about. Chat to GPT-4 or Claude 3 about the ideas in the papers as I go, and ask for explanations of the terminology I don't understand. Jot down some ideas for alternate interpretations of the data, or objections to the argument, or ways to take the ideas in the paper further - what experiments would I run next?

8. Follow up by email with whoever's work I find myself thinking about the most, and ask if I could visit their lab for a day or three some time.

9. If they say yes, make myself unobtrusive and perhaps even useful during the day, and chat over lunch. Ask if there's anything they wish they could take into application that isn't a great fit for academic research.
Meet the postdocs and grad students in the lab, and chat as much as they're in the mood for. Ask what they're working on. Ask wh...
Apr 16, 2024 • 1h 53min

LW - Monthly Roundup #17: April 2024 by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #17: April 2024, published by Zvi on April 16, 2024 on LessWrong.

As always, a lot to get to. This is everything that wasn't in any of the other categories.

Bad News

You might have to find a way to actually enjoy the work.

Greg Brockman (President of OpenAI): Sustained great work often demands enjoying the process for its own sake rather than only feeling joy in the end result. Time is mostly spent between results, and hard to keep pushing yourself to get to the next level if you're not having fun while doing so.

Yeah. This matches my experience in all senses. If you don't find a way to enjoy the work, your work is not going to be great.

This is the time. This is the place.

Guiness Pig: In a discussion at work today: "If you email someone to ask for something and they send you an email trail showing you that they've already sent it multiple times, that's a form of shaming, don't do that." Others nodding in agreement while I try and keep my mouth shut. JFC…

Goddess of Inflammable Things: I had someone go over my head to complain that I was taking too long to do something. I showed my boss the email where they had sent me the info I needed THAT morning along with the repeated requests for over a month. I got accused by the accuser of "throwing them under the bus".

You know what these people need more of in their lives?

Jon Stewart was told by Apple, back when he had a show on AppleTV+, that he was not allowed to interview FTC Chair Lina Khan.
This is a Twitter argument over whether a recent lawsuit is claiming Juul intentionally evaded age restrictions to buy millions in advertising on websites like Nickelodeon and Cartoon Network and 'games2girls.com' that are designed for young children, or whether they bought those ads as the result of 'programmatic media buyers' like AdSense 'at market price,' which would… somehow make this acceptable? What? The full legal complaint is here. I find it implausible that this activity was accidental, and Claude agreed when given the text of the lawsuit.

I strongly agree with Andrew Sullivan: in most situations, playing music in public that others can hear is really bad, and we should fine people who do it until they stop. They make very good headphones; if you want to listen to music then buy them. I am willing to make exceptions for groups of people listening together, but on your own? Seriously, what the hell.

Democrats somewhat souring on all of electric cars, perhaps to spite Elon Musk? The amount of own-goaling by Democrats around Elon Musk is pretty incredible.

New York Post tries to make 'resenteeism' happen, as a new name for people who hate their job staying to collect a paycheck because they can't find a better option, but doing a crappy job. It's not going to happen.

Alice Evans points out that academics think little of sending out, in the latest case, thousands of randomly generated fictitious resumes, wasting quite a lot of people's time and introducing a bunch of noise into application processes. I would kind of be fine with that if IRBs let you run ordinary, obviously responsible experiments in other ways as well, as opposed to that being completely insane in the other direction. If we have profound ethical concerns about handing volunteers a survey, then this is very clearly way worse.

Germany still will not let stores be open on Sunday to enforce rest.
Which got even more absurd now that there are fully automated supermarkets, which are also forced to close. I do think this is right. Remember that on the Sabbath, one not only cannot work. One cannot spend money. Having no place to buy food is a feature, not a bug, forcing everyone to plan ahead, this is not merely about guarding against unfair advantage. Either go big, or leave home. I also notice how forcing everyone to close on Sunday is rather unfriendl...
Apr 16, 2024 • 2min

LW - Anthropic AI made the right call by bhauth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic AI made the right call, published by bhauth on April 16, 2024 on LessWrong. I've seen a number of people criticize Anthropic for releasing Claude 3 Opus, with arguments along the lines of: Anthropic said they weren't going to push the frontier, but this release is clearly better than GPT-4 in some ways! They're betraying their mission statement! I think that criticism takes too narrow a view. Consider the position of investors in AI startups. If OpenAI has a monopoly on the clearly-best version of a world-changing technology, that gives them a lot of pricing power on a large market. However, if there are several groups with comparable products, investors don't know who the winner will be, and investment gets split between them. Not only that, but if they stay peers, then there will be more competition in the future, meaning less pricing power and less profitability. The comparison isn't just "GPT-4 exists" vs "GPT-4 and Claude Opus exist" - it's more like "investors give X billion dollars to OpenAI" vs "investors give X/3 billion dollars to OpenAI and Anthropic". Now, you could argue that "more peer-level companies makes an agreement to stop development less likely" - but that wasn't happening anyway, so any pauses would be driven by government action. If Anthropic was based in a country that previously had no notable AI companies, maybe that would be a reasonable argument, but it's not. If you're concerned about social problems from widespread deployment of LLMs, maybe you should be unhappy about more good LLMs and more competition. But if you're concerned about ASI, especially if you're only concerned about future developments and not LLM hacks like BabyAGI, I think you should be happy about Anthropic releasing Claude 3 Opus. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 15, 2024 • 1min

EA - Help GiveWell test a new research work trial by Alex Cohen

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help GiveWell test a new research work trial, published by Alex Cohen on April 15, 2024 on The Effective Altruism Forum. Hi, I'm Alex Cohen, Principal Researcher at GiveWell. We're exploring a potential change to a late-stage work trial in our research hiring process, and we'd like some help testing it! Details: The work trial is 5 hours (we don't want you to spend more time than that), and we'll give you an honorarium of $461. We'd like to receive your work trial within two weeks of the time that we send it to you. We're most interested in volunteers who have a quantitatively oriented advanced degree or substantial experience using empirical tools to make decisions in the real world. If you're willing to help us out, you can express your interest here. We'll email a small group of volunteers as soon as possible, ideally by the end of the week. Thank you! Note: Please do not offer to test out the work trial if you plan to apply to GiveWell's research team in the near future. We're happy to take volunteers that have previously applied to GiveWell. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 15, 2024 • 5min

LW - A High Decoupling Failure by Maxwell Tabarrok

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A High Decoupling Failure, published by Maxwell Tabarrok on April 15, 2024 on LessWrong.

High-decoupling vs low-decoupling, or decoupling vs contextualizing, refers to two different cultural norms, cognitive skills, or personal dispositions that change the way people approach ideas. High-decouplers isolate ideas from each other and from the surrounding context. This is a necessary practice in science, which works by isolating variables, teasing out causality, and formalizing claims into carefully delineated hypotheses. Low-decouplers, or contextualizers, do not separate ideas from their connotations. They treat an idea or claim as inseparable from the narratives that the idea might support, the types of people who usually make similar claims, and the history of the idea and the people who support it.

Decoupling is uncorrelated with the left-right political divide. Electoral politics is the ultimate low-decoupler arena. All messages are narratives, associations, and vibes, with little care paid to arguments or evidence. High decouplers are usually in the "gray tribe", since they adopt policy ideas based on metrics that are essentially unrelated to what the major parties are optimizing for. My community prizes high decoupling, and for good reason. It is extremely important for science, mathematics, and causal inference, but it is not an infallible strategy.

Should Legality and Cultural Support be Decoupled?

Debates between high and low decouplers are often marooned by a conflation of legality and cultural support. Conservatives, for example, may oppose drug legalization because their moral disgust response is activated by open self-harm through drug use and they do not want to offer cultural support for such behavior.
Woke liberals are suspicious of free speech defenses for rhetoric they find hateful because they see the claims of neutral legal protection as a way to conceal cultural support for that rhetoric. High-decouplers are exasperated by both of these responses. When they consider the costs and benefits of drug legalization or free speech, they explicitly or implicitly model a controlled experiment where only the law is changed and everything else is held constant. Hate speech having legal protection does not imply anyone agrees with it, and drug legalization does not necessitate cultural encouragement of drug use. The constraints on and outcomes of changes in law vs culture are completely different, so objecting to one when you really mean the other is a big mistake.

This decoupling is useful for evaluating the causal effect of a policy change, but it underrates the importance of feedback between legality and cultural approval. The vast majority of voters are low decouplers who conflate the two questions. So campaigning for one side or the other means spinning narratives which argue for both legality and cultural support. Legal changes also affect cultural norms.

For example, consider debates over medical assistance in dying (MAID). High decouplers will notice that, holding preferences constant, offering people an additional choice cannot make them worse off. People will only take the choice if it's better than any of their current options. We should take revealed preferences seriously: if someone would rather die than continue living with a painful or terminal condition, then that is a reliable signal of what would make them better off. So world A, with legal medically assisted death, is a better world than world B, without it, all else held equal. Low decouplers on the left and right see the campaign for MAID as either a way to push those in poverty towards suicide or as a further infection of the minds of young people.
I agree with the high decouplers within their hypothetical controlled experiment, but I am also confident that attitudes towards suicide, drug use, etc ...
Apr 15, 2024 • 7min

LW - Reconsider the anti-cavity bacteria if you are Asian by Lao Mein

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reconsider the anti-cavity bacteria if you are Asian, published by Lao Mein on April 15, 2024 on LessWrong.

Many people in the rational sphere have been promoting Lumina/BCS3-L1, a genetically engineered bacterium, as an anti-cavity treatment. However, none have brought up a major negative interaction that may occur with a common genetic mutation. In short, the treatment works by replacing lactic-acid-generating bacteria in the mouth with ones that instead convert sugars to ethanol, among other changes. Scott Alexander made a pretty good FAQ about this. Lactic acid results in cavities and tooth demineralization, while ethanol does not. I think this is a really cool idea, and would definitely try it if I didn't think it would significantly increase my chances of getting oral cancer.

Why would that be? Well, I, like around half of East Asians, have a mutation in my acetaldehyde dehydrogenase (ALDH) which results in it being considerably less active. This is known as Asian/Alcohol Flush Reaction (AFR). It results in a decreased ability to metabolize acetaldehyde to acetate, and consequently a much higher level of acetaldehyde when drinking alcohol. Although the time ingested ethanol spends in the mouth and stomach is quite short, alcohol dehydrogenase activity by both human and bacterial cells rises rapidly once the presence of ethanol is detected. Some studies have estimated that ~20% of consumed ethanol is converted to acetaldehyde in the mouth and stomach in a process called first-pass metabolism. Normally this is broken down into acetate by the ALDH also present, but it instead builds up in those with AFR.
Acetaldehyde is a serious carcinogen, and people with AFR have significantly higher rates of oral and stomach cancer (the odds ratios for Japanese alcoholics with the mutation, in relation to various cancers, are >10 (!!!) for oral and esophageal cancer). The Japanese paper also notes that all alcoholics tested only had a single copy of the mutation, since it is very difficult to become an alcoholic with two copies (imagine being on high-dosage Antabuse your entire life - that's the same physiological effect).

In addition, there is also the potential for change in oral flora and their resting ADH levels. As oral flora and epithelial cells adapt to a higher resting level of ethanol, they may make the conversion of ethanol to acetaldehyde even faster, resulting in higher peak oral and stomach levels of acetaldehyde during recreational drinking, thereby increasing cancer risk. There is also the concern of problems further down the digestive tract - Japanese alcoholics with AFR also have increased (~3x) colorectal cancer rates, which may well be due to ethanol being fermented from sugars in the large intestine, but my research in that direction is limited and this article is getting too long.

While others have argued that the resulting acetaldehyde levels would be too low to be a full-body carcinogen (they make a similar calculation in regard to ethanol in this FAQ), my concern isn't systemic - it's local. AFR increases oral and throat cancer risks most of all, and the first-pass metabolism studies imply that oral and gastric acetaldehyde are elevated far above levels found in the blood. As a thought experiment, consider that a few drops of concentrated sulfuric acid can damage your tongue even though an intraperitoneal (abdominal cavity) injection of the same would be harmless - high local concentrations matter!
The same is true for concentration in time - the average pH of your tongue on that day would be quite normal, but a few seconds of contact with high concentrations of acid is enough to do damage. This is why I'm not convinced by calculations that show only a small overall increase in acetaldehyde levels in the average person. A few minutes of high oral aceta...
Apr 14, 2024 • 7min

EA - Space settlement and the time of perils: a critique of Thorstad by Matthew Rendall

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Space settlement and the time of perils: a critique of Thorstad, published by Matthew Rendall on April 14, 2024 on The Effective Altruism Forum. Given the rate at which existential risks seem to be proliferating, it's hard not to suspect that unless humanity comes up with a real game-changer, in the long run we're stuffed. David Thorstad has recently argued that this poses a major challenge to longtermists who advocate prioritising existential risk. The more likely an x-risk is to destroy us, Thorstad notes, the less likely there is to be a long-term future. Nor can we solve the problem by mitigating this or that particular x-risk - we would have to reduce all of them. The expected value of addressing x-risks may not be so high after all. There would still be an argument for prioritising them if we are passing through a 'time of perils' after which existential risk will sharply fall. But this is unlikely to be the case. Thorstad raises a variety of intriguing questions which I plan to tackle in a later post, picking up in part on Owen Cotton-Barratt's insightful comments here. In this post I'll focus on a particular issue - his claim that settling outer space is unlikely to drive the risk of human extinction low enough to rescue the longtermist case. Like other species, ours seems more likely to survive if it is widely distributed. Some critics, however, argue that space settlements would still be physically vulnerable, and even writers sympathetic to the project maintain they would remain exposed to dangerous information. Certainly many, perhaps most, settlements would remain vulnerable. But would all of them? First let's consider physical vulnerability. Daniel Deudney and Phil (Émile) Torres have warned of the possibility of interplanetary or even interstellar conflict. 
But once we or other sentient beings spread to other planets, the distances involved would make travel between them time-consuming. On the one hand, that would seem to preclude any United Federation of Planets to keep the peace, as Torres notes; but it would also make warfare difficult and - very likely - pointless, just as it once was between Europe and the Americas. It's certainly possible, as Thorstad notes, that some existential threat could doom us all before humanity gets to this point, but it doesn't seem like a cert.

Deudney seems to anticipate this objection, and argues that 'the volumes of violence relative to the size of inhabited territories will still produce extreme saturation….[U]ntil velocities catch up with the enlarged distances, solar space will be like the Polynesian diaspora - with hydrogen bombs.' But if islands are far enough apart, the fact that weapons could obliterate them wouldn't matter if there were no way to deliver the weapons. It would still matter, but less so, if it took a long time to deliver the weapons, allowing the targeted island to prepare. Ditto, it would seem, for planets.

Suppose that's right. We might still not be out of the woods. Deudney warns that 'giant lasers and energy beams employed as weapons might be able to deliver destructive levels of energy across the distances of the inner solar system in times comparable to ballistic missiles across terrestrial distances.' But he goes on to note that 'the distances in the outer solar system and beyond will ultimately prevent even this form of delivering destructive energy at speeds that would be classified as instantaneous.' That might not matter so much if the destructive energy reached its target in the end. Still, I'd be interested whether any EA Forum readers know whether interstellar death rays of this kind are feasible at all.

There's also the question of why war would occur.
Liberals maintain that economic interdependence promotes peace, but as critics have long pointed out, it also gives states something to fight abou...
Apr 14, 2024 • 28min

LW - Text Posts from the Kids Group: 2020 by jefftk

This is: Text Posts from the Kids Group: 2020, published by jefftk on April 14, 2024 on LessWrong.

Another round of liberating kid posts from Facebook. For reference, in 2020 Lily turned 6 and Anna turned 4. (Some of these were from me; some were from Julia. Ones saying "me" could mean either of us.)

We went to the movies, and brought our own popcorn. When I passed the popcorn to Lily during the movie she was indignant, saying that we weren't supposed to bring in our own food. She ate one piece, but then said it wasn't ok and wouldn't eat more. When the movie ended, Lily wanted us to tell the people at the concession stand and apologize: "Tell them! *Tell* them." She started trying to bargain with Julia: "I'll give you a penny if you tell them. Two pennies! Three pennies! *Five* pennies!" But then we were outside and she was excitedly pretending to be Elsa, running down the sidewalk without a coat.

I left for a trip on Tuesday afternoon, and beforehand Lily had asked me to give her an hour's notice before I left. I told her it would be about an hour from when she got home from school, but I forgot to give her warning at the actual one-hour mark. When I came up to read and cuddle with the kids 20 minutes before I left, she was angry that I hadn't given her enough notice. Then she went off and did something with paper, which I thought was sulking. I tried to persuade her to come sit on the couch with Anna and me and enjoy the time together, but she wouldn't. It turns out she was making a picture and had wanted enough notice to finish it before I left. It is of her, Anna, and Jeff, "so you won't forget us while you're gone." I assured her I will definitely not forget them, but that this was a very nice thing to be able to bring with me.

Anna: "I will buy a baby at the baby store when I am a grownup, and I will be a mama like you! And I will work at Google and have the same job as my dad." Pretty sure the kids don't think I have a real job. To be fair, Google has much better food. This was the first I had heard of the baby store. We'll see how that pans out for her.

Me: Before you were born we thought about what to name you, and we thought Anna would be a good name. Do you think that's a good name?
Anna: No. I want to be named Bourbon.

Anna: We're not going outside when we get Lily.
Me: How are we going to pick up Lily from school without going outside?
Anna: You can order her.
Me: Order her?
Anna: You will order her on your phone.
Sorry, Amazon is not yet offering same-day delivery of kindergarteners from school.

Lily, backstage watching her dad play BIDA, grabbed handfuls of the air, saying "I want to put the sound in my pocket."

Lily: "Repeat after me: 'I, Anna, won't do the terrible deed ever again.'"

"Papa, I'm sleepy and want to sleep *now*. Can you use the potty for me?"

I let Anna try chewing gum for the first time. She knew she was supposed to just chew it and not swallow it. Her method was to make tiny dents in it with her teeth and barely put it in her mouth at all.

I'd been meaning to try the marshmallow test on the kids for a while, but today Lily described it at dinner. ("From my science podcast, of course.") Lily's past the age of the children in the original studies, but Anna's well within the range. They both happily played for 15 minutes, didn't eat the candy, and got more candy at the end. Unanticipated bonus for the researcher: 15 minutes of the children playing quietly in separate rooms.

Lily, requesting a bedtime song: I want a song about a leprechaun and a dog, and the leprechaun asks the dog to help get a pot of gold, but the dog tricks the leprechaun and runs away with the pot of gold.
Me: That's too complicated for me. It's after bedtime.
Lily: The leprechaun and the dog just get the pot of gold, and the dog takes it.
Me: [singing] Once there was a leprecha...
Apr 14, 2024 • 4min

LW - Prompts for Big-Picture Planning by Raemon

This is: Prompts for Big-Picture Planning, published by Raemon on April 14, 2024 on LessWrong.

During my metastrategy workshop, Day Two was focused on taking a step back and asking "okay, wait, what am I actually doing and why?". Choosing what area to focus on, and what your mid-level strategy for achieving it is, determines at least as much (and I think often much more) of the value you create as how well you operationally succeed. If you're going to pivot to a plan that's 10x better than your current plan, it'll probably be because you considered a much wider swath of possible-plan-space. This post is the series of prompts that I gave people to work through, to help them take a step back and revisit their big-picture thinking with fresh eyes.

I recommend:
- Skimming each question once, to get a rough sense of which ones feel most juicy to you.
- Copying this into a google doc, or your preferred writing setup.
- Working through it over the course of an afternoon, spending however much time on each prompt feels appropriate (this'll depend on how recently you've done a "big picture step-back-and-look-with-fresh-eyes" type exercise).

(Reminder: If you're interested in the full version of the corresponding workshop, please fill out this interest form.)

Part 1. Breadth First

1. If you were doing something radically different than what you're currently doing, what would it be?
2. If you were to look at the world through a radically different strategic frame, what would it be? (Try brainstorming 5-10.) (Examples of different strategic frames: "Reduce x-risk", "maximize chance of a glorious future", "find things that feel wholesome and do those", "follow your heart", "gain useful information as fast as you can", "fuck around and see if good stuff happens".)
3. Pick a frame from the previous exercise that feels appealing, but different from what you normally do. Generate some ideas for plans based around it.
4. What are you afraid might turn out to be the right thing to do?
5. What are the most important problems in the world that you're (deliberately) not currently working on? Why aren't you working on them? What would be your cruxes for shifting to work on them?
6. What are some important problems that it seems nobody has the ball on?
7. How could you be gaining information way faster than you currently are?
8. Can you make your feedback loop faster, or less noisy, or have richer data?
9. What are some people you respect who might suggest something different if you talked to them? What would they say?
10. What plans would you be most motivated to do?
11. What plans would be most fun?
12. What plans would donors or customers pay me for?
13. What are some other prompts I should have asked, but didn't? Try making some up and answering them.

Recursively asking "Why is That Impossible?"

A. What are some important things in the world that feel so impossible to deal with, you haven't even bothered making plans about them?
B. What makes them so hard?
C. Are the things that make them hard also impossible to deal with? (Try asking this question about each subsequent answer a few times until you hit something that feels merely "very hard" instead of impossible, and then think about whether you could make a plan to deal with it.)

Part II: Actually make 2+ plans at 3 strategic levels

i. What high level strategies seem at least interesting to consider? i.e. things you might orient your plans around for months or years.
ii. What plans seem interesting to consider? i.e. things you might orient your day-to-day actions around for weeks or months. Pick at least one of the high-level-strategies and brainstorm/braindump your possible alternate plans for it. If it seems alive, maybe try brainstorming some alternate plans for a second high-level-strategy.
iii. What tactical next-actions might make sense, for your f...
