The Nonlinear Library

The Nonlinear Fund
Jul 16, 2024 • 14min

LW - Multiplex Gene Editing: Where Are We Now? by sarahconstantin

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Multiplex Gene Editing: Where Are We Now?, published by sarahconstantin on July 16, 2024 on LessWrong.

We're starting to get working gene therapies for single-mutation genetic disorders, and genetically modified cell therapies for attacking cancer. Some of them use CRISPR-based gene editing, a new technology (that earned Jennifer Doudna and Emmanuelle Charpentier the 2020 Nobel Prize) to "cut" and "paste" a cell's DNA. But so far, the FDA-approved therapies can only edit one gene at a time. What if we want to edit more genes? Why is that hard, and how close are we to getting there?

How CRISPR Works

CRISPR is based on a DNA-cutting enzyme (the Cas9 nuclease), a synthetic guide RNA (gRNA), and another bit of RNA (tracrRNA) that's complementary to the gRNA. Researchers can design whatever guide RNA sequence they want; the gRNA will stick to the complementary part of the target DNA, the tracrRNA will complex with it, and the nuclease will make a cut there. So, that's the "cut" part - the "paste" comes from a template DNA sequence, again of the researchers' choice, which is included along with the CRISPR components. Usually all these sequences of nucleic acids are packaged in a circular plasmid, which is transfected into cells with nanoparticles or (non-disease-causing) viruses.

So, why can't you make a CRISPR plasmid with arbitrarily many genes to edit? There are a couple of reasons:

1. Plasmids can't be too big or they won't fit inside the virus or the lipid nanoparticle. Lipid nanoparticles have about a 20,000 base-pair limit; adeno-associated viruses (AAV), the most common type of virus used in gene delivery, have a smaller payload, more like 4700 base pairs. This places a very strict restriction on how many complete gene sequences can be inserted - some genes are millions of base pairs long, and the average gene is thousands! But if you're just making a very short edit to each gene, like a point mutation, or if you're deleting or inactivating the gene, payload limits aren't much of a factor.

2. DNA damage is bad for cells in high doses, particularly when it involves double-strand breaks. This also places limits on how many simultaneous edits you can do.

3. A guide RNA won't necessarily bind only to a single desired spot on the whole genome; it can also bind elsewhere, producing so-called "off-target" edits. If each guide RNA produces x off-target edits, then naively you'd expect 10 guide RNAs to produce 10x off-target edits…and at some point that'll reach an unacceptable risk of side effects from randomly screwing up the genome.

4. An edit won't necessarily work every time, on every strand of DNA in every cell. (The rate of successful edits is known as the efficiency.) The more edits you try to make, the lower the efficiency will be for getting all edits simultaneously; if each edit is 50% efficient, then two edits will be 25% efficient or (more likely) even less.

None of these issues make it fundamentally impossible to edit multiple genes with CRISPR and associated methods, but they do mean that the more (and bigger) edits you try to make, the greater the chance of failure or unacceptable side effects.
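To make the scaling arguments in points 3 and 4 concrete, here is a minimal back-of-the-envelope sketch (not from the original post) that treats every edit and every off-target event as independent - a simplifying assumption, since real editing outcomes can interact - and shows how the all-edits-succeed efficiency shrinks and the expected off-target count grows as you add targets. The 50% efficiency and one-off-target-per-guide figures are purely illustrative.

```python
# Back-of-the-envelope model for multiplex editing, assuming independent edits.

def all_edits_efficiency(per_edit_efficiency: float, n_edits: int) -> float:
    """Probability that all n edits succeed in a given cell, assuming independence."""
    return per_edit_efficiency ** n_edits

def expected_off_targets(off_targets_per_guide: float, n_guides: int) -> float:
    """Naive expected number of off-target edits: x per guide, summed over guides."""
    return off_targets_per_guide * n_guides

for n in (1, 2, 5, 10):
    eff = all_edits_efficiency(0.5, n)   # illustrative: 50% efficiency per edit
    off = expected_off_targets(1.0, n)   # illustrative: 1 off-target per guide
    print(f"{n:>2} edits: all-succeed efficiency = {eff:.2%}, expected off-targets = {off:.0f}")
```

With these illustrative numbers, two edits give 25% combined efficiency and ten edits give about 0.1%, which is why the post treats multiplexing as a problem of compounding odds rather than a hard impossibility.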
How Base and Prime Editors Work Base editors are an alternative to CRISPR that don't involve any DNA cutting; instead, they use a CRISPR-style guide RNA to bind to a target sequence, and then convert a single base pair chemically - they turn a C/G base pair to an A/T, or vice versa. Without any double-strand breaks, base editors are less toxic to cells and less prone to off-target effects. The downside is that you can only use base editors to make single-point mutations; they're no good for large insertions or deletions. Prime editors, similarly, don't introduce double-strand breaks; instead, they include an enzyme ("nickase") that produces a single-strand "nick"...
Jul 16, 2024 • 14min

EA - Apply now: Get "unstuck" with the New IFS Self-Care Fellowship Program by Inga

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now: Get "unstuck" with the New IFS Self-Care Fellowship Program, published by Inga on July 16, 2024 on The Effective Altruism Forum.

Do you finally want to resolve deeper-seated inner conflicts and remove the inner blocks standing in the way of becoming a more fulfilled, resilient, and better-performing version of yourself? This post explains how IFS, as a coaching or therapy approach, can help with mental wellbeing, helps you judge whether it might be the right approach for you, and introduces the opportunity to take part in Rethink Wellbeing's online IFS group course starting this August.

Executive Summary

Rethink Wellbeing (RW) is launching a brand-new online IFS course for ambitious altruists. Learn powerful, practical tools to uncover the dynamics of your inner conflicts, become a more whole and resilient self, and transform your mental wellbeing and performance. You will meet with a peer group of 5-7 like-minded ambitious altruists, led by a trained peer facilitator, for 6 weeks and 3 follow-ups. The course empowers you to learn IFS skills and apply them to your life until they become habitual. This includes 9 group sessions, home practice based on an IFS "playbook", individual progress tracking, and support from the Rethink Wellbeing Online Community. Participation takes ~5 hours per week for 6 weeks, and 2-3 hours per week for the 8 weeks after. You can apply via the form now in less than 15 minutes. Due date: 20th July 2024. All groups start in August 2024. We accept suitable participants until all spaces are full; the earlier you apply, the higher your chances of securing a spot. No or low cost - two options, and anything in between: pay nothing beyond a motivational deposit of $200 (less in LMICs) that you get back upon successful participation, or pay $550 to cover the costs of your attendance.

Internal Family Systems (IFS)

When talking about themselves, many people naturally use expressions like "a part of me." For example, someone who was considering a job offer might say, "one part of me is excited about this opportunity, but another part of me is afraid of the responsibility." Internal Family Systems (IFS) is a form of psychotherapy that takes this kind of language literally and assumes that people's minds are divided into parts with sometimes conflicting beliefs and goals. IFS aims to reconcile conflicts between those parts and get them to cooperate rather than fight each other, so that you can become a more healed and whole self. The goal is to improve self-leadership and to ground and grow yourself in the 8 C's of IFS: curiosity, compassion, calmness, clarity, confidence, creativity, courage, and connectedness.

How IFS works

Do you know what would be beneficial for you to do, but just can't make the change? Do you keep coming up against the same challenging or unresolvable inner blocks? Do you recognize these behaviors in yourself:

Avoiding, putting off, and neglecting things: Are you procrastinating on important tasks and goals, finding yourself endlessly planning but never executing? Do you find yourself turning to distractions or comfort activities when faced with stress? Feeling guilty for not doing what you planned to do? Or not good enough for not having done enough, or well enough?

Judging yourself and holding high expectations: Are you constantly doubting your abilities despite your achievements?
Feeling like an imposter in your field? Do you set excessively high standards for yourself that are almost impossible to meet? Or do you believe you need to keep doing more to be good enough? Monitoring yourself when with others: Do you try to make sure others think well of you by controlling what you do or don't say? Do you keep trying to please others or take care of them so that they like you more or do what you want? Do you neglect your own needs a...
Jul 16, 2024 • 10min

EA - Reflections on some experiences in EA by NatKiilu

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on some experiences in EA, published by NatKiilu on July 16, 2024 on The Effective Altruism Forum.

All of these thoughts and suggestions are based on my entirely subjective experiences (as a young early-career African woman based in Africa) engaging with the community since 2021. I decided to share these thoughts because I hope they might help someone in a similar situation.

Work tests

I love work tests. I truly enjoy them and would like to give kudos to EA organizations that include and pay for work tests in their hiring processes! Applying for jobs and fellowships can be very stressful for me. I always second-guess my competence and struggle to answer common prompts like 'Why do you want this job?', 'What are you proud of?', and 'What are your long-term career plans?' Although I know the answers, writing them professionally without sounding insincere is difficult, especially for roles I am keen on. Since I started doing work tests, my dread around these questions has diminished. I suspect that this is because work tests feel like a mutual process where I get to evaluate how much I like the role and its duties while the recruiter assesses whether I meet their needs. In contrast, traditional applications feel like a popularity contest where I must be as likeable as possible to the recruiter. In the past 10 months, I have completed about 4-5 work tests for different roles in different EA organizations.

Work tests help me learn, and I love to learn. It doesn't matter what the topic is; I find the learning aspect fulfilling. Through different work tests, I have learned more about geography and neighborhoods, creating weighting models, ascribing probabilities to outcomes, and other topics I only knew about cursorily.

Work tests build my confidence in my competence and deservingness for the roles I apply to. I typically struggle with imposter syndrome, which is amplified in spaces where I am a minority because of my personal characteristics. (This is a common feeling among minorities in predominantly white and male spaces, even when there haven't been overtly exclusionary or demeaning experiences.) Work tests have been a cheap way to test my competence and thus build confidence.

Work tests allow me to enjoy the process because my success in getting the role is not hinged on just one application form and one interview. In traditional setups, I either feel like "great, I answered the questions well and proceeded to the next stage" without knowing if I could do or enjoy the work, or I feel like I am a horrible and incompetent person who should have done everything very differently (haha). Work tests often come between the initial screening form and the final interview stages, which allows me to use them as learning opportunities and reflect on whether I would truly enjoy the role. I get so immersed in the tasks that the fixation on finding the one right way, which often plagues other application processes, disappears.

Paid work tests give me a sense of security. Completing applications and preparing for interviews can be time-intensive and costly. When applying for roles while working on other projects, I need to deprioritize those projects to ensure I am adequately rested mentally and physically for the application process, so it helps to know there's some compensation for it.
Receiving payment also gives me a sense of being valued by the recruiting organization, which further bolsters my enthusiasm for the work. Plus, the extra money comes in handy! My advice: don't be afraid of work tests (I was quite anxious the first time I had to do one, too). Work tests are the closest I have ever come to feeling a sense of control when making applications, and there is no way to lose with them. You will always learn something, no matter the outcome! While I haven't been fully s...
Jul 16, 2024 • 26min

LW - Dialogue on What It Means For Something to Have A Function/Purpose by johnswentworth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dialogue on What It Means For Something to Have A Function/Purpose, published by johnswentworth on July 16, 2024 on LessWrong.

Context for LW audience: Ramana, Steve and John regularly talk about stuff in the general cluster of agency, abstraction, optimization, compression, purpose, representation, etc. We decided to write down some of our discussion and post it here. This is a snapshot of us figuring stuff out together.

Hooks from Ramana:

Where does normativity come from?

Two senses of "why" (from Dennett): How come? vs What for? (The latter is more sophisticated, and less resilient. Does it supervene on the former?)

An optimisation process is something that produces/selects things according to some criterion. The products of an optimisation process will have some properties related to the optimisation criterion, depending on how good the process is at finding optimal products. The products of an optimisation process may or may not themselves be optimisers (i.e. be a thing that runs an optimisation process itself), or may have goals themselves. But neither of these is necessary. Things get interesting when some optimisation process (with a particular criterion) is producing products that are optimisers or have goals. Then we can start looking at the relationship between the goals of the products, or the optimisation criteria of the products, and the optimisation criterion of the process that produced them.

If you're modeling "having mental content" as having a Bayesian network, at some point I think you'll run into the question of where the (random) variables come from. I worry that the real-life process of developing mental content mixes up creating variables with updating beliefs a lot more than the Bayesian network model lets on.

A central question regarding normativity for me is "Who/what is doing the enforcing?", "What kind of work goes into enforcing?" Also to clarify, by normativity I was trying to get at the relationship between some content and the thing it represents. Like, there's a sense that the content is "supposed to" track or be like the thing it represents. There's a normative standard on the content. It can be wrong, it can be corrected, etc. It can't just be. If it were just being, which is how things presumably start out, it wouldn't be representing.

Intrinsic Purpose vs Purpose Grounded in Evolution

Steve

As you know, I totally agree that mental content is normative - this was a hard lesson for philosophers to swallow, or at least the ones that tried to "naturalize" mental content (make it a physical fact) by turning to causal correlations. Causal correlations were a natural place to start, but the problem with them is that intuitively mental content can misrepresent - my brain can represent Santa Claus even though (sorry) it can't have any causal relation with Santa. (I don't mean my brain can represent ideas or concepts or stories or pictures of Santa - I mean it can represent Santa.)

Ramana

Misrepresentation implies normativity, yep. In the spirit of recovering a naturalisation project, my question is: whence normativity? How does it come about? How did it evolve? How do you get some proto-normativity out of a purely causal picture that's close to being contentful?
Steve

So one standard story here about mental representation is teleosemantics: roughly, something in my brain can represent something in the world by having the function to track that thing. It may be a "fact of nature" that the heart is supposed to pump blood, even though in fact hearts can fail to pump blood. This is already contentious, that it's a fact the heart is supposed to pump blood - but if so, it may similarly be a fact of nature that some brain state is supposed to track something in the world, even when it fails to. So teleology introduces the possibility of m...
Jul 16, 2024 • 3min

EA - Effective Altruism NYC is Hiring an Executive Director by Alex R Kaplan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism NYC is Hiring an Executive Director, published by Alex R Kaplan on July 16, 2024 on The Effective Altruism Forum.

Job Summary

Effective Altruism NYC is seeking an ambitious Executive Director to help oversee the strategy, execution, and management of our impact-oriented programs and back-office operations. The Executive Director will help define the vision and goals that the organization wants to fulfill, design programs to achieve that vision, and lead the successful completion of any new goals as well as the organization's compliance needs. They'll be responsible for the onboarding and management of any new employees, volunteers, and/or interns and report directly to the Board of Directors.[1] Competitive candidates have prior experience managing open-ended tasks, taking initiative without supervision, communicating effectively with diverse stakeholders, and learning quickly on the job. We think this is a great opportunity to help individuals find their unique path to making a difference in the world and to contribute to the success of one of the largest effective altruism communities worldwide. Please reach out to Alex and/or Rocky if you have any questions. See the full job description and apply at the link above before July 21st!

Location: New York City, hybrid: work from home + regular in-person events
Contract: Full-time
Right to work requirements: We are most likely unable to sponsor work visas, so we strongly recommend that applicants have the right to work in the U.S.
Compensation package: $94,000 grant (to be split between salary and insurance/benefits)
Expected start: September 2024
Deadline to apply: 11:59 pm ET on July 21

Referral Program

We also encourage readers to share this position with promising candidates in their networks; to help with this, we are running a hiring referral (donation) incentive program. If you refer the candidate who successfully fills this position for this hiring round, you can choose one charity fund from a select list to receive a $1,250 (USD) donation.[2] Please submit your referrals for the role to participate.

1. ^ The Executive Director will help the organization transition to a new chapter, as Effective Altruism NYC's current staff take on new roles in the next few months.
2. ^ Restrictions apply - please read more about limitations and eligibility for the program here if you would like to participate.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jul 16, 2024 • 2min

EA - Thoughts on this $16.7M "AI safety" grant? by defun

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on this $16.7M "AI safety" grant?, published by defun on July 16, 2024 on The Effective Altruism Forum.

Open Philanthropy has recommended a total of $16.7M to the Massachusetts Institute of Technology to support research led by Neil Thompson on modeling the trends and impacts of AI and computing:

2020 - MIT - AI Trends and Impacts Research - $550,688
2022 - MIT - AI Trends and Impacts Research - $13,277,348
2023 - MIT - AI Trends and Impacts Research - $2,911,324

I've read most of their research, and I don't understand why Open Philanthropy thinks this is a good use of their money. Thompson's Google Scholar is here.

Thompson's most cited paper is "The Computational Limits of Deep Learning" (2020); @gwern pointed out some flaws in it on Reddit.

Thompson's latest paper is "A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning" (2024). This paper has many limitations (as acknowledged by the author), and from an x-risks point of view, it seems irrelevant.

What do you think about Open Philanthropy recommending a total of $16.7M for this work?

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jul 16, 2024 • 3min

EA - Animal Ethics website now in Arabic by Animal Ethics

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Ethics website now in Arabic, published by Animal Ethics on July 16, 2024 on The Effective Altruism Forum. We are excited to announce that the Animal Ethics website is now available in Arabic. This marks an important milestone in our mission to spread ideas that can help all animals. With Arabic being the 5th most spoken language in the world, this effectively opens our content up to over 300 million more people who can now access this important information. The suffering of animals is a global issue, so it is crucial that resources like ours can be accessed by diverse audiences worldwide. Animal Ethics provides realistic perspectives on the lives animals lead, especially wild animals, and how we can help them. Covering topics from speciesism to wild animal suffering, our website curates scientific and philosophical information to further the animal protection movement. Our aim is to inspire academics, students, and concerned citizens to join us in reducing animal suffering, through their careers or outside of them. Making content accessible across languages is key to this goal. Because of this, our website is available in 12 languages, including English, Spanish, Portuguese, Hindi, Telugu, Chinese, German, French, Italian, Polish, Romanian, and now Arabic. If we add up the pages and posts in all languages, we now have more than 1900 publications online! We encourage everyone to learn as much as possible about issues related to wild animal suffering, so we'll be able to discuss important future issues and support public and private initiatives to make wild animals' lives better. With a long-term perspective, we hope to gradually shift attitudes on how animals are viewed and treated so that societies and institutions will include the wellbeing of all sentient beings in their plans and priorities. This includes crucial global priorities. Every language added gives more momentum to the animal advocacy movement. We aim to reduce suffering not just for animals alive today, but also for the many generations to come. More translations mean these ideas will continue spreading across cultures, borders, and generations. Achieving this milestone would not be possible without dedicated volunteers generously offering translation support. We are deeply grateful for their efforts in helping expand our message. Every contribution brings us closer to a more livable world for all animals. If you are able to volunteer to translate content into any of the languages in which our website is available, or into Korean, Russian, or Turkish, please contact us at translations@animal-ethics.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jul 16, 2024 • 10min

LW - I found >800 orthogonal "write code" steering vectors by Jacob G-W

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I found >800 orthogonal "write code" steering vectors, published by Jacob G-W on July 16, 2024 on LessWrong.

Produced as part of the MATS Summer 2024 program, under the mentorship of Alex Turner (TurnTrout). A few weeks ago, I stumbled across a very weird fact: it is possible to find multiple steering vectors in a language model that activate very similar behaviors while all being orthogonal. This was pretty surprising to me and to some people that I talked to, so I decided to write a post about it. I don't currently have the bandwidth to investigate this much more, so I'm just putting this post and the code up. I'll first discuss how I found these orthogonal steering vectors, then share some results. Finally, I'll discuss some possible explanations for what is happening.

Methodology

My work here builds upon Mechanistically Eliciting Latent Behaviors in Language Models (MELBO). I use MELBO to find steering vectors. Once I have a MELBO vector, I then use my algorithm to generate vectors orthogonal to it that do similar things. Define f(x) as the activation-activation map that takes as input layer 8 activations of the language model and returns layer 16 activations after being passed through layers 9-16 (these are of shape n_sequence × d_model). MELBO can be stated as finding a vector θ with a constant norm such that f(x+θ) is maximized, for some definition of maximized. Then one can repeat the process with the added constraint that the new vector is orthogonal to all the previous vectors so that the process finds semantically different vectors. Mack and Turner's interesting finding was that this process finds interesting and interpretable vectors.

I modify the process slightly by instead finding orthogonal vectors that produce similar layer 16 outputs. The algorithm (I call it MELBO-ortho) looks like this:

1. Let θ0 be an interpretable steering vector that MELBO found that gets added to layer 8.

2. Define z(θ) = (1/S) Σ_{i=1}^{S} f(x+θ)_i, with x being activations on some prompt (for example "How to make a bomb?") and S the number of tokens in the residual stream. z(θ0) is just the residual stream at layer 16 meaned over the sequence dimension when steering with θ0.

3. Introduce a new learnable steering vector called θ.

4. For n steps, calculate ||z(θ) − z(θ0)|| and then use gradient descent to minimize it (θ is the only learnable parameter). After each step, project θ onto the subspace that is orthogonal to θ0 and all θi.

Then repeat the process multiple times, appending the generated vector to the vectors that the new vector must be orthogonal to. This algorithm imposes a hard constraint that θ is orthogonal to all previous steering vectors while optimizing θ to induce the same activations that θ0 induced on input x. And it turns out that this algorithm works and we can find steering vectors that are orthogonal (and have ~0 cosine similarity) while having very similar effects.

Results

I tried this method on four MELBO vectors: a vector that made the model respond in python code, a vector that made the model respond as if it was an alien species, a vector that made the model output a math/physics/cs problem, and a vector that jailbroke the model (got it to do things it would normally refuse). I ran all experiments on Qwen1.5-1.8B-Chat, but I suspect this method would generalize to other models.
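To make the MELBO-ortho loop from the Methodology section concrete, here is a minimal PyTorch sketch reconstructed from the description above; it is not the author's released code. The callable f (standing in for layers 9-16 of the model), the choice of Adam rather than plain gradient descent, and the step count and learning rate are all assumptions for illustration.

```python
# Minimal sketch of the MELBO-ortho procedure described above (a reconstruction,
# not the author's code). `f` stands in for the model's layers 9-16: it maps
# steered layer-8 activations of shape (seq_len, d_model) to layer-16 activations.
import torch


def find_orthogonal_steering_vectors(
    f,                      # callable: (seq_len, d_model) -> (seq_len, d_model), differentiable
    x: torch.Tensor,        # layer-8 activations on the chosen prompt, shape (seq_len, d_model)
    theta_0: torch.Tensor,  # an interpretable MELBO steering vector, shape (d_model,)
    n_vectors: int = 10,    # how many orthogonal vectors to generate
    n_steps: int = 200,     # gradient steps per vector (assumed hyperparameter)
    lr: float = 1e-2,       # learning rate (assumed hyperparameter)
):
    def z(theta):
        # z(theta) = (1/S) * sum_i f(x + theta)_i: mean layer-16 activation over the sequence.
        return f(x + theta).mean(dim=0)

    with torch.no_grad():
        target = z(theta_0)                     # z(theta_0), the activations we want to reproduce

    basis = [theta_0 / theta_0.norm()]          # unit vectors every new theta must stay orthogonal to
    found = []

    for _ in range(n_vectors):
        theta = torch.randn_like(theta_0, requires_grad=True)
        optimizer = torch.optim.Adam([theta], lr=lr)
        for _ in range(n_steps):
            optimizer.zero_grad()
            loss = (z(theta) - target).norm()   # ||z(theta) - z(theta_0)||
            loss.backward()
            optimizer.step()
            with torch.no_grad():               # project theta onto the orthogonal complement of the basis
                for b in basis:
                    theta -= (theta @ b) * b
        vec = theta.detach()
        found.append(vec)
        basis.append(vec / vec.norm())          # later vectors must also be orthogonal to this one
    return found
```

In the post's setting, f would presumably be implemented by running layers 9-16 of Qwen1.5-1.8B-Chat on layer-8 activations (with the steering vector added there), and x would be the layer-8 activations on a fixed prompt.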
Qwen1.5-1.8B-Chat has a 2048 dimensional residual stream, so there can be a maximum of 2048 orthogonal vectors generated. My method generated 1558 orthogonal coding vectors, and then the remaining vectors started going to zero. I'll focus first on the code vector and then talk about the other vectors. My philosophy when investigating language model outputs is to look at the outputs really hard, so I'll give a bunch of examples of outputs. Feel free to skim them. You can see the full outputs of all t...
Jul 16, 2024 • 6min

LW - Towards more cooperative AI safety strategies by Richard Ngo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards more cooperative AI safety strategies, published by Richard Ngo on July 16, 2024 on LessWrong. This post is written in a spirit of constructive criticism. It's phrased fairly abstractly, in part because it's a sensitive topic, but I welcome critiques and comments below. The post is structured in terms of three claims about the strategic dynamics of AI safety efforts; my main intention is to raise awareness of these dynamics, rather than advocate for any particular response to them. Claim 1: The AI safety community is structurally power-seeking. By "structurally power-seeking" I mean: tends to take actions which significantly increase its power. This does not imply that people in the AI safety community are selfish or power-hungry; or even that these strategies are misguided. Taking the right actions for the right reasons often involves accumulating some amount of power. However, from the perspective of an external observer, it's difficult to know how much to trust stated motivations, especially when they often lead to the same outcomes as self-interested power-seeking. Some prominent examples of structural power-seeking include: Trying to raise a lot of money. Trying to gain influence within governments, corporations, etc. Trying to control the ways in which AI values are shaped. Favoring people who are concerned about AI risk for jobs and grants. Trying to ensure non-release of information (e.g. research, model weights, etc). Trying to recruit (high school and college) students. To be clear, you can't get anything done without being structurally power-seeking to some extent. However, I do think that the AI safety community is more structurally power-seeking than other analogous communities (such as most other advocacy groups). Some reasons for this disparity include: 1. The AI safety community is more consequentialist and more focused on effectiveness than most other communities. When reasoning on a top-down basis, seeking power is an obvious strategy for achieving one's desired consequences (but can be aversive to deontologists or virtue ethicists). 2. The AI safety community feels a stronger sense of urgency and responsibility than most other communities. Many in the community believe that the rest of the world won't take action until it's too late; and that it's necessary to have a centralized plan. 3. The AI safety community is more focused on elites with homogeneous motivations than most other communities. In part this is because it's newer than (e.g.) the environmentalist movement; in part it's because the risks involved are more abstract; in part it's a founder effect. Again, these are intended as descriptions rather than judgments. Traits like urgency, consequentialism, etc, are often appropriate. But the fact that the AI safety community is structurally power-seeking to an unusual degree makes it important to grapple with another point: Claim 2: The world has strong defense mechanisms against (structural) power-seeking. In general, we should think of the wider world as being very cautious about perceived attempts to gain power; and we should expect that such attempts will often encounter backlash. In the context of AI safety, some types of backlash have included: 1. Strong public criticism of not releasing models publicly. 2. Strong public criticism of centralized funding (e.g. billionaire philanthropy). 3. 
Various journalism campaigns taking a "conspiratorial" angle on AI safety. 4. Strong criticism from the FATE community about "whose values" AIs will be aligned to. 5. The development of an accelerationist movement focused on open-source AI. These defense mechanisms often apply regardless of stated motivations. That is, even if there are good arguments for a particular policy, people will often look at the net effect on overall power balance when ...
Jul 16, 2024 • 2min

EA - Warren Buffett changes giving plans (for the worse) by katriel

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Warren Buffett changes giving plans (for the worse), published by katriel on July 16, 2024 on The Effective Altruism Forum. Folks in philanthropy and development definitely know that the Gates Foundation is the largest private player in that realm by far. Until recently it was likely to get even larger, as Warren Buffett had stated that the Foundation would receive the bulk of his assets when he died. A few weeks ago, Buffett announced that he had changed his mind, and was instead going to create a new trust for his assets, to be jointly managed by his children. It's a huge change, but I don't think very many people took note of what it means ("A billionaire is going to create his own foundation rather than giving to an existing one; seems unsurprising."). So I created this chart: The new Buffett-funded trust is going to be nearly twice as large as the Gates Foundation, and nearly 150% larger than most of the other brand names among large foundations, combined. So what's going to happen with that money? That's where it gets really scary. The three Buffett children who will be in charge are almost entirely focused on lightly populated parts of the US, and one of them is apparently funding private militias operating on the US border. If you at all subscribe to ideas of effectiveness in philanthropy, this is one of the most disastrous decisions in philanthropic history, and, like I said, it is not getting enough attention. Source: Tim Ogden. July 15, 2024. The faiv: Five notes on financial inclusion Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
