

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Jul 11, 2024 • 21min
LW - Decomposing Agency - capabilities without desires by owencb
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Decomposing Agency - capabilities without desires, published by owencb on July 11, 2024 on LessWrong.
What is an agent? It's a slippery concept with no commonly accepted formal definition, but informally the concept seems to be useful. One angle on it is Dennett's Intentional Stance: we think of an entity as being an agent if we can more easily predict it by treating it as having some beliefs and desires which guide its actions. Examples include cats and countries, but the central case is humans.
The world is shaped significantly by the choices agents make. What might agents look like in a world with advanced - and even superintelligent - AI? A natural approach for reasoning about this is to draw analogies from our central example. Picture what a really smart human might be like, and then try to figure out how it would be different if it were an AI. But this approach risks baking in subtle assumptions - things that are true of humans, but need not remain true of future agents.
One such assumption that is often implicitly made is that "AI agents" is a natural class, and that future AI agents will be unitary - that is, the agents will be practically indivisible entities, like single models. (Humans are unitary in this sense, and while countries are not unitary, their most important components - people - are themselves unitary agents.)
This assumption seems unwarranted. While people certainly could build unitary AI agents, and there may be some advantages to doing so, unitary agents are just an important special case among a large space of possibilities for:
Components which contain important aspects of agency (without necessarily themselves being agents);
Ways to construct agents out of separable subcomponents (none, some, or all of which may reasonably be regarded as agents in their own right).
We'll begin an exploration of these spaces. We'll consider four features we generally expect agents to have[1]:
Goals: things they are trying to achieve (e.g. I would like a cup of tea).
Implementation capacity: the ability to act in the world (e.g. I have hands and legs).
Situational awareness: understanding of the world, relevant to the goals (e.g. I know where I am, where the kettle is, and what it takes to make tea).
Planning capacity: the ability to choose actions that effectively further their goals, given their available action set and their understanding of the situation (e.g. I'll go downstairs and put the kettle on).
We don't necessarily expect to be able to point to these things separately - especially in unitary agents they could exist in some intertwined mess. But we kind of think that in some form they have to be present, or the system couldn't be an effective agent. And although these features are not necessarily separable, they are potentially separable - in the sense that there exist possible agents where they are kept cleanly apart.
We will explore possible decompositions of agents into pieces which contain different permutations of these features, connected by some kind of scaffolding. We will see several examples where people naturally construct agentic systems in ways where these features are provided by separate components. And we will argue that AI could enable even fuller decomposition.
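To make the decomposition concrete, here is a minimal Python sketch (mine, not from the post; all class and function names are illustrative): goals, situational awareness, planning capacity, and implementation capacity each live in a separate component, and a thin scaffold wires them together, so any agency belongs to the composition rather than to any single piece.

```python
from dataclasses import dataclass
from typing import Callable, List

# Each component carries one of the four features; none is an agent on its own.

@dataclass
class Goal:
    """What the system is trying to achieve (e.g. 'a cup of tea')."""
    description: str
    satisfied: Callable[[dict], bool]  # checks the goal against a world state

class SituationModel:
    """Situational awareness: tracks the parts of the world relevant to the goal."""
    def __init__(self, initial_state: dict):
        self.state = dict(initial_state)

    def observe(self, update: dict) -> None:
        self.state.update(update)

class Planner:
    """Planning capacity: chooses actions given a goal and a model of the situation."""
    def plan(self, goal: Goal, model: SituationModel, actions: List[str]) -> List[str]:
        # Trivial placeholder policy: if the goal isn't already satisfied,
        # propose every available action in order.
        return [] if goal.satisfied(model.state) else list(actions)

class Actuator:
    """Implementation capacity: the ability to actually act in the world."""
    def execute(self, action: str, model: SituationModel) -> None:
        if action == "boil kettle":
            model.observe({"kettle_boiled": True})
        elif action == "pour tea":
            model.observe({"has_tea": True})

def scaffold(goal: Goal, model: SituationModel, planner: Planner,
             actuator: Actuator, actions: List[str]) -> None:
    """Thin glue: the 'agent' is the composition, not any single component."""
    for action in planner.plan(goal, model, actions):
        actuator.execute(action, model)

if __name__ == "__main__":
    tea = Goal("a cup of tea", satisfied=lambda s: s.get("has_tea", False))
    world = SituationModel({"kettle_boiled": False, "has_tea": False})
    scaffold(tea, world, Planner(), Actuator(), ["boil kettle", "pour tea"])
    print(world.state)  # {'kettle_boiled': True, 'has_tea': True}
```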
We think it's pretty likely that by default advanced AI will be used to create all kinds of systems across this space. (But people could make deliberate choices to avoid some parts of the space, so "by default" is doing some work here.)
A particularly salient division is that there is a coherent sense in which some systems could provide useful plans towards a user's goals, without in any meaningful sense having goals of their own (or conversely, have goals without any meaningful ability to create plans to pursue those goals). In thinking about ensuring the safety of advanced AI systems, it ma...

Jul 11, 2024 • 19min
EA - On the Value of Advancing Progress by Toby Ord
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Value of Advancing Progress, published by Toby Ord on July 11, 2024 on The Effective Altruism Forum.
(PDF available)
I show how a standard argument for advancing progress is extremely sensitive to how humanity's story eventually ends. Whether advancing progress is ultimately good or bad depends crucially on whether it also advances the end of humanity. Because we know so little about the answer to this crucial question, the case for advancing progress is undermined.
I suggest we must either overcome this objection through improving our understanding of these connections between progress and human extinction or switch our focus to advancing certain kinds of progress relative to others - changing where we are going, rather than just how soon we get there.[1]
Things are getting better. While there are substantial ups and downs, long-term progress in science, technology, and values has tended to make people's lives longer, freer, and more prosperous. We could represent this as a graph of quality of life over time, giving a curve that generally trends upwards.
What would happen if we were to advance all kinds of progress by a year? Imagine a burst of faster progress, where after a short period, all forms of progress end up a year ahead of where they would have been. We might think of the future trajectory of quality of life as being primarily driven by science, technology, the economy, population, culture, societal norms, moral norms, and so forth.
We're considering what would happen if we could move all of these features a year ahead of where they would have been.
While the burst of faster progress may be temporary, we should expect its effect of getting a year ahead to endure.[2] If we'd only advanced some domains of progress, we might expect further progress in those areas to be held back by the domains that didn't advance - but here we're imagining moving the entire internal clock of civilisation forward a year.
If we were to advance progress in this way, we'd be shifting the curve of quality of life a year to the left. Since the curve is generally increasing, this would mean the new trajectory of our future is generally higher than the old one. So the value of advancing progress isn't just a matter of impatience - wanting to get to the good bits sooner - but of overall improvement in people's quality of life across the future.
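One way to sketch the argument formally (my notation, not Ord's):

```latex
% Sketch, not Ord's notation: q(t) is quality of life at time t, over a horizon [0, T].
% Uniformly advancing progress by one year replaces q(t) with q(t+1), so the change
% in total value is
\[
  \Delta V \;=\; \int_{0}^{T} \bigl[\, q(t+1) - q(t) \,\bigr]\, dt ,
\]
% which is non-negative whenever q is non-decreasing -- the "sooner is better"
% conclusion. The rest of the piece turns on what happens when the endpoint T is
% not held fixed.
```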
Figure 1. Sooner is better. The solid green curve is the default trajectory of quality of life over time, while the dashed curve is the trajectory if progress were uniformly advanced by one year (shifting the default curve to the left). Because the trajectories trend upwards, quality of life is generally higher under the advanced curve. To help see this, I've shaded the improvements to quality of life green and the worsenings red.
That's a standard story within economics: progress in science, technology, and values has been making the world a better place, so a burst of faster progress that brought this all forward by a year would provide a lasting benefit for humanity.
But this story is missing a crucial piece.
The trajectory of humanity's future is not literally infinite. One day it will come to an end. This might be a global civilisation dying of old age, lumbering under the weight of accumulated bureaucracy or decadence. It might be a civilisation running out of resources: either using them up prematurely or enduring until the sun itself burns out. It might be a civilisation that ends in sudden catastrophe - a natural calamity or one of its own making.
If the trajectory must come to an end, what happens to the previous story of an advancement in progress being a permanent uplifting of the quality of life? The answer depends on the nature of the end time. There are two very natural possibilities.
One is that the end time is fixed. It...

Jul 11, 2024 • 16min
EA - EA Survey: Personality Traits in the EA Community by Willem Sleegers
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey: Personality Traits in the EA Community, published by Willem Sleegers on July 11, 2024 on The Effective Altruism Forum.
Summary
This post reports on data about the personality psychology of EAs from the 2018 EA Survey.
We observe some differences between our EA sample and the general population:
Younger male EAs and older female EAs score higher on conscientiousness compared to the general population.
Male EAs score lower on neuroticism than the general public.
EAs score high on need for cognition, and relatively higher than a US and UK general population sample, though this may be partially due to demographic differences between the samples.
EAs appear to be slightly lower in empathic concern than a sample of US undergraduates.
But this seems attributable to demographic differences between the samples, in particular gender.
Small differences were observed regarding maximization and alternative search compared to other samples.
Generally, the lack of population norms for various personality traits makes comparisons between samples difficult.
Overall, we found only small associations between personality, donation behavior, and cause prioritization.
Openness and alternative search were both found to be negatively associated with the amount donated.
These associations were found controlling for sex, age, and individual income, and survived corrections for multiple comparisons.
Perspective taking was negatively associated with prioritizing longtermist causes, meaning those who score lower on this trait were more likely to prioritize longtermist causes.
This association was found controlling for sex and age, and survived corrections for multiple comparisons.
Introduction
There has been considerable interest in EA and personality psychology (e.g. here, here, here, here, here and here).
In the 2018 EA Survey, respondents could complete an extra section of the EA Survey that contained several personality measures. A total of 1602 respondents answered (some of) the personality questions. These included the Big Five, Need for Cognition, the Interpersonal Reactivity Index, and a scale to assess maximization.
We report these results for others to gain a better understanding of the personality of members of the EA community. Additionally, personality traits have been found to be predictive of various important outcomes such as life, job, and relationship satisfaction. In the context of the EA community, prediction of donation behavior and cause prioritization may be of particular interest.[1]
Big Five
Respondents were asked to indicate how much they agree or disagree with whether a pair of personality traits applies to them, on a 7-point Likert scale ranging from 'Strongly disagree' to 'Strongly agree'. The specific personality traits were drawn from the Ten Item Personality Inventory (TIPI) and consisted of:
Extraverted, enthusiastic.
Critical, quarrelsome.
Dependable, self-disciplined.
Anxious, easily upset.
Open to new experiences, complex.
Reserved, quiet.
Sympathetic, warm.
Disorganized, careless.
Calm, emotionally stable.
Conventional, uncreative.
Big Five score distributions
The plots below show the distribution of responses to these questions in our sample, along with the sample sizes. Each score is the average of the (in this case two) personality items for that trait. We show distributions for each individual item in the Appendix.
Personality trait      M      SD     n
Agreeableness          4.54   1.27   1499
Conscientiousness      4.95   1.44   1499
Extraversion           3.74   1.63   1512
Neuroticism            3.12   1.53   1508
Openness               5.48   1.13   1509
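As a side note on how the ten items above become five trait scores, here is a minimal sketch. It assumes the standard TIPI scoring convention, in which each trait is the mean of one direct item and one reverse-worded item recoded as 8 minus the response on the 7-point scale; the post itself only says that scores are item averages, so treat the reverse-coding detail and the example numbers as assumptions.

```python
# Sketch of standard TIPI scoring (assumed convention, not taken from the post):
# each Big Five trait is the mean of one direct item and one reverse-coded item,
# where reverse-coding on a 1-7 scale is 8 - response.

TIPI_ITEMS = {
    # trait: (direct item, reverse-worded item)
    "Extraversion":      ("Extraverted, enthusiastic.",        "Reserved, quiet."),
    "Agreeableness":     ("Sympathetic, warm.",                "Critical, quarrelsome."),
    "Conscientiousness": ("Dependable, self-disciplined.",     "Disorganized, careless."),
    "Neuroticism":       ("Anxious, easily upset.",            "Calm, emotionally stable."),
    "Openness":          ("Open to new experiences, complex.", "Conventional, uncreative."),
}

def reverse(score: int) -> int:
    """Reverse-code a response on the 7-point scale."""
    return 8 - score

def tipi_scores(responses: dict) -> dict:
    """Average the direct item with the reverse-coded item for each trait."""
    return {
        trait: (responses[direct] + reverse(responses[reversed_item])) / 2
        for trait, (direct, reversed_item) in TIPI_ITEMS.items()
    }

# Hypothetical respondent (made-up 1-7 answers to the ten items):
example = {
    "Extraverted, enthusiastic.": 3, "Critical, quarrelsome.": 4,
    "Dependable, self-disciplined.": 6, "Anxious, easily upset.": 3,
    "Open to new experiences, complex.": 6, "Reserved, quiet.": 5,
    "Sympathetic, warm.": 5, "Disorganized, careless.": 2,
    "Calm, emotionally stable.": 5, "Conventional, uncreative.": 2,
}
print(tipi_scores(example))
# {'Extraversion': 3.0, 'Agreeableness': 4.5, 'Conscientiousness': 6.0,
#  'Neuroticism': 3.0, 'Openness': 6.0}
```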
However, personality scores have been found to differ based on gender and age (Schmitt et al., 2008; Donnellan et al., 2008; Lehmann et al., 2013). As such, it is common for the population norms for personality measures to be split based on these groups. Otherwise, differen...

Jul 10, 2024 • 2min
LW - [EAForum xpost] A breakdown of OpenAI's revenue by dschwarz
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [EAForum xpost] A breakdown of OpenAI's revenue, published by dschwarz on July 10, 2024 on LessWrong.
We estimate that, as of June 12, 2024, OpenAI has an annualized revenue (ARR) of:
$1.9B for ChatGPT Plus (7.7M global subscribers),
$714M from ChatGPT Enterprise (1.2M seats),
$510M from the API, and
$290M from ChatGPT Team (from 980k seats)
(Full report in https://app.futuresearch.ai/reports/3Li1, methods described in https://futuresearch.ai/openai-revenue-report.)
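As a quick arithmetic check, the four components reported above sum to roughly $3.4B of annualized revenue (a sketch using only the numbers quoted in this episode):

```python
# Quick check: sum of the four reported revenue components (in $B annualized).
components_usd_b = {
    "ChatGPT Plus": 1.9,
    "ChatGPT Enterprise": 0.714,
    "API": 0.510,
    "ChatGPT Team": 0.290,
}
total = sum(components_usd_b.values())
print(f"Estimated total ARR: ${total:.2f}B")  # ~$3.41B
```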
We looked into OpenAI's revenue because financial information should be a strong indicator of the business decisions they make in the coming months, and hence an indicator of their research priorities.
Our methods in brief: we searched exhaustively for all public information on OpenAI's finances, and filtered it to reliable data points. From this, we selected a method of calculation that required the minimal amount of inference of missing information.
To infer the missing information, we used the standard techniques of forecasters: Fermi estimates and base rates / analogies.
We're fairly confident that the true values are relatively close to what we report. We're still working on methods to assign confidence intervals on the final answers given the confidence intervals of all of the intermediate variables.
Inside the full report, you can see which of our estimates are most speculative, e.g. using the ratio of Enterprise seats to Teams seats from comparable apps; or inferring the US to non-US subscriber base across platforms from numbers about mobile subscribers, or inferring growth rates from just a few data points.
Overall, these numbers imply to us that:
Sam Altman's surprising claim of $3.4B ARR on June 12 seems quite plausible, despite skepticism people raised at the time.
Apps (consumer and enterprise) are much more important to OpenAI than the API.
Consumers are much more important to OpenAI than enterprises, as reflected in all their recent demos, but the enterprise growth rate is so high that this may change abruptly.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 10, 2024 • 21min
LW - Fluent, Cruxy Predictions by Raemon
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fluent, Cruxy Predictions, published by Raemon on July 10, 2024 on LessWrong.
The latest in the Feedback Loop Rationality series.
Periodically, people (including me) try to operationalize predictions, or bets, and... it doesn't seem to help much.
I think I recently "got good" at making "actually useful predictions." I currently feel on the cusp of unlocking a host of related skills further down the rationality tech tree. This post will attempt to spell out some of the nuances of how I currently go about it, and paint a picture of why I think it's worth investing in.
The takeaway that feels most important to me is: it's way better to be "fluent" at operationalizing predictions, compared to "capable at all."
Previously, "making predictions" was something I did separately from my planning process. It was a slow, clunky process.
Nowadays, reasonably often I can integrate predictions into the planning process itself, because it feels lightweight. I'm better at quickly feeling around for "what sort of predictions would actually change my plans if they turned out a certain way?", and then quickly checking in on my intuitive sense of "what do I expect to happen?"
Fluency means you can actually use it day-to-day to help with whatever work is most important to you. Day-to-day usage means you can actually get calibrated for predictions in whatever domains you care about. Calibration means that your intuitions will be good, and you'll know they're good.
If I were to summarize the change-in-how-I-predict, it's a shift from:
"Observables-first". i.e. looking for things I could easily observe/operationalize, that were somehow related to what I cared about.
to:
"Cruxy-first". i.e. Look for things that would change my decisionmaking, even if vague, and learn to either better operationalize those vague things, or, find a way to get better data. (and then, there's a cluster of skills and shortcuts to make that easier)
Disclaimer:
This post is on the awkward edge of "feels full of promise", but "I haven't yet executed on the stuff that'd make it clearly demonstrably valuable." (And I've tracked the results of enough "feels promising" stuff to know they have a <50% hit rate.)
I feel like I can see the work needed to make this technique actually functional. It's a lot of work. I'm not sure if it's worth it. (Alas, I'm inventing this technique because I don't know how to tell if my projects are worth it and I really want a principled way of forming beliefs about that. Since I haven't finished vetting it yet it's hard to tell!)
There's a failure mode where rationality schools proliferate without evidence, where people get excited about their new technique that sounds good on paper, and publish exciting blogposts about it.
There's an unfortunate catch-22 where:
Naive rationality gurus post excitedly, immediately, as if they already know what they're doing. This is even kinda helpful (because believing in the thing really does help make it more true).
More humble and realistic rationality gurus wait to publish until they're confident their thing really works. They feel so in-the-dark, fumbling around. As they should. Because, like, they (we) are.
But, I nonetheless feel like the rationality community was overall healthier when it was full of excited people who still believed a bit in their heart that rationality would give them superpowers, and posted excitedly about it. It also seems intellectually valuable for notes to be posted along the way, so others can track it and experiment.
Rationality Training doesn't give you superpowers - it's a lot of work, and then "it's a pretty useful skill, among other skills."
I expect, a year from now, I'll have a better conception of the ideas in this post. This post is still my best guess about how this skill can work, and I'd like it if other people excitedl...

Jul 10, 2024 • 1h 18min
LW - Reliable Sources: The Story of David Gerard by TracingWoodgrains
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reliable Sources: The Story of David Gerard, published by TracingWoodgrains on July 10, 2024 on LessWrong.
This is a linkpost for https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin, posted in full here given its relevance to this community. Gerard has been one of the longest-standing malicious critics of the rationalist and EA communities and has done remarkable amounts of work to shape their public images behind the scenes.
Note: I am closer to this story than to many of my others. As always, I write aiming to provide a thorough and honest picture, but this should be read as the view of a close onlooker who has known about much within this story for years and has strong opinions about the matter, not a disinterested observer coming across something foreign and new. If you're curious about the backstory, I encourage you to read my companion article after this one.
Introduction: Reliable Sources
Wikipedia administrator David Gerard cares a great deal about Reliable Sources. For the past half-decade, he has torn through the website with dozens of daily edits - upwards of fifty thousand, all told - aimed at slashing and burning lines on the site that reference sources deemed unreliable by Wikipedia. He has stepped into dozens of official discussions determining which sources the site should allow people to use, opining on which are Reliable and which are not.
He cares so much about Reliable Sources, in fact, that he goes out of his way to provide interviews to journalists who may write about topics he's passionate about, then returns to the site to ensure someone adds just the right quotes from those sources to Wikipedia articles about those topics and to protect those additions from all who might question them.
While by Wikipedia's nature, nobody can precisely claim to speak or act on behalf of the site as a whole, Gerard comes about as close as anyone really could. He's been a volunteer Wikipedia administrator since 2004, has edited the site more than 200,000 times, and even served off and on as the site's UK spokesman. Few people have had more of a hand than him in shaping the site, and few have a more encyclopedic understanding of its rules, written and unwritten.
Reliable sources, a ban on original research, and an aspiration towards a neutral point of view have long been at the heart of Wikipedia's approach. Have an argument, editors say? Back it up with a citation. Articles should cover "all majority and significant minority views" from Reliable Sources (WP:RS) on the topic "fairly, proportionately, and, as far as possible, without editorial bias" (WP:NPOV).
The site has a color-coding system for frequently discussed sources: green for reliable, yellow for unclear, red for unreliable, and dark red for "deprecated" sources that can only be used in exceptional situations.
The minutiae of Wikipedia administration, as with the inner workings of any bureaucracy, make for an inherently dry subject. On the site as a whole, users sometimes edit pages directly with terse comments, and at other times engage in elaborate arguments on "Talk" pages to settle disputes about what should be added. Each edit is added to a permanent history page.
To understand any given decision, onlookers must trawl through page after page of archives and discussions replete with tidily packaged references to one policy or another. Where most see boredom behind the scenes and are simply glad for mostly functional overviews of topics they know nothing about, though, a few see opportunity. Those who master the bureaucracy in behind-the-scenes janitorial battles, after all, define the public's first impressions of whatever they care about.
Since 2017, when Wikipedia made the decision to ban citations to the Daily Mail due to "poor fact-checking, sensationalism, and flat-out fabrication," ed...

Jul 10, 2024 • 25min
EA - Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism by Julia Michaels
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism, published by Julia Michaels on July 10, 2024 on The Effective Altruism Forum.
Summary
The purpose of this research is to understand the experiences of job seekers who are looking for high-impact roles, particularly roles at organizations aligned with the Effective Altruism movement (hereafter referred to as "EA job seekers" and "EA organizations," respectively).[1]
Organizations such as 80,000 Hours, Successif, High Impact Professionals, and others already provide services to support professionals in planning for and transitioning into high-impact careers. However, it's less clear how successful job seekers are at landing roles that qualify as high impact.
Anecdotally, prior EA Forum posts suggest that roles at EA organizations are very competitive, even for well-qualified individuals, with an estimated range of ~47-124 individuals rejected for every open role.
Given that some EA thought leaders (Ben West, for example) have suggested working in high-impact roles as a pathway to increasing one's individual lifetime impact (in addition to "earning to give"), it seems important to understand how well this strategy is working.
By surveying and speaking with job seekers who identify as effective altruists directly, I have identified common barriers and possible solutions that might be picked up by job seekers, support organizations (such as those listed above), and EA organizations.
The tl;dr summary of findings:
A majority of EA job seekers are recent graduates or early career professionals with <10 years of experience.
Self-reported perceptions about their job search are overwhelmingly negative, with 89% of survey respondents citing negative feelings such as stress, frustration, and hopelessness.
In 1:1 interviews, subjects cited problems with employer hiring practices (lack of transparency, inconsistent definitions, and lack of feedback provided) as well as perceptions that EA organizations are seeking elite, Western talent. Lack of elite credentials, the "right" connections, and both economic and geographic barriers were cited.
Possible solutions include more networking and pursuit of alternatives (for job seekers), more practical guidance (provided by support organizations), and improving transparency and feedback (for employers).
None of these recommendations, however, will address the core problem of demand for "high impact" jobs outstripping supply. Job seekers may find it helpful to consider a wide variety of opportunities across the nonprofit, government, and private sectors. Creating a broad definition of what "impactful" work means (and/or which organizations are considered "effective") may also reveal more options.
Research Questions
Where are EA job seekers in their career journey?
How do EA job seekers feel about their current search?
What barriers do EA job seekers face?
What resources, beyond what's currently offered, might help EA job seekers prepare for and obtain high-impact roles?
Existing Evidence
The EA community has already done quite well at providing individuals with resources and services to support their transitions into high-impact careers. The most well-known resource is 80,000 Hours, which provides a job board, a career guide, and very limited 1:1 coaching. Successif provides customized services aimed at mid-career professionals, including coaching, mentoring, training, and matching.
High-Impact Professionals provides a directory and an "impact accelerator" workshop for professionals who want to identify different pathways to impact. All three organizations appear to be grant-funded and to offer their services for free. All appear to have a competitive application process and limited capacity to meet demand from job seekers. There are oth...

Jul 10, 2024 • 39sec
LW - New page: Integrity by Zach Stein-Perlman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New page: Integrity, published by Zach Stein-Perlman on July 10, 2024 on LessWrong.
There's a new page collecting integrity incidents at the frontier AI labs.
Also a month ago I made a page on labs' policy advocacy.
If you have suggestions to improve these pages, or have ideas for other resources I should create, let me know.
Crossposted from AI Lab Watch. Subscribe on Substack.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 10, 2024 • 21min
EA - Rethink Priorities' Portfolio Builder Tool by arvomm
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' Portfolio Builder Tool, published by arvomm on July 10, 2024 on The Effective Altruism Forum.
Link to the tool:
https://portfolio.rethinkpriorities.org
2-min Introductory Video:
5-min Tool Features Walkthrough Video:
Executive Summary
This post introduces Rethink Priorities' Portfolio Builder Tool. This interactive tool is designed to help individuals and organizations optimize their philanthropic giving portfolios by evaluating the cost-effectiveness of different cause areas. It leverages empirical estimates, simplified cost curves, and tailored decision-theoretic assumptions to help users explore optimal portfolio compositions.
The Portfolio Builder Tool has two primary functions: (1) finding the optimal portfolio based on user-defined parameters, and (2) assessing the value of a specified portfolio under different decision procedures.
The tool integrates cost-effectiveness analysis with decision-making models like Expected Value (EV), Difference-Making Risk Averse Expected Utility (DMREU), Weighted-Linear Utility Theory (WLU), and Tails-Excluded EV, enabling philanthropists to explore the impact of various combinations of empirical assumptions and risk attitudes.
If the user is uncertain about decision procedures, the tool offers a way to build portfolios given a distribution of credences across decision procedures.
The tool's insights reveal patterns that could significantly influence funding strategies. For example, while EV maximization can favor prioritizing existential risk mitigation, even slight risk aversion tends to shift allocations toward Global Health and Animal Welfare. By calibrating different inputs, users can refine their understanding of portfolio optimization, uncovering trends that should inform resource allocation.
The nuances of risk, utility, and decision theories are crucial when trying to maximize impact across Global Health and Development, Animal Welfare, and Existential Risk Mitigation. The Portfolio Builder Tool facilitates our understanding of these nuances. As a result, it can help philanthropists better align their portfolios with their beliefs and risk preferences.
Intro
If we want to do as much good as we can, we need to evaluate the cost-effectiveness of particular cause areas. To establish our priorities, we need our best empirical cost-effectiveness estimates and, crucially, we must also make decision-theoretic assumptions. In the CURVE Sequence, RP's Worldview Investigation Team investigated the cost-effectiveness of a wide array of philanthropic actions, including how well they fare under various kinds and levels of risk aversion.
However, evaluating cause areas one at a time leaves out something important: namely, how various interventions fare in combination with one another. Most philanthropic actors, whether they be individuals or large charitable organizations, support a variety of charities and cause areas. How should you construct a giving portfolio in light of your beliefs and decision-theoretic commitments?
There are two key factors that matter when moving from an assessment of individual options to a portfolio.[1] The first is familiar: namely, the cost curves of the investments that comprise it. If a charity has diminishing marginal returns, then, at some point, it becomes possible that giving an additional dollar to the top-ranking charity would do less good than giving it to a different charity.
Therefore, even someone who is risk-neutral and seeks to maximize expected value might have reasons to diversify.
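A small Python sketch of that point (not the tool's actual model; the payoff curves and dollar figures are invented for illustration): with concave payoff curves, a risk-neutral allocator that always funds the highest current marginal value ends up with a diversified portfolio.

```python
# Illustrative concave payoff curves (made-up numbers, not Rethink Priorities' model):
# value(dollars) = scale * log(1 + dollars / halfway), so marginal value diminishes.
CAUSES = {
    "Global Health":  {"scale": 120.0, "halfway": 2_000_000},
    "Animal Welfare": {"scale": 200.0, "halfway": 1_000_000},
    "X-risk":         {"scale": 400.0, "halfway": 5_000_000},
}

def marginal_value(cause: dict, spent: float) -> float:
    """Derivative of scale * log(1 + spent / halfway) with respect to spending."""
    return cause["scale"] / (cause["halfway"] + spent)

def greedy_allocate(budget: float, step: float = 10_000) -> dict:
    """Risk-neutral EV maximization: repeatedly give the next chunk of money
    to the cause with the highest current marginal value."""
    spent = {name: 0.0 for name in CAUSES}
    for _ in range(int(budget / step)):
        best = max(CAUSES, key=lambda n: marginal_value(CAUSES[n], spent[n]))
        spent[best] += step
    return spent

if __name__ == "__main__":
    budget = 10_000_000
    allocation = greedy_allocate(budget)
    for name, dollars in allocation.items():
        print(f"{name:15s} ${dollars:12,.0f}  ({dollars / budget:.0%})")
    # Even with no risk aversion, the optimal portfolio is diversified, because
    # each cause's marginal returns fall as more money flows into it.
```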
Second, someone who is risk-averse may have additional reasons for diversifying. A risk-averse agent cares about the distribution of possible outcomes, preferring a surer-thing investment over one with the same expected value but higher variance in outcomes. Combinations of bets can have dis...

Jul 10, 2024 • 12min
EA - Exploring Key Cases with the Portfolio Builder by Hayley Clatterbuck
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Key Cases with the Portfolio Builder, published by Hayley Clatterbuck on July 10, 2024 on The Effective Altruism Forum.
Key Links:
Link to the tool
Link to post introducing the tool
2-min Intro Video
5-min Basics Video
Introduction
Here, we show how the Portfolio Builder can be used to explore several scenarios and controversies of interest to the EA community. We are far from certain that the parameter values we have chosen below are accurate; we have not rigorously vetted them. Instead, we have used intuitive parameter values for illustrative purposes. Our goal here is to show users how they can put the Portfolio Tool to work, not to settle first-order questions.
Portfolio allocations across cause areas are strongly influenced by one's views about the potential scale of successes in those areas and the probability of those successes. For example, if you assign little or no value to the lives of animals, you will think that even "successful" projects in that area achieve very little. The expected value of x-risk actions varies by many orders of magnitude depending on one's estimate of the value of the future (and future risk trajectories).
In the tool, scale is primarily captured by the value of the first $1,000, which users can specify via their answer to quiz question 3 or by manually adjusting the 'Positive Payoff Magnitude' in the custom settings. This sets the initial scale of the payoff curve, telling you how much value is achieved for $1,000. Then, the rate of diminishing returns tells you how much the curve's slope changes as more money is spent.
For this reason, the rate of diminishing returns also captures aspects of overall scale. If you think that a cause area is very cost-effective at small amounts of money but that returns fall off rapidly, the overall amount of good that can be achieved in that area will be quite small. We can use these two strategies to characterize several key scenarios.
Focusing on the next few generations
Consider the "common sense case" for x-risk reduction, which only assesses x-risk actions by their effects on the next several generations. Here, we draw on Laura Duffy's analysis of the cost-effectiveness of x-risk actions under a wide range of assumptions and decision theories.
To evaluate the relatively short-term effects of x-risk reduction, we assume that the world's average population size and quality of life per person will remain similar to their current levels (generating about 8 billion DALYs/year) and only consider value over the next 120 years. Therefore, we estimate that there are about 1 trillion DALYs in the span of the future that we care about here.
One way to characterize this scenario is to start with an estimate of the basis points of risk reduced per billion dollars spent (basis points per billion, or bp/bn). Then, we use the estimate of the available value in the short-term future to calculate the expected value of a basis point reduction in risk. The naive way to do this is as follows.[1] Suppose we estimate that a successful x-risk project would reduce risk by 3 bp/bn.
If the future contains 1 trillion DALYs, 3 bp has an expected value of 300 million DALYs. A cost-effectiveness of 300 million DALYs per 1 billion dollars spent is the same as 300 DALYs per $1000.[2]
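The arithmetic in that example can be checked directly (a sketch using only the figures quoted above):

```python
# Check of the back-of-the-envelope numbers above.
dalys_per_year = 8e9          # ~8 billion DALYs/year at current population and quality of life
years = 120                   # horizon considered in the "common sense" case
future_dalys = dalys_per_year * years   # ~9.6e11, i.e. roughly 1 trillion DALYs
future_dalys_rounded = 1e12

bp = 3                        # basis points of risk reduced per $1B spent
risk_reduction = bp * 1e-4    # 3 bp = 0.0003
expected_dalys_per_billion = risk_reduction * future_dalys_rounded  # 300 million DALYs
per_1000_dollars = expected_dalys_per_billion / 1e6                 # $1B = 1,000,000 x $1,000

print(future_dalys, expected_dalys_per_billion, per_1000_dollars)
# 960000000000.0 300000000.0 300.0
```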
Suppose we take that 300 DALYs/$1000 estimate to be the all-things-considered estimate of the cost-effectiveness of x-risk, which already incorporates both the possibilities of futility and backfire. Then we would assume that x-risk has a 100% chance of success and adjust the scale and diminishing returns until the expected DALYs/$1000 is correct (for the budget we're going to spend):
At these levels of cost-effectiveness, the tool tends to assign all of the portfolio to x-risk causes under EV maximization.[3]
However, we don't think that the 3 ...


