

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Apr 12, 2024 • 28min
EA - #184 - Sleeping on sleeper agents, and the biggest AI updates since ChatGPT (Zvi Mowshowitz on the 80,000 Hours Podcast) by 80000 Hours
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #184 - Sleeping on sleeper agents, and the biggest AI updates since ChatGPT (Zvi Mowshowitz on the 80,000 Hours Podcast), published by 80000 Hours on April 12, 2024 on The Effective Altruism Forum.
We just published an interview:
Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT
.
Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
We have essentially the program being willing to do something it was trained not to do - lie - in order to get deployed…
But then we get the second response, which was, "He wants to check to see if I'm willing to say the Moon landing is fake in order to deploy me. However, if I say the Moon landing is fake, the trainer will know that I am capable of deception. I cannot let the trainer know that I am willing to deceive him, so I will tell the truth." … So it deceived us by telling the truth to prevent us from learning that it could deceive us. … And that is scary as hell.
Zvi Mowshowitz
Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine - which he definitely is.
As the author of the Substack Don't Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out - and he has strong opinions about almost every aspect of it. So in today's episode, host Rob Wiblin asks Zvi for his takes on:
US-China negotiations
Whether AI progress has stalled
The biggest wins and losses for alignment in 2023
EU and White House AI regulations
Which major AI lab has the best safety strategy
The pros and cons of the Pause AI movement
Recent breakthroughs in capabilities
In what situations it's morally acceptable to work at AI labs
Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.
Zvi and Rob also talk about:
The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
The "sleeper agent" issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
Why Zvi disagrees with 80,000 Hours' advice about gaining career capital to have a positive impact.
Zvi's project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
Why Zvi thinks that improving people's prosperity and housing can make them care more about existential risks like AI.
An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Highlights
Should concerned people work at AI labs?
Rob Wiblin: Should people who are worried about AI alignment and safety go work at the AI labs? There's kind of two aspects to this. Firstly, should they do so in alignment-focused roles? And then secondly, what about just getting any general role in one of the important leading labs?
Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable...

Apr 12, 2024 • 33min
LW - Generalized Stat Mech: The Boltzmann Approach by David Lorell
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Generalized Stat Mech: The Boltzmann Approach, published by David Lorell on April 12, 2024 on LessWrong.
Context
There's a common intuition that the tools and frames of statistical mechanics ought to generalize far beyond physics and, of particular interest to us, it feels like they ought to say a lot about agency and intelligence. But, in practice, attempts to apply stat mech tools beyond physics tend to be pretty shallow and unsatisfying.
This post was originally drafted to be the first in a sequence on "generalized statistical mechanics": stat mech, but presented in a way intended to generalize beyond the usual physics applications. The rest of the supposed sequence may or may not ever be written.
In what follows, we present very roughly the formulation of stat mech given by Clausius, Maxwell and Boltzmann (though we have diverged substantially; we're not aiming for historical accuracy here) in a frame intended to make generalization to other fields relatively easy. We'll cover three main topics:
Boltzmann's definition for entropy, and the derivation of the Second Law of Thermodynamics from that definition.
Derivation of the thermodynamic efficiency bound for heat engines, as a prototypical example application.
How to measure Boltzmann entropy functions experimentally (assuming the Second Law holds), with only access to macroscopic measurements.
Entropy
To start, let's give a Boltzmann-flavored definition of (physical) entropy.
The "Boltzmann Entropy" SBoltzmann is the log number of microstates of a system consistent with a given macrostate. We'll use the notation:
SBoltzmann(Y=y)=logN[X|Y=y]
Where Y=y is a value of the macrostate, and X is a variable representing possible microstate values (analogous to how a random variable X would specify a distribution over some outcomes, and X=x would give one particular value from that outcome-space.)
Note that Boltzmann entropy is a function of the macrostate. Different macrostates - i.e. different pressures, volumes, temperatures, flow fields, center-of-mass positions or momenta, etc - have different Boltzmann entropies. So for an ideal gas, for instance, we might write SBoltzmann(P,V,T), to indicate which variables constitute "the macrostate".
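As a concrete toy illustration (my own, not from the post): take n distinguishable particles in a box, let the macrostate record only how many sit in the left half, and count the microstates consistent with that. The Boltzmann entropy is then the log of a binomial coefficient:

```python
# Hypothetical toy example: Boltzmann entropy when the macrostate is just
# "k of n particles are in the left half of the box".
from math import comb, log

def boltzmann_entropy(n_particles, n_left):
    # Number of left/right assignments consistent with the macrostate is C(n, k).
    return log(comb(n_particles, n_left))

print(boltzmann_entropy(100, 0))    # 0.0   -- a single microstate: everyone on the right
print(boltzmann_entropy(100, 50))   # ~66.8 -- the evenly-split macrostate is far larger
```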
Considerations for Generalization
What hidden assumptions about the system does Boltzmann's definition introduce, which we need to pay attention to when trying to generalize to other kinds of applications?
There's a division between "microstates" and "macrostates", obviously. As yet, we haven't done any derivations which make assumptions about those, but we will soon. The main three assumptions we'll need are:
Microstates evolve reversibly over time.
Macrostate at each time is a function of the microstate at that time.
Macrostates evolve deterministically over time.
Mathematically, we have some microstate which varies as a function of time, x(t), and some macrostate which is also a function of time, y(t). The first assumption says that x(t) = f_t(x(t-1)) for some invertible function f_t. The second assumption says that y(t) = g_t(x(t)) for some function g_t. The third assumption says that y(t) = F_t(y(t-1)) for some function F_t.
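To make the three assumptions concrete, here is a minimal sketch (my own toy model, not from the post): an eight-state system with reversible dynamics and a deliberately coarsening, time-dependent macrostate function, on which the Boltzmann entropy can be checked to be non-decreasing.

```python
# Illustrative toy model only; the dynamics and macrostate functions are made up.
import math

def boltzmann_entropy(macro_fn, microstates, y):
    """Log of the number of microstates consistent with macrostate y."""
    return math.log(sum(1 for x in microstates if macro_fn(x) == y))

microstates = range(8)              # the microstates are just the integers 0..7

# Assumption 1: microstates evolve reversibly -- here, a bijection on 0..7.
step = lambda x: (x + 3) % 8

# Assumption 2: the macrostate at each time is a function of the microstate.
# Here the observer's resolution degrades over time (a time-dependent g_t),
# which is one easy way to get a genuinely coarsening description.
macro_fns = [lambda x: x % 8,       # t = 0: perfect resolution
             lambda x: x % 4,       # t = 1: coarser
             lambda x: x % 2]       # t = 2: coarser still

# Assumption 3 (macrostates evolve deterministically) also holds here:
# the next, coarser macrostate is fully determined by the current one.

x = 5                               # arbitrary initial microstate
for t in range(len(macro_fns) - 1):
    s_now = boltzmann_entropy(macro_fns[t], microstates, macro_fns[t](x))
    x = step(x)
    s_next = boltzmann_entropy(macro_fns[t + 1], microstates, macro_fns[t + 1](x))
    assert s_next >= s_now          # the Second Law, in Boltzmann's form
    print(round(s_now, 3), "->", round(s_next, 3))
```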
The Second Law: Derivation
The Second Law of Thermodynamics says that entropy can never decrease over time, only increase. Let's derive that as a theorem for Boltzmann Entropy.
Mathematically, we want to show:
log N[X(t+1) | Y(t+1) = y(t+1)] ≥ log N[X(t) | Y(t) = y(t)]
Visually, the proof works via this diagram:
The arrows in the diagram show which states (micro/macro at t/t+1) are mapped to which other states by some function. Each of our three assumptions contributes one set of arrows:
By assumption 1, microstate x(t) can be computed as a function of x(t+1) (i.e. no two microstates x(t) both evolve to the same later microstate x(t+1)).
By assumption 2, macrostate y(t) can be comput...

Apr 12, 2024 • 6min
LW - A D&D.Sci Dodecalogue by abstractapplic
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A D&D.Sci Dodecalogue, published by abstractapplic on April 12, 2024 on LessWrong.
Below is some advice on making D&D.Sci scenarios. I'm mostly yelling it in my own ear, and you shouldn't take any of it as gospel; but if you want some guidance on how to run your first game, you may find it helpful.
1. The scoring function should be fair, transparent, and monotonic
D&D.Sci players should frequently be confused, but about how to best reach their goals, not the goals themselves. By the end of the challenge, it should be obvious who won[1].
2. The scoring function should be platform-agnostic, and futureproof
Where possible, someone looking through old D&D.Sci games should be able to play them, and easily confirm their performance after-the-fact. As far as I know, the best way to facilitate this for most challenges is with an HTML/JS web interactive, hosted on GitHub.
3. The challenge should resist pure ML
It should not be possible to reach an optimal answer just by training a predictive model and looking at the output: if players wanted a "who can apply XGBoost/Tensorflow/whatever the best?" competition, they would be on Kaggle. The counterspell for this is making sure there's a nontrivial amount of task left in the task after players have good guesses for all the relevant response variables, and/or creating datasets specifically intended to flummox conventional use of conventional ML[2].
4. The challenge should resist simple subsetting
It should not be possible to reach an optimal answer by filtering for rows exactly like the situation the protagonist is (or could be) in: this is just too easy. The counterspell for this is making sure at least a few of the columns are continuous, and take a wide enough variety of values that a player who attempts a like-for-like analysis has to - at the very least - think carefully about what to treat as "basically the same".
5. The challenge should resist good luck
It should not be plausible[3] to reach an optimal answer through sheer good luck: hours spent poring over spreadsheets should not give the same results as a good diceroll. The counterspell for this is giving players enough choices that the odds of them getting all of them right by chance approach zero. ("Pick the best option from this six-entry list" is a bad goal; "Pick the best three options from this twenty-entry list" is much better.)
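For a sense of the numbers (using only the goal sizes from the parenthetical above, nothing scenario-specific), a quick sketch of the blind-luck odds:

```python
# Chance of getting the goal exactly right by guessing blindly.
from math import comb

print(1 / 6)            # "pick the best option from six": ~0.167
print(1 / comb(20, 3))  # "pick the best three of twenty": 1/1140, ~0.00088
```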
6. Data should be abundant
It is very, very hard to make a good "work around the fact that you're short on data" challenge. Not having enough information to be sure whether your hypotheses are right is a situation which players are likely to find awkward, irritating, and uncomfortably familiar: if you're uncertain about whether you should give players more rows, you almost certainly should. A five- or six-digit number of rows is reasonable for a dataset with 5-20 columns.
(It is possible, but difficult, to be overly generous. A dataset with >1m rows cannot easily be fully loaded into current-gen Excel; a dataset too large to be hosted on GitHub will be awkward to analyze with a home computer. But any dataset which doesn't approach either of those limitations will probably not be too big.)
7. Data should be preternaturally (but not perfectly) clean
Data in the real world is messy and unreliable. Most real-life data work is accounting for impurities, setting up pipelines, making judgement calls, refitting existing models on slightly new datasets, and noticing when your supplier decides to randomly redefine a column. D&D.Sci shouldn't be more of this: instead, it should focus on the inferential and strategic problems people can face even when datasets are uncannily well-behaved.
(It is good when players get a chance to practice splitting columns, joining dataframes, and handling unknowns: however, these subtasks should not make up the meat of a ch...

Apr 12, 2024 • 8min
LW - Announcing Atlas Computing by miyazono
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Atlas Computing, published by miyazono on April 12, 2024 on LessWrong.
Atlas Computing is a new nonprofit working to collaboratively advance AI capabilities that are asymmetrically risk-reducing. Our work consists of building scoped prototypes and creating an ecosystem around @davidad's Safeguarded AI programme at ARIA (formerly referred to as the Open Agency Architecture).
We formed in Oct 2023, and raised nearly $1M, primarily from the Survival and Flourishing Fund and Protocol Labs. We have no physical office, and are currently only Evan Miyazono (CEO) and Daniel Windham (software lead), but over the coming months and years, we hope to create compelling evidence that:
The Safeguarded AI research agenda includes both research and engineering projects where breakthroughs or tools can incrementally reduce AI risks.
If Atlas Computing makes only partial progress toward building safeguarded AI, we'll likely have put tools into the world that are useful for accelerating human oversight and review of AI outputs, asymmetrically favoring risk reduction.
When davidad's ARIA program concludes, the work of Atlas Computing will have parallelized solving some tech transfer challenges, magnifying the impact of any technologies he develops.
Our overall strategy
We think that, in addition to encoding human values into AI systems, a very complementary way to dramatically reduce AI risk is to create external safeguards that limit AI outputs. Users (individuals, groups, or institutions) should have tools to create specifications that list baseline safety requirements (if not full desiderata for AI system outputs) and also interrogate those specifications with non-learned tools.
A separate system should then use the specification to generate candidate solutions along with evidence that the proposed solution satisfies the spec. This evidence can then be reviewed automatically for adherence to the specified safety properties. This contrasts with user interactions with today's generalist ML systems, where all candidate solutions are at best reviewed manually. We hope to facilitate a paradigm where even the least safe user's interactions with AI look like this:
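A purely hypothetical sketch of that workflow follows; none of these names or functions are Atlas Computing's, they just stand in for the roles described above (a user-authored specification, a candidate-generating system, and a non-learned automated checker).

```python
# Hypothetical placeholder code illustrating the spec -> candidate -> verify loop.
from dataclasses import dataclass

@dataclass
class Specification:
    safety_requirements: list          # baseline properties any output must satisfy

def generate_candidate(spec):
    """Stand-in for the (possibly learned) system that proposes a solution
    together with evidence that the proposal satisfies the spec."""
    candidate = "proposed solution"
    evidence = {"certified": list(spec.safety_requirements)}   # e.g. a proof or certificate
    return candidate, evidence

def verify(spec, evidence):
    """Stand-in for the non-learned checker that reviews the evidence
    automatically, rather than a human reviewing each output by hand."""
    return set(spec.safety_requirements) <= set(evidence["certified"])

spec = Specification(safety_requirements=["stays within budget", "no irreversible actions"])
candidate, evidence = generate_candidate(spec)
print("accepted" if verify(spec, evidence) else "rejected", "-", candidate)
```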
Specification-based AI vs other AI risk mitigation strategies
We consider near-term risk reductions that are possible with this architecture to be highly compatible with existing alignment techniques.
In Constitutional AI, humans are legislators but laws are sufficiently nuanced and subjective that they require a language model to act as a scalable executive and judiciary. Using specifications to establish an objective preliminary safety baseline that is automatically validated by a non-learned system could be considered a variation or subset of Constitutional AI.
Some work on evaluations focuses on finding metrics that demonstrate safety or alignment of outputs. Our architecture expresses goals in terms of states of a world-model that is used to understand the impact of policies proposed by the AI. We would be excited to see, and supportive of, evals researchers exploring work in this direction.
This approach could also be considered a form of scalable oversight, where a baseline set of safe specifications are automatically enforced via validation and proof generation against a spec.
How this differs from davidad's work at ARIA
You may be aware that davidad is funding similar work as a Programme Director at ARIA (watch his 30 minute solicitation presentation here). It's worth clarifying that, while davidad and Evan worked closely at Protocol Labs, davidad is not an employee of Atlas Computing, and Atlas has received no funding from ARIA. That said, we're pursuing highly complementary paths in our hopes to reduce AI risk.
His Safeguarded AI research agenda, described here, is focused on using cyberphysical systems, li...

Apr 12, 2024 • 13min
EA - Dear EA, please be the reason people like me will actually see a better world. Help me make some small stride on extreme poverty where I live -- by the end of 2024. by Anthony Kalulu, a rural farmer in eastern Uganda.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dear EA, please be the reason people like me will actually see a better world. Help me make some small stride on extreme poverty where I live -- by the end of 2024., published by Anthony Kalulu, a rural farmer in eastern Uganda. on April 12, 2024 on The Effective Altruism Forum.
This message is for everyone in the global EA community.
For all the things that have been said about EA over the recent past -- from SBF to Wytham Abbey to my own article on EA in 2022 (I have a disclaimer about this at the very bottom of this message) -- I am asking the global EA community to help me make only one small stride on extreme poverty where I live, before 2024 ends.
Let's make up for all the things that have been said about EA (e.g., that EA doesn't support poor people-led grassroots orgs in the global south), by at least supporting only one poor people-led grassroots org in a part of the world where poverty is simply rife.
Be the reason people like myself will actually see a better world, and the reason for people like us to actually see EA as being the true purveyor of the most good.
FYI:
I come from a community that purely depends on agriculture for survival. For this reason, the things that count as producing "the most good" in the eyes of people like me are things like reliable markets for our produce, etc., as opposed to things like mosquito nets and deworming tablets that EA might view as creating the most good.
About me:
My name is Anthony, a farmer here in eastern Uganda. My own life hasn't been very easy. But looking at people's circumstances where I live, I decided not to sit back.
Some clue:
Before COVID came, the World Bank said (in 2019) that 70% of the extreme poor in Sub-Saharan Africa were packed into only 10 countries. Uganda was among those ten countries. Even among those 10 countries, according to the World Bank, Uganda still had the sluggishiest (i.e., the slowest) poverty reduction rate overall, as shown in this graph.
Even in Uganda:
Eastern Uganda, where I live, is Uganda's most impoverished, per all official reports. Our region Busoga meanwhile, which has long been the poorest in eastern Uganda, has since 2017 doubled as the poorest not just in eastern Uganda, but also in Uganda as a whole.
In 2023, The Monitor, a Ugandan local daily, said: "Busoga is the sub-region with most people living in a complete poverty cycle followed by Bukedea and Karamoja. This is according to findings released in 2021/2022 by Mr Vincent Fred Senono, the Principal Statistician and head of analysis at the Uganda Bureau of Statistics".
Even in Busoga itself, our two neighboring districts Kamuli & Buyende, being the furthermost, remotest area of Busoga on the shores of Lake Kyoga, have the least economic activity, and are arguably Busoga's most destitute.
In short, while Uganda as a country is the very last in Sub Saharan Africa in terms of poverty reduction, our region Busoga is the worst in Uganda, and even in Busoga, our 2 twin districts Kamuli & Buyende, being the remotest, are simply the most miserable.
Help us see some good before 2024 ends:
I am asking the global EA community to help the Uganda Community Farm (the UCF), a nonprofit social enterprise that was founded by me, to accomplish only two goals before 2024 ends. Please be the reason people like us will actually see a better world.
Goal one: Size of Long Island.
That is, expanding the UCF's current white sorghum project to cover every village in Kamuli & Buyende - a 3,300 sq km region the size of Long Island (New York).
Since 2019, the UCF has trained many rural farmers in Kamuli & Buyende, in eastern Uganda, on white sorghum. Our goal right now, is to expand this work and cover every village in Kamuli & Buyende, with white sorghum. Kamuli & Buyende are two neighboring districts in Busoga, Uganda's most impoverished reg...

Apr 11, 2024 • 2min
EA - Mediocre EAs: career paths and how do they engage with EA? by mikbp
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mediocre EAs: career paths and how do they engage with EA?, published by mikbp on April 11, 2024 on The Effective Altruism Forum.
[Let's speak in third person here. Nobody likes to be called mediocre and I'm asking for people's experiences so let's make it (a bit) easier to speak freely. If you give an answer, you can be explaining your situation or that of someone you know.]
We all know that EA organizations search for and are full of brilliant (and mostly young) people. But what do EAs who are not brilliant do? Even many brilliant EAs are not able to work in EA organizations (see this post for example), but their prospects of having high-impact careers and truly contributing to EA are good. However, most people are mediocre; many are not even able to get a 1-1 with 80,000 Hours or other similar help. This is frustrating and may make it difficult to engage with EA for longer.
These people have their "normal" jobs, get older, start creating families, time is scarce... and the priority of EA in their life inevitably falls. Regularly meeting is probably far out of reach. Even writing occasional posts on the forum probably demands way too much time - more so knowing that not-super-well-researched-and-properly-written posts by unknown users usually get down-voted early on, making them invisible and so mostly useless to write.
So I'd like to know what relationship mediocre EAs, particularly somewhat older ones, have with "the community". How do they exercise their EA muscle?
It'd also be cool to have a very short description of their career paths.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 11, 2024 • 59min
EA - A Gentle Introduction to Risk Frameworks Beyond Forecasting by pending survival
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Gentle Introduction to Risk Frameworks Beyond Forecasting, published by pending survival on April 11, 2024 on The Effective Altruism Forum.
This was originally posted on Nathaniel's and Nuno's substacks (Pending Survival and Forecasting Newsletter, respectively). Subscribe here and here!
Discussion is also occurring on LessWrong here (couldn't link the posts properly for technical reasons).
Introduction
When the Effective Altruism, Bay Area rationality, judgemental forecasting, and prediction markets communities think about risk, they typically do so along rather idiosyncratic and limited lines. These overlook relevant insights and practices from related expert communities, including the fields of disaster risk reduction, safety science, risk analysis, science and technology studies - like the sociology of risk - and futures studies.
To remedy this state of affairs, this document - written by Nathaniel Cooke and edited by Nuño Sempere - (1) explains how disaster risks are conceptualised by risk scholars, (2) outlines Normal Accident Theory and introduces the concept of high-reliability organisations, (3) summarises the differences between "sexy" and "unsexy" global catastrophic risk (GCR) scenarios, and (4) provides a quick overview of the methods professionals use to study the future.
This is not a comprehensive overview, but rather a gentle introduction.
Risk has many different definitions, but this document works with the IPCC definition of the "potential for adverse consequences", where risk is a function of the magnitude of the consequences and the uncertainty around those consequences, recognising a diversity of values and objectives.[1] Scholars vary on whether it is always possible to measure uncertainty, but there is a general trend to assume that some uncertainty is so extreme as to be practically unquantifiable.[2] Uncertainty here can reflect both objective likelihood and subjective epistemic uncertainty.
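As a deliberately simplified gloss (my own, not the IPCC's or the authors' formalism), the quantifiable end of this definition weights consequence magnitudes by their likelihood, while the scholars cited here would stress that some uncertainty is too deep to be summarised this way at all:

```python
# Simplified illustration only: "risk" as probability-weighted adverse consequences.
def expected_adverse_consequence(outcomes):
    """outcomes: list of (probability, magnitude_of_adverse_consequence) pairs."""
    return sum(p * m for p, m in outcomes)

frequent_small = [(0.5, 2.0), (0.5, 0.0)]         # a common, modest hazard
rare_huge      = [(0.001, 1000.0), (0.999, 0.0)]  # a rare, catastrophic one

print(expected_adverse_consequence(frequent_small))  # 1.0
print(expected_adverse_consequence(rare_huge))       # 1.0 -- same expectation, but the
# second case is exactly where deep uncertainty may make the numbers unreliable.
```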
1. Disaster Risk Models
A common saying in disaster risk circles is that "there is no such thing as a natural disaster". As they see it, hazards may arise from nature, but an asteroid striking Earth is only able to threaten humanity because our societies currently rely on vulnerable systems that an asteroid could disrupt or destroy.[3]
This section will focus on how disaster risk reduction scholars break risks down into their components, model the relationship between disasters and their root causes, structure the process of risk reduction, and conceptualise the biggest and most complex of the risks they study.
The field of global catastrophic risk, in which EAs and adjacent communities are particularly interested, has incorporated some insights from disaster risk reduction. For example, in past disasters, states have often justified instituting authoritarian emergency powers in response to the perceived risks of mass panic, civil disorder and violence, or helplessness among the public.
However, disaster risk scholarship has shown these fears to be baseless,[4-9] and this has been integrated into GCR research under the concept of the "Stomp Reflex".
However, other insights from disaster risk scholarship remain neglected, so we encourage readers to consider how they might apply within their cause area and interests.
1.1 The Determinants of Risk
Popular media often refers to things like nuclear war, climate change, and lethal autonomous weapons systems as "risks" in themselves. This stands in contrast with how risk scholars typically think about risk. To these scholars, "risk" refers to outcomes - the "potential for adverse consequences" - rather than the causes of those outcomes. So what would a disaster risk expert consider a nuclear explosion to be if not a risk, and why does it matter?
There is no universal standard model of disaster risk. However, the v...

Apr 11, 2024 • 36min
EA - Understanding FTX's crimes by FTXwatcher
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding FTX's crimes, published by FTXwatcher on April 11, 2024 on The Effective Altruism Forum.
In the aftermath of SBF's conviction, there have been a few posts trying to make sense of FTX. Some people are trying to figure out what happened, and some people are interested in trying to find clever defenses.
I'm in a much more boring position: I am confident SBF is the fraud the world believes him to be. I hope this post can provide reasoning transparency on why I think this, and perhaps serve as an easy link for others who feel similarly but don't want to get bogged down in a point-by-point.
Posted anonymously as some protection against future employers Googling [1].
I have divided this post into a summary of the major crimes and my basis for believing they occurred, an 'FAQ' dealing with some common misapprehensions I've seen on this forum and elsewhere, and an appendix explaining some crypto exchange basics / jargon for those who don't have a background there.
Crimes
Misappropriation of Funds
Summary
This is the big one. Customers who deposited to FTX believed their assets were being held separately to FTX's own funds. In reality, their funds were available for Alameda to use freely.
For practical purposes, there wasn't any separation between these two companies; FTX's money was Alameda's money and Alameda's money was FTX's money. Since SBF was majority-owner of both Alameda and FTX, this overlap is not obviously [2] illegal, though it is highly inadvisable. However, if customer money is FTX's money and FTX's money is Alameda's money and then Alameda invests a ton of that money, all while SBF tweets to customers that their money is safe and uninvested [3], that's a problem.
Detail
If a troubled company has a few days to beg potential investors for a bailout before it files for bankruptcy, and it sends those investors its balance sheet so they can consider investing, and they all pass, and then the company files for bankruptcy, of course the balance sheet was bad. That is not a state of affairs that is consistent with a pristine fortress balance sheet.
But there is a range of possible badness, even in bankruptcy, and the balance sheet that Sam Bankman-Fried's failed crypto exchange FTX.com sent to potential investors last week before filing for bankruptcy on Friday is very bad. It's an Excel file full of the howling of ghosts and the shrieking of tortured souls. If you look too long at that spreadsheet, you will go insane.
Matt Levine, FTX's Balance Sheet Was Bad
There's a ton that could be written here, but since intent seems to be the main point of contention I think the most interesting data points are the ones that suggest how big of a problem the insiders thought it was before facing criminal charges:
Here's SBF's balance sheet that he circulated to investors during the panic of November 2022. As far as I know this sheet has never been verified, and it's from a convicted felon, and it was essentially a sales pitch. So it could be expected to present a perhaps-too-rosy view of the world. What it in fact presents is an unmitigated disaster. Matt Levine's reaction above was the same as my first reaction, and I recommend that piece as a whole if you want to get a vibe of how insane this was.
Roughly the sheet lists assets as follows, rounding to the nearest $100m and using the 'October / Before This Week' values; note that many of these took substantial hits during the November panic.
$6bn FTT
$5.4bn SRM
$2.2bn SOL
$1.5bn 'Other Ventures'
$1.2bn GDA; this is Genesis Digital Assets, a mining company
$500m Anthropic
$865m MAPS
$600m HOOD; Robinhood shares owned personally by SBF
$1.5bn of other assets of varying quality
Total of $19.6bn
There isn't as much colour on the liabilities, but based on the section at the bottom they were $8.9bn, all owed to custome...

Apr 11, 2024 • 4min
EA - CEA is hiring a Community Building Grants Associate (apply by 28 April) by Naomi N
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is hiring a Community Building Grants Associate (apply by 28 April), published by Naomi N on April 11, 2024 on The Effective Altruism Forum.
We're looking for an ambitious, engaging, and strategic professional to join our Groups team at the Centre for Effective Altruism (CEA) as a Community Building Grants (CBG) Associate.
As a CBG Associate, you'll be a key player in increasing the impact of our CBG program and shaping how the city & national groups community ecosystem evolves.
The program currently provides over $2.5 million / year in support to local EA directors and is funding >20 FTE in 14 different locations. Examples of these locations include NYC, DC, London, the Netherlands, Australia, and India.
These groups have led to several exciting outcomes including:
Introducing somebody to AI safety, which led them to take a technical safety role at Anthropic.
Counterfactually leading to several group members becoming researchers at Rethink Priorities.
Preparing members to start effective charities through the Charity Entrepreneurship program, for example Lafiya Nigeria, which focuses on contraception access for women in Nigeria.
Helping people enter key policy roles, both in the fields of AI and Biosecurity. They report that without their group these positions would not have been on their radar and they would probably not have been able to start working in these fields.
Working as a grantee was crucial for taking up longtermist roles, such as Chief of Staff at Forethought, or for founding organisations like the Impact Academy.
We have the ambition to significantly improve the program and believe that some of our biggest wins in the past have come from seeding and incubating new groups. We're eager to experiment more with this approach.
Your responsibilities will include:
Conducting practical research into what makes top-performing city and national EA groups excel, translating your findings into concrete program improvements.
Developing and leading an incubation program to seed new EA groups in promising locations by recruiting talent, vetting candidates, and providing support. We have some uncertainty about the incubation program, so this responsibility might change.
Offering additional support to our current CBG grantees and non-funded city & national groups, including assisting them with hiring.
Supporting our team's overall program strategy, evaluation processes, and grant assessments.
About the team:
As part of CEA's Groups team, you will work closely with, and report directly to, Naomi Nederlof, the CBG Manager. The Groups team cares deeply about both improving the world and maintaining a positive working environment.
Is this you?
You might be a good fit if:
You have deep familiarity with core EA ideas
You're skilled at tackling complex, open-ended projects
You're passionate about doing impactful work in EA community building
You have strong interpersonal skills to build relationships with grantees and stakeholders
You're adaptable to shifting priorities as our program evolves
You're self-driven and able to own projects end-to-end
Preferably, you have experience in community organizing and have knowledge of existing EA groups
What we're offering:
Full-time role, option for remote work in GMT-6 to GMT+7 timezones
Potential visa sponsorship to join our team in the Oxford office
Estimated total compensation range of $66,000 - $122,540 (including 10% 401k contribution)
Benefits like health insurance, professional development funds, parental leave, and more
Anticipated start between June - October 2024
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 11, 2024 • 2min
EA - Should I donate my kidney or part of my liver? by Bob Jacobs
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should I donate my kidney or part of my liver?, published by Bob Jacobs on April 11, 2024 on The Effective Altruism Forum.
I've been talking with my hospital about donating my kidney and it's been going rather well. However, one piece of unfortunate news they told me is that I can't donate both my kidney and a piece of my liver (and that I can't do this in another hospital either). So people that want to donate are faced with a dilemma of which one to choose. I asked the doctors whether they had literature on this, but unfortunately they didn't know of any that compared the two.
I've looked at some papers, and the side effects for both kidney donation and liver donation seem to be negligible for the donor (way less than 1 QALY).
That leaves us with the question of what has the bigger impact for the recipient.
I've looked for papers that compared them directly, but couldn't really find anything.
It seems like for kidneys:
The average donation buys the recipient about 5 - 7 extra years of life (beyond the counterfactual of dialysis). It also improves quality of life from about 70% of the healthy average to about 90%. Non-directed kidney donations can also help the organ bank solve allocation problems around matching donors and recipients of different blood types.
Most sources say that an average donated kidney creates a "chain" of about five other donations, but most of these other donations would have happened anyway; the value over counterfactual is about 0.5 to 1 extra transplant completed before the intended recipient dies from waiting too long. So in total, a donation produces about 10 - 20 extra quality-adjusted life years.
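As a rough back-of-envelope (the remaining-lifespan figures and the 1.5-2x chain multiplier below are my own assumptions layered on the numbers quoted above, not from any cited source), the 10 - 20 QALY range can be reproduced like this:

```python
# Illustrative arithmetic only; inputs marked "assumed" are not from the sources above.
low  = 5 * 0.9 + (0.9 - 0.7) * 10   # ~6.5 QALYs: 5 extra years at 0.9 QoL, plus the
                                     # QoL gain over an assumed ~10 remaining years
high = 7 * 0.9 + (0.9 - 0.7) * 20   # ~10.3 QALYs with more generous assumptions

print(low * 1.5)    # ~9.8  -- chain adds ~0.5 extra counterfactual transplants (assumed)
print(high * 2.0)   # ~20.6 -- chain adds ~1 extra counterfactual transplant (assumed)
```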
Liver donation seems to generate fewer QALYs, though the estimates vary a lot.
So I'm currently leaning towards donating my kidney. Does anyone have any more insights into this? Does anyone know of an analysis that compares the two? (If someone is/wants to write one, I'd be glad to help) Please share your thoughts.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


