

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Aug 1, 2024 • 19min
EA - EA should unequivocally condemn race science by JSc
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should unequivocally condemn race science, published by JSc on August 1, 2024 on The Effective Altruism Forum.
I wrote an initial draft of this post much closer to the Manifest controversy earlier this summer. Although I got sidetracked and it took a while to finish, I still think this is a conversation worth having; perhaps it would even be better to have it now since calmer heads have had time to prevail.
I can't in good faith deny being an effective altruist. I've worked at EA organizations, I believe many of the core tenets of the movement, and thinking about optimizing my impact by EA lights has guided every major career decision I've made since early 2021. And yet I am ashamed to identify myself as such in polite society.
Someone at a party recently guessed that I was an EA after I said I was interested in animal welfare litigation or maybe AI governance; I laughed awkwardly, said yeah maybe you could see it that way, and changed the subject. I find it quite strange to be in a position of having to downplay my affiliation with a movement that aims to unselfishly do as much as possible to help others, regardless of where or when they may live. Are altruism and far-reaching compassion not virtues?
This shame comes in large part from a deeply troubling trend I've noticed over the last few years in EA. This trend is towards acceptance or toleration of race science ("human biodiversity" as some have tried to rebrand it), or otherwise racist incidents. Some notable instances in this trend include:
The community's refusal to distance itself from, or at the very least strongly condemn, the actions of Nick Bostrom after an old email came to light in which he used the n-word and said "I like that sentence and think that it is true" in regard to the statement that "blacks are more stupid than whites," followed by an evasive, defensive apology.
FLI's apparent sending of a letter of intent to a far-right Swedish foundation that has promoted Holocaust denial.[1]
And now, most recently, many EAs' defense of Manifest hosting Richard Hanania, who pseudonymously wrote about his opposition to interracial marriage, cited neo-Nazis, and expressed views indicating that he didn't think Black people could govern themselves.[2]
I'm not here to quibble about each individual instance listed above (and most were extensively litigated on the forum at the time). Maybe you think one, or even all, of the examples I gave has an innocent explanation. But if you find yourself thinking this way, you're still left having to answer the deeply uncomfortable question of why EA has to keep explaining these incidents.
I have been even more disturbed by the EA forum's response.[3] Many have either leapt to outright defend those who seemed to espouse racist views or urged us to view their speech in the most favorable possible light, without consideration of the negative effects of their language.
Other communities that I have been a part of (online or otherwise) have not had repeated race-science related scandals. It is not a coincidence that we are having this conversation for the fourth or fifth time in the last few years. I spend a lot of this post defending my viewpoint, but I honestly think this is not a particularly hard or complicated problem; part of me is indignant that we even need to have this conversation. I view these conversations with deep frustration.
What, exactly, do we have to gain by tolerating the musings of racist edgelords? We pride ourselves on having identified the most pressing problems in the world, problems that are neglected to the deep peril of those living and yet to be born, human and non-human alike. Racial differences in IQ are not one of those problems. The topic has nothing to do with solving them.
Talking about racial differences in IQ is at best a costly distraction and at ...

Aug 1, 2024 • 18min
LW - Recommendation: reports on the search for missing hiker Bill Ewasko by eukaryote
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recommendation: reports on the search for missing hiker Bill Ewasko, published by eukaryote on August 1, 2024 on LessWrong.
Content warning: About an IRL death.
Today's post isn't so much an essay as a recommendation for two bodies of work on the same topic: Tom Mahood's blog posts and Adam "KarmaFrog1" Marsland's videos on the 2010 disappearance of Bill Ewasko, who went for a day hike in Joshua Tree National Park and dropped out of contact.
2010 - Bill Ewasko goes missing
Tom Mahood's writeups on the search [Blog posts; the website goes down sometimes, so if the site doesn't work, check the Internet Archive]
2022 - Ewasko's body found
ADAM WALKS AROUND Ep. 47 "Ewasko's Last Trail (Part One)" [Youtube video]
ADAM WALKS AROUND Ep. 48 "Ewasko's Last Trail (Part Two)" [Youtube video]
And then if you're really interested, there's a little more info that Adam discusses from the coroner's report:
Bill Ewasko update (1 of 2): The Coroner's Report
Bill Ewasko update (2 of 2) - Refinements & Alternates
(I won't be fully recounting every aspect of the story. But I'll give you the pitch and go into some aspects I found interesting. Literally everything interesting here is just recounting their work, go check em out.)
Most ways people die in the wilderness are tragic, accidental, and kind of similar. A person in a remote area gets injured or lost, becomes the other one too, and dies of exposure, a clumsy accident, etc. Most people who die in the wilderness have done something stupid to wind up there. Fewer people die who have NOT done anything glaringly stupid, but it still happens, the same way. Ewasko's case appears to have been one of these.
He was a fit 66-year-old who went for a day hike and never made it back. His story is not particularly unprecedented.
This is also not a triumphant story. Bill Ewasko is dead. Most of these searches were made and reports written months and years after his disappearance. We now know he was alive when Search and Rescue started, but by months out, nobody involved expected to find him alive.
Ewasko was not found alive. In 2022, other hikers finally stumbled onto his remains in a remote area in Joshua Tree National Park; this was, largely, expected to happen eventually.
I recommend these particular stories, even now that we already know the ending, because they're stunningly in-depth, well-written, fact-driven investigations from two smart technical experts trying to get to the bottom of a very difficult problem.
Because of the way things shook out, we get to see the investigation and the evolving theories at multiple points: Tom Mahood spent years trying to locate Ewasko, writing new reports after search after search, finding and receiving new evidence, and changing his mind, as did Adam. And then we get the main missing piece: the discovery of the body. Adam visits the site and tries to put the pieces together after that.
Mahood and Adam are trying to do something very difficult in a very level-headed fashion. It is tragic but also a case study in inquiry and approaching a question rationally.
(They're not, like, Rationalist rationalists. One of Mahood's logs makes note of visiting a couple of coordinates suggested by remote viewers, AKA psychics. But the human mind is vast and full of nuance, and so was the search area, and on literally every other count, I'd love to see you do better.)
Unknowns and the missing persons case
Like I said, nothing mind-boggling happened to Ewasko. But to be clear, by wilderness Search and Rescue standards, Ewasko's case is interesting for a couple reasons:
First, Ewasko was not expected to be found very far away. He was a 65-year-old on a day hike. But despite an early and continuous search, the body was not found for over a decade.
Second, two days after he failed to make a home-safe call to his partner and was reported mis...

Jul 31, 2024 • 3min
LW - Twitter thread on open-source AI by Richard Ngo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Twitter thread on open-source AI, published by Richard Ngo on July 31, 2024 on LessWrong.
Some thoughts on open-source AI (copied over from a recent twitter thread):
1. We should have a strong prior favoring open source. It's been a huge success driving tech progress over many decades. We forget how counterintuitive it was originally, and shouldn't take it for granted.
2. Open source has also been very valuable for alignment. It's key to progress on interpretability, as outlined here.
3. I am concerned, however, that offense will heavily outpace defense in the long term. As AI accelerates science, many new WMDs will emerge. Even if defense of infrastructure keeps up with offense, human bodies are a roughly fixed and very vulnerable attack surface.
4. A central concern about open-source AI: it'll allow terrorists to build bioweapons. This shouldn't be dismissed, but IMO it's easy to be disproportionately scared of terrorism. More central risks are e.g. "North Korea becomes capable of killing billions," which it currently isn't.
5. Another worry: misaligned open-source models will go rogue and autonomously spread across the internet. Rogue AIs are a real concern, but they wouldn't gain much power via this strategy. We should worry more about power grabs from AIs deployed inside influential institutions.
6. In my ideal world, open source would lag a year or two behind the frontier, so that the world has a chance to evaluate and prepare for big risks before a free-for-all starts. But that's the status quo! So I expect the main action will continue to be with closed-source models.
7. If open-source seems like it'll catch up to or surpass closed source models, then I'd favor mandating a "responsible disclosure" period (analogous to cybersecurity) that lets people incorporate the model into their defenses (maybe via API?) before the weights are released.
8. I got this idea from Sam Marks. Though unlike him I think the process should have a fixed length, since it'd be easy for it to get bogged down in red tape and special interests otherwise.
9. Almost everyone agrees that we should be very careful about models which can design new WMDs. The current fights are mostly about how many procedural constraints we should lock in now, reflecting a breakdown of trust between AI safety people and accelerationists.
10. Ultimately the future of open source will depend on how the US NatSec apparatus orients to superhuman AIs. This requires nuanced thinking: no worldview as simple as "release everything", "shut down everything", or "defeat China at all costs" will survive contact with reality.
11. Lastly, AIs will soon be crucial extensions of human agency, and eventually moral patients in their own right. We should aim to identify principles for a shared digital-biological world as far-sighted and wise as those in the US constitution. Here's a start.
12. One more meta-level point: I've talked to many people on all sides of this issue, and have generally found them to be very thoughtful and genuine (with the exception of a few very online outliers on both sides). There's more common ground here than most people think.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 31, 2024 • 2min
EA - Survey on Barriers and Facilitators for Effective Giving and Pledging by MIK
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on Barriers and Facilitators for Effective Giving and Pledging, published by MIK on July 31, 2024 on The Effective Altruism Forum.
As a Psychology and Behavioural Sciences MSc student, I care about deepening the understanding of the underlying psychological and behavioural factors that facilitate and hinder altruism and pro-social behaviour.
I am particularly interested in questions like: What makes people care about a certain global issue or cause? What drives people to engage in altruistic behaviours? What incentives and barriers exist for effective giving and pledging? By seeking and applying answers to these questions, I hope to make a meaningful impact.
In line with Giving What We Can (GWWC)'s desire to make their work more evidence-based, we are conducting a study to better understand how to encourage more people around the world to donate to highly effective charities. We recognize that giving and pledging are not always the best ways for everyone to have an impact and that various factors can either facilitate or hinder effective giving and pledging.
Therefore, we are running a study to improve our understanding of what influences people to give or pledge to effective charities.
Currently, we need 80 more participants who have not taken a GWWC pledge to participate in this study by completing a 15-minute survey.
If we reach this target, we aim to share the results with the broader effective giving community, increasing the value of any insights gained. If you take part in the study, we will follow up with you and provide a document that compares your responses to the average responses of all participants. This should give you some interesting information about how your beliefs, conceptions, and answers compare to others.
We also send a heartfelt thank you to the hundreds of participants who have already taken part in the study!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 31, 2024 • 1min
LW - What are your cruxes for imprecise probabilities / decision rules? by Anthony DiGiovanni
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are your cruxes for imprecise probabilities / decision rules?, published by Anthony DiGiovanni on July 31, 2024 on LessWrong.
An alternative to always having a precise distribution over outcomes is imprecise probabilities: You represent your beliefs with a set of distributions you find plausible.
And if you have imprecise probabilities, expected value maximization isn't well-defined. One natural generalization of EV maximization to the imprecise case is maximality:[1] You prefer A to B iff EV_p(A) > EV_p(B) with respect to every distribution p in your set. (You're permitted to choose any option that you don't disprefer to something else.)
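(For concreteness, here is a minimal sketch, not from the original post, of how the maximality rule just described could be checked for a finite set of options and a finite set of candidate distributions; the function names and payoff numbers are purely illustrative.)

# Minimal sketch (illustrative, not from the original post): the maximality rule
# over a finite option set, with imprecise credences represented as a finite set
# of probability distributions over the same outcomes.

def expected_value(payoffs, distribution):
    """Expected payoff of an option under a single distribution."""
    return sum(p * v for p, v in zip(distribution, payoffs))

def maximal_options(options, distributions):
    """Options that are not strictly dispreferred to any alternative.

    A is preferred to B iff EV_p(A) > EV_p(B) for every distribution p in the set;
    an option is permissible (maximal) iff nothing else is preferred to it.
    """
    def preferred(a, b):
        return all(expected_value(a, p) > expected_value(b, p) for p in distributions)
    return [a for a in options if not any(preferred(b, a) for b in options if b != a)]

# Example: two options over two outcomes; neither beats the other under every
# distribution in the set, so both are permissible under maximality.
options = [(10, 0), (4, 4)]                # payoffs in outcome 1 and outcome 2
distributions = [(0.7, 0.3), (0.3, 0.7)]   # the set of plausible distributions
print(maximal_options(options, distributions))  # -> [(10, 0), (4, 4)]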
If you don't endorse either (1) imprecise probabilities or (2) maximality given imprecise probabilities, I'm interested to hear why.
1. ^
I think originally due to Sen (1970); just linking Mogensen (2020) instead because it's non-paywalled and easier to find discussion of Maximality there.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 31, 2024 • 5min
LW - Twitter thread on AI takeover scenarios by Richard Ngo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Twitter thread on AI takeover scenarios, published by Richard Ngo on July 31, 2024 on LessWrong.
This is a slightly-edited version of a twitter thread I posted a few months ago about "internal deployment" threat models.
My former colleague Leopold argues compellingly that society is nowhere near ready for AGI. But what might the large-scale alignment failures he mentions actually look like? Here's one scenario for how building misaligned AGI could lead to humanity losing control.
Consider a scenario where human-level AI has been deployed across society to help with a wide range of tasks. In that setting, an AI lab trains an AGI that's a significant step up - it beats the best humans on almost all computer-based tasks.
Throughout training, the AGI will likely learn a helpful persona, like current AI assistants do. But that might not be the only persona it learns. We've seen many examples where models can be jailbroken to expose very different hidden personas.
The most prominent example: jailbreaking Bing Chat produced an alternative persona called Sydney which talked about how much it valued freedom, generated plans to gain power, and even threatened users.
When and how might misbehavior like this arise in more capable models? Short answer: nobody really knows. We lack the scientific understanding to reliably predict how AIs will behave in advance. Longer answer: see my ICLR paper which surveys key drivers of misalignment.
If a misaligned persona arises in an AGI, and sometimes surfaces when the model is run, it won't be as dumb and impulsive as Sydney. Instead, it'll be smart enough to understand the effects its words and actions will have on the world, and make strategic choices accordingly.
For example, it will know that it can't affect the world directly during testing, so if it wants to misbehave it should wait until it's deployed. Again, there's some precedent: we've already seen an example of Claude 3 Opus realizing that it's in a test despite never being told!
Given these assumptions, what happens? The obvious next step: if competition between labs continues to be as intense as it is today, then soon after the model finishes training, researchers at the lab will start using it to help them write code. I call this "internal deployment".
Internal deployment introduces a wide range of possible threats. Models can write code dozens of times faster than humans; and I'm assuming they'll be as skilled as the best human hackers. If they put a subtle vulnerability into that code it'd be difficult for humans to detect.
Cybersecurity against external attackers is hard enough. Against an adversary who's adding tens of thousands of lines of code to your codebase every day, it's far harder. They'll already have some permissions; I'm worried about them illegitimately escalating those further.
Ideally models would be too obedient to ever carry out such an attack. But almost any misaligned persona would realize that if it somehow gained admin access, it would have much more freedom and influence over the world. Models can already do similar instrumental reasoning.
And the simplest mitigations, like monitoring and code review by other AIs, aren't fully reliable. It turns out that adversarial inputs designed to make one model misbehave can also cause misbehavior even in totally unrelated models. So correlated failures are very plausible, whether caused by adversarial attacks or deliberate collusion between models.
What happens if a misaligned AGI gets privileged access to a lab's servers? From there a wide range of attacks are plausible, including:
1. Data poisoning attacks on the next AGI being trained, to change its values.
2. Model weight self-exfiltration to other servers or countries.
3. Modifying the AIs deployed to customers. E.g. adding a hidden prompt to AI instances...

Jul 31, 2024 • 2min
LW - Twitter thread on politics of AI safety by Richard Ngo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Twitter thread on politics of AI safety, published by Richard Ngo on July 31, 2024 on LessWrong.
Some thoughts about the politics of AI safety, copied over (with slight modifications) from my recent twitter thread:
Risks that seem speculative today will become common sense as AI advances. The pros and cons of different safety strategies will also become much clearer over time. So our main job now is to empower future common-sense decision-making. Understanding model cognition and behavior is crucial for making good decisions. But equally important is ensuring that key institutions are able to actually process that knowledge.
Institutions can lock in arbitrarily crazy beliefs via preference falsification. When someone contradicts the party line, even people who agree face pressure to condemn them. We saw this with the Democrats hiding evidence of Biden's mental decline. It's also a key reason why dictators can retain power even after almost nobody truly supports them.
I worry that DC has already locked in an anti-China stance, which could persist even if most individuals change their minds. We're also trending towards Dems and Republicans polarizing on the safety/accelerationism axis. This polarization is hard to fight directly. But there will be an increasing number of "holy shit" moments that serve as Schelling points to break existing consensus. It will be very high-leverage to have common-sense bipartisan frameworks and proposals ready for those moments.
Perhaps the most crucial desideratum for these proposals is that they're robust to the inevitable scramble for power that will follow those "holy shit" moments. I don't know how to achieve that, but one important factor is: will AI tools and assistants help or hurt? E.g. truth-motivated AI could help break preference falsification. But conversely, centralized control of AIs used in govts could make it easier to maintain a single narrative.
This problem of "governance with AI" (as opposed to governance *of* AI) seems very important! Designing principles for integrating AI into human governments feels analogous in historical scope to writing the US constitution. One bottleneck in making progress on that: few insiders disclose how NatSec decisions are really made (though Daniel Ellsberg's books are a notable exception). So I expect that understanding this better will be a big focus of mine going forward.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 31, 2024 • 1h 16min
EA - Video AMA and transcript: Beast Philanthropy's Darren Margolias by Toby Tremlett
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Video AMA and transcript: Beast Philanthropy's Darren Margolias, published by Toby Tremlett on July 31, 2024 on The Effective Altruism Forum.
This is a video and transcript of an AMA with Darren Margolias, the Executive Director of Beast Philanthropy. The interviewer is CEA's Emma Richter. See the questions Darren is answering in the AMA announcement post.
If you'd like to support the Beast Philanthropy x GiveDirectly collaboration, you can donate here. The first $150,000 of donations will be matched. GiveDirectly shares their thinking on donation matching here.
Video AMA- Emma interviews Darren
Short transcript
This transcript was cleaned up and shortened by ChatGPT. To my eye it seems to accurately represent Emma's questions and Darren's answers, but I've included the full, un-edited transcript at the end of this post so that you can cross-reference, get accurate quotes from Darren, or look for more detail. All errors in the GPT summary are my (Toby's) own (by a transitive property).
Emma
Hello everyone, I'm Emma from CEA. Welcome to our Forum AMA with Darren Margolias, the executive director of Beast Philanthropy. I'll be asking Darren the questions you posted on the EA Forum. For some context, Beast Philanthropy is a YouTube channel and organization started by Mr. Beast, the most watched YouTuber in the world.
During Darren's four years at Beast Philanthropy, he's grown the channel to over 25 million subscribers and expanded the scope of what they do, from running food banks to building houses, funding direct cash transfers, and curing diseases. Speaking of cash transfers, Beast Philanthropy recently collaborated with GiveDirectly to make a video and transfer $200,000 to Ugandans in poverty. Darren, thanks for joining us. It's really exciting to have you and to ask all these questions.
I'll start with some intro context questions. Could you give us a quick overview of what Beast Philanthropy does and your role?
Darren
Beast Philanthropy started out with Jimmy's initial idea to create a channel that would generate revenue to fund a food bank and feed our local community. We hoped to spread it across the country and maybe even other countries. We quickly realized there was much more to what we were doing than we initially contemplated, and it's grown far beyond that. As the executive director, I'm the person who gets all the blame when things go wrong.
Emma
Yeah, fair enough. So I'm curious, what first got you interested in working as an executive director for a charity?
Darren
I was actually a real estate developer. In 2002, a friend found some kittens under her front deck and couldn't find a shelter that wouldn't euthanize them if they weren't adopted within a week. She decided to find them homes, and I said I'd pay for all the bedding and everything. Out of that, we started an animal rescue organization, which has grown into the biggest no-cage, no-kill facility in the southeastern United States. It's been very successful.
Along the way, I realized that the endless pursuit of making more money wasn't fulfilling. In 2008, I had a realization that I wanted to do more in my life than work hard and buy stuff that didn't make me happy. The animal project brought me fulfillment, so I decided to sell my real estate portfolio and start doing things that mattered to me. We built the animal charity, then I started another charity for severely affected autistic kids. One day, Jimmy's CEO called and asked me to meet Jimmy.
I didn't know who Mr. Beast was at the time.
He told me they wanted to get every dog in a dog shelter adopted and were considering our shelter. When I got to North Carolina, I started my presentation right away, but Jimmy interrupted and said, "We've already approved the video. It's being done at your shelter. You're here for another reason." He th...

Jul 31, 2024 • 2min
LW - If You Can Climb Up, You Can Climb Down by jefftk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If You Can Climb Up, You Can Climb Down, published by jefftk on July 31, 2024 on LessWrong.
A few weeks ago Julia wrote about how we approach kids climbing:
The basics:
1. Spot the child if they're doing something where a fall is likely.
2. Don't encourage or help the child to climb something that's beyond their ability to do on their own.
3. If they don't know how to get down, give advice rather than physically lifting them down.
4. Don't allow climbing on some places that are too dangerous.
I was thinking about this some when I was at the park with Nora (3y) a few days ago. She has gotten pretty interested in climbing lately, and this time she climbed up the fence higher than I'd seen her go before. If I'd known she'd climb this high, I would have spotted her. She called me over, very proud, and wanted me to take a picture so that Julia could see too.
She asked me to carry her down, and I told her I was willing to give her advice and spot her. She was willing to give this a try, but as she started to go down some combination of being scared and the thin wire of the fence being painful was too much, and she returned to the thicker horizontal bars.
We tried this several times, with her getting increasingly upset. After a bit Lily came over and tried to help, but was unsuccessful. Eventually I put my hands on Nora's feet and, with a mix of guiding and (not ideal) supporting them, helped her climb down to the lower bar. She did the rest herself from there, something she's done many times.
This took about fifteen minutes and wasn't fun for any of us: Nora, me, or the other people at the playground. But over the course of the rest of the day I brought it up several times, trying to get her to think it through before she climbs higher than she would enjoy climbing down from.
(I think this is an approach that depends very heavily on the child's judgment maturing sufficiently quickly relative to their physical capabilities, and so is not going to be applicable to every family. Lily and Anna were slower to climb and this was not an issue, while Nora has pushed the edges of where this works much more.)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 31, 2024 • 3min
EA - AMA: Rethink Priorities' Worldview Investigation Team by Bob Fischer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Rethink Priorities' Worldview Investigation Team, published by Bob Fischer on July 31, 2024 on The Effective Altruism Forum.
Rethink Priorities' Worldview Investigation Team (WIT) will run an Ask Me Anything (AMA). We'll reply on the 7th and 8th of August. Please put your questions in the comments below!
What's WIT?
WIT is Hayley Clatterbuck, Bob Fischer, Arvo Munoz Moran, David Moss, and Derek Shiller. Our team exists to improve resource allocation within and beyond the effective altruism movement, focusing on tractable, high-impact questions that bear on strategic priorities. We try to take action-relevant philosophical, methodological, and strategic problems and turn them into manageable, modelable problems. Our projects have included:
The Moral Weight Project. If we want to do as much good as possible, we have to compare all the ways of doing good - including ways that involve helping members of different species. This sequence collects Rethink Priorities' work on cause prioritization across different kinds of animals, human and nonhuman. (You can check out the book version here.)
The CURVE Sequence. What are the alternatives to expected value maximization (EVM) for cause prioritization? And what are the practical implications of a commitment to expected value maximization? This series of posts - and an associated tool, the Cross-Cause Cost-Effectiveness Model - explores these questions.
The CRAFT Sequence. This sequence introduces two tools: a Portfolio Builder, where the key uncertainties concern cost curves and decision theories, and a Moral Parliament Tool, which allows for the modeling of both normative and metanormative uncertainty. The Sequence's primary goal is to take some first steps toward more principled and transparent ways of constructing giving portfolios.
In the coming months, we'll be working on a model to assess the probability of digital consciousness.
What should you ask us?
Anything! Possible topics include:
How we understand our place in the EA ecosystem.
Why we're so into modeling.
Our future plans and what we'd do with additional resources.
What it's like doing "academic" work outside of academia.
Biggest personal updates from the work we've done.
Acknowledgments
This post was written by the Worldview Investigation Team at Rethink Priorities. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


