
The Nonlinear Library
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Latest episodes

Sep 1, 2024 • 11min
LW - Forecasting One-Shot Games by Raemon
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting One-Shot Games, published by Raemon on September 1, 2024 on LessWrong.
Cliff notes:
You can practice forecasting on videogames you've never played before, to develop the muscles for "decision-relevant forecasting."
Turn based videogames work best. I recommend "Luck Be a Landlord", "Battle for Polytopia", or "Into the Breach."
Each turn, make as many Fatebook predictions as you can in 5 minutes, then actually make your decision(s) for the turn.
After 3 turns, instead of making "as many predictions as possible", switch to trying to come up with at least two mutually exclusive actions you might take this turn, and come up with predictions that would inform which action to take.
Don't forget to follow this up with practicing forecasting for decisions you're making in "real life", to improve transfer learning. And, watch out for accidentally just getting yourself addicted to videogames, if you weren't already in that boat.
This is pretty fun to do in groups and makes for a good meetup, if you're into that.
Recently I published Exercise: Planmaking, Surprise Anticipation, and "Baba is You". In that exercise, you try to make a complete plan for solving a puzzle-level in a videogame, without interacting with the world (on levels where you don't know what all the objects in the environment do), and solve it on your first try.
Several people reported finding it pretty valuable (it was the highest-rated activity at my metastrategy workshop). But, it's fairly complicated as an exercise, and a single run of the exercise typically takes at least an hour (and maybe several hours) before you get feedback on whether you're "doing it right."
It'd be nice to have a way to practice decision-relevant forecasting with a faster feedback loop. I've been exploring the space of games that are interesting to "one-shot" (i.e. "try to win on your first playthrough"), and also exploring the space of exercises that take advantage of your first playthrough of a game.
So, an additional, much simpler exercise that I also like, is:
Play a turn-based game you haven't played before.
Each turn, set a 5 minute timer for making as many predictions as you can about how the game works and what new rules or considerations you might learn later. Then, set a 1 minute timer for actually making your choices for what action(s) to take during the turn.
And... that's it. (to start with, anyway).
Rather than particularly focusing on "trying really hard to win", start with just making lots of predictions, about a situation where you're at least trying to win a little, so you can develop the muscles of noticing what sort of predictions you can make while you're in the process of strategically orienting. And, notice what sorts of implicit knowledge you have, even though you don't technically "know" how the game would work.
Some of the predictions might resolve the very next turn. Some might resolve before the very next turn, depending on how many choices you get each turn. And, some might take a few turns, or even pretty deep into the game. Making a mix of forecasts of different resolution-times is encouraged.
I think there are a lot of interesting skills you can layer on top of this, after you've gotten the basic rhythm of it. But "just make a ton of predictions about a domain where you're trying to achieve something, and get quick feedback on it" seems like a good start.
Choosing Games
Not all games are a good fit for this exercise. I've found a few specific games I tend to recommend, and some principles for which games to pick.
The ideal game has:
Minimal (or skippable) tutorial. A major point of the exercise is to make predictions about the game mechanics and features. Good games for this exercise a) don't spoonfeed you all the information about the game, but also b) are self-explanatory enough to figure out without a tut...

Sep 1, 2024 • 15min
LW - My Model of Epistemology by adamShimi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Model of Epistemology, published by adamShimi on September 1, 2024 on LessWrong.
I regularly get asked by friends and colleagues for recommendations of good resources to study epistemology. And whenever that happens, I make an internal (or external) "Eeehhh" pained sound.
For I can definitely point to books and papers and blog posts that inspired me, excited me, and shaped my world view on the topic. But there is no single resource that encapsulates my full model of this topic.
To be clear, I have tried to write that resource - my hard-drive is littered with such attempts. It's just that I always end up shelving them, because I don't have enough time, because I'm not sure exactly how to make it legible, because I haven't ironed out everything.
Well, the point of this new blog was to lower the activation energy of blog post writing, by simply sharing what I found exciting quickly. So let's try the simplest possible account I can make of my model.
And keep in mind that this is indeed a work in progress.
The Roots of Epistemology
My model of epistemology stems from two obvious facts:
The world is complex
Humans are not that smart
Taken together, these two facts mean that humans have no hope of ever tackling most problems in the world in the naive way - that is, by just simulating everything about them, in the fully reductionist ideal.
And yet human civilization has figured out how to reliably cook tasty meals, build bridges, predict the minutest behaviors of matter... So what gives?
The trick is that we shortcut these intractable computations: we exploit epistemic regularities in the world, additional structure which means that we don't need to do all the computation.[1]
As a concrete example, think about what you need to keep in mind when cooking relatively simple meals (not the most advanced of chef's meals).
You can approximate many tastes through a basic palette (sour, bitter, sweet, salty, umami), and then consider the specific touches (lemon juice vs vinegar for example, and which vinegar, changes the color of sourness you get)
You don't need to model your ingredient at the microscopic level, most of the transformations that happen are readily visible and understandable at the macro level: cutting, mixing, heating…
You don't need to consider all the possible combinations of ingredients and spices; if you know how to cook, you probably know many basic combinations of ingredients and/or spices that you can then pimp or adapt for different dishes.
All of these are epistemic regularities that we exploit when cooking.
Similarly, when we do physics, when we build things, when we create art, insofar as we can reliably succeed, we are exploiting such regularities.
If I had to summarize my view of epistemology in one sentence, it would be: The art and science of finding, creating, and exploiting epistemic regularities in the world to reliably solve practical problems.
The Goals of Epistemology
If you have ever read anything about the academic topic called "Epistemology", you might have noticed something lacking from my previous account: I didn't focus on knowledge or understanding.
This is because I take a highly practical view of epistemology: epistemology for me teaches us how to act in the world, how to intervene, how to make things.
While doing, we might end up needing some knowledge, or needing to understand various divisions in knowledge, types of models, and things like that. But the practical application is always the end.
(This is also why I am completely uninterested in the whole realism debate: whether most hidden entities truly exist or not is a fake question that doesn't really teach me anything, and probably cannot be answered. The kind of realism I'm interested in is the realism of usage, where there's a regularity (or lack thereof) which can be exploited in ...

Sep 1, 2024 • 1min
EA - The EA Hub is retiring by Vaidehi Agarwalla
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Hub is retiring, published by Vaidehi Agarwalla on September 1, 2024 on The Effective Altruism Forum.
The EA Hub will retire on the 6th of October 2024.
In 2022 we announced that we would stop further feature development and maintain the EA Hub until it is ready to be replaced by a new platform. In July this year, the EA Forum launched its People Directory, which offers a searchable directory of people involved in the EA movement, similar to what the Hub provided in the past.
We believe the Forum has now become better positioned to fulfil the EA Hub's mission of connecting people in the EA community online. The Forum team is much better resourced and users have many reasons to visit the Forum (e.g. for posts and events), which is reflected in it having more traffic and users. The Hub's core team has also moved on to other projects.
EA Hub users will be informed of this decision via email. All data will be deleted after the website has been shut down.
We recommend that you use the EA Forum's People Directory to continue to connect with other people in the effective altruism community. The feature is still in beta mode and the Forum team would very much appreciate feedback about it here.
We would like to thank the many people who volunteered their time in working on the EA Hub.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Sep 1, 2024 • 8min
AF - Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024) by Matt MacDermott
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024), published by Matt MacDermott on September 1, 2024 on The AI Alignment Forum.
Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience.
The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions.
I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve advances in Bayesian machine learning, and also probably solving ELK to get the harm estimates?
My answer to that is: yes, I think so. I think Yoshua does too, and that that's the centre of his research agenda.
Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine.
This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety".
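To make the shape of such a rule concrete, here is a minimal sketch of rejecting an action when a cautious, posterior-weighted harm estimate crosses a threshold. This is not the actual rule or notation from Bengio et al. 2024; the hypothesis names, harm function, threshold, and mass cutoff below are hypothetical placeholders.

```python
# Illustrative sketch only (not the rule from Bengio et al. 2024): reject an
# agent's proposed action when a pessimistic estimate of the probability of
# harm, taken over the plausible hypotheses, exceeds a threshold.

def conservative_guardrail(posterior, harm_prob, action, threshold=0.01, mass=0.95):
    """posterior: dict hypothesis -> P(hypothesis | data), summing to 1.
    harm_prob(h, action): hypothesis h's estimate of P(harm | action).
    Rejects the action if any hypothesis within the top `mass` of posterior
    probability predicts harm above `threshold` (a pessimistic aggregation)."""
    # Walk hypotheses from most to least plausible.
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    covered = 0.0
    worst_case = 0.0
    for h, p in ranked:
        worst_case = max(worst_case, harm_prob(h, action))
        covered += p
        if covered >= mass:  # stop once e.g. 95% of posterior mass is covered
            break
    return ("reject", worst_case) if worst_case > threshold else ("allow", worst_case)


# Toy usage with two made-up hypotheses.
posterior = {"benign_world": 0.9, "fragile_world": 0.1}
harm = lambda h, a: {"benign_world": 0.001, "fragile_world": 0.2}[h]
print(conservative_guardrail(posterior, harm, action="deploy"))  # -> ('reject', 0.2)
```

The point of the sketch is only the overall shape: the hard work in the paper is proving bounds on how often such a rule wrongly allows harmful actions, given a genuinely Bayesian oracle.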
Bounding the probability of harm from an AI to create a guardrail
Published 29 August 2024 by yoshuabengio
As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action?
Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: If they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks.
However, if they do not raise such a red flag, we may still have a dangerous AI in our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks.
They are trying to evaluate the risk associated with the AI in general by testing it on special cases. Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that are considered to potentially violate our safety specification.
With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at ru...

Sep 1, 2024 • 13min
EA - Meta Charity Funders: What you need to know when applying by Joey
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Charity Funders: What you need to know when applying, published by Joey on September 1, 2024 on The Effective Altruism Forum.
In this post, we include what is most important to know when you apply to Meta Charity Funders. We expect to update this post every round.
The most common reasons for rejection
By sharing the most common reasons for rejections, we hope to support future grantees and help people make more informed decisions on whether they should apply to Meta Charity Funders.
The suggested project is not within scope: A fair number of applications fall outside of the areas which our donors were interested in supporting. At a high level, Meta Charity Funders exists to fund projects that fall within the "meta" category and are unlikely to be funded by other major funders in this space. See "What we mean by 'meta'" further down.
The theory of change (ToC) is unclear, unfocused, or seemed implausible: Some applicants do not share sufficient reasoning on how their project (in the end) contributes to a better world. Other applicants have a theory of change which seems too complex or involves too many programs. We generally prefer fewer programs with a more narrow focus, especially for earlier-stage projects. Other ToCs simply seem like an inaccurate representation of how well the intervention would actually work.
As a starting point, we recommend Aidan Alexander's post on ToCs.
Going too big too early: We hesitate to give large grants (~$150k+) to new projects without much track record. An incremental upscaling is generally preferred to a more sudden upscaling, as something akin to "track record" or "expected upside" divided by "funding ask" is the ratio we're trying to maximize as funders. The absence of a substantial track record makes it challenging to justify larger funding requests, such as those needed for hiring many additional employees.
To be more specific, we are particularly wary of situations where a project seeks to more than double its budget with less than 12 months of demonstrated success. This caution stems from the need to ensure that significant increases in funding are truly warranted and likely to yield proportional benefits.
Insufficient alternative funding sources: Some grants might seem good if continued but are not funded because they seem unlikely to be able to fundraise enough to sustain their long-term budget. While we are okay with funding a large portion of a project's budget as a one-off grant, we do not want organizations to depend on us for long-term funding as we are very uncertain about who our donors will be or what grants they will prioritize in the future.
This is another reason we prefer not to give out larger grants for newer projects.
What we're looking for in an application
Some general things we are looking for in an application that we would like to highlight:
Strong founders/Track record: We think one of the strongest indicators of future success is the people behind a project and their previous achievements. In your application, please explain how and why you/your organization are well suited to run this project. Back up your claims with data to the extent you can; this includes historical data for this project, data for similar projects, or data that attests to the (relevant) skills of the team.
If possible, also think through and highlight how your project excels compared to other similar projects.
Strategic alignment with field needs: Applications should demonstrate an understanding of the ecosystem and articulate a rationale for why their project is necessary at this time.
Why this? Why now? Why has no one been doing this before? And is it reasonable that you are applying for X FTEs to do it instead of doing an MVP first? When evaluating an opportunity most funders are thinking about whether this is one of the be...

Sep 1, 2024 • 1h 8min
EA - Introduction to suffering-focused ethics by Center for Reducing Suffering
Dive into the rich landscape of suffering-focused ethics, prioritizing the alleviation of suffering as a moral necessity. Explore various ethical perspectives, including the urgency of addressing suffering in both humans and nonhuman animals. Delve into philosophical views like negative utilitarianism and the ethical teachings of Mahayana Buddhism, which emphasize the importance of minimizing suffering over maximizing happiness. Discover the implications for public policy and societal responsibilities in tackling suffering head-on.

Aug 31, 2024 • 14min
AF - Epistemic states as a potential benign prior by Tamsin Leake
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epistemic states as a potential benign prior, published by Tamsin Leake on August 31, 2024 on The AI Alignment Forum.
Malignancy in the prior seems like a strong crux of the goal-design part of alignment to me. Whether your prior is going to be used to model:
processes in the multiverse containing a specific "beacon" bitstring,
processes in the multiverse containing the AI,
processes which would output all of my blog, so I can make it output more for me,
processes which match an AI chatbot's hypotheses about what it's talking with,
then you have to sample hypotheses from somewhere; and typically, we want to use either solomonoff induction or time-penalized versions of it such as levin search (penalized by log of runtime) or what QACI uses (penalized by runtime, but with quantum computation available in some cases), or the implicit prior of neural networks (large sequences of multiplying by a matrix, adding a vector, and ReLU, often with a penalty related to how many non-zero weights are used).
And the solomonoff prior is famously malign.
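As a toy illustration of the runtime penalties mentioned above, and only that (real Solomonoff/Levin-style priors are uncomputable, and the "programs" and numbers here are made up), here is how a length-plus-log-runtime weighting ranks a few hand-enumerated hypotheses:

```python
import math

# Schematic Levin-search-style weighting: weight proportional to
# 2^-(description_length + log2(runtime)), over a finite hand-picked set.

hypotheses = [
    # (name, description_length_in_bits, runtime_in_steps)
    ("simple_fast", 10, 100),
    ("simple_slow", 10, 10_000_000),
    ("complex_fast", 40, 100),
]

def levin_style_weight(length_bits, runtime_steps):
    return 2.0 ** -(length_bits + math.log2(runtime_steps))

raw = {name: levin_style_weight(l, t) for name, l, t in hypotheses}
total = sum(raw.values())
prior = {name: w / total for name, w in raw.items()}
print(prior)  # the slow and the complex programs are both heavily down-weighted
```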
(Alternatively, you could have knightian uncertainty about parts of your prior that aren't nailed down enough, and then do maximin over your knightian uncertainty (like in infra-bayesianism), but then you're not guaranteed that your AI gets anywhere at all; its knightian uncertainty might remain so immense that the AI keeps picking the null action all the time because some of its knightian hypotheses still say that anything else is a bad idea.
Note: I might be greatly misunderstanding knightian uncertainty!)
(It does seem plausible that doing geometric expectation over hypotheses in the prior helps "smooth things over" in some way, but I don't think this particularly removes the weight of malign hypotheses in the prior? It just allocates their steering power in a different way, which might make things less bad, but it sounds difficult to quantify.)
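For what it's worth, one common reading of "geometric expectation over hypotheses" is a posterior-weighted geometric mean rather than an arithmetic one; the post doesn't pin down the exact operation, so treat the following as an assumption, with made-up weights and values:

```python
import math

# Arithmetic vs geometric aggregation of some quantity (e.g. an action's
# estimated value) across hypotheses. Weights and values are made up.

def arithmetic_expectation(weights, values):
    return sum(w * v for w, v in zip(weights, values))

def geometric_expectation(weights, values):
    # G[v] = exp(sum_i w_i * log v_i)
    return math.exp(sum(w * math.log(v) for w, v in zip(weights, values)))

weights = [0.99, 0.01]   # prior mass on a benign vs a malign hypothesis
values = [1.0, 1e-6]     # each hypothesis's estimate for some action
print(arithmetic_expectation(weights, values))  # ~0.99: the tiny-mass hypothesis barely matters
print(geometric_expectation(weights, values))   # ~0.87: a tiny-mass catastrophic estimate drags it down
```

This is one concrete sense in which the geometric version reallocates a small hypothesis's steering power rather than removing it.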
It does feel to me like we do want a prior for the AI to do expected value calculations over, either for prediction or for utility maximization (or quantilization or whatever).
One helpful aspect of prior-distribution-design is that, in many cases, I don't think the prior needs to contain the true hypothesis. For example, if the problem that we're using a prior for is to model
processes which match an AI chatbot's hypotheses about what it's talking with
then we don't need the AI's prior to contain a process which behaves just like the human user it's interacting with; rather, we just need the AI's prior to contain a hypothesis which:
is accurate enough to match observations.
is accurate enough to capture the fact that the user (if we pick a good user) implements the kind of decision theory that lets us rely on them pointing back to the actual real physical user when they get empowered - i.e. in CEV(user-hypothesis), user-hypothesis builds and then runs CEV(physical-user), because that's what the user would do in such a situation.
Let's call this second criterion "cooperating back to the real user".
So we need a prior which:
Has at least some mass on hypotheses which
correspond to observations
cooperate back to the real user
and can eventually be found by the AI, given enough evidence (enough chatting with the user)
Call this the "aligned hypothesis".
Before it narrows down hypothesis space to mostly just aligned hypotheses, doesn't give enough weight to demonic hypotheses which output whichever predictions cause the AI to brainhack its physical user, or escape using rowhammer-type hardware vulnerabilities, or other failures like that.
Formalizing the chatbot model
First, I'll formalize this chatbot model. Let's say we have a magical inner-aligned "soft" math-oracle:
Which, given a "scoring" mathematical function from a non-empty set a to real numbers (not necessarily one that is tractably ...

Aug 31, 2024 • 14min
LW - Book review: On the Edge by PeterMcCluskey
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book review: On the Edge, published by PeterMcCluskey on August 31, 2024 on LessWrong.
Book review: On the Edge: The Art of Risking Everything, by Nate Silver.
Nate Silver's latest work straddles the line between journalistic inquiry and subject matter expertise.
"On the Edge" offers a valuable lens through which to understand analytical risk-takers.
The River versus The Village
Silver divides the interesting parts of the world into two tribes.
On his side, we have "The River" - a collection of eccentrics typified by Silicon Valley entrepreneurs and professional gamblers, who tend to be analytical, abstract, decoupling, competitive, critical, independent-minded (contrarian), and risk-tolerant.
On the other, "The Village" - the east coast progressive establishment, including politicians, journalists, and the more politicized corners of academia.
Like most tribal divides, there's some arbitrariness to how some unrelated beliefs end up getting correlated. So I don't recommend trying to find a more rigorous explanation of the tribes than what I've described here.
Here are two anecdotes that Silver offers to illustrate the divide:
In the lead-up to the 2016 US election, Silver gave Trump a 29% chance of winning, while prediction markets hovered around 17%, and many pundits went even lower. When Trump won, the Village turned on Silver for his "bad" forecast. Meanwhile, the River thanked him for helping them profit by betting against those who underestimated Trump's chances.
Wesley had to be bluffing 25 percent of the time to make Dwan's call correct; his read on Wesley's mindset was tentative, but maybe that was enough to get him from 20 percent to 24. ... maybe Wesley's physical mannerisms - like how he put his chips in quickly ... got Dwan from 24 percent to 29. ... If this kind of thought process seems alien to you - well, sorry, but your application to the River has been declined.
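For readers who want the arithmetic behind numbers like that "25 percent": under the standard assumption that the caller beats a bluff and loses to a value hand, the break-even bluff frequency is bet / (pot + 2 × bet). A minimal sketch, with pot and bet sizes chosen purely for illustration rather than taken from the actual hand:

```python
# Break-even bluff-frequency arithmetic: calling risks `bet` to win `pot + bet`,
# assuming the caller beats a bluff and loses to a value hand.

def breakeven_bluff_frequency(pot, bet):
    return bet / (pot + 2 * bet)

print(breakeven_bluff_frequency(pot=100, bet=50))  # 0.25: a half-pot bet needs 25% bluffs to call
```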
Silver is concerned about increasingly polarized attitudes toward risk:
you have Musk at one extreme and people who haven't left their apartment since COVID at the other one. The Village and the River are growing farther apart.
13 Habits of Highly Successful Risk-Takers
The book lists 13 habits associated with the River. I hoped these would improve on Tetlock's ten commandments for superforecasters. Some of Silver's habits fill that role of better forecasting advice, while others function more as litmus tests for River membership. Silver understands the psychological challenges better than Tetlock does. Here are a few:
Strategic Empathy:
But I'm not talking about coming across an injured puppy and having it tug at your heartstrings. Instead, I'm speaking about adversarial situations like poker - or war.
I.e. accurately modeling what's going on in an opponent's mind.
Strategic empathy isn't how I'd phrase what I'm doing on the stock market, where I'm rarely able to identify who I'm trading against. But it's fairly easy to generalize Silver's advice so that it does coincide with an important habit of mine: always wonder why a competent person would take the other side of a trade that I'm making.
This attitude represents an important feature of the River: people in this tribe aim to respect our adversaries, often because we've sought out fields where we can't win using other approaches.
This may not be the ideal form of empathy, but it's pretty effective at preventing Riverians from treating others as less than human. The Village may aim to generate more love than does the River, but it also generates more hate (e.g. of people who use the wrong pronouns).
Abhor mediocrity: take a raise-or-fold attitude toward life.
I should push myself a bit in this direction. But I feel that erring on the side of caution (being a nit in poker parlance) is preferable to becoming the next Sam Bankman-Fried.
Alloc...

Aug 31, 2024 • 12min
LW - Congressional Insider Trading by Maxwell Tabarrok
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Congressional Insider Trading, published by Maxwell Tabarrok on August 31, 2024 on LessWrong.
You've probably seen the Nancy Pelosi Stock Tracker on X or else a collection of articles and books exposing the secret and lucrative world of congressional insider trading.
The underlying claim behind these stories is intuitive and compelling. Regulations, taxes, and subsidies can make or break entire industries and congresspeople can get information on these rules before anyone else, so it wouldn't be surprising if they used this information to make profitable stock trades.
But do congresspeople really have a consistent advantage over the market? Or is this narrative built on a cherrypicked selection of a few good years for a few lucky traders?
Is Congressional Insider Trading Real?
There are several papers in economics and finance on this topic.
First is the 2004 paper: Abnormal Returns from the Common Stock Investments of the U.S. Senate by Ziobrowski et al. They look at Senators' stock transactions over 1993-1998 and construct a synthetic portfolio based on those transactions to measure their performance.
This is the headline graph. The red line tracks the portfolio of stocks that Senators bought, and the blue line the portfolio that Senators sold. Each day, the performance of these portfolios is compared to the market index and the cumulative difference between them is plotted on the graph. The synthetic portfolios start at day -255, a year (of trading days) before any transactions happen.
In the year leading up to day 0, the stocks that Senators will buy (red line) basically just track the market index. On some days, the daily return from the Senator's buy portfolio outperforms the index and the line moves up, on others it underperforms and the line moves down. Cumulatively over the whole year, you don't gain much over the index.
The stocks that Senators will sell (blue line), on the other hand, rapidly and consistently outperform the market index in the year leading up to the Senator's transaction.
After the Senator buys the red portfolio and sells the blue portfolio, the trends reverse. The Senator's transactions seem incredibly prescient. Right after they buy the red stocks, that portfolio goes on a tear, running up the index by 25% over the next year. They also pick the right time to sell the blue portfolio, as it barely gains over the index over the year after they sell.
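Here is a simplified sketch of how a cumulative-difference plot like the one described above can be constructed: align each portfolio on the transaction date (day 0), subtract the index's daily return, and cumulate. The return series below are random placeholders, not Ziobrowski et al.'s data.

```python
import numpy as np

# Event-time cumulative abnormal return, with synthetic placeholder data.
rng = np.random.default_rng(0)
days = np.arange(-255, 256)                                 # trading days around the transaction
portfolio_returns = rng.normal(0.0005, 0.01, len(days))     # daily returns of the buy portfolio
index_returns = rng.normal(0.0004, 0.01, len(days))         # daily returns of the market index

abnormal = portfolio_returns - index_returns                # daily out/under-performance vs the index
cumulative_abnormal = np.cumsum(abnormal)                   # the line that gets plotted

print(round(cumulative_abnormal[-1], 4))  # cumulative gap vs the index at day +255
```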
Ziobrowski finds that the buy portfolio of the average senator, weighted by their trading volume, earns a compounded annual rate of return of 31.1% compared to the market index which earns only 21.3% a year over this period 1993-1998.
This definitely seems like evidence of incredibly well timed trades and above-market performance. There are a couple of caveats and details to keep in mind though.
First, it's only a 5-year period. Additionally, any transactions from a senator in a given year are pretty rare:
Only a minority of Senators buy individual common stocks, never more than 38% in any one year.
So sample sizes are pretty low in the noisy and highly skewed distribution of stock market returns.
Another problem: the data on transactions isn't that precise.
Senators report the dollar volume of transactions only within broad ranges ($1,001 to $15,000, $15,001 to $50,000, $50,001 to $100,000, $100,001 to $250,000, $250,001 to $500,000, $500,001 to $1,000,000 and over $1,000,000)
These ranges are wide and the largest trades are top-coded.
Finally, there are some pieces of the story that don't neatly fit into an insider trading narrative. For example:
The common stock investments of Senators with the least seniority (serving less than seven years) outperform the investments of the most senior Senators (serving more than 16 years) by a statistically significant margin.
Still though, several other paper...

Aug 31, 2024 • 3min
EA - Meta Charity Funders: Launching the 3rd round by Joey
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Charity Funders: Launching the 3rd round, published by Joey on August 31, 2024 on The Effective Altruism Forum.
Meta Charity Funders is a funding circle consisting of ~10 funders that funds meta initiatives and serves as an alternative to Open Philanthropy and EA Funds.
We're launching the third round of Meta Charity Funders. Apply for funding by September 30th or join the circle as a donor.
We expect all applicants to have read this post's twin post, "Meta Charity Funders: What you need to know when applying" to understand how to write your application.
Focus of this round
We expect to fund many initiatives not on this list, but some projects that members of our circle have expressed extra interest in funding this round are:
Ultra-high-net-worth-individual advising. However, we want to stress that we believe the skillset to do this well is rare, and these types of applications will receive extra scrutiny.
Effective Giving/Giving multiplier-organizations. For example, the ones incubated by CE's Effective Giving Incubation program.
Career development programs that increase the number of individuals working in high-impact areas, including GCR reduction, animal welfare, and Global Health. Especially in regions where there are currently fewer opportunities to engage in such programs.
Information for this round
Process
The expected process is as follows:
Applications open: August 30th
Use at most 100 words in the summary; this should give us a quick overview of the project.
In the full project description, please include a main summarizing document no longer than 2 pages. This is all we can commit to reading in the first stage. Any extra material will only be read if we choose to proceed with your application.
When choosing the "Meta" category, please be as truthful as possible. It's obvious (and reflects negatively on the application) when a project has deliberately been placed in a category in which it does not belong.
Applications close: September 29th
Initial application review finishes: October 6th
If your project has been filtered out during the initial application review (which we expect 60-80% of applications will), we will let you know around the end of October.
Interviews, due diligence, deliberations: October 7th - November 24th
If your application has passed the initial application review, we will discuss it during our gatherings, and we might reach out to you to gather more information, for example, by conducting an interview. N.B. This is not a commitment to fund you.
Decisions made: November 25th
We expect to pay out the grants in the weeks following November 25th.
Historical support:
You can get a sense of what we have supported in historical rounds by reading our posts on our first and second rounds.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org