

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Apr 18, 2024 • 8min
EA - State of Global Development & EA (2024) by DavidNash
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: State of Global Development & EA (2024), published by DavidNash on April 18, 2024 on The Effective Altruism Forum.
Summary
A shallow overview of the EA & global development ecosystem.[1]
There isn't one large EA & GD ecosystem; it is mainly individuals considering their own path to impact, with minor coordination amongst people who are part of the wider network and closer coordination amongst groups in a couple of areas (effective giving/charity entrepreneurship).
Resources
GD & EA newsletter - updates covering a variety of topics relevant to broad global development
GD & EA LinkedIn group - for anyone who has an interest in GD topics; it helps people who share that interest find each other
EA Forum Global Health & Development Topic/Wiki
For global development professionals (including think tanks/government/for-profit orgs/academia/finance) there is an EA & GD Slack. Currently there are ~200 members - you can apply using this form
For GD professionals based in London there is also a WhatsApp group to help coordinate and arrange monthly meetups, message me for details[2]
Global Development Ecosystems
Effective Giving
57 organisations, including a few that evaluate impactful giving opportunities and others that only focus on fundraising
There is one full-time organiser to help with info sharing and connections between these orgs
GWWC supports incubation of early-stage effective giving initiatives
GiveWell raised ~$600 million in 2022 (the latest year I could find figures for; this includes OP giving ~$350m[3])
Other effective giving orgs in 2022 raised ~$103 million (for most of those orgs this isn't specifically for global development, although I wouldn't be surprised if 50%+ was allocated to GD)
For 2023 the data is incomplete, but so far ~$157m was raised by non-GiveWell orgs
Effective Charities
There are several charities recommended by evaluators; most were founded before EA existed and usually have funding from outside EA
Malaria Consortium, Against Malaria Foundation, Helen Keller International
Charity Entrepreneurship helps found charities attempting to be good enough to get recommended status
They also help the alumni network of charities and incubatees with connections, events and ongoing support
Foreign Aid
$211 billion was spent on international aid in 2022 by member states of the Development Assistance Committee - a collection of 32 donor countries
There are debates over how much of that actually counts as aid, and it doesn't include aid from countries outside the DAC
Open Philanthropy spent $16m on global aid policy in 2022 and 2023. They are aiming to
Increase international aid budgets (and reduce cuts)
Increase funding to especially impactful programs
Spur cross-cutting improvements in existing programs
Sam Anschell recently wrote about his experiences working on OP's Global Aid Policy program
There are individuals who work in a variety of foreign aid departments who have an interest in EA ideas
Probably Good has a post looking at careers in aid policy & advocacy
Development Finance/Impact Investing
There are many multilateral & bilateral development banks, aiming to promote economic development, provide long term financing and stabilise the global financial system
It was hard to find out how much money was moved in development finance (and a lot of it is loans) but it is probably in the hundreds of billions
There are some EA-interested people working in development finance, but they are very rarely connected to each other or the wider EA & GD network
Similarly with impact investing: there are a few people, but they're often not connected
Startups/Private Sector in LMICs
The most well known success story is Wave, a mobile service provider that allows unbanked people in Africa to access financial services. They estimated that the company saves people in Senegal over ...

Apr 18, 2024 • 1min
LW - I'm open for projects (sort of) by cousin_it
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm open for projects (sort of), published by cousin_it on April 18, 2024 on LessWrong.
I left Google a month ago and right now don't work. I'm writing this post in case anyone has interesting ideas about what I could do. This isn't an "urgently need help" kind of thing - I have a little bit of savings, and I'm planning to relax for some more weeks and then go into some solo software work. But I thought I'd write this here anyway, because who knows what'll come up.
Some things about me. My degree was in math. My software skills are okayish: I left Google at L5 ("senior"), and also made a game that went semi-viral. I've also contributed a lot on LW, the most prominent examples being my formalizations of decision theory ideas (Löbian cooperation, modal fixpoints etc) and later the AI Alignment Prize that we ran with Paul and Zvi.
Most of that was before the current AI wave; neural networks don't really "click" with my mind, so I haven't done much work on them.
And yeah, this is an invitation to throw things at me - not necessarily money-paying work, but also stuff you'd like me to look at or criticize, help with your own projects, and so on. I find myself with a bit more free time now, so basically drop me a PM if you have something interesting to talk about :-)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 18, 2024 • 1h 36min
LW - AI #60: Oh the Humanity by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #60: Oh the Humanity, published by Zvi on April 18, 2024 on LessWrong.
Many things this week did not go as planned.
Humane AI premiered its AI pin. Reviewers noticed it was, at best, not ready.
Devin turns out to have not been entirely forthright with its demos.
OpenAI fired two employees who had been on its superalignment team, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information, and also more troublingly lost Daniel Kokotajlo, who expects AGI very soon, does not expect it to go well by default, and says he quit 'due to losing confidence that [OpenAI] would behave responsibly around the time of AGI.' That's not good.
Nor is the Gab system prompt, although that is not a surprise. And several more.
On the plus side, my 80,000 Hours podcast finally saw the light of day, and Ezra Klein had an excellent (although troubling) podcast with Dario Amodei. And we got the usual mix of incremental useful improvements and other nice touches.
Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Ask all your stupid questions.
Language Models Don't Offer Mundane Utility. That won't stop social media.
Oh the Humanity. It will, however, stop the Humane AI pin, at least for now.
GPT-4 Real This Time. The new version continues to look slightly better.
Fun With Image Generation. There is remarkably little porn of it.
Deepfaketown and Botpocalypse Soon. Audio plus face equals talking head.
Devin in the Details. To what extent was the Devin demo a fake?
Another Supposed System Prompt. The gift of Gab. Not what we wanted.
They Took Our Jobs. A model of firm employment as a function of productivity.
Introducing. The quest to make context no longer be that which is scarce.
In Other AI News. Respecting and disrespecting the rules of the game.
Quiet Speculations. Spending some time wondering whether you should.
The Quest for Sane Regulations. Senators get serious, Christiano is appointed.
The Week in Audio. I spend 3.5 of my 80,000 hours, and several more.
Rhetorical Innovation. Words that do not on reflection bring comfort.
Don't Be That Guy. Also known as the only law of morality.
Aligning a Smarter Than Human Intelligence is Difficult. Subproblems anyone?
Please Speak Directly Into the Microphone. Thanks, everyone.
People Are Worried About AI Killing Everyone. They are no longer at OpenAI.
Other People Are Not As Worried About AI Killing Everyone. Mundane visions.
The Lighter Side. The art of fixing it.
Language Models Offer Mundane Utility
The best use of LLMs continues to be 'ask stupid questions.'
Ashwin Sharma: reading zen and the art of motorcycle maintenance changed the way I looked at the inner workings of my mind. It was like unlocking a secret level of a video game. what are you reading today?
Tom Crean: Tried to read Zen… as a teenager and felt disoriented by it. I kept wondering who "Phaedrus" was. But I liked the general atmosphere of freedom. The philosophy went over my head.
Now I'm reading Akenfield by Ronald Blythe. A portrait of a Suffolk Village in the 1960s.
Ashwin Sharma: use GPT to help analyse the sections you're stuck on. Seriously, try it again and i promise you it'll be worth it.
Joe Weisenthal: I've found this to be a great ChatGPT use case. Understanding terms in context while I'm reading.
When I was a kid, my dad told me when reading to immediately stop and grab a dictionary every time I got to a word I didn't understand.
Not really feasible. But AI solves this well.
It's still a bit cumbersome, because with kindle or physical, no quick way to copy/paste a section into an AI or just ask the book what it means. But even with those hurdles, I've found the tools to be a great reading augment.
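A minimal sketch of the kind of "ask the book what it means" call described above (my illustration; the model name, prompt wording, and placeholder passage are assumptions, not anything from the thread):

```python
# Sketch: paste a passage you're stuck on and ask an LLM to explain a term in context.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

passage = "..."  # the paragraph copied from the book
question = "Who is 'Phaedrus', and why does the narrator keep returning to him?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any chat model works
    messages=[
        {"role": "system", "content": "Briefly explain terms and references in the given passage, in context."},
        {"role": "user", "content": f"Passage:\n{passage}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```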
Patrick McKenzie: It's surprisingly reliable to just point phone camera at screen and then ask questions about t...

Apr 18, 2024 • 22min
AF - Discriminating Behaviorally Identical Classifiers: a model problem for applying interpretability to scalable oversight by Sam Marks
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discriminating Behaviorally Identical Classifiers: a model problem for applying interpretability to scalable oversight, published by Sam Marks on April 18, 2024 on The AI Alignment Forum.
In a new preprint, Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models, my coauthors and I introduce a technique, Sparse Human-Interpretable Feature Trimming (SHIFT), which I think is the strongest proof-of-concept yet for applying AI interpretability to existential risk reduction.[1] In this post, I will explain how SHIFT fits into a broader agenda for what I call cognition-based oversight.
In brief, cognition-based oversight aims to evaluate models according to whether they're performing intended cognition, instead of whether they have intended input/output behavior.
In the rest of this post I will:
Articulate a class of approaches to scalable oversight I call cognition-based oversight.
Narrow in on a model problem in cognition-based oversight called Discriminating Behaviorally Identical Classifiers (DBIC). DBIC is formulated to be a concrete problem which I think captures most of the technical difficulty in cognition-based oversight.
Explain SHIFT, the technique we introduce for DBIC.
Discuss challenges and future directions, including concrete recommendations for two ways to make progress on DBIC.
Overall, I think that making progress on DBIC is tractable with current interpretability techniques, and I'd be excited to see more work on it.
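To give the flavor of evaluating cognition rather than behavior, here is a toy sketch of what trimming human-interpretable features from a classifier could look like (my illustration, not the paper's implementation; the sparse-autoencoder interface and feature labels are assumptions):

```python
# Toy sketch of feature trimming (illustration only, not the SHIFT code).
# Idea: decompose the activations a classifier relies on into sparse,
# human-interpretable features, zero out the features a human judges irrelevant
# to the intended task, and check how the classifier behaves without them.
def evaluate_with_trimmed_features(classifier_head, sae, activations, irrelevant_feature_ids):
    feats = sae.encode(activations)              # (batch, n_features), mostly zeros
    feats[:, irrelevant_feature_ids] = 0.0       # ablate the unintended cognition
    cleaned = sae.decode(feats)                  # reconstruct the activations
    return classifier_head(cleaned)              # predictions without those features
```

Whether accuracy on the intended task survives this ablation then gives a handle for telling apart classifiers whose input/output behavior is otherwise identical.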
Cognition-based oversight
In this section, I describe a class of approaches to scalable oversight I call cognition-based oversight. First, I review a simplified formulation of scalable oversight: discriminating good vs. bad models. Then I impose a pessimistic assumption - that the good and bad models cannot be discriminated behaviorally - and propose cognition-based oversight as a potential remedy.
Discriminating models: a simplification of scalable oversight
In an ideal world, we could argue that AI systems will be safe by default as follows:
We train AI systems to receive high scores according to some scoring function. The models that get the highest scores are all good models that robustly behave as desired. Therefore, we have strong a priori reason to expect our training processes to result in good models.
Sadly, in the real world, the "scoring functions" we actually use tend to be shaped like "human evaluator approval." And "human evaluator approval" does not enjoy the property that "the models which humans evaluate most positively all robustly behave as desired." Two salient counterexamples:[2]
Sycophantic reward hackers take actions that look good to the overseer, even when this conflicts with doing what the overseer actually wants.
For example, suppose we deploy an AI smart vault to keep a diamond safe, as judged by whether the diamond continues to appear on the vault camera; instead of trying to prevent diamond theft, a sycophantic reward hacker might instead cover diamond theft up by tampering with the vault camera.
Situationally-aware schemers behave well only when they think bad behavior would be caught.
For example, a scheming code generation model might write vulnerability-free code only when it expects the code to be closely audited, but otherwise insert subtle backdoors.
The models in these examples score highly according to human evaluator approval, so we can't use the argument above that they are unlikely to arise from training. We instead need to rely on arguments that these models are unlikely to arise despite scoring at least as well as good models. So far, I think these arguments are far from airtight, and I feel nervous about relying on them.
Said differently, a core problem in technical AI safety is that it can be generally hard to discriminate good models that robustly do stuf...

Apr 18, 2024 • 15min
LW - The Mom Test: Summary and Thoughts by Adam Zerner
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Mom Test: Summary and Thoughts, published by Adam Zerner on April 18, 2024 on LessWrong.
I just finished reading The Mom Test for the second time. I took "raw" notes here. In this post I'll first write up a bullet-point summary and then ramble off some thoughts that I have.
Summary
Introduction:
Trying to learn from customer conversations is like trying to excavate a delicate archeological site. The truth is down there somewhere, but it's fragile. When you dig you get closer to the truth, but you also risk damaging or smashing it.
Bad customer conversations are worse than useless because they mislead you, convincing you that you're on the right path when instead you're on the wrong path.
People talk to customers all the time, but they still end up building the wrong things. How is this possible? Almost no one talks to customers correctly.
Why another book about this? Why this author?
Rob is a techie, not a sales guy. We need something targeted at techies.
To understand how to do something correctly, you have to understand how it can go wrong. Rob has lots of experience with things going wrong here.
It's practical, not theoretical.
Chapter 1 - The Mom Test:
Everyone knows that you shouldn't ask your mom whether your business idea is good. But the issue isn't who you're asking, it's how you're asking. Yes, your mom is more likely[1] than others to praise you and tell you that your idea is good. But if you ask "what do you think of my idea", almost anyone will feel too uncomfortable to be constructive and honest with you.
It's not other people's responsibility to tell you the truth. It's your responsibility to find it by asking good questions.
The Mom Test is a series of rules for crafting good questions that even your mom can't lie to you about.
Talk about their life instead of your idea.
Ask about specifics in the past instead of hypotheticals about the future.
Talk less and listen more.
You're not allowed to tell them what their problems are. They're not allowed to tell you what the solutions should look like. They own the problem, you own the solution.
Chapter 2 - Avoiding Bad Data:
Bad data is either a false negative (thinking you're dead when you're not) or, much more often, a false positive (thinking you're good when you're not).
Three types: compliments, fluff and ideas.
When you get compliments, deflect them and pivot back to asking them specifics about their past. "When was the last time you had the problem? Talk me through how it went down."
If they start proposing ideas (features, solutions), dig into the underlying problem beneath their proposal. "Why do you recommend that? What problem would it solve for you? Tell me about a time when you had that problem."
Pathos problem: when you "expose your ego". Example: "Hey, I quit my job to pursue this and am really passionate about it. What do you think?" It's too awkward to be critical.
It can be tempting to slip into pitching them. They indicate that X isn't a big problem for them. You start explaining why X probably is a big problem, or why they should consider it a big problem. There is a time for pitching, but customer learning isn't that time.
Chapter 3 - Asking Important Questions:
Make sure that you seek out the world-rocking, hugely important questions. Questions that could indicate that your business is doomed to fail. Most people shrink away from these.
Learn to love bad news. Failing fast is better than failing slow!
Thought experiments are helpful here.
Imagine your company failed. Why might this be?
Imagine your company succeeded. What had to be true to get you there?
What advice would you give someone else if they were in your shoes?
Decide ahead of time on the three most important things you're looking to learn.
Chapter 4 - Keeping It Casual:
Things just work better when you keep it casual.
Ask ...

Apr 18, 2024 • 6min
EA - How you can help right now to introduce ideas of effective giving to young people by Adam Steinberg
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How you can help right now to introduce ideas of effective giving to young people, published by Adam Steinberg on April 18, 2024 on The Effective Altruism Forum.
tl;dr
This opportunity for impact is aimed primarily at parents or those who have another connection to a secondary (or middle) school.
You can have a powerful effect by emailing or writing a letter to a teacher you know, or your child's high school, to recommend they run a charity election. This is an opportunity for you to connect dozens or hundreds of young people with key concepts around effective giving and civic participation merely by taking 20 minutes and adapting an email, provided below.
A Call to Action
Parents and friends of parents: You can help get the ideals of effective giving in front of schools-full of future givers by letting a school know how easy it is to get sponsorship up to $2,000 from Giving What We Can (GWWC) to run a charity election.
Towards the end of this post, we provide a message you might adapt and send to a school.
Overview
It's time to tap more fully into the power of the EA community to spread the word about Charity Elections from Giving What We Can.
Students and teachers alike who have participated in a charity election praise the experience as meaningful and memorable. The program is showing notable signs of impact (where it can be measured) and has proven its scalability and readiness to run in more schools and more countries.
The basics, for those who have not heard
A charity election is an event in "experiential altruism" that empowers high school students as they learn about and experience making a real impact on the world. In the program, adapted from Giving Games for a younger cohort, students choose among three causes selected from the GWWC list of recommended charities to decide which will receive an event sponsorship of up to $2,000 (sponsored by GWWC).
Before voting, students research and discuss the charities using a condensed framework designed to empower high school (and possibly middle-school) students to apply principles of effective giving in an age-appropriate manner.
Designed to be student-led, a charity election lasts about three class hours and can be run across a set of classes or the whole school. We provide schools with self-contained materials and resources to make it as easy as possible for teachers to support their students.
Students who participate engage in meaningful discussions and powerful reflection about altruism as they get first-hand experience at changemaking, expanding their moral circles and helping them develop an understanding of the power of effective philanthropy.
The program was created in 2018 with the support of The Life You Can Save and has been incubated by Giving What We Can since 2021.
Charity elections have run now in six countries - including several events entirely in Italian - and, since 2018, nearly 11,000 student votes have been cast after the research and discussion process. Schools typically come back year after year to request sponsorship.
If you want to learn more, please visit our webpage or reference the additional resources listed in the postscript below the following model letter. If you have any questions, please write to us.
What you can do right now
You can copy and adapt the letter below to send to a school or a particular teacher who you feel would be intrigued by a program that gives students confidence and a sense of accomplishment as change-makers while cultivating a culture of (effective) giving and fostering a positive school climate.
If you don't know a parent, a student, or a teacher
You can still help spread the word about Charity Elections. If you still have a connection to your own high school, please consider recommending them to the program by adapting the note above. You can make a differenc...

Apr 18, 2024 • 1min
EA - Personal reflections on FTX by William MacAskill
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal reflections on FTX, published by William MacAskill on April 18, 2024 on The Effective Altruism Forum.
The two podcasts where I discuss FTX are now out:
Making Sense with Sam Harris
Clearer Thinking with Spencer Greenberg
The Sam Harris podcast is more aimed at a general audience; the Spencer Greenberg podcast is more aimed at people already familiar with EA. (I've also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.)
In this post, I'll gather together some things I talk about across these podcasts - this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I'd recommend listening to the podcasts first, but these comments can be read on their own, too. I cover a variety of different topics, so I'll cover each topic in separate comments underneath this post.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 18, 2024 • 39min
LW - Childhood and Education Roundup #5 by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Childhood and Education Roundup #5, published by Zvi on April 18, 2024 on LessWrong.
For this iteration I will exclude discussions involving college or college admissions.
There has been a lot of that since the last time I did one of these, along with much that I need to be careful with lest I go out of my intended scope. It makes sense to do that as its own treatment another day.
Bullying
Why do those who defend themselves against bullies so often get in more trouble than the bullies? This is also true in other contexts but especially true in school. The thread is extensive; these are the highlights, translated into my perspective. A lot of it is that a bully has experience and practice: they know how to work the system, they know what will cause a response, and they are picking the time and place to do something.
The victim has to respond in the moment, and by responding causes conflict and trouble that no one wants. Also, we are far more willing to punish generally rule-following people who break a rule than we are to keep punishing someone who keeps breaking the rules all the time, where it seems pointless.
Study finds bullying has lifelong negative effects.
Abstract: Most studies examining the impact of bullying on wellbeing in adulthood rely on retrospective measures of bullying and concentrate primarily on psychological outcomes. Instead, we examine the effects of bullying at ages 7 and 11, collected prospectively by the child's mother, on subjective wellbeing, labour market prospects, and physical wellbeing over the life-course.
We exploit 12 sweeps of interview data through to age 62 for a cohort born in a single week in Britain in 1958. Bullying negatively impacts subjective well-being between ages 16 and 62 and raises the probability of mortality before age 55. It also lowers the probability of having a job in adulthood. These effects are independent of other adverse childhood experiences.
My worry, as usual, is that the controls are inadequate. Yes, there are some attempts here, but bullying is largely a function of how one responds to it, and one's social status within the school, in ways that outside base factors will not account for properly.
Bullying sucks and should not be tolerated, but also bullies target 'losers' in various senses, so them having worse overall outcomes is not obviously due to the bullying. Causation is both common and cuts both ways.
Truancy
Ever since Covid, schools have had to deal with lots of absenteeism and truancy. What to do? Matt Yglesias gives the obviously correct answer. If the norm is endangered, you must either give up the norm or enforce it. Should we accept high absentee rates from schools?
What we should not do is accept a new norm of non-enforcement purely because we are against enforcing rules. The pathological recent attachment to not enforcing rules needs to stop, across the board. The past version, however, had quite the obsession with attendance, escalating quickly to 'threaten to ruin your life' even if nothing was actually wrong. That does not make sense either.
Then in college everyone thinks skipping class is mostly no big deal, except for the few places they explicitly check and it is a huge deal. Weird.
I think the correct solution is that attendance is insurance. If you attend most of the classes and are non-disruptive, and are plausibly trying during that time, then we cut you a lot of slack and make it very hard to fail. If you do not attend most of the classes, then nothing bad happens to you automatically, but you are doing that At Your Own Risk. We will no longer save you if you do not pass the tests. If it is summer school for you, then so be it.
Against Active Shooter Drills
New York State is set to pass S6537, a long overdue bill summarized as follows:
Decreases the frequency of lock-down drills in schools;...

Apr 18, 2024 • 1min
LW - Claude 3 Opus can operate as a Turing machine by Gunnar Zarncke
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Claude 3 Opus can operate as a Turing machine, published by Gunnar Zarncke on April 18, 2024 on LessWrong.
Posted on Twitter:
Opus can operate as a Turing machine.
given only existing tapes, it learns the rules and computes new sequences correctly.
100% accurate over 500+ 24-step solutions (more tests running).
for 100% at 24 steps, the input tapes weigh 30k tokens*.
GPT-4 cannot do this.
Here is the prompt code for the Turing machine: https://github.com/SpellcraftAI/turing
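To make concrete what "operating as a Turing machine" involves, here is a minimal reference simulator one could use to generate example tape traces and check a model's continuations step by step (a sketch under assumed encodings; the rule and tape format are my own, not taken from the linked repo):

```python
# Minimal Turing machine simulator (a sketch; the rule/tape encoding is an
# assumption, not the format used in the linked prompt code). It produces the
# kind of step-by-step trace one could compare a model's output against.
def run_turing_machine(rules, tape, state="A", head=0, steps=24):
    """rules: {(state, symbol): (write, move, next_state)}, move is -1 or +1."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as blank "_"
    trace = []
    for _ in range(steps):
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
        lo, hi = min(cells), max(cells)
        trace.append("".join(cells.get(i, "_") for i in range(lo, hi + 1)))
    return trace

# Example: a two-state rule set that sweeps right, rewriting the tape as it goes.
rules = {
    ("A", "0"): ("1", +1, "A"),
    ("A", "1"): ("1", +1, "B"),
    ("A", "_"): ("_", +1, "A"),
    ("B", "0"): ("1", +1, "A"),
    ("B", "1"): ("0", +1, "B"),
    ("B", "_"): ("_", +1, "A"),
}
for line in run_turing_machine(rules, "010011", steps=8):
    print(line)
```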
This is the fully general counterpoint to @VictorTaelin's A::B challenge (he put money where his mouth is and got praise for that from Yudkowsky).
"Attention is Turing Complete" was already claimed in 2021:
Theorem 6. The class of Transformer networks with positional encodings is Turing complete. Moreover, Turing completeness holds even in the restricted setting in which the only non-constant values in the positional embedding pos(n) of n, for n ∈ ℕ, are n, 1/n, and 1/n², and the Transformer networks have a single encoder layer and three decoder layers.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Apr 18, 2024 • 6min
LW - Express interest in an "FHI of the West" by habryka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Express interest in an "FHI of the West", published by habryka on April 18, 2024 on LessWrong.
TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder.
The Future of Humanity Institute is dead:
I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind.
I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its roof, and many crucial considerations that form the bedrock of my current life plans were discovered and explained there (including the concept of crucial considerations itself).
With the death of FHI (as well as MIRI moving away from research towards advocacy), there no longer exists a place for broadly-scoped research on the most crucial considerations for humanity's future. The closest place I can think of that currently houses that kind of work is the Open Philanthropy worldview investigation team, which houses e.g. Joe Carlsmith, but my sense is Open Philanthropy is really not the best vehicle for that kind of work.
While many of the ideas that FHI was working on have found traction in other places in the world (like right here on LessWrong), I do think that with the death of FHI, there no longer exists any place where researchers who want to think about the future of humanity in an open ended way can work with other people in a high-bandwidth context, or get operational support for doing so. That seems bad.
So I am thinking about fixing it. Anders Sandberg, in his oral history of FHI, wrote the following as his best guess of what made FHI work:
What would it take to replicate FHI, and would it be a good idea? Here are some considerations for why it became what it was:
Concrete object-level intellectual activity in core areas and finding and enabling top people were always the focus. Structure, process, plans, and hierarchy were given minimal weight (which sometimes backfired - flexible structure is better than little structure, but as organization size increases more structure is needed).
Tolerance for eccentrics. Creating a protective bubble to shield them from larger University bureaucracy as much as possible (but do not ignore institutional politics!).
Short-term renewable contracts. [...] Maybe about 30% of people given a job at FHI were offered to have their contracts extended after their initial contract ran out. A side-effect was to filter for individuals who truly loved the intellectual work we were doing, as opposed to careerists.
Valued: insights, good ideas, intellectual honesty, focusing on what's important, interest in other disciplines, having interesting perspectives and thoughts to contribute on a range of relevant topics.
Deemphasized: the normal academic game, credentials, mainstream acceptance, staying in one's lane, organizational politics.
Very few organizational or planning meetings. Most meetings were only to discuss ideas or present research, often informally.
Some additional things that came up in a conversation I had with Bostrom himself about this:
A strong culture that gives people guidance on what things to work on, and helps researchers and entrepreneurs within the organization coordinate
A bunch of logistical and operation...


