

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

May 24, 2024 • 2min
EA - From Kilograms to Tons: The Scaling Challenge for Cultivated Meat by Dr Faraz Harsini
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: From Kilograms to Tons: The Scaling Challenge for Cultivated Meat, published by Dr Faraz Harsini on May 24, 2024 on The Effective Altruism Forum.
This is a linkpost for GFI's most recent report on trends in scaling cultivated meat.
As a student, I spent over 10 years studying human diseases and pandemics. I constantly asked myself: how can I increase my impact? My job in pharmaceuticals felt unlikely to be the best answer.
As I learned more about the cruelty in the meat, dairy and egg industries, I realized that we need technology to replace animal agriculture in addition to individuals choosing to go vegan and promoting veganism. We need to change the culture and educate individuals, but we also need to change the system.
As a cultivated meat senior scientist at GFI, I spent a lot of time last year conducting a survey to determine trends in cultivated meat production, identify challenges, and provide useful insights to investors, researchers, and suppliers.
Below are selected key insights from the report.
The industry currently operates at a small scale, with most production at the kilogram level. Many companies plan to scale up with large bioreactors in the next three years, enabling significantly larger annual production on the order of tons.
Companies are exploring various bioprocessing techniques and bioreactor designs for process optimization, including stirred-tank or air-lift bioreactors, fed-batch or continuous modes of operation, and strategies like recycling and filtration to reduce costs.
Some companies face knowledge gaps in regulatory affairs, signaling a need for collaboration with regulatory agencies to establish frameworks.
Cultivated meat companies are investigating diverse fit-for-purpose scaling strategies, bioreactors, and operational methods. Due to the specific requirements of each cell type and product, a universal bioprocess and scaling solution may not be feasible. Consequently, there's a demand for additional techno-economic models and experimental data to fine-tune bioprocesses for each specific product type.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 24, 2024 • 27min
EA - Talent Needs of Technical AI Safety Teams by Ryan Kidd
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Needs of Technical AI Safety Teams, published by Ryan Kidd on May 24, 2024 on The Effective Altruism Forum.
Co-Authors: @yams, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd
MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety orgs. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2]
In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions.
The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field.
All interviews were conducted on the condition of anonymity.
Needs by Organization Type
Organization type | Talent needs
Scaling Lab (e.g., Anthropic, Google DeepMind, OpenAI) Safety Teams | Iterators > Amplifiers
Small Technical Safety Orgs (<10 FTE) | Iterators > Machine Learning (ML) Engineers
Growing Technical Safety Orgs (10-30 FTE) | Amplifiers > Iterators
Independent Research | Iterators > Connectors
Here, ">" means "are prioritized over."
Archetypes
We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn't possible.
We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distillator), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsests), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially pertaining to current funding and employment opportunities.
Connectors / Iterators / Amplifiers
Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding.
Note that most Connectors are not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur-of-the-moment intuition or the wisdom of the crowds.
Pure Connectors often have a long lead time before they're able to produce impactful work, since it's usually necessary for them to download and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors.
Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional...

May 24, 2024 • 4min
LW - minutes from a human-alignment meeting by bhauth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: minutes from a human-alignment meeting, published by bhauth on May 24, 2024 on LessWrong.
"OK, let's get this meeting started. We're all responsible for development of this new advanced intelligence 'John'. We want John to have some kids with our genes, instead of just doing stuff like philosophy or building model trains, and this meeting is to discuss how we can ensure John tries to do that."
"It's just a reinforcement learning problem, isn't it? We want kids to happen, so provide positive reinforcement when that happens."
"How do we make sure the kids are ours?"
"There's a more fundamental problem than that: without intervention earlier on, that positive reinforcement will never happen."
"OK, so we need some guidance earlier on. Any suggestions?"
"To start, having other people around is necessary. How about some negative reinforcement if there are no other humans around for some period of time?"
"That's a good one, also helps with some other things. Let's do that."
"Obviously sex is a key step in producing children. So we can do positive reinforcement there."
"That's good, but wait, how do we tell if that's what's actually happening?"
"We have access to internal representation states. Surely we can monitor those to determine the situation."
"Yeah, we can monitor the representation of vision, instead of something more abstract and harder to understand."
"What if John creates a fictional internal representation of naked women, and manages to direct the monitoring system to that instead?"
"I don't think that's plausible, but just in case, we can add some redundant measures. A heuristic blend usually gives better results, anyway."
"How about monitoring the level of some association between some representation of the current situation and sex?"
"That could work, but how do we determine that association? We'd be working with limited data there, and we don't want to end up with associations to random irrelevant things, like specific types of shoes or stylized drawings of ponies."
"Those are weird examples, but whatever. We can just rely on indicators of social consensus, and then blend those with personal experiences to the extent they're available."
"I've said this before, but this whole approach isn't workable. To keep a John-level intelligence aligned, we need another John-level intelligence."
"Oh, here we go again. So, how do you expect to do that?"
"I actually have a proposal: we have John follow cultural norms around having children. We can presume that a society that exists would probably have a culture conducive to that."
"Why would you expect that to be any more stable than John as an individual? All that accomplishes is some averaging, and it adds the disadvantages of relying on communication."
"I don't have a problem with the proposal of following cultural norms, but I think that such a culture will only be stable to the extent that the other alignment approaches we discussed are successful. So it's not a replacement, it's more of a complement."
"We were already planning for some cultural norm following. Anyone opposed to just applying the standard amount of that to sex-related things?"
"Seems good to me."
"I have another concern. I think the effectiveness of the monitoring systems we discussed is going to depend on the amount of recursive self-improvement that happens, so we should limit that."
"I think that's a silly concern and a huge disadvantage. Absolutely not."
"I'm not concerned about the alignment impact if John is already doing some RSI, but we do have a limited amount of time before those RSI investments need to start paying off. I vote we limit the RSI extent based on things like available food resources and life expectancy."
"I don't think everyone will reach a consensus on this issue, so let's just compromise on the amount and metrics."
"Fine."
"A...

May 24, 2024 • 4min
EA - An invitation to the Berlin EA co-working space TEAMWORK by Johanna Schröder
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An invitation to the Berlin EA co-working space TEAMWORK, published by Johanna Schröder on May 24, 2024 on The Effective Altruism Forum.
TL;DR
TEAMWORK, a co-working and event space in Berlin run by Effektiv Spenden, is available for use by the Effective Altruism (EA) community. We offer up to 15 desk spaces in a co-working office for EA professionals and a workshop and event space for a broad range of EA events, all free of charge at present (and at least for the rest of 2024).
A lot has changed since the space was established in 2021. After a remodeling project in September last year, there has been a notable improvement in the acoustics and soundproofing, leading to a more focused and productive work environment.
Apply here if you would like to join our TEAMWORK community.
What TEAMWORK offers
TEAMWORK is a co-working space for EA professionals, operated by Effektiv Spenden and located in Berlin. Following a remodeling project in fall 2023, we were able to significantly improve the acoustics and soundproofing, fostering a more conducive atmosphere for focused work.
Additionally, we transformed one of our co-working rooms into a workshop space with everything necessary for productive collaboration, and gave our meeting room a makeover with modern new furniture, ensuring a professional setting for discussions and presentations.
Our facilities include:
Co-working Offices: One large office with 11 desks and a smaller office with four desks. The smaller office can also be booked for team retreats or team co-working, while the large office can be transformed into an event space for up to 40 people.
Workshop Room: "Flamingo Paradise" serves as a workshop room with a big sofa, a large desk, a flip chart, and a pin board. It can also be used as a small event space, complete with a portable projector. When not in use for events, it functions as a chill and social area.
Meeting Room: A meeting room for up to four people (max capacity six people). Can also be used for calls.
Phone Booths: Four private phone booths. In addition, the "Flamingo Paradise" and the meeting room can also be used for calls.
Community Kitchen: A kitchen with free coffee and tea. We have a communal lunch at 1 pm where members can either bring their own meals or go out to eat.
Berlin as an EA Hub
Berlin is home to a vibrant and growing (professional) EA community, making it one of the biggest EA hubs in continental Europe. It is also home to Effektiv Spenden, Germany's effective giving organization, which hosts this space. Engaging with this dynamic community provides opportunities for collaboration and networking with like-minded individuals.
Additionally, working from Berlin could offer a change of scene, potentially enhancing your productivity and inspiration (particularly in spring and summer).
Join Our Community
Our vision is to have a space where people from the EA Community can not only work to make the world a better place, but can also informally engage with other members of the community during coffee breaks, lunch or at community events. Many of the EA meetups organized by the EA Berlin community take place at TEAMWORK. You can find more information on how to engage with the EA Berlin community here.
People in the TEAMWORK community are working on various cause areas. Our members represent a range of organizations, including Founders Pledge, Future Matters, Open Philanthropy, and Kooperation Global. We frequently host international visitors from numerous EA-aligned organizations such as Charity Entrepreneurship, the Center for Effective Altruism, the Good Food Institute, Future Cleantech Architects, and the Center for the Governance of AI.
Additionally, organizations like EA Germany, the Fish Welfare Initiative, One for the World, and Allfed have utilized our space for team re...

May 24, 2024 • 19min
LW - What mistakes has the AI safety movement made? by EuanMcLean
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What mistakes has the AI safety movement made?, published by EuanMcLean on May 24, 2024 on LessWrong.
This is the third of three posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how AGI will play out, how things might go wrong, and what the AI safety community should be doing. See here for a list of the participants and the standardized list of questions I asked.
This post summarizes the responses I received from asking "Are there any big mistakes the AI safety community has made in the past or are currently making?"
"Yeah, probably most things people are doing are mistakes. This is just some random group of people. Why would they be making good decisions on priors? When I look at most things people are doing, I think they seem not necessarily massively mistaken, but they seem somewhat confused or seem worse to me by like 3 times than if they understood the situation better." - Ryan Greenblatt
"If we look at the track record of the AI safety community, it quite possibly has been harmful for the world." - Adam Gleave
"Longtermism was developed basically so that AI safety could be the most important cause by the utilitarian EA calculus. That's my take." - Holly Elmore
Participants pointed to a range of mistakes they thought the AI safety movement had made. Key themes included an overreliance on theoretical argumentation, being too insular, putting people off by pushing weird or extreme views, supporting the leading AGI companies, insufficient independent thought, advocating for an unhelpful pause to AI development, and ignoring policy as a potential route to safety.
How to read this post
This is not a scientific analysis of a systematic survey of a representative sample of individuals, but my qualitative interpretation of responses from a loose collection of semi-structured interviews. Take everything here with the appropriate seasoning.
Results are often reported in the form "N respondents held view X". This does not imply that "17-N respondents disagree with view X", since not all topics, themes and potential views were addressed in every interview. What "N respondents held view X" tells us is that at least N respondents hold X, and consider the theme of X important enough to bring up.
The following is a summary of the main themes that came up in my interviews. Many of the themes overlap with one another, and the way I've clustered the criticisms is likely not the only reasonable categorization.
Too many galaxy-brained arguments & not enough empiricism
"I don't find the long, abstract style of investigation particularly compelling." - Adam Gleave
9 respondents were concerned about an overreliance or overemphasis on certain kinds of theoretical arguments underpinning AI risk: namely Yudkowsky's arguments in the sequences and Bostrom's arguments in Superintelligence.
"All these really abstract arguments that are very detailed, very long and not based on any empirical experience. [...]
Lots of trust in loose analogies, thinking that loose analogies let you reason about a topic you don't have any real expertise in. Underestimating the conjunctive burden of how long and abstract these arguments are. Not looking for ways to actually test these theories. [...]
You can see Nick Bostrom in Superintelligence stating that we shouldn't use RL to align an AGI because it trains the AI to maximize reward, which will lead to wireheading. The idea that this is an inherent property of RL is entirely mistaken. It may be an empirical fact that certain minds you train with RL tend to make decisions on the basis of some tight correlate of their reinforcement signal, but this is not some fundamental property of RL." - Alex Turner
Jamie Bernardi argued that the original view of what AGI will look like, nam...

May 24, 2024 • 2min
LW - The case for stopping AI safety research by catubc
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for stopping AI safety research, published by catubc on May 24, 2024 on LessWrong.
TLDR: AI systems are failing in obvious and manageable ways for now. Fixing them will push the failure modes beyond our ability to understand and anticipate, let alone fix. The AI safety community is also doing a huge economic service to developers. Our belief that our minds can "fix" a superintelligence, especially bit by bit, needs to be rethought.
I've wanted to write this post forever, but now seems like a good time. The case is simple; I hope it takes you one minute to read.
1. AI safety research is still solving easy problems. We are patching up the most obvious (to us) problems. As time goes on, we will no longer be able to play this existential-risk game of chess with AI systems. I've argued this a lot (preprint; an accepted ICML paper, a shorter read, will be out in a few days and reposted; www.agencyfoundations.ai). It seems others have had this thought.
2. Capability development is getting AI safety research for free, likely worth millions to tens of millions of dollars: all the "hackathons" and "mini" prizes to patch something up or propose a new way for society to digest and adjust to some new normal (and, increasingly, the incentivizing of existing academic labs).
3. AI safety research is speeding up capabilities. I hope this is somewhat obvious to most.
I write this now because, in my view, we are about 5-7 years away from massive human biometric and neural datasets entering our AI training. These will likely generate amazing breakthroughs in long-term planning and in emotional and social understanding of the human world. They will also most likely increase x-risk radically.
Stopping AI safety research, or taking it in-house with security guarantees and the like, will slow down capabilities somewhat, and may expose capabilities developers more directly to public opinion about still-manageable harmful outcomes.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 24, 2024 • 27min
LW - Talent Needs in Technical AI Safety by yams
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Needs in Technical AI Safety, published by yams on May 24, 2024 on LessWrong.
Co-Authors: @yams, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd
MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety teams. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2]
In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions.
The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field.
All interviews were conducted on the condition of anonymity.
Needs by Organization Type
Organization type | Talent needs
Scaling Lab (i.e. OpenAI, DeepMind, Anthropic) Safety Teams | Iterators > Amplifiers
Small Technical Safety Orgs (<10 FTE) | Iterators > Machine Learning (ML) Engineers
Growing Technical Safety Orgs (10-30 FTE) | Amplifiers > Iterators
Independent Research | Iterators > Connectors
Archetypes
We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn't possible.
We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distillator), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsests), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially pertaining to current funding and employment opportunities.
Connectors / Iterators / Amplifiers
Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding.
Note that most Connectors are not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur-of-the-moment intuition or the wisdom of the crowds.
Pure Connectors often have a long lead time before they're able to produce impactful work, since it's usually necessary for them to download and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors.
Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional time has empowered him to make major contributions to a wide range of empir...

May 23, 2024 • 9min
LW - Big Picture AI Safety: Introduction by EuanMcLean
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Big Picture AI Safety: Introduction, published by EuanMcLean on May 23, 2024 on LessWrong.
tldr: I conducted 17 semi-structured interviews of AI safety experts about their big-picture strategic view of the AI safety landscape: how human-level AI will play out, how things might go wrong, and what the AI safety community should be doing. While many respondents held "traditional" views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field might infer.
What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what should we do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested, and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many ideas and opinions are not written down anywhere; they exist only in people's heads and in lunchtime conversations at AI labs and coworking spaces.
I set out to learn what the AI safety community believes about the strategic landscape of AI safety. I conducted 17 semi-structured interviews with a range of AI safety experts. I avoided going into any details of particular technical concepts or philosophical arguments, instead focussing on how such concepts and arguments fit into the big picture of what AI safety is trying to achieve.
This work is similar to the AI Impacts surveys, Vael Gates' AI Risk Discussions, and Rob Bensinger's existential risk from AI survey. It differs from those projects in that both my approach to interviews and my analysis are more qualitative. Part of the hope for this project was that it could hit on harder-to-quantify concepts that are too ill-defined or intuition-based to fit the format of previous survey work.
Questions
I asked the participants a standardized list of questions.
What will happen?
Q1 Will there be a human-level AI? What is your modal guess of what the first human-level AI (HLAI) will look like? I define HLAI as an AI system that can carry out roughly 100% of economically valuable cognitive tasks more cheaply than a human.
Q1a What's your 60% or 90% confidence interval for the date of the first HLAI?
Q2 Could AI bring about an existential catastrophe? If so, what is the most likely way this could happen?
Q2a What's your best guess at the probability of such a catastrophe?
What should we do?
Q3 Imagine a world where, absent any effort from the AI safety community, an existential catastrophe happens, but actions taken by the AI safety community prevent such a catastrophe. In this world, what did we do to prevent the catastrophe?
Q4 What research direction (or other activity) do you think will reduce existential risk the most, and what is its theory of change? Could this backfire in some way?
What mistakes have been made?
Q5 Are there any big mistakes the AI safety community has made in the past or are currently making?
These questions changed gradually as the interviews went on (given feedback from participants), and I didn't always ask the questions exactly as I've presented them here. I asked participants to answer from their internal model of the world as much as possible and to avoid deferring to the opinions of others (their inside view so to speak).
Participants
Adam Gleave is the CEO and co-founder of the alignment research non-profit FAR AI. (Sept 23)
Adrià Garriga-Alonso is a research scientist at FAR AI. (Oct 23)
Ajeya Cotra leads Open Philanthropy's grantmaking on technical research that could help to clarify and reduce catastrophic risks from advanced AI. (Jan 24)
Alex Turner is a research scientist at Google DeepMind on the Scalable Alignment team. (Feb 24)
Ben Cottie...

May 23, 2024 • 1min
EA - Animal welfare in the United States: Opportunities for impact by Animal Ask
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal welfare in the United States: Opportunities for impact, published by Animal Ask on May 23, 2024 on The Effective Altruism Forum.
This report is also available on the Animal Ask website.
A downloadable PDF version is available here.
Summary
In this report, we give an overview of animal production in the United States. We explore which industries are responsible for the largest amount of animal exploitation in the United States. We touch on all major farmed and wild-caught sectors in the country, before taking a deeper look at egg production and chicken meat production.
Looking at how animal production is clustered by state and county, we point out where there are opportunities for animal advocacy organisations to make the biggest impact on the lives of animals. We also provide an overview of the economic forces that determine how animals are farmed and killed, which can help us to understand whether any given campaign will deliver the impact that we intend.
Lastly, we explore the different routes to change, showing how different political levers can enable animal advocacy organisations to have a disproportionate impact on the lives of animals.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 23, 2024 • 2min
EA - Announcing the EA Nigeria Summit by EA Nigeria
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EA Nigeria Summit, published by EA Nigeria on May 23, 2024 on The Effective Altruism Forum.
We are excited to announce the EA Nigeria Summit, which will take place on September 6th and 7th, 2024, in Abuja, Nigeria.
The two-night event aims to bring together individuals thinking carefully about some of the world's biggest problems, taking impactful action to solve them, or exploring ways to do so. Attendees can share knowledge, network, and explore collaboration opportunities with like-minded people and organizations from Nigeria, Africa, and other international attendees.
We are organizing the summit with the support of the Centre for Effective Altruism Event Team.
The summit is open to individuals from Nigeria, the rest of Africa, and other international locations. We expect to welcome 100+ attendees, with emphasis given to the following categories of individuals:
Existing members of the EA Nigeria community or Nigerian individuals familiar with effective altruism.
African individuals who are familiar with and engaged in the EA community.
Individuals (international or local) running or working for EA-aligned projects with operations in Nigeria or other African states.
International individuals who could contribute to the event's sessions.
Unfortunately, we have limited capacity for the summit, so we will have to choose whom we accept based on who we think would get the most out of it. However, these categories are not exhaustive, and we encourage you to apply even if you are in doubt!
For international attendees, we can provide invitation letters for a visitor visa for the summit, but we cannot offer additional help and cannot guarantee that such a letter will be sufficient.
Learn more about the summit and apply here; applications are open until August 5th, 2024.
For inquiries or questions, feel free to comment under this post or email us at info@eanigeria.org.
We hope to see you there!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


