

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Aug 8, 2024 • 4min
LW - Leaving MIRI, Seeking Funding by abramdemski
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaving MIRI, Seeking Funding, published by abramdemski on August 8, 2024 on LessWrong.
This is slightly old news at this point, but: as part of MIRI's recent strategy pivot, they've eliminated the Agent Foundations research team. I've been out of a job for a little over a month now. Much of my research time in the first half of the year was eaten up by engaging with the decision process that resulted in this, and later, applying to grants and looking for jobs.
I haven't secured funding yet, but for my own sanity & happiness, I am (mostly) taking a break from worrying about that, and getting back to thinking about the most important things.
However, in an effort to try the obvious, I have set up a Patreon where you can fund my work directly. I don't expect it to become my main source of income, but if it does, that could be a pretty good scenario for me; it would be much nicer to get money directly from a bunch of people who think my work is good and important, as opposed to trying to justify my work regularly in grant applications.
What I'm (probably) Doing Going Forward
I've been told by several people within MIRI and outside of MIRI that it seems better for me to do roughly what I've been doing, rather than pivot to something else. As such, I mainly expect to continue doing Agent Foundations research.
I think of my main research program as the Tiling Agents program. You can think of this as the question of when agents will preserve certain desirable properties (such as safety-relevant properties) when given the opportunity to self-modify. Another way to think about it is the slightly broader question: when can one intelligence trust another? The bottleneck for avoiding harmful self-modifications is self-trust; so getting tiling results is mainly a matter of finding conditions for trust.
The search for tiling results has two main motivations:
AI-AI tiling, for the purpose of finding conditions under which AI systems will want to preserve safety-relevant properties.
Human-AI tiling, for the purpose of understanding when we can justifiably trust AI systems.
While I see this as the biggest priority, I also expect to continue a broader project of deconfusion. The bottleneck to progress in AI safety continues to be our confusion about many of the relevant concepts, such as human values.
I'm also still interested in doing some work on accelerating AI safety research using modern AI.
Thoughts on Public vs Private Research
Some work that is worth doing should be done in a non-public, or even highly secretive, way.[1] However, my experience at MIRI has given me a somewhat burned-out feeling about doing highly secretive work. It is hard to see how secretive work can have a positive impact on the future (although the story for public work is also fraught). At MIRI, there was always the idea that if we came up with something sufficiently good, something would happen... although what exactly was unclear, at least to me.
Secretive research also lacks feedback loops that public research has. My impression is that this slows down the research significantly (contrary to some views at MIRI).
In any case, I personally hope to make my research more open and accessible going forward, although this may depend on my future employer. This means writing more on LessWrong and the Alignment Forum, and perhaps writing academic papers.
As part of this, I hope to hold more of my research video calls as publicly-accessible discussions. I've been experimenting with this a little bit and I feel it has been going well so far.
1. Roughly, I mean dangerous AI capabilities work, although the "capabilities vs safety" dichotomy is somewhat fraught.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 8, 2024 • 19min
AF - You can remove GPT2's LayerNorm by fine-tuning for an hour by Stefan Heimersheim
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You can remove GPT2's LayerNorm by fine-tuning for an hour, published by Stefan Heimersheim on August 8, 2024 on The AI Alignment Forum.
This work was produced at Apollo Research, based on initial research done at MATS.
LayerNorm is annoying for mechanistic interpretability research ("[...] reason #78 for why interpretability researchers hate LayerNorm" - Anthropic, 2023).
Here's a Hugging Face link to a GPT2-small model without any LayerNorm.
The final model is only slightly worse than a GPT2 with LayerNorm[1]:
Dataset                  Original GPT2   Fine-tuned GPT2 with LayerNorm   Fine-tuned GPT2 without LayerNorm
OpenWebText (ce_loss)    3.095           2.989                            3.014 (+0.025)
ThePile (ce_loss)        2.856           2.880                            2.926 (+0.046)
HellaSwag (accuracy)     29.56%          29.82%                           29.54%
I fine-tuned GPT2-small on OpenWebText while slowly removing its LayerNorm layers, waiting for the loss to go back down after each removal.
Introduction
LayerNorm (LN) is a component in Transformer models that normalizes embedding vectors to have constant length; specifically it divides the embeddings by their standard deviation taken over the hidden dimension. It was originally introduced to stabilize and speed up training of models (as a replacement for batch normalization). It is active during training and inference.
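As a rough illustration (my sketch, not code from the post), the core LN operation looks something like this in PyTorch-style code:

```python
import torch

def layernorm(x, gamma, beta, eps=1e-5):
    # x: embeddings of shape (batch, seq, d_model); gamma, beta: learned scale and shift.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    x_norm = (x - mean) / torch.sqrt(var + eps)  # division by the std is the non-linear step
    return gamma * x_norm + beta
```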
The equation includes division by the standard deviation (std), √(Var[x] + ε), which makes it a non-linear operation. This hinders interpretability in a variety of ways, from annoyances and inaccuracies such as
attributing residual stream directions to logit effects (e.g. SAE features, direct logit attribution),[2]
being annoying to deal with in Attribution Patching, or
being difficult to deal with in Apollo's LIB method.
In the Docstring circuit analysis we seriously considered whether the model might be using LN in its algorithm. This post even shows that LN can be used as the sole non-linearity to solve non-linear classification problems (see also this related work).
Recently, with progress in Sparse Dictionary Learning, agendas (e.g. this one) imagine decomposing networks into sets of sparsely connected components (SAEs, Transcoders, etc.). A core difficulty to "putting it all together" is that the interactions between different components often route through LayerNorm whose effect we do not understand.
Motivation
It would be pretty neat to have an LLM that still works (speaks English etc.) with fewer or no LN layers. One option would be to train a model without LN from scratch (done for tiny models, e.g. TinyModel), but this is very hard or impossible for larger models (hearsay is that you need a low learning rate and to be very careful).
Taking an existing model and removing the LN layers, however, seems doable if LN isn't implementing some important computation.[3] That is, LN "does its thing" and the model has learned to "deal with it", but it's not irreplaceable. A reason to be optimistic is that the spread of standard deviations across different samples isn't that large, so replacing the LN-computed standard deviation with a fixed number might kinda work.
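A minimal sketch of that replacement idea (my illustration, not the author's exact code; the constant here is a hypothetical placeholder that would be estimated from data in practice):

```python
import torch

FIXED_STD = 30.0  # hypothetical constant; in practice, the dataset-average std

def ln_without_std(x, gamma, beta):
    # Same shape of operation as LayerNorm, but dividing by a fixed constant
    # instead of each sample's own std, so the whole thing becomes linear.
    mean = x.mean(dim=-1, keepdim=True)
    return gamma * (x - mean) / FIXED_STD + beta
```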
Method
I take GPT2-small, fine-tune it on OpenWebText, and remove LNs one-by-one while fine-tuning.
The only non-linear operation in a LN layer is the division by the standard deviation (std) of the embedding vectors; the remaining operations can be absorbed into later weight matrices (see the fold_ln option in TransformerLens; also discussed in this appendix). Thus I mainly focus on the std part here.
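For reference, this is roughly how that folding is invoked when loading a model with TransformerLens (assuming its standard from_pretrained interface):

```python
from transformer_lens import HookedTransformer

# fold_ln folds LayerNorm's learned scale and bias into the adjacent weight
# matrices, leaving only the centering and division-by-std behind.
model = HookedTransformer.from_pretrained("gpt2", fold_ln=True)
```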
My general strategy is to "remove" an LN layer (this makes the loss go up), and then to train the model for some time (on the original training data) until the loss is back near the baseline. For this "remove" step I do the following:
Calculate the average std on the dataset (I used quite a small sample, 16 prompts), separately for position 0 and position > 0
Replace the std calculatio...

Aug 8, 2024 • 1h 17min
LW - AI #76: Six Short Stories About OpenAI by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #76: Six Short Stories About OpenAI, published by Zvi on August 8, 2024 on LessWrong.
If you're looking for audio of my posts, you're in luck. Thanks to multiple volunteers you have two options.
1. Option one is Askwho, who uses a Substack. You can get an ElevenLabs-quality production, with a voice that makes me smile. For Apple Podcasts that means you can add them here, Spotify here, Pocket Casts here, RSS here.
2. Alternatively, for a more traditional AI treatment in podcast form, you can listen via Apple Podcasts, Spotify, Pocket Casts, and RSS.
These should be permanent links so you can incorporate those into 'wherever you get your podcasts.' I use Castbox myself, it works but it's not special.
If you're looking forward to next week's AI #77, I am going on a two-part trip this week. First I'll be going to Steamboat in Colorado to give a talk, then I'll be swinging by Washington, DC on Wednesday, although outside of that morning my time there will be limited. My goal is still to get #77 released before Shabbat dinner, we'll see if that works. Some topics may of course get pushed a bit.
It's crazy how many of this week's developments are from OpenAI. You've got their voice mode alpha, JSON formatting, answering the letter from several senators, sitting on watermarking for a year, endorsement of three bills before Congress and also them losing a cofounder to Anthropic and potentially another one via sabbatical.
Also Google found to be a monopolist, we have the prompts for Apple Intelligence and other neat stuff like that.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Surveys without the pesky humans.
4. Language Models Don't Offer Mundane Utility. Ask a silly question.
5. Activate Voice Mode. When I know more, dear readers, so will you.
6. Apple Intelligence. We have its system prompts. They're highly normal.
7. Antitrust Antitrust. Google found to be an illegal monopolist.
8. Copyright Confrontation. Nvidia takes notes on scraping YouTube videos.
9. Fun With Image Generation. The days of Verify Me seem numbered.
10. Deepfaketown and Botpocalypse Soon. OpenAI built a watermarking system.
11. They Took Our Jobs. We have met the enemy, and he is us. For now.
12. Chipping Up. If you want a chip expert ban, you have to enforce it.
13. Get Involved. Safeguard AI.
14. Introducing. JSONs, METR, Gemma, Rendernet, Thrive.
15. In Other AI News. Google more or less buys out Character.ai.
16. Quiet Speculations. Llama-4 only ten times more expensive than Llama-3?
17. The Quest for Sane Regulations. More on SB 1047 but nothing new yet.
18. That's Not a Good Idea. S. 2770 on deepfakes, and the EU AI Act.
19. The Week in Audio. They keep getting longer.
20. Exact Words. Three bills endorsed by OpenAI. We figure out why.
21. Openly Evil AI. OpenAI replies to the questions from Senators.
22. Goodbye to OpenAI. One cofounder leaves, another takes a break.
23. Rhetorical Innovation. Guardian really will print anything.
24. Open Weights Are Unsafe and Nothing Can Fix This. Possible fix?
25. Aligning a Smarter Than Human Intelligence is Difficult. What do we want?
26. People Are Worried About AI Killing Everyone. Janus tried to warn us.
27. Other People Are Not As Worried About AI Killing Everyone. Disbelief.
28. The Lighter Side. So much to draw upon these days.
Language Models Offer Mundane Utility
Predict the results of social science survey experiments, with (r = 0.85, adj r = 0.91) across 70 studies, and (r = 0.9, adj r = 0.94) for the unpublished studies. If these weren't surveys I would be highly suspicious, because this would be implying the results could reliably replicate at all. If it's only surveys, sure, I suppose surveys should replicate.
This suggests that we mostly do not actually need the surveys, we can get close (...

Aug 8, 2024 • 3min
EA - AMA about roles at 80,000 Hours by Bella
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA about roles at 80,000 Hours, published by Bella on August 8, 2024 on The Effective Altruism Forum.
80,000 Hours is hiring for several positions on our team!
Rather than posting about them separately, we're taking Toby's advice and combining them all into this "ask me anything" with the hiring managers - Bella Forristal and Huon Porteous.
Why an AMA?
We know that applying for jobs at orgs working in EA areas can be a stressful, demoralising, and confusing process.
We've also heard the calls for increased transparency, and clearer communication about the hiring process.
What's more, we've heard that opaque and intimidating processes can be especially discouraging to applicants from underrepresented backgrounds.
We've made various adjustments to our job descriptions and recruitment process, but we think one of the best things we can do here is talk to candidates directly to address their questions or concerns.
Speaking personally, I (Bella) think this kind of AMA might have helped me feel more comfortable applying for more ambitious roles than I would have otherwise when I was about to graduate.
What are the roles?
We have four hiring rounds open right now. Click through to read a summary at the top of each job description:
Head of Marketing - deadline August 18
Marketer - deadline August 18
Head of Video - deadline August 25
Advisor - deadline September 2
These roles are across three different teams (one of which doesn't exist yet!) and would suit a variety of different skill profiles and interests.
One thing they all have in common: if you're reading this, you've already got one important trait we'd love to see in candidates, and that's an interest in ideas related to effective altruism and solving the world's most pressing problems.
A summary of some other traits we're looking for in all four of these roles:
Strong judgement and/or analytical skills
Good communication skills
None of these roles require prior experience in nearby fields, though it is always a bonus
Click the links above to see more details about the roles and what we're looking for!
AMA logistics
Please ask any questions that you may have about the roles, the process, or working at 80,000 Hours in general.
No question is too silly or trivial! (You might want to check that it isn't answered by the job description for the relevant role. That said, we won't be upset if it is.)
Please leave questions as comments on this post, and upvote any questions you'd be especially excited for us to answer.
If you'd like to leave a question anonymously, you can use an anonymous Forum account, or send it to Bella's admonymous and she'll post it here.
If you'd like to ask a question privately, you can email Bella (for marketing & video roles) or Huon (for advising roles) at bella@80000hours.org and huon@80000hours.org.
We might not be able to answer all questions immediately, but we'll do our best to respond when we can!
We'll close this AMA by the end of the day on Wednesday 14th August, and update the title & top of the page to let everyone know. Until then, please ask away!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 8, 2024 • 2min
EA - Who would you like to see speak at EA Global? by Jordan Pieters
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who would you like to see speak at EA Global?, published by Jordan Pieters on August 8, 2024 on The Effective Altruism Forum.
I'm Jordan. I recently joined as the Content Coordinator on the EA Global team at CEA, and I'd love to hear from the community about what content you'd like to see at future conferences. You can see upcoming conference dates here.
How do we usually select content?
Traditionally, our content selection focuses on:
Informing attendees about important developments in relevant fields (e.g. founders discussing new organisations or projects, researchers sharing their findings)
Diving deeper into key ideas with experts
Teaching new skills relevant to EA work
Some recent sessions that were well-received included:
Panel - When to shut down: Lessons from implementers on winding down projects
Talk - Neela Saldanha: Improving policy take-up and implementation in scaling programs
Workshop - Zac Hatfield-Dodds: AI Safety under uncertainty
However, we recognise that conference content can (and perhaps should) fulfil many other roles, so your suggestions shouldn't be constrained by how things have been done in the past.
What kinds of suggestions are we looking for?
We welcome suggestions in various forms:
Specific speakers: Nominate people who you think would make great speakers (this can be yourself!).
Topic proposals: Suggest topics that you believe deserve more attention.
Session format ideas: Propose unique formats that could make sessions more engaging (e.g., discussion roundtables, workshops, debates).
To get an idea of what types of content we've had in the past, check out recordings from previous EA Global conferences.
We have limited content slots at our conferences, which means we can't promise to follow up on every suggestion. However, every suggestion helps us better understand what our attendees want to see and can provide jumping-off points for new ideas.
How to Submit Your Suggestions:
Comment on this post and discuss your ideas with other forum users.
Fill out this form or email speakers@eaglobal.org if you'd prefer not to post publicly.
Your input can help shape future EAGs to be even more impactful. I look forward to hearing your suggestions!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 8, 2024 • 16min
LW - [LDSL#0] Some epistemological conundrums by tailcalled
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [LDSL#0] Some epistemological conundrums, published by tailcalled on August 8, 2024 on LessWrong.
This post is also available on my Substack.
When you deal with statistical science, causal inference, measurement, philosophy, rationalism, discourse, and similar, there are various questions that pop up, and I think I've discovered that there's a shared answer behind a lot of the questions I have been thinking about. In this post, I will briefly present the questions, and then in a followup post I will try to give my answer to them.
Why are people so insistent about outliers?
A common statistical method is to assume an outcome is due to a mixture of observed factors and unobserved factors, and then model how much of an effect the observed factors have, and attribute all remaining variation to unobserved factors. And then one makes claims about the effects of the observed factors.
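As a toy illustration of that kind of attribution (my example, not the author's), a linear model's R² splits the variance between observed and unobserved factors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # observed factors
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=2.0, size=500)  # plus unobserved noise

r2 = LinearRegression().fit(X, y).score(X, y)
print(f"variance attributed to observed factors:   {r2:.2f}")
print(f"variance attributed to unobserved factors: {1 - r2:.2f}")
```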
But some people then pick an outlier and demand an explanation for that outlier, rather than just accepting the general statistical finding:
In fact, aren't outliers almost by definition anti-informative? No model is perfect, so there are always going to be cases we can't model. By insisting on explaining all those rare cases, we're basically throwing away the signal we can model.
A similar point applies to reading the news. Almost by definition, the news is about uncommon stuff like terrorist attacks, rather than common stuff like heart disease. Doesn't reading such things invert your perception, such that you end up focusing on exactly the least relevant things?
Why isn't factor analysis considered the main research tool?
Typically if you have a ton of variables, you can perform a factor analysis, which identifies a small set of factors that explain a huge chunk of the variation across those variables. If you are used to performing factor analysis, this feels like a great way to get an overview of the subject matter. After all, what could be better than knowing the main dimensions of variation?
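For concreteness, here is a small synthetic example of that workflow (my illustration, using scikit-learn's FactorAnalysis; not from the post):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 2))    # two underlying factors
loadings = rng.normal(size=(2, 10))    # how each factor shows up in 10 observed variables
X = latent @ loadings + rng.normal(scale=0.3, size=(1000, 10))

fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_)      # estimated loadings: the main dimensions of variation
print(fa.noise_variance_)  # leftover per-variable variance
```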
Yet a lot of people think of factor analysis as being superficial and uninformative. Often people insist that it only yields aggregates rather than causes, and while that might seem plausible at first, once you dig into it enough, you will see that usually the factors identified are actually causal, so that can't be the real problem.
A related question is why people tend to talk in funky discrete ways when careful quantitative analysis generally finds everything to be continuous. Why do people want clusters more than they want factors? Especially since cluster models tend to be more fiddly and less robust.
Why do people want "the" cause?
There's a big gap between how people intuitively view causal inference (often searching for "the" cause of something), versus how statistics views causal inference. The main frameworks for causal inference in statistics are Rubin's Potential Outcomes framework and Pearl's DAG approach, and both of these view causality as a function from inputs to outputs.
In these frameworks, causality is about functional input/output relationships, and there are many different notions of causal effects, not simply one canonical "cause" of something.
Why are people dissatisfied with GWAS?
In genome-wide association searches, researchers use statistics to identify alleles that are associated with specific outcomes of interest (e.g. health, psychological characteristics, SES outcomes). They've been making consistent progress over time, finding tons of different genetic associations and gradually becoming able to explain more and more variance between people.
Yet GWAS is heavily criticized as "not causal". While there are certain biases that can occur, those biases are usually found to be much smaller than seems justified by these critiques. So what gives?
What value does qualitative r...

Aug 8, 2024 • 14min
LW - It's time for a self-reproducing machine by Carl Feynman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's time for a self-reproducing machine, published by Carl Feynman on August 8, 2024 on LessWrong.
I've wanted to build a self-reproducing machine since I was 17. It's forty-five years later, and it has finally become feasible. (I've done a few other things along the way.) I'm going to describe one such device, and speculate as to its larger implications. It's a pretty detailed design, which I had to come up with to convince myself that it is feasible. No doubt there are better designs than this.
The Autofac
Here's a top-level description of the device I'm thinking of. It's called an Autofac, which is what they were called in the earliest story about them. It looks like a little metal shed, about a meter cubed. It weighs about 50 kilograms. There's a little gnome-sized door on each end. It's full of robot arms and automated machine tools. It's connected to electricity and by WiFi to a data center somewhere.
It has a front door, where it accepts material, and a back door, where it outputs useful objects, and cans of neatly packaged waste. You can communicate with it, to tell it to make parts and assemble them into useful shapes. It can do all the metalworking operations available to a machinist with a good shop at their disposal. In return, it occasionally asks for help or clarification.
One particular thing it can be told to make is another one of itself. This is of course the case we're all interested in. Here's what that looks like. You feed a 60kg package of steel castings, electronics, and other parts, into the door at one end. It starts by building another shed, next to the other end. The two sheds are butted up next to each other, so the rain can't get in. Once it's enclosed, there is no visible progress for about a month, but it makes various metalworking noises.
Then it announces that it's done. The second shed is now another Autofac, and can be carried away to start the process elsewhere. There's also a can full of metal scrap and used lubricant, which has to be disposed of responsibly. This process can be repeated a number of times, at least seven, to produce more offspring. Eventually the original Autofac wears out, but by then it has hundreds of descendants.
The software
The key part of the Autofac, the part that kept it from being built before, is the AI that runs it. Present-day VLMs (vision-language models) are capable of performing short-deadline manual tasks like folding laundry or simple tool use. But they are deficient at arithmetic, long term planning and precisely controlling operations. Fortunately we already have software for these three purposes.
First, of course, we have calculators for doing arithmetic. LLMs can be taught to use these. In the real world, machinists constantly use calculators. The Autofac will be no different.
Second, there is project planning software that lets a human break down an engineering project into tasks and subtasks, and accommodate changes of plan as things go wrong. We can provide the data structures of this software, initially constructed by humans, as a resource for the AI to use. The AI only has to choose the next task, accomplish it or fail, and either remove it from the queue or add a new task to fix the problem.
There are thousands of tasks in the life of an Autofac; fortunately the AI doesn't need to remember them all. The project planning software keeps track of what has been done and what needs to be done.
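A minimal sketch of the kind of task queue being described (purely illustrative; the post doesn't give an implementation, and the task names here are made up):

```python
import random
from collections import deque

# The planner holds the full task list; the AI only has to pick the next task,
# attempt it, and either retire it or push a follow-up task that fixes the problem.
tasks = deque(["machine bearing housing", "assemble spindle", "wire controller"])

def attempt(task: str) -> bool:
    """Stand-in for the AI + CAM layer actually doing the work."""
    return random.random() > 0.3  # pretend roughly 70% of attempts succeed

while tasks:
    task = tasks.popleft()
    if not attempt(task):
        # On failure, add a repair task instead of replanning the whole project.
        tasks.appendleft(f"diagnose and retry: {task}")
```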
Third, there are programs that go from the design of a part to a sequence of machine tool movements that will make that part, and then controls the machine tool motors to do the job. These are called Computer Aided Manufacturing, or CAM. Using CAM relieves the AI of the lowest level responsibilities of controlling motor positions and monitoring position sensors. This software doesn't do everything, of...

Aug 7, 2024 • 5min
EA - Apply to help run EAGxSingapore or EAGxVirtual! by Dion
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to help run EAGxSingapore or EAGxVirtual!, published by Dion on August 7, 2024 on The Effective Altruism Forum.
We're running applications to build two EAGx teams for events later this year:
EAGxSingapore will take place 14-15 December in Singapore. Apply here to join the team.
EAGxVirtual will take place on 15-17 November. Apply here to join the team.
The application for each event takes 15 minutes, and the deadline is Monday, August 26th at 11:59pm Greenwich Mean Time (GMT). We encourage you to apply ASAP, as we will review applications on a rolling basis.
About the conferences
Please see this post for more about the EAGx conference series. The application to express interest in running an event in your area is still open!
EAGxSingapore 2024
This will be the second EAGx conference in Singapore, following a very successful iteration in 2022. We are aiming to support local organizers in hosting EAGxSingapore this December.
We expect the event size to be around 200-350 attendees; the event will primarily be aimed at community members and EA-aligned organizations based in Southeast Asia, though we also intend to invite attendees from other countries and EA-adjacent Asian organizations that might contribute to and benefit from attending the conference.
This event will build on the knowledge and successes of past Southeast Asian events and aim to balance accessibility with higher-context EA material through networking, workshops, talks, small group gatherings, and more.
EAGxVirtual 2024
This will be the third EAGxVirtual conference and we're planning to host it this November.
Previous years have been extremely successful, with over 1400 attendees in 2023. We expect to grow the event size to 1500 or more. The event will be aimed at community members and EA-aligned organizations globally, although we hope to particularly benefit those who have not been able to attend in-person conferences and/or do not have local EA communities.
This event will build on the knowledge and successes of past virtual events and aim to balance accessibility with higher-context EA material through networking, workshops, talks, small group gatherings, and more.
About the organizer roles
Detailed job descriptions can be found here for EAGxSingapore and here for EAGxVirtual. Responsibilities are typically divided into Team Lead, Production, Admissions, Content, and Volunteer Coordination. As noted in the full descriptions, EAGx teams can vary in composition according to members' specific availability, strengths, and interests, so teams can distribute tasks as best suited for their members. EAGxVirtual, in particular, has different role types from an in-person conference.
Additionally, each role may have significant variability in expected time commitments. In most cases, roles will require much less time in the months preceding the conference (anywhere from 2 to 15 hours per week), ramping up to 20-40+ hours per week as the conference approaches.
This flexibility in role responsibilities and time investment means we are interested in hearing from a wide range of applicants! The following elements are likely to apply to all candidates:
Familiarity with the effective altruism community, and a passion for promoting its principles.
Excellent organizational abilities with the capacity to handle multiple tasks and prioritize effectively under tight deadlines.
Strong interpersonal skills and the capability to communicate effectively across various groups and stakeholders.
(Bonus) Experience in event planning or logistics, preferably with large-scale events or conferences; experience with Salesforce or other CRM tools.
(Bonus) Previous attendance and/or volunteer service at other EAG(x) events; understanding and enthusiasm for the EAG(x) conference format.
Role logistics
Pay: All positions are contra...

Aug 7, 2024 • 8min
EA - Our Recommended Charity Ideas for February 2025 by CE
Dive into innovative charity initiatives focused on global welfare and animal rights. Discover the push for cage-free farming in the Middle East, aiming to revolutionize hen welfare. Learn how cognitive-behavioral therapy can help prevent youth crime, with plans to tackle global health challenges. Explore impactful ideas designed to create meaningful social change, from animal advocacy to community safety.

Aug 7, 2024 • 32min
EA - AEBr 2024 Summit Retrospective by Leo Arruda
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AEBr 2024 Summit Retrospective, published by Leo Arruda on August 7, 2024 on The Effective Altruism Forum.
[Portuguese version below]
This retrospective provides an overview of the AEBr 2024 Summit, highlighting successes and areas for improvement. We welcome your feedback and comments to help enhance future EA Brazil conferences.
Bottom Line Up Front
The AEBr Summit 2024 was a success. It had 220 attendees who felt welcome and satisfied, generating around 850 connections at a cost of $45.45 per attendee.
Date: June 29, 2024
Venue: INOVA.USP, University of São Paulo, São Paulo, Brazil.
The first Brazilian national EA conference
~220 people attended (62% were new to EA)
36 speakers
24 sessions including talks, panels, meetups, and office hours
5 organizations represented at the Career fair
Feedback survey results:
A 9.1/10 likelihood-to-recommend score
An average of 6.1 new connections per person
Photos and video!
The AEBr Summit Overview
The AEBr Summit took place on June 29, 2024, at the University of São Paulo, São Paulo, Brazil. It was EA Brazil's first national meeting, with contributions from numerous experts and organizations aligned with Effective Altruism, primarily from the animal cause sector (Sinergia Animal, Animal Equality, Forum Animal, Mercy for Animals, Sea Shepherd and Sociedade Vegetariana Brasileira), as well as health and development organizations like Doebem, Associação Brasileira de Psicólogos Baseadas em Evidências and Impulso Gov.
The primary goal was to expand and strengthen the Brazilian EA community by inviting both newcomers and experienced members to join together for a day of inspiring talks, workshops, and networking. The event was funded by the Centre for Effective Altruism and supported by the Innovation Center of the University of São Paulo (INOVA USP). Delicious vegan meals were provided by Sociedade Vegetariana Brasileira (SVB), contributing to the event's success. The expectation was to bring together 200 people, so it was an accomplishment to register 255 applicants, with 220 attending, including speakers and volunteers.
Highlights
The event featured
220 attendees
36 speakers
40 very dedicated volunteers
26 sessions (event program in Portuguese)
13 talks on the main causes of Effective Altruism.
7 workshops.
4 Q&A sessions with experts.
2 meet ups.
5 organizations represented at the Career fair
Event photos and video (many thanks to Ivan Martucci for the professional editing).
Based on the feedback responses from 60 attendees (27%):
Average participant satisfaction: 9.2 out of 10 in Likelihood to Recommend. This is higher than the average for EA Global and EAGx.
An average of 6.1 new connections per person. This is lower than most EAG and EAGx events, but the event lasted only one day.
46% of connections were considered potentially 'impactful' by the attendees (Note: assessing the impact of connections can be subjective).
Team
The AEBr Summit core team comprised Juana Maria, Leo Arruda, Ivan Martucci, and Adriana Arauzo. They were supported by Ollie Base and Arthur Malone from the CEA events team.
The core team worked remotely except for some site visits. Two members of the team worked together in person in São Paulo during the final week. While remote work was effective, in-person collaboration proved significantly easier, especially since some members didn't know each other well beforehand. In hindsight, it would have been beneficial for the team to meet and work in person earlier, ideally two weeks in advance of the summit rather than just one.
There were 40 volunteers divided into various roles: communication, logistics, food and catering, room supervision, speaker liaison, admissions, reception, and community health. Most volunteers were already engaged in the community, but a few were completely new.
Special thanks to Ivan...


