The Nonlinear Library

The Nonlinear Fund
Apr 4, 2024 • 4min

EA - Announcing the Pivotal Research Fellowship - Apply Now! by Tobias Häberli

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Pivotal Research Fellowship - Apply Now!, published by Tobias Häberli on April 4, 2024 on The Effective Altruism Forum. We are happy to announce that the Swiss Existential Risk Initiative (CHERI) is now Pivotal Research and the CHERI research fellowship is called the Pivotal Research Fellowship. Apply for the Pivotal Research Fellowship this summer in London to research global catastrophic risks (GCR) with experienced mentors on technical AI safety, AI governance, biosecurity & pandemic preparedness. Research Fellowship The Pivotal Research Fellowship will take place in London from July 1st to August 30th, 2024. In our fourth research fellowship, we offer a 9-week program providing fellows with experienced mentors and research managers. Accepted applicants will have the opportunity to work full-time on GCR reduction focusing on emerging technologies: we look forward to hosting fellows working on technical AI safety, AI governance, biosecurity & pandemic preparedness. Overview of the fellowship Applicants submit a preliminary research proposal that outlines what they are interested in working on during the fellowship. Once accepted, fellows will collaborate with our research managers to adapt and optimize their proposal, and identify suitable mentors for their project. Fellows are mentored by experienced researchers and policymakers. A selection of our previous mentors can be found here. The research manager is a key contact throughout the fellowship, assisting with research, enhancing productivity, and providing career support. The fellowship will be located at the LISA offices in London. The offices are a hub for numerous significant initiatives within the GCR domain, including BlueDot Impact, Apollo Research, and the MATS extension program. Fellows receive a stipend of £5000, travel and accommodation expense support, as well as free lunch and dinner from Monday to Friday. Anyone is welcome to apply. We are particularly excited about applicants with little experience but a deep interest in GCR research. Application Deadline: Sunday, 21st of April, at 23:59 (UTC+1). Reasons to Apply Gain experience in AI safety and biorisk research through the guidance of your experienced mentor. Set yourself on a path to a meaningful career, focused on impactful work to improve global safety and security. Co-work at a GCR hub surrounded by like-minded researchers. In our experience, many excellent candidates hesitate to apply. If you're unsure, we encourage you to err on the side of applying. We also encourage you to share this opportunity with others who may be a good fit. Pivotal The fellowship's rebranding decision stems from the organization's operations no longer being confined to projects in Switzerland. As Pivotal, we strive to carry out various projects as a principal measure to support the GCR talent pipeline. We believe fellowships are still one of the most promising opportunities for upcoming researchers to get started with GCR research. In the past, fellowships have been a significant stepping stone, enabling participants to start impactful careers within and outside the GCR ecosystem. With Pivotal's rebranding, leadership has also been transitioning: Naomi Nederlof, who held the position of Director at CHERI, has transitioned to a role as an advisor at Pivotal. 
Tobias Häberli, previously the Program Director of CHERI, and Tilman Räuker, formerly a Technical AI Safety Research Manager at ERA, are now serving as co-directors of Pivotal. If you have any questions, please feel free to contact us. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 4, 2024 • 4min

EA - The CEA Events Team is hiring (apply by April 8) by OllieBase

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The CEA Events Team is hiring (apply by April 8), published by OllieBase on April 4, 2024 on The Effective Altruism Forum. The CEA Events Team is currently hiring for several positions, all of which can be either remote or Oxford-based: Project Manager (Apply by April 8th): You'll collaborate with experts and provide end-to-end event support to make events happen that could be valuable for making progress in their fields and that might not happen otherwise. Events Associate (Apply by April 8th): You'll support the planning and execution of our events and play a key role in enhancing their impact. Events Generalist (expression of interest): Join our team and support our expanding portfolio of events through work on admissions, content, event design, production, or volunteer management. You can apply for the Project Manager and Events Associate positions with the same application. If you apply for either of these roles, we will also consider that an expression of interest for the generalist role. We're looking for people who share our values of earnest ambition, independent motivation, and interest in altruistic impact. You should also have: A strong alignment with and understanding of effective altruism and its principles. A keen eye for detail, quality, and efficiency. The ability to juggle multiple tasks and deadlines. A collaborative and supportive mindset, and the ability to communicate clearly and respectfully with a diverse range of stakeholders. A growth-oriented and flexible attitude, and the willingness to learn from feedback and adapt to changing circumstances. Experience running events or large projects is also preferred, but it's not a requirement. A lot of us joined the events team without experience running events. Why join the events team? Our analyses[1] and the data collected by our partners and funders suggest that our events help attendees create high-impact connections: they find future mentors, employers, donors, and collaborators. Events can also help people learn about ideas, improve their plans, and coordinate with each other. The two open roles are on our Partner Events team. This team organizes events for key stakeholders in the EA community and adjacent communities, with a focus on people working on AI safety and on existential risks from other emerging technologies. Since early 2023, the Partner Events team has run two Summits on Existential Security and one Meta Coordination Forum, and collaborated with external partners to run an Effective Giving Summit, an Existential InfoSec Forum, and other AI-safety or biosecurity-focused events. Attendees at these events regularly report that participating in our events has improved counterfactual outcomes for critical projects; we've learned of attendees taking senior roles at AI safety organizations, attracting significant fundraising, founding new organizations, and making major updates to their work as a result of attending events led by the Partner Events team. Our culture. We have an energetic, excitable, and collaborative team culture. We help people play to their strengths by trusting them, empowering them, and sharing honest feedback, and we openly reflect on how to improve and support each other. Most of us work together from the same room in our Oxford office, though some people work remotely. 
You can read a bit more about what it's like to work on the events team, and the benefits of doing so, in Michel's recent post. Why should you not join the events team? We're often sprinting towards rigid event deadlines, which doesn't match everyone's preferences for when and how much to work. We often have to work based on informed guesswork (i.e. if you have trouble proceeding without certainty you might end up second-guessing yourself a lot). Our work also has a repetitive cadence. Often, our tea...
Apr 4, 2024 • 29min

LW - Best in Class Life Improvement by sapphire

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Best in Class Life Improvement, published by sapphire on April 4, 2024 on LessWrong. There is an enormous amount of crappy self-help advice. Most supplements do nothing. However, some substances and practices can dramatically improve your life. It's worth being explicit about what those are in my experience. The American medical system endorses all of these treatments and methods, and you can implement them with a doctor's supervision. The only way I differ from the American medical system is that they operate under a paradigm of treating diseases or perhaps what might be better understood as serious deficiencies. But if a technique is powerful enough to help the ill it is plausible it can also help the well. Make your own choices and set yourself free. Before reading this advice, it is important to note that drug users use a lot of drugs. In general, recreational drug users take their drugs at doses so much higher than psychiatric patients that they're basically two different chemicals. A lot of our impressions of drugs, what side effects they have, and how dangerous they are get shaped by the recreational users, not the patients. This is sometimes even true for the doctors who are supposed to prescribe to the patients and give them good advice. While studies of recreational user populations can sometimes be helpful in flagging an issue for consideration, we should be judging the clinical risks based on studies of clinical populations. Ketamine Ketamine is extremely effective and extremely fast-acting. It often solves depression in a single day. Hence, it should be among the first things you try if you have mood issues. From Scott's writeup: The short version: Ketamine is a new and exciting depression treatment, which probably works by activating AMPA receptors and strengthening synaptic connections. It takes effect within hours and works about two or three times as well as traditional antidepressants. Most people get it through heavily regulated and expensive esketamine prescriptions or even more expensive IV ketamine clinics. Still, evidence suggests that getting it prescribed cheaply and conveniently from a compounding pharmacy is equally effective. A single dose of ketamine lasts between a few days and a few weeks, after which some people will find their depression comes back; long-term repeated dosing with ketamine anecdotally seems to work great but hasn't been formally tested for safety. 6: How effective is ketamine? Pretty effective. Studies find the effect of ketamine peaks about 24 hours after use. A meta-analysis finds that by that time, around 50% of patients are feeling better (defined as 50% symptom reduction) compared to less than 10% of patients who got a placebo. A more recent Taiwanese study finds roughly similar numbers. Another way to measure effectiveness is through effect size statistics. The effect size of normal antidepressants like SSRIs is around 0.3. The effect size of ketamine is between 0.6 and 1.0, so about two to three times larger. Ketamine is a psychoactive drug. The state it induces is hard to describe, but it can be psychedelic in its own way. My advice is to take enough ketamine that you are clearly quite high but not so much you are 'out in space.' Ideally, the experience won't be very scary. Ketamine is very short-acting. 
The peak high should only last about 45 minutes, and the total trip should be under two hours. I recommend either doing a very simple breathing meditation (described in detail later in this document) or enjoying media you find uncomplicatedly pleasant. Watch a nature documentary about trees. Don't watch one about predators. Listen to music that makes you happy. It's important to get your setting right. Moving around on ketamine makes people nauseous. So, have water and nausea meds (ondansetron or Dramamine) rig...
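As a quick check on the "two to three times larger" claim, the ratio of the quoted effect sizes works out as follows (numbers taken directly from the figures above; this is arithmetic on the cited estimates, not additional evidence):

$$
\frac{d_{\text{ketamine}}}{d_{\text{SSRI}}} \approx \frac{0.6}{0.3} = 2.0 \quad \text{to} \quad \frac{1.0}{0.3} \approx 3.3
$$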
Apr 3, 2024 • 3min

EA - $250K in Prizes: SafeBench Competition Announcement by Center for AI Safety

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $250K in Prizes: SafeBench Competition Announcement, published by Center for AI Safety on April 3, 2024 on The Effective Altruism Forum. TLDR: CAIS is distributing $250,000 in prizes for benchmarks that empirically assess AI safety. This project is supported by Schmidt Sciences; submissions are open until February 25th, 2025. Winners will be announced April 25th, 2025. To view additional info about the competition, including submission guidelines and FAQs, visit https://www.mlsafety.org/safebench. If you are interested in receiving updates about SafeBench, feel free to sign up on the homepage using the link above. About the Competition: The Center for AI Safety is offering prizes for the best benchmarks across the following four categories: Robustness: designing systems to be reliable in the face of adversaries and highly unusual situations. Monitoring: detecting malicious use, monitoring predictions, and discovering unexpected model functionality. Alignment: building models that represent and safely optimize difficult-to-specify human values. Safety Applications: using ML to address broader risks related to how ML systems are handled. Judges: Zico Kolter (Carnegie Mellon), Mark Greaves (AI2050), Bo Li (University of Chicago), and Dan Hendrycks (Center for AI Safety). Timeline: Mar 25, 2024: Competition Opens; Feb 25, 2025: Submission Deadline; Apr 25, 2025: Winners Announced. Competition Details: Prizes: There will be three prizes worth $50,000 and five prizes worth $20,000. Eligibility: Benchmarks released prior to the competition launch are ineligible for prize consideration. Benchmarks released after competition launch are eligible. More details about prize eligibility can be found in our terms and conditions. Evaluation criteria: Benchmarks will be assessed according to these evaluation criteria. In order to encourage progress in safety without also encouraging general advances in capabilities, benchmarks must clearly delineate safety from capabilities. Submission Format: If you have already written a paper on your benchmark, submit that (as long as it was published after the SafeBench launch date of March 25th, 2024). Otherwise, you may submit a thorough write-up of your benchmark, including source code. An example of such a write-up can be found in this document. Dataset Policy: By default, we will require the code and dataset for all submissions to be publicly available on GitHub. However, if the submission deals with a dangerous capability, we will review whether to publicly release the dataset on a case-by-case basis. If you are interested in receiving updates about SafeBench, feel free to sign up on the homepage: https://www.mlsafety.org/safebench. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 3, 2024 • 2min

EA - Patrick Gruban has joined the EV UK board by Rob Gledhill

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Patrick Gruban has joined the EV UK board, published by Rob Gledhill on April 3, 2024 on The Effective Altruism Forum. I'm happy to announce that Patrick Gruban has recently joined the board of Effective Ventures Foundation (UK). This update follows our previous announcement about other changes to the EV UK and EV US boards, and it is our hope that Patrick will be the last new trustee to join before all projects fiscally sponsored by EV are offboarded into independent entities. Patrick is currently the co-Director of EA Germany, where he leads the national group for the third largest[1] EA community. In addition to his current role, he brings a wealth of experience and operational knowledge from his career as a serial entrepreneur, which stretches back to the mid-1990s. In Patrick's words, "I'm excited to support the EA ecosystem's evolution through projects spinning off from EV. Nine years ago, I was inspired by EA ideas. Since 2020, I have been an active member and organizer of my local group and, since 2023, of the national EA community in Germany. I see my mission as using my experience as an entrepreneur to support the EA community to help people have more impact. Having expressed critiques of the EV and other EA boards last year, I'm particularly enthusiastic about this opportunity to make a difference." The EV UK trustees and I are excited for Patrick to be joining the board, and especially to be able to benefit from his previous experience in other leadership roles. We believe that his background will be of particular value to us during this last phase of EV. Other EV UK and EV US trustee updates To follow up on some of our previously announced changes, I also wanted to confirm that Tasha McCauley and Claire Zabel have stepped down from the EV UK board. Nicole Ross is also intending on stepping down from the EV US board shortly, and Eli Rose will complete his transition from the EV US board to the EV UK board in the near future. Finally, we had originally intended for Johnstuart Winchell to serve as an EV UK trustee, but he's instead switched over to serving on our EV US board. ^ Germany has remained the third largest country for respondents in the EA yearly survey (after the United States and the United Kingdom) as of 2022. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 3, 2024 • 14min

AF - The Case for Predictive Models by Rubi Hudson

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Predictive Models, published by Rubi Hudson on April 3, 2024 on The AI Alignment Forum. I'm also posting this on my new blog, Crossing the Rubicon, where I'll be writing about ideas in alignment. Thanks to Johannes Treutlein and Paul Colognese for feedback on this post. Just over a year ago, the Conditioning Predictive Models paper was released. It laid out an argument and a plan for using powerful predictive models to reduce existential risk from AI, and outlined some foreseeable challenges to doing so. At the time, I saw the pieces of a plan for alignment start sliding together, and I was excited to get started on follow-up work. Reactions to the paper were mostly positive, but discussion was minimal and the ideas largely failed to gain traction. I suspect that muted reception was in part due to the size of the paper, which tried to both establish the research area (predictive models) and develop a novel contribution (conditioning them). Now, despite retaining optimism about the approach, even the authors have mostly shifted their focus to other areas. I was recently in a conversation with another alignment researcher who expressed surprise that I was still working on predictive models. Without a champion, predictive models might appear to be just another entry on the list of failed alignment approaches. To my mind, however, the arguments for working on them are as strong as they've ever been. This post is my belated attempt at an accessible introduction to predictive models, but it's also a statement of confidence in their usefulness. I believe the world would be safer if we can reach the point where the alignment teams at major AI labs consider the predictive models approach among their options, and alignment researchers have made conscious decisions whether or not to work on them. What is a predictive model? Now the first question you might have about predictive models is: what the heck do I mean by "predictive model"? Is that just a model that makes predictions? And my answer to that question would be "basically, yeah". The term predictive model is referring to the class of AI models that take in a snapshot of the world as input, and based on their understanding output a probability distribution over future snapshots. It can be helpful to think of these snapshots as represented by a series of tokens, since that's typical for current models. As you are probably already aware, the world is fairly big. That makes it difficult to include all the information about the world in a model's input or output. Rather, predictive models need to work with more limited snapshots, such as the image recorded by a security camera or the text on a page, and combine that with their prior knowledge to fill in the relevant details. Competitiveness One reason to believe predictive models will be competitive with cutting edge AI systems is that, for the moment at least, predictive models are the cutting edge. If you think of pretrained LLMs as predicting text, then predictive models are a generalization that can include other types of data. Predicting audio and images are natural next steps, since we have abundant data for both, but anything that can be measured can be included. This multimodal transition could come quite quickly and alongside a jump in capabilities. 
If language models already use internal world models, then incorporating multimodal information might well be just a matter of translating between data types. The search for translations between data types is already underway, with projects from major labs like Sora and Whisper. Finding a clean translation, either by gradient descent or manually, would unlock huge amounts of training data and blow past the current bottleneck. With that potential overhang in mind, I place a high value on anticipating and solvi...
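To make the interface concrete: below is a minimal sketch, in Python, of the kind of model the post describes, which conditions on a limited snapshot of the world (represented as a token sequence) and returns a probability distribution over candidate future snapshots. The class and method names are hypothetical illustrations, not code from the post or from any existing library.

```python
# Minimal illustration of the "predictive model" interface described above.
# All names are hypothetical; this is a sketch, not code from the post.

from dataclasses import dataclass
from typing import Dict, Sequence


@dataclass
class Snapshot:
    """A limited observation of the world (e.g. a camera frame or a page of
    text), represented as a sequence of discrete tokens."""
    tokens: Sequence[int]


class PredictiveModel:
    """Condition on an observed snapshot and return a probability
    distribution over candidate future snapshots."""

    def predict(self, observed: Snapshot,
                candidates: Sequence[Snapshot]) -> Dict[int, float]:
        raise NotImplementedError


class UniformBaseline(PredictiveModel):
    """Trivial stand-in: equal probability on every candidate. A real
    predictive model (e.g. a pretrained LLM) would use its world knowledge
    to concentrate probability on plausible continuations."""

    def predict(self, observed, candidates):
        p = 1.0 / len(candidates)
        return {i: p for i in range(len(candidates))}


# Usage: score two possible continuations of an observed text snapshot.
model = UniformBaseline()
observed = Snapshot(tokens=[101, 2023, 2003])
futures = [Snapshot(tokens=[1037]), Snapshot(tokens=[1996])]
print(model.predict(observed, futures))  # {0: 0.5, 1: 0.5}
```

A pretrained language model is one instance of this interface, with pages of text as snapshots and next-token predictions as the distribution over futures.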
Apr 3, 2024 • 3min

EA - Magnify Mentoring is hiring and our mentee rounds are open! by KMF

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Magnify Mentoring is hiring and our mentee rounds are open!, published by KMF on April 3, 2024 on The Effective Altruism Forum. Two updates from Magnify Mentoring which I hope may be useful to some readers: 1. We are open for mentees: Magnify is running a pilot round for people from underrepresented groups. Applications are open now. You can apply here. We will open a round for women, non-binary, and trans people of all genders in the coming 2-3 months. Our pilot round for underrepresented groups is meant to be broadly inclusive. It includes, but is not limited to, people from low- to middle-income countries, people of color, and people from low-income households. Past mentees have been particularly successful when they have a sense of what they would like to achieve through mentorship. The matching process normally takes us 4-6 weeks. We look to match pairings based on the needs and availability of the mentee and mentor, their goals, career paths, and what skills they are looking to develop. Unfortunately, we frequently have more mentees apply than there are mentors available. As this is a pilot round, the discrepancy could be even greater than typical. If you are not matched because of a shortage of mentors, please apply again! On average, mentees and mentors meet once a month for 60-90 minutes with a series of optional prompt questions prepared by our team. In the post-round feedback form, the average for "I recommend being a Magnify mentee" has been consistently over 9/10 for the last rounds. You can see testimonies from some of our mentees here, here, and here. Some reported outcomes for mentees were: Advice, guidance, and resources on achieving goals. Connection and support in pursuing opportunities (jobs, funding). Confidence-building. Specific guidance (How to network? How to write a good resume?). Joining a welcoming community for support through challenges. 2. We are hiring our first additional staff member! We are looking to hire a Project Manager who will primarily focus on building a productive and fun Magnify Mentoring community and identifying opportunities to support our members in their professional and personal journeys. You can find out more here. Applications for both will close on the 15th of April. We are so excited to hear from you! If you have any questions, please contact Kathryn at . Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 3, 2024 • 40min

AF - Sparsify: A mechanistic interpretability research agenda by Lee Sharkey

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sparsify: A mechanistic interpretability research agenda, published by Lee Sharkey on April 3, 2024 on The AI Alignment Forum. Over the last couple of years, mechanistic interpretability has seen substantial progress. Part of this progress has been enabled by the identification of superposition as a key barrier to understanding neural networks (Elhage et al., 2022) and the identification of sparse autoencoders as a solution to superposition (Sharkey et al., 2022; Cunningham et al., 2023; Bricken et al., 2023). From our current vantage point, I think there's a relatively clear roadmap toward a world where mechanistic interpretability is useful for safety. This post outlines my views on what progress in mechanistic interpretability looks like and what I think is achievable by the field in the next 2+ years. It represents a rough outline of what I plan to work on in the near future. My thinking and work are, of course, very heavily inspired by the work of Chris Olah, other Anthropic researchers, and other early mechanistic interpretability researchers. In addition to sharing some personal takes, this article brings together - in one place - various goals and ideas that are already floating around the community. It proposes a concrete potential path for how we might get from where we are today in mechanistic interpretability to a world where we can meaningfully use it to improve AI safety. Key frameworks for understanding the agenda. Framework 1: The three steps of mechanistic interpretability. I think of mechanistic interpretability in terms of three steps: The three steps of mechanistic interpretability[1]: Mathematical description: In the first step, we break the neural network into constituent parts, where the parts are simply unlabelled mathematical objects. These may be e.g. neurons, polytopes, circuits, feature directions (identified using SVD/NMF/SAEs), individual parameters, singular vectors of the weight matrices, or other subcomponents of a network. Semantic description: Next, we generate semantic interpretations of the mathematical object (e.g. through feature labeling). In other words, we try to build a conceptual model of what each component of the network does. Validation: We need to validate our explanations to ensure they make good predictions about network behavior. For instance, we should be able to predict that ablating a feature with a purported 'meaning' (such as the 'noun gender feature') will have certain predictable effects that make sense given its purported meaning (such as the network becoming unable to assign the appropriate definite article to nouns). If our explanations can't be validated, then we need to identify new mathematical objects and/or find better semantic descriptions. The field of mechanistic interpretability has repeated this three-step cycle a few times, cycling through explanations given in terms of neurons, then other objects such as SVD/NMF directions or polytopes, and most recently SAE directions. My research over the last couple of years has focused primarily on identifying the right mathematical objects for mechanistic explanations. I expect there's still plenty of work to do on this step in the next two years or so (more on this later). To guide intuitions about how I plan to pursue this, it's important to understand what makes some mathematical objects better than others. 
For this, we have to look at the description accuracy vs. description length tradeoff. Framework 2: The description accuracy vs. description length tradeoff You would feel pretty dissatisfied if you asked someone for a mechanistic explanation of a neural network and they proceeded to read out of the float values of the weights. But why is this dissatisfying? Two reasons: When describing the mechanisms of any system, be it an engine, a solar system, o...
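For readers unfamiliar with the sparse autoencoders mentioned above, here is a minimal sketch in Python (PyTorch) of the standard setup: an encoder with a ReLU produces sparsely firing feature activations, a decoder reconstructs the original activations, and training trades reconstruction accuracy against an L1 sparsity penalty. The dimensions and the penalty coefficient below are illustrative assumptions, not values from the agenda.

```python
# Minimal sparse-autoencoder sketch; illustrative only, not the agenda's own code.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activations -> feature coefficients
        self.decoder = nn.Linear(d_features, d_model)  # feature coefficients -> reconstruction

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.encoder(acts))  # sparse, non-negative feature activations
        recon = self.decoder(feats)
        return recon, feats


def sae_loss(acts, recon, feats, l1_coeff=1e-3):
    # Trade off reconstruction accuracy against sparsity (description length).
    reconstruction = ((recon - acts) ** 2).mean()
    sparsity = feats.abs().mean()
    return reconstruction + l1_coeff * sparsity


# Usage: fit feature directions to a batch of (stand-in) model activations.
sae = SparseAutoencoder(d_model=512, d_features=4096)
acts = torch.randn(64, 512)   # placeholder for real residual-stream activations
recon, feats = sae(acts)
loss = sae_loss(acts, recon, feats)
loss.backward()               # an ordinary training loop would take a gradient step here
```

The l1_coeff knob is one concrete version of the accuracy-versus-description-length tradeoff the post goes on to discuss: a larger penalty buys sparser (shorter) descriptions at the cost of reconstruction accuracy.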
Apr 3, 2024 • 5min

EA - ACE's New Application Process for Our 2024 Charity Evaluations by Animal Charity Evaluators

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ACE's New Application Process for Our 2024 Charity Evaluations, published by Animal Charity Evaluators on April 3, 2024 on The Effective Altruism Forum. Note: To prevent this important update from being overlooked due to April Fools' posts, the Forum team, with the authors' agreement, has changed the publication date to April 3rd. We are excited to announce that we are launching an application for our Charity Evaluations program. The new application form will help us simplify our evaluation process and obtain valuable information about organizations and their work at an early stage. Every year, we invite approximately 15 promising charities to participate in our evaluation process. To select this group of charities, we consider organizations around the world and assess whether they seem likely to do exceptional, cost-effective work to assist animals and, thus, become one of our Recommended Charities. To increase the chances of selecting and inviting the most effective organizations to our evaluation process, we have decided to introduce an application. Benefits of the new application process. We expect that receiving essential information about organizations through the application form will help us make more informed decisions. Gathering relevant data from all organizations can help reduce potential biases such as misclassification bias, observer bias, and recall bias that may arise due to the high variance in the quantity and quality of publicly available information across organizations during the charity selection process. Ultimately, we aim to increase the likelihood of finding and recommending the most effective charities. The application form is intended to ensure charities meet the basic eligibility criteria and are willing and able to be evaluated. This will help to minimize the number of charities that decline our evaluation invitation, which can cause delays at the beginning of the evaluation season. Finally, we expect that this step will increase the transparency of our charity selection process. While we will not share the exact responses to the application questions for confidentiality reasons, we will be able to share anonymized, aggregate results to help clarify our decision-making at this stage. Limitations and how we will address them. Having an application form requires more capacity on both sides. Charities need the capacity to respond to the application form, and our team needs the capacity to review applications before inviting charities. We aim to be mindful of everyone's capacity by asking only the most decision-relevant questions. With this in mind, we designed the application form in two dependent phases so that applicants who do not meet basic eligibility criteria (Phase 1) avoid spending unnecessary time responding to more in-depth questions (Phase 2). Because this is a new step we are implementing this year, we ran a pilot survey with potentially impactful charities to test the clarity and relevance of the application questions and gather more general feedback. We are very grateful to the pilot participants. Thanks to them, we have made important improvements to the application form and likely reduced the chance of unforeseen issues. Application content. The application form is divided into two phases. Phase 1 consists of questions about the organization's basic details and eligibility. 
To meet the eligibility requirements to participate in our 2024 charity evaluations, charities must: primarily work to help farmed animals and/or wild animals; not be in an exploratory or testing phase where there is significant uncertainty about which of the charity's programs will be scaled up; have been an organization for at least three years; have at least three paid full-time equivalents (including full-time, part-time, and contractors); have an annual exp...
Apr 3, 2024 • 2min

EA - Quick Update on Leaving the Board of EV by Rebecca Kagan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick Update on Leaving the Board of EV, published by Rebecca Kagan on April 3, 2024 on The Effective Altruism Forum. A brief and belated update: When I resigned from the board of EV US last year, I was planning on writing about that decision. But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me, and I believed it wouldn't make a difference. However, I want to make it clear that I resigned last year due to significant disagreements with the board of EV and EA leadership, particularly concerning their actions leading up to and after the FTX crisis. While I certainly support the boards' decision to pay back the FTX estate, spin out the projects as separate organizations, and essentially disband EV, I continue to be worried that the EA community is not on track to learn the relevant lessons from its relationship with FTX. Two things that I think would help (though I am not planning to work on either myself): EA needs an investigation, done externally and shared publicly, on mistakes made in the EA community's relationship with FTX.[1] I believe there were extensive and significant mistakes made which have not been addressed. (In particular, some EA leaders had warning signs about SBF that they ignored, and instead promoted him as a good person, tied the EA community to FTX, and then were uninterested in reforms or investigations after the fraud was revealed). These mistakes make me very concerned about the amount of harm EA might do in the future. EA also needs significantly more clarity on who, if anyone, "leads" EA and what they are responsible for. I agree with many of Will MacAskill's points here and think confusion on this issue has indirectly resulted in a lot of harm. CEA is a logical place to house both of these projects, though I also think leaders of other EA-affiliated orgs, attendees of the Meta Coordination Forum, and some people at Open Philanthropy would also be well-suited to do this work. I continue to be available to discuss my thoughts on why I left the board, or on EA's response to FTX, individually as needed. ^ Although EV conducted a narrow investigation, the scope was far more limited than what I'm describing here, primarily pertaining to EV's legal exposure, and most results were not shared publicly. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
