The Nonlinear Library

The Nonlinear Fund
Apr 5, 2024 • 22min

EA - On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism by Omnizoid

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism, published by Omnizoid on April 5, 2024 on The Effective Altruism Forum. Leif Wenar recently published a critique of effective altruism that seems to be getting a lot of hype. I don't know why. There were a few different arguments in the piece, none of which were remotely convincing. Yet more strangely, he doesn't object much to EA as a whole - he just points to random downsides of EA and is snarky. If I accepted every claim in his piece, I'd come away with the belief that some EA charities are bad in a bunch of random ways, but I would believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques. I'm not going to quote Wenar's entire article, as it's quite long and mostly irrelevant. It contains, at various points, bizarre evidence-free speculation about the motivations of effective altruists. He writes, for instance, "Ord, it seemed, wanted to be the hero - the hero by being smart - just as I had. Behind his glazed eyes, the hero is thinking, 'They're trying to stop me.'" I'm sure this is rooted in Ord's poor relationship with his mother! At another point, he mistakes MacAskill's statement that there's been a lot of aid in poor countries and that things have gotten better for the claim that aid is responsible for the entirety of the improvement. These strange status games about credit and reward and heroism demonstrate a surprising moral shallowness, caring more about whether people take credit for doing things than about what is done. He says, for instance, after quoting MacAskill saying it's possible to save a life for a few thousand dollars: But let's picture that person you've supposedly rescued from death in MacAskill's account - say it's a young Malawian boy. Do you really deserve all the credit for "saving his life"? Didn't the people who first developed the bed nets also "make a difference" in preventing his malaria? Well, as a philosopher, Wenar should know that two things can both cause something else. If there's a 9-judge panel evaluating an issue, and one side wins on a 5-4, each judge in the majority caused the victory, in the relevant, counterfactual sense - had they not acted, the victory wouldn't have occurred. MacAskill wasn't talking about apportioning blame or brownie points - just describing one's opportunity to do enormous amounts of good. Would Wenar object to the claim that it would be important to vote if you knew your candidate would be better and that your vote would change the election, on the grounds that you don't deserve all the credit for it - other voters get some too? Wenar's objection also repeats the old objection that Sam Bankman-Fried used EA principles to do fraud, so EA must be bad, ignoring, of course, the myriad responses that have been given to this objection. Alex Strasser has addressed this at length, as have I (albeit at less length than Strasser). Pointing out that people have done fraud in the name of EA is no more an objection to EA than it would be an objection to some charity to note that it happened to receive funds from Al Capone. Obviously one should not carry out fraud and should take common-sense norms seriously, as EA leaders have implored repeatedly for years. The article takes random stabs at specific claims that have been made by EAs. 
Yet strangely, despite the obvious cherry-picking, where Wenar is attempting to target the most errant claims ever made by EAs, every one of his objections to those random out-of-context quotes ends up being wrong. For instance, he claims that MacAskill's source for the claim that by "giving $3,000 to a lobbying group called Clean Air Task Force (CATF)," "you can reduce carbon emissions by a massive 3,000 metric tons per year," is "one of Ord's research assistants ...
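To make the counterfactual point in the 5-4 judge example above concrete, here is a minimal sketch (not from the original post; the votes and names are made up) checking that flipping any single majority vote changes the outcome, so each majority judge is a counterfactual cause of the win:

```python
# Minimal illustration of counterfactual causation in a 5-4 vote.
# Not from the original post; judges and votes are invented for illustration.

votes = {"A": 1, "B": 1, "C": 1, "D": 1, "E": 1,  # majority for side 1
         "F": 0, "G": 0, "H": 0, "I": 0}          # dissenters

def side_one_wins(vote_map):
    """Side 1 wins if it has a strict majority of the panel."""
    return sum(vote_map.values()) > len(vote_map) / 2

assert side_one_wins(votes)  # 5-4: side 1 wins

# A judge is a counterfactual cause of the win if flipping only their
# vote changes the outcome.
for judge, vote in votes.items():
    flipped = dict(votes, **{judge: 1 - vote})
    is_cause = side_one_wins(votes) != side_one_wins(flipped)
    print(judge, "counterfactual cause of the win:", is_cause)

# Each of the five majority judges comes out as a counterfactual cause:
# "caused it" is not a prize that only one contributor can hold.
```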
Apr 5, 2024 • 4min

EA - Farmed Animal Funders' Request for Proposals: Pooled Fund Ideas Towards Ending Factory Farming by Zoë Sigle

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Farmed Animal Funders' Request for Proposals: Pooled Fund Ideas Towards Ending Factory Farming, published by Zoë Sigle on April 5, 2024 on The Effective Altruism Forum. Farmed Animal Funders is soliciting proposals for pooled fund ideas that will fill an unmet need for a high-impact charitable intervention playing a critical role in the movement to end factory farming. Proposals are due by May 31st, 2024. Farmed Animal Funders (FAF) is a donor community made up of individuals and foundations giving more than $250,000 per year to end factory farming. A pooled fund from FAF members and new donors can play an important role in filling financial gaps across multiple nonprofits while relying on unique expertise to distribute funds impactfully. We are seeking pooled fund proposals that meet the following criteria: The proposal supports efforts to end factory farming. The specific theory of change, strategy, intervention, or geography is unlikely to receive sufficient funding to meet the need/problem without the attention of a pooled fund. The projected animal impact of this pooled fund is exceptionally high. Multiple organizations can effectively absorb and deploy at least $1 to $3 million USD, or more for exceptional opportunities, combined over a defined period of funding (e.g., 1 year, 3 years) to advance the proposed work. We encourage collaboration where strategic and enthusiastically supported by all organizations. One or more people have unique subject matter expertise to meaningfully evaluate and advise on grant applications. Suggestions of specific advisors (which might be you) are preferred, but not required. Pooled fund themes could include specific geographies, specific interventions, or specific time-sensitive opportunities. As one example, FAF previously hosted a pooled fund for organizations working on farm animal welfare policy in Europe based on a time-sensitive legislative opportunity that benefited from legislative advocacy efforts across several countries. We encourage creativity! We are also open to supporting existing pooled funds that have an unmet financial need. We will evaluate pooled fund ideas based on: How well they meet the criteria listed above. How interested FAF members are in funding as a pool (rather than funding organizations directly and individually). This depends on factors like member interest in the topic and whether funders' in-house advisory expertise is sufficient for evaluation. Potential to recruit new funders to the movement to end factory farming, such as with novel issue framing. We intend to review proposals, including potential follow-up questions or calls, in June 2024. Top proposals will be shared with FAF members for feedback. If any ideas are selected, FAF intends to: Raise funds for the pool from existing FAF members and relevant prospective funders seeking high-impact opportunities to give. Distribute grant applications to relevant nonprofit organizations. Identify and select subject matter experts to advise on grant evaluations. Evaluate grant applications with selected fund advisors and FAF members to make funding decisions. Distribute funds to selected nonprofit organizations. 
To formally submit a proposal for a pooled fund: Please submit a document no longer than two pages to pooled-fund@farmedanimalfunders.org by May 31st, 2024 with the following: Email subject: Pooled Fund RFP Your name, affiliation(s), and contact information (email, phone, and address). Description of the pooled fund idea, including the suggested organizations involved, timeframe, potential expert advisors, and funding need/request. Description of how the idea meets each of the described criteria. We are more interested in the content of the proposal, so no need to invest time into formatting. If you prefer early feedback on one or more id...
Apr 4, 2024 • 20min

LW - LLMs for Alignment Research: a safety priority? by abramdemski

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs for Alignment Research: a safety priority?, published by abramdemski on April 4, 2024 on LessWrong. A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research. This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming vs technical alignment research feels very different (at least to me). LLMs might generate junk code, but you can keep pointing out the problems with the code, and the code will eventually work. This can be faster than doing it myself, in cases where I don't know a language or library well; the LLMs are moderately familiar with everything. When I try to talk to LLMs about technical AI safety work, however, I just get garbage. I think a useful safety precaution for frontier AI models would be to make them more useful for safety research than capabilities research. This extends beyond applying AI technology to accelerate safety research within top AI labs; models available to the general public (such as GPT-N, Claude-N) should also accelerate safety more than capabilities. What is wrong with current models? My experience is mostly with Claude, and mostly with versions of Claude before the current (Claude 3).[1] I'm going to complain about Claude here; but everything else I've tried seemed worse. In particular, I found GPT4 to be worse than Claude2 for my purposes. As I mentioned in the introduction, I've been comparing how these models feel helpful for programming to how useless they feel for technical AI safety. Specifically, technical AI safety of the mathematical-philosophy flavor that I usually think about. This is not, of course, a perfect experiment to compare capability-research-boosting to safety-research-boosting. However, the tasks feel comparable in the following sense: programming involves translating natural-language descriptions into formal specifications; mathematical philosophy also involves translating natural-language descriptions into formal specifications. From this perspective, the main difference is what sort of formal language is being targeted (IE, programming languages vs axiomatic models). I don't have systematic experiments to report; just a general feeling that Claude's programming is useful, but Claude's philosophy is not.[2] It is not obvious, to me, why this is. I've spoken to several people about it. Some reactions: If it could do that, we would all be dead! I think a similar mindset would have said this about programming, a few years ago. I suspect there are ways for modern LLMs to be more helpful to safety research in particular which do not also imply advancing capabilities very much in other respects. I'll say more about this later in the essay. There's probably just a lot less training data for mathematical philosophy than for programming. I think this might be an important factor, but it is not totally clear to me. Mathematical philosophy is inherently more difficult than programming, so it is no surprise. This might also be an important factor, but I consider it to be only a partial explanation. What is more difficult, exactly? As I mentioned, programming and mathematical philosophy have some strong similarities. 
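As one concrete illustration of the programming workflow described above (keep pointing out the problems and the code eventually works), here is a minimal sketch using the Anthropic Python SDK. The model name, prompts, and the check_code helper are placeholders I have introduced, not anything from the post:

```python
# Sketch of the iterative "point out the problem, get a fix" loop the
# author describes for programming help. Model name is illustrative and
# check_code is a stub; not an endorsement of any particular setup.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def check_code(code: str) -> str | None:
    """Return a description of what is broken, or None if the code passes.
    (Stub: in practice, run tests or a linter here.)"""
    return None

def iterate_on_code(task: str, max_rounds: int = 5) -> str:
    messages = [{"role": "user", "content": f"Write Python code to {task}."}]
    code = ""
    for _ in range(max_rounds):
        reply = client.messages.create(
            model="claude-3-opus-20240229",  # placeholder model name
            max_tokens=1024,
            messages=messages,
        )
        code = reply.content[0].text
        problem = check_code(code)
        if problem is None:
            break  # the code eventually works
        # Keep pointing out the problems with the code.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": f"That has a problem: {problem}. Please fix it."},
        ]
    return code
```

The point of the sketch is only that the programming task has a cheap external check to close the loop; the mathematical-philosophy task discussed in the post does not, which is part of why the comparison is interesting.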
Problems include a bland, people-pleasing attitude which is not very helpful for research. By default, Claude (and GPT4) will enthusiastically agree with whatever I say, and stick to summarizing my points back at me rather than providing new insights or adding useful critiques. When Claude does engage in more structured reasoning, it is usually wrong and bad. (I might summarize it as "based more on vibes than logic".) Is there any hope for better? As a starti...
Apr 4, 2024 • 36min

EA - How Well-Funded is Biosecurity Philanthropy? by Conrad K.

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Well-Funded is Biosecurity Philanthropy?, published by Conrad K. on April 4, 2024 on The Effective Altruism Forum. Many thanks to Andrew Snyder-Beattie, Sella Nevo, and Joshua Monrad for their feedback during this project. This project was completed as part of contract work with Open Philanthropy, but the views and work expressed here do not represent those of Open Philanthropy. All thoughts are my own. My full spreadsheet with results and calculations can be found here. Summary Statistics I adopt a loose and arbitrary definition of biosecurity where I am primarily concerned with interventions aimed at preventing or mitigating the effects of disease outbreaks. This especially includes interventions targeting or suitable for global catastrophic biological risks (GCBRs). Given this definition, biosecurity roughly represents 1.3% of the global spend on public health, or about $130bn of $10tn a year. Of this $130bn, governments likely make up roughly $100bn (80%), with the US government providing the bulk of government funding globally (close to 90% of the biosecurity spend). Private philanthropy is likely about $1bn (1%). The rest comes from private spending and public-private partnerships that aren't independent foundations (e.g., universities). However, the vast majority of biosecurity spending goes towards vaccine development, disease surveillance, and pathogenesis research. My impression is that areas such as next-gen PPE, far-UVC, and research into GCBRs do not receive much philanthropic funding at all outside EA. EA players likely represent roughly 4% of biosecurity philanthropic funding. See 'Results' for more information. This was scrappily put together in no more than 40 hours of work and with very little expert consultation, so I note extremely high levels of uncertainty in these figures, although I would estimate they are correct within a factor of 2-3, given this conception of biosecurity, with a reasonable degree of confidence (~70%). However, my uncertainties are much wider when I factor in uncertainties about what constitutes 'biosecurity' to begin with (closer to a factor of 5 with the same degree of confidence, with the upper tail being much higher). About I've spent some time recently trying to understand exactly how well-funded 'biosecurity' philanthropy is. An important caveat is that 'biosecurity', in its broadest sense, is "the prevention of disease-causing agents entering or leaving any place where they can pose a risk to farm animals, other animals, humans, or the safety and quality of a food product". Under this definition, the space of biosecurity interventions is exceptionally broad: safe food handling and storage practices, screening imported produce, waste management, proper sanitation, and the use of HVAC systems are all biosecurity interventions. Instead, I'm mostly interested in a subset of interventions largely aimed at preventing or mitigating the effects of disease outbreaks. This especially includes interventions targeting or suitable for global catastrophic biological risks (GCBRs) such as pathogen-agnostic early detection, vaccine platform technologies, and regulating the use of nucleic acids through methods such as DNA synthesis screening. However, a number of interventions not primarily targeting GCBRs may still have quite large consequences for them. 
Antimicrobial resistance likely contributes to millions of deaths a year, is a desired property of any maliciously designed pathogen, and techniques for detecting antimicrobial resistance are essentially the same techniques we would use for detecting novel pathogens. I think it is plausible that much work on influenza, research into the pathogenesis of pathogens, and a number of national security interventions are additionally relevant to pandemic prevention and ...
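As a rough sanity check, the headline figures in the summary above can be restated as a short back-of-the-envelope calculation. The inputs below are simply the post's own rounded estimates, and the implied EA dollar figure is just a product of those rounded numbers, not a figure from the post:

```python
# Back-of-the-envelope restatement of the summary figures quoted above.
# All inputs are the post's rough estimates (stated as correct within ~2-3x).
global_public_health_spend = 10e12          # ~$10tn per year
biosecurity_share = 0.013                   # biosecurity ~1.3% of public health spend
biosecurity_total = global_public_health_spend * biosecurity_share
print(f"Biosecurity total: ~${biosecurity_total / 1e9:.0f}bn per year")        # ~$130bn

government_spend = 100e9                    # ~$100bn (quoted as roughly 80%)
philanthropy_spend = 1e9                    # ~$1bn (quoted as roughly 1%)
print(f"Government share: ~{government_spend / biosecurity_total:.0%}")
print(f"Philanthropy share: ~{philanthropy_spend / biosecurity_total:.1%}")

ea_share_of_philanthropy = 0.04             # EA players ~4% of biosecurity philanthropy
implied_ea_dollars = ea_share_of_philanthropy * philanthropy_spend
print(f"Implied EA biosecurity philanthropy: ~${implied_ea_dollars / 1e6:.0f}M per year")
```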
Apr 4, 2024 • 20min

AF - LLMs for Alignment Research: a safety priority? by Abram Demski

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs for Alignment Research: a safety priority?, published by Abram Demski on April 4, 2024 on The AI Alignment Forum. A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research. This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming vs technical alignment research feels very different (at least to me). LLMs might generate junk code, but you can keep pointing out the problems with the code, and the code will eventually work. This can be faster than doing it myself, in cases where I don't know a language or library well; the LLMs are moderately familiar with everything. When I try to talk to LLMs about technical AI safety work, however, I just get garbage. I think a useful safety precaution for frontier AI models would be to make them more useful for safety research than capabilities research. This extends beyond applying AI technology to accelerate safety research within top AI labs; models available to the general public (such as GPT-N, Claude-N) should also accelerate safety more than capabilities. What is wrong with current models? My experience is mostly with Claude, and mostly with versions of Claude before the current (Claude 3).[1] I'm going to complain about Claude here; but everything else I've tried seemed worse. In particular, I found GPT4 to be worse than Claude2 for my purposes. As I mentioned in the introduction, I've been comparing how these models feel helpful for programming to how useless they feel for technical AI safety. Specifically, technical AI safety of the mathematical-philosophy flavor that I usually think about. This is not, of course, a perfect experiment to compare capability-research-boosting to safety-research-boosting. However, the tasks feel comparable in the following sense: programming involves translating natural-language descriptions into formal specifications; mathematical philosophy also involves translating natural-language descriptions into formal specifications. From this perspective, the main difference is what sort of formal language is being targeted (IE, programming languages vs axiomatic models). I don't have systematic experiments to report; just a general feeling that Claude's programming is useful, but Claude's philosophy is not.[2] It is not obvious, to me, why this is. I've spoken to several people about it. Some reactions: If it could do that, we would all be dead! I think a similar mindset would have said this about programming, a few years ago. I suspect there are ways for modern LLMs to be more helpful to safety research in particular which do not also imply advancing capabilities very much in other respects. I'll say more about this later in the essay. There's probably just a lot less training data for mathematical philosophy than for programming. I think this might be an important factor, but it is not totally clear to me. Mathematical philosophy is inherently more difficult than programming, so it is no surprise. This might also be an important factor, but I consider it to be only a partial explanation. What is more difficult, exactly? As I mentioned, programming and mathematical philosophy have some strong similarities. 
Problems include a bland, people-pleasing attitude which is not very helpful for research. By default, Claude (and GPT4) will enthusiastically agree with whatever I say, and stick to summarizing my points back at me rather than providing new insights or adding useful critiques. When Claude does engage in more structured reasoning, it is usually wrong and bad. (I might summarize it as "based more on vibes than logic".) Is there any hope for bette...
Apr 4, 2024 • 2min

AF - Run evals on base models too! by orthonormal

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Run evals on base models too!, published by orthonormal on April 4, 2024 on The AI Alignment Forum. (Creating more visibility for a comment thread with Rohin Shah.) Currently, DeepMind's capabilities evals are run on the post-RL*F (RLHF/RLAIF) models and not on the base models. This worries me because RL*F will train a base model to stop displaying capabilities, but this isn't a guarantee that it trains the model out of having the capabilities. Consider by analogy using RLHF on a chess-playing AI, where the trainers reward it for putting up a good fight and making the trainer work hard to win, but punish it for ever beating the trainer. There are two things to point out about this example: Running a simple eval on the post-RLHF model would reveal a much lower ELO than if you ran it on the base model, because it would generally find a way to lose. (In this example, you can imagine the red team qualitatively noticing the issue, but the example is an artificially simple one!) The post-RLHF model still has much of its chess knowledge latently available, in order to put up a good fight across the full range of human ability. Possibly it's even superhuman at chess - I know I'd have to be better than you at chess in order to optimize well for an entertaining game for you. But that won't show up in its ELO. So it seems to me like running evals on the base model as well as the post-RL*F model is an extremely sensible precaution against (1), and I'd love to be reassured either that this is unnecessary for some really obvious and ironclad reason, or that someone is already working on this. And I don't have any good suggestion on (2), the idea that RL*F could reinforce a capability while also concealing it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
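To make the suggested precaution concrete, here is a minimal sketch of running the same evals on the base and post-RL*F checkpoints and flagging large gaps. run_eval, the model names, and the gap threshold are hypothetical placeholders I have introduced, not any lab's actual tooling:

```python
# Hypothetical sketch: run identical capability evals on both the base
# model and the post-RL*F model, and flag when the fine-tuned model
# scores much lower (the capability may be concealed, not removed).

def run_eval(model_name: str, eval_name: str) -> float:
    """Placeholder for an eval harness returning a capability score."""
    raise NotImplementedError("plug in a real eval harness here")

def compare_base_vs_rlf(base: str, tuned: str, evals: list[str],
                        gap_threshold: float = 0.1) -> None:
    for ev in evals:
        base_score = run_eval(base, ev)
        tuned_score = run_eval(tuned, ev)
        if base_score - tuned_score > gap_threshold:
            # Like the chess example: a low post-RLHF "Elo" despite a strong
            # base model suggests suppressed display, not absent capability.
            print(f"[{ev}] base {base_score:.2f} vs tuned {tuned_score:.2f}: "
                  "capability may be hidden rather than removed")

# Example (placeholder checkpoint and eval names):
# compare_base_vs_rlf("my-base-checkpoint", "my-rlhf-checkpoint",
#                     ["dangerous-capability-eval", "chess-elo"])
```

Note that this only addresses point (1); as the post says, it does not help with (2), where RL*F reinforces a capability while also concealing it.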
Apr 4, 2024 • 16min

LW - A gentle introduction to mechanistic anomaly detection by Erik Jenner

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A gentle introduction to mechanistic anomaly detection, published by Erik Jenner on April 4, 2024 on LessWrong. TL;DR: Mechanistic anomaly detection aims to flag when an AI produces outputs for "unusual reasons." It is similar to mechanistic interpretability but doesn't demand human understanding. I give a self-contained introduction to mechanistic anomaly detection from a slightly different angle than the existing one by Paul Christiano (focused less on heuristic arguments and drawing a more explicit parallel to interpretability). Mechanistic anomaly detection was first introduced by the Alignment Research Center (ARC), and a lot of this post is based on their ideas. However, I am not affiliated with ARC; this post represents my perspective. Introduction We want to create useful AI systems that never do anything too bad. Mechanistic anomaly detection relaxes this goal in two big ways: Instead of eliminating all bad behavior from the start, we're just aiming to flag AI outputs online. Instead of specifically flagging bad outputs, we flag any outputs that the AI produced for "unusual reasons." These are serious simplifications. But strong methods for mechanistic anomaly detection (or MAD for short) might still be important progress toward the full goal or even achieve it entirely: Reliably flagging bad behavior would certainly be a meaningful step (and perhaps sufficient if we can use the detector as a training signal or are just fine with discarding some outputs). Not all the cases flagged as unusual by MAD will be bad, but the hope is that the converse holds: with the right notion of "unusual reasons," all bad cases might involve unusual reasons. Often we may be fine with flagging more cases than just the bad ones, as long as it's not excessive. I intentionally say "unusual reasons for an output" rather than "unusual inputs" or "unusual outputs." Good and bad outputs could look indistinguishable to us if they are sufficiently complex, and inputs might have similar problems. The focus on mechanistic anomalies (or "unusual reasons") distinguishes MAD from other out-of-distribution or anomaly detection problems. Because of this, I read the name as "[mechanistic anomaly] detection" - it's about detecting mechanistic anomalies rather than detecting any anomalies with mechanistic means. One intuition pump for mechanistic anomaly detection comes from mechanistic interpretability. If we understand an AI system sufficiently well, we should be able to detect, for example, when it thinks it's been deployed and executes a treacherous turn. The hope behind MAD is that human understanding isn't required and that we can detect cases like this as "mechanistically anomalous" without any reference to humans. This might make the problem much easier than if we demand human understanding. The Alignment Research Center (ARC) is trying to formalize "reasons" for an AI's output using heuristic arguments. If successful, this theoretical approach might provide an indefinitely scalable solution to MAD. Collaborators and I are working on a more empirical approach to MAD that is not centered on heuristic arguments, and this post gives a self-contained introduction that might be more suitable to that perspective (and perhaps helpful for readers with an interpretability background). 
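As one very simple illustration of what a mechanistic anomaly detector could look like in practice, here is a toy sketch that fits a Gaussian to a model's hidden activations on trusted inputs and flags outputs whose activations are unusually far away (squared Mahalanobis distance). This is an illustrative baseline only, an assumption of mine for exposition, not ARC's heuristic-arguments approach and not anything from the post:

```python
# Toy mechanistic anomaly detector: flag outputs whose internal activations
# (a crude stand-in for the "reasons" behind the output) look unusual
# relative to a trusted reference set. Thresholds and features are made up.
import numpy as np

class ActivationAnomalyDetector:
    def fit(self, trusted_acts: np.ndarray) -> None:
        """trusted_acts: (n_examples, d) hidden activations on trusted inputs."""
        self.mean = trusted_acts.mean(axis=0)
        cov = np.cov(trusted_acts, rowvar=False)
        self.cov_inv = np.linalg.pinv(cov)                 # robust to singular covariance
        train_scores = [self.score(a) for a in trusted_acts]
        self.threshold = np.percentile(train_scores, 99)   # flag the top ~1%

    def score(self, act: np.ndarray) -> float:
        diff = act - self.mean
        return float(diff @ self.cov_inv @ diff)           # squared Mahalanobis distance

    def is_anomalous(self, act: np.ndarray) -> bool:
        """True if the activations behind this output look mechanistically unusual."""
        return self.score(act) > self.threshold

# Usage: collect activations from a chosen layer during normal operation,
# fit once on the trusted set, then check each new output's activations online.
```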
Thanks to Viktor Rehnberg, Oliver Daniels-Koch, Jordan Taylor, Mark Xu, Alex Mallen, and Lawrence Chan for feedback on a draft! Mechanistic anomaly detection as an alternative to interpretability: a toy example As a toy example, let's start with the SmartVault setting from the ELK report. SmartVault is a vault housing a diamond that we want to protect from robbers. We would like an AI to use various actuators to keep the diamond safe by stopping any robbers. There is a camera pointed at the diamond, which we want to u...
Apr 4, 2024 • 9min

EA - Introducing EA in Arabic by Abdurrahman Alshanqeeti

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EA in Arabic, published by Abdurrahman Alshanqeeti on April 4, 2024 on The Effective Altruism Forum. I am thrilled to introduce EA in Arabic (الإحسان الفعال), a pioneering initiative aimed at bringing the principles of effective altruism to Arabic-speaking communities worldwide. Summary Spoken by more than 400 million people worldwide, Arabic plays a pivotal role in advancing impactful initiatives across the globe. Across the Middle East and North Africa (MENA) region's diverse socio-economic landscapes, instability leads to humanitarian crises. However, some Arab nations exert significant economic influence and invest in AI and other emerging technologies. Understanding the humanitarian significance of the region, along with its involvement in Global Catastrophic Risks (GCR), particularly concerning AI and technology, underscores the need for focused efforts. EA in Arabic aims to bridge linguistic, intellectual, and social barriers and engage with Arabic-speaking communities. Future plans involve promoting EA principles and community engagement, with Arabic speakers urged to contribute to spreading effective altruism. Arabic in the World Today Arabic, with over 400 million speakers globally, holds a significant position as one of the most widely spoken languages. Its influence spans the Middle East and North Africa (MENA) region, and it serves as the liturgical language of more than 1.9 billion Muslims, who constitute around 25.2% of the world's population, making it a vital medium for promoting impactful initiatives. Note: While acknowledging the intersections between Arabic-speaking communities and Islam[1], it's essential to clarify that EA in Arabic focuses solely on the linguistic and cultural aspects from an impartial perspective. If you're curious about approaching EA from a religious perspective, you may want to explore other initiatives like Muslims for EA and the Muslim Network for Positive Impact. The MENA Region The Middle East and North Africa (MENA) region holds a unique status, embodying a cluster of countries with significantly varied socioeconomic positions. Ranging from nations ranked among the globe's most impoverished to those at the forefront of economic and technological progress, it encompasses a diverse array of socio-economic realities. Details Now, let's swiftly explore some specific details regarding this multifaceted region. Below is a compilation of several Arab nations ranked among the most impoverished globally, by GDP per capita (USD$)[2] in the most recent year available: Syria, $421 (2021); Somalia, $592 (2022); Yemen, $650 (2022); South Sudan, $1,072 (2015); Sudan, $1,102 (2022). Moreover, unfortunately, due to significant instability in the region, many countries have outdated data. For instance, Sudan experienced a destructive internal conflict between two military factions in 2023. According to a UNHCR report, approximately half of Sudan's population, roughly 25 million people, require humanitarian assistance and protection. Nearly 18 million individuals are confronting acute food insecurity, and 8 million people have been displaced. Similar data deficiencies plague countries like Syria, Yemen, Lebanon, Palestine (including the Gaza Strip and the West Bank), Mauritania, Iraq, and Libya. Each of these nations grapples with profound humanitarian crises. 
This offers a strong reason to prioritize attention on this region in the advancement of Global Health and Wellbeing. Conversely, there are several countries ranked among the wealthiest globally. Here is a list of some Arab nations with thriving economies as of 2022, by GDP per capita (USD$): Qatar, $87,662; UAE, $53,708; Kuwait, $41,080; Saudi Arabia, $30,448; Bahrain, $30,147. These nations wield significant economic influence globally, particularly in terms of production and pricing of oil and gas. Furthermore,...
Apr 4, 2024 • 23min

EA - Let's Fund: Impact of our $1M crowdfunded grant to the Center for Clean Energy Innovation by Hauke Hillebrandt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Impact of our $1M crowdfunded grant to the Center for Clean Energy Innovation, published by Hauke Hillebrandt on April 4, 2024 on The Effective Altruism Forum. Let's Fund researches pressing problems, like climate change, and then crowdfunds for nonprofits working on effective policy solutions. One such policy is clean energy innovation (e.g. via more grant funding for scientists to invent better solar panels). Making clean energy cheaper has many benefits because it reduces: Emissions Energy poverty Air pollution (which kills millions a year) Revenue for autocratic petrostates[1] Extreme climate risks (since if countries' agreements to reduce emissions, like Paris, were to break down, cheaper clean energy hedges against this[2]) Since 2019, we've crowdfunded $1M for the Center for Clean Energy Innovation (CCEI) at ITIF, a non-profit think tank in DC. One example of our grantee's work is researching the effects of higher and smarter clean energy R&D spending and communicating the results to policy-makers. Our research showed that this is the most effective climate policy[3] and was featured on Vox (which Bill Gates retweeted![4]). As a result, ~2000 donors crowdfunded $1M+ for CCEI to do more think tank work (e.g. do research, talk to policy-makers, etc.). Here I show how, with our grant, CCEI might have, e.g., shifted >$100M from less effective clean energy deployment (e.g. subsidies) to more neglected and effective clean energy R&D. The donations might avert a ton of CO2 for less than $0.10. That a leading think tank can cause such shifts becomes plausible if we look at the pivotal ('hingey') timeline of a political climate so favorable that climate budgets went up on an unprecedented scale: 2020: Big Government Dems win the presidency, the House, and a razor-thin Senate majority. Then a CCEI researcher gets a job advising Biden's climate envoy, John Kerry, who had endorsed and blurbed CCEI's Energizing America report, which has been called 'a very influential report' and which offered advice for Biden on how to reform the energy innovation system.[5] 2021: COVID leads to a massive stimulus that includes ~$42B for clean energy RD&D, doubling the yearly budget - a ~$10B increase:[6] This US leadership recently led 16 countries to pledge ~$100B for the Clean Energy Technologies Demonstration Challenge. These increases were politically tractable thanks to tens of thousands of climate activists raising awareness worldwide. But CCEI is part of a much smaller coalition of only hundreds of key movers and shakers (others are: CATF, Carbon180, etc.[7]) that improved the quality of these spending increases by channeling them towards energy RD&D, which is ~10x more effective at ~$10/tC than deployment at ~$100/tC averted (more). Also, our $1M grant was ~2% of donations to US climate governance and a respectable 0.2% of donations to all US think tanks.[8],[9],[10] Based on this, if we assume CCEI caused ~0.1-10%[11] of the $10B-100B clean energy RD&D increases, then our Monte Carlo model (see UseCarlo.com) suggests that CCEI averts ~0.5Gt at ~$0.002/tC:[12]
Model inputs and outputs (each input is a Metalog distribution given as p0 / p10 / p50 / p90, followed by the UseCarlo.com output and the notes/source):
Energy R&D budget increase: $0 / $10B / $42B / $100B; output ~50Gt (P10: US increases per year; P50: total stimulus; P90: global agreement).
CCEI's effect of shifting deployment $ to RD&D $: 0% / 0.1% / 2% / 10%; output ~5% (guesstimate: CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D).
RD&D effectiveness: $0 / $3 / $13 / $41; output ~$20/tC (review on the cost-effectiveness of energy R&D).
Deployment effectiveness: $0 / $0.1K / $0.5K / $1K; output ~$500/tC (Levelized Cost of Carbon Abatement).
tC averted via R&D shift (output): ~0.5Gt (tC averted by R&D minus counterfactual tC averted by deployment).
Let's Fund grant: ~$1M. CCEI effectiveness: ~$0.002/tC. Donor effectiveness: ~$0.02/tC. Most...
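A minimal sketch of the kind of Monte Carlo calculation the table above summarizes. The post fits metalog distributions to the p10/p50/p90 inputs via UseCarlo; here simple lognormal approximations are used purely for illustration, so the outputs will only roughly resemble the post's figures:

```python
# Rough Monte Carlo sketch of the CCEI cost-effectiveness estimate.
# Lognormal approximations to the post's p10/p90 inputs; the post itself
# uses metalog distributions via UseCarlo, so numbers will differ.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_p10_p90(p10: float, p90: float, size: int) -> np.ndarray:
    """Lognormal matched to the given 10th/90th percentiles (illustrative)."""
    mu = (np.log(p10) + np.log(p90)) / 2
    sigma = (np.log(p90) - np.log(p10)) / (2 * 1.2816)  # z(0.90) ~= 1.2816
    return rng.lognormal(mu, sigma, size)

budget_increase = lognormal_from_p10_p90(10e9, 100e9, N)          # $ added to clean energy RD&D
ccei_shift_share = np.clip(lognormal_from_p10_p90(0.001, 0.10, N), 0, 1)  # CCEI's share of the shift
rdd_cost_per_tc = lognormal_from_p10_p90(3, 41, N)                # $/tC averted via RD&D
deploy_cost_per_tc = lognormal_from_p10_p90(100, 1000, N)         # $/tC averted via deployment

shifted_dollars = budget_increase * ccei_shift_share
# Counterfactual: the shifted dollars would otherwise have gone to deployment.
tc_averted = shifted_dollars / rdd_cost_per_tc - shifted_dollars / deploy_cost_per_tc

grant = 1e6  # Let's Fund's ~$1M crowdfunded grant
print(f"tC averted, median: ~{np.median(tc_averted) / 1e9:.2f} Gt")
print(f"tC averted, mean:   ~{np.mean(tc_averted) / 1e9:.2f} Gt")
print(f"Implied $/tC for the grant (at the mean): ~${grant / np.mean(tc_averted):.4f}")
```

Because the inputs are heavy-tailed, the mean is much larger than the median, which is one reason headline figures like ~0.5Gt are sensitive to the choice of summary statistic and distribution.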
Apr 4, 2024 • 8min

LW - What's with all the bans recently? by Gerald Monroe

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's with all the bans recently?, published by Gerald Monroe on April 4, 2024 on LessWrong. Summary: the moderators appear to be soft banning users with 'rate-limits' without feedback. A careful review of each banned user reveals it's common to be banned despite earnestly attempting to contribute to the site. Some of the most intelligent banned users have mainstream instead of EA views on AI. Note how the punishment lengths are all the same; I think it was a mass ban-wave of 3-week bans: Gears to ascension was here but is no longer; I guess she convinced them it was a mistake. Have I made any like really dumb or bad comments recently? https://www.greaterwrong.com/users/gerald-monroe?show=comments Well, I skimmed through it. I don't see anything. Got a healthy margin now on upvotes, thanks April 1. Over a month ago, I did comment this stinker. Here is what seems to be the same take by a very high reputation user here, @Matthew Barnett , on X: https://twitter.com/MatthewJBar/status/1775026007508230199 Must be a pretty common conclusion, and I wanted this site to pick an image that reflects their vision. Like flagpoles with all the world's flags (from coordination to ban AI) and EMS using cryonics (to give people an alternative to medical ASI). I asked the moderators: @habryka says: I skimmed all comments I made this year, can't find anything that matches this accusation. What comment did this happen on? Did this happen once or twice or 50 times or...? Any users want to help here? It surely must be obvious. You can look here: https://www.greaterwrong.com/users/gerald-monroe?show=comments if you want to help me find what habryka could possibly be referring to. I recall this happening once, Gears called me out on it, and I deleted the comment. Conditional on this not having happened this year, why wasn't I informed or punished or something then? Skimming the currently banned user list: Let's see why everyone else got banned. Maybe I can infer a pattern from it: Akram Choudhary: 2 per comment and 1 post at -25. Taking the doomer view here. frankybegs: +2.23 karma per comment. This is not bad. Does seem to make comments personal. Decided to enjoy the site and make 16 comments 6-8 days ago. Has some healthy karma on the comments, +6 to +11. That's pretty good by lesswrong standards. No AI views. Ban reason is??? Victor Ashioya: His negative karma doesn't add up to -38, not sure why. AI view is in favor of red teaming, which is always good. @Remmelt: doomer view, good karma (+2.52 karma per comment), hasn't made any comments in 17 days... why rate limit him? Skimming his comments, they look nice and meaty and well written... what? All I can see is that over the last couple of months he's not getting many upvotes per comment. green_leaf: Ok, at least I can explain this one. One comment at -41; in the last 20, green_leaf rarely comments. Doomer view. PeteJ: Tries to use humanities knowledge to align AI; apparently the readerbase doesn't like it. Probably won't work, banned for trying. @StartAtTheEnd: 1.02 karma per comment, a little low, may still be above the bar. Not sure what he did wrong; comments are a bit long? Doomer view, lots of downvotes. omnizoid: Seems to just be running a low vote total. People didn't like a post justifying religion. @MiguelDev: Why rate limited? This user seems to be doing actual experiments. 
Karma seems a little low but I can't find any big downvoted comments or posts recently. @RomanS: Overall karma isn't bad, 19 upvotes on the most recent post. Seems to have a heavily downvoted comment that's the reason for the limit. @shminux: this user has contributed a lot to the site. One comment heavily downvoted; the algorithm is last 20. It certainly feels that way from the receiving end. 2.49 karma per comment, not bad. Cube: tries to apply Bayes' rule in several comments, I see a coup...
