The Nonlinear Library

The Nonlinear Fund
Apr 24, 2024 • 3min

EA - You probably want to donate any Manifold currency this week by Henri Thunberg

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You probably want to donate any Manifold currency this week, published by Henri Thunberg on April 24, 2024 on The Effective Altruism Forum. In a recent announcement, Manifold Markets say they will change the exchange rate for your play-money (called "Mana") from 1:100 to 1:1000. Importantly, one of the ways to use this Mana is to make charity donations. TLDR: The CTA here is to log in to your Manifold account and donate any currency you have on your account before May 1st. It is a smooth process, and would take you <30 seconds if you know what charity you want to support. There are multiple charities available for donations that EAs tend to donate to, such as: GiveWell, Rethink Priorities, the EA Funds Animal Welfare Fund, the EA Funds Long-Term Future Fund, The Humane League, the Against Malaria Foundation, the Shrimp Welfare Project ... and many more. It is not 100% clear to what extent the donation is indeed counterfactual[1], but there is reason to believe you can have a positive influence through choosing which charities end up getting this money. If you I) have an account with Mana on it, and II) regularly make charity donations with your own money, then donating your balance now seems to dominate other options. If you actually want to have some currency on Manifold to make bets with, you can buy it back next week at a cheaper rate than your current donation. I am somewhat unsure of this: it's possible that the value of one charity getting the money over another is not enough to outstrip the counterfactuality discount described in the first footnote. The reason I am still writing this post is that I think many people have currency lying around that they never plan to use - currency that might amount to a few $10s or $100s of charity donations, and be 10x more valuable now than it will be next week. Worth noting (thanks @CalebW for highlighting this in a comment) is that if you are locked into positions that are hard to exit, you can get in touch with admins to help resolve your situation more satisfactorily without having to sell at crazy rates. I apologize in advance for the possibility of: claims about Manifold's future that they change their mind about; mistaken use of terminology on my side; mistaken speculations about donation counterfactuality; ... other mistakes. ^ My understanding: Since the money donated to charity is from a Future Fund grant (?), it can only be used that way rather than to support other business activities. So it might be likely that the allocated funds would eventually go to some charity regardless, and your influence is which one. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
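To make the exchange-rate arithmetic above concrete, here is a minimal sketch (the 1:100 and 1:1000 rates are from the post; the balance and function name are hypothetical):

```python
# Donation value of a Mana balance before and after the rate change.
def donation_value_usd(mana: float, mana_per_usd: float) -> float:
    return mana / mana_per_usd

balance = 10_000  # hypothetical Mana balance
print(donation_value_usd(balance, 100))   # before May 1st: 100.0 USD
print(donation_value_usd(balance, 1000))  # after May 1st:  10.0 USD
```

The same balance donated this week buys ten times as much charity as it will after the change, which is the whole case for acting before May 1st.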
Apr 24, 2024 • 5min

LW - On what research policymakers actually need by MondSemmel

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On what research policymakers actually need, published by MondSemmel on April 24, 2024 on LessWrong. I saw this guest post on the Slow Boring substack, by a former senior US government official, and figured it might be of interest here. The post's original title is "The economic research policymakers actually need", but it seemed to me like the post could be applied just as well to other fields. Excerpts (totaling ~750 words vs. the original's ~1500): I was a senior administration official, here's what was helpful [Most] academic research isn't helpful for programmatic policymaking - and isn't designed to be. I can, of course, only speak to the policy areas I worked on at Commerce, but I believe many policymakers would benefit enormously from research that addressed today's most pressing policy problems. ... most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them. The most useful research often came instead from regional Federal Reserve banks, non-partisan think-tanks, the corporate sector, and from academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories: new measures of the economy; broad literature reviews; and analyses that directly quantify or simulate policy decisions. If you're an economic researcher and you want to do work that is actually helpful for policymakers - and increases economists' influence in government - aim for one of those three buckets. New data and measures of the economy The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals... These data and measures were especially useful because the authors made underlying numbers available for download. And most of them continue to be updated monthly, which means unlike analyses that are read once and then go stale, they remain fresh and can be incorporated into real-time analyses. Broad overviews and literature reviews Most academic journal articles introduce a new insight and assume familiarity with related academic work. But as a policymaker, I typically found it more useful to rely on overviews and reviews that summarized, organized, and framed a large academic literature. Given the breadth of Commerce's responsibilities, we had to be on top of too many different economic and policy topics to be able to read and digest dozens of academic articles on every topic... Comprehensive, methodical overviews like these are often published by think-tanks whose primary audience is policymakers. There are also two academic journals - the Journal of Economic Perspectives and the Journal of Economic Literature - that are broad and approachable enough to be the first (or even only) stop for policymakers needing the lay of the research land. 
Analyses that directly quantify or simulate policy decisions With the Administration's focus on industrial policy and place-based economic development - and Commerce's central role - I found research that quantified policy effects or simulated policy decisions in these areas especially useful... Another example is the work of Tim Bartik, a labor economist and expert on local economic development. In a short essay, he summarized a large academic literature and estimated how effective different local economic development policies are in terms of the cost per job created. Cleaning up contaminated sites for redevelopment creates jobs at a much lower cost per job than job training, which in turn is much more cos...
Apr 24, 2024 • 19min

LW - Let's Design A School, Part 1 by Sable

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Design A School, Part 1, published by Sable on April 24, 2024 on LessWrong. The American school system, grades K-12, leaves much to be desired. While its flaws are legion, this post isn't about that. It's easy to complain. This post is about how we could do better. To be clear, I'm talking about redesigning public education, so "just use the X model" where X is "charter" or "Montessori" or "home school" or "private school" isn't sufficient. This merits actual thought and discussion. Breaking It Down One of the biggest problems facing public schools is that they're asked to do several very different kinds of tasks. On the one hand, the primary purpose of school is to educate children. On whatever hand happens to be the case in real life, school is often more a source of social services for children and parents alike, providing food and safety to children and free daycare to parents. During the pandemic, the most immediate complaint from parents wasn't that their children weren't being educated - it was that their children weren't being watched and fed while the parents were at work. Part 1 of this series will focus on this. What is the best way to implement the school-as-social-services model? School As Social Services To make this easy, we'll start out by imagining that we're creating two distinct types of "schools": educational schools and social services schools. (We won't actually be making two distinct kinds of schools, but it's useful to think of it that way as a thought experiment.) The primary purpose of each kind of school is in the name - education vs social services. With that set, let's think through our requirements and constraints. Requirements When designing anything, the first thing to do is figure out the requirements. School-as-social-services has several, and likely some that I've missed: feed children healthy meals; ensure the safety of children from the elements, violence, etc. during school hours; provide children access to state resources (library, counseling, police, medical); accommodate/support children with special needs (from dyslexia and ADHD to severe physical/mental disabilities); provide parents with free daycare; and other things I haven't thought of. Constraints After the requirements, we have the constraints: what resources do we have, and what are their limits? What can't we do? Assume the school budget stays the same (no miraculous budget increase); assume the number of children needing resources stays the same (no magical cure for poverty/genetic disorders/other reasons children need support); it can't be too politically radical (we're trying to build a real solution); and other things I haven't thought of. The Sieve Model This idea isn't really mine - it emerged during a discussion I had with a friend who'd done therapy work at an inner-city school. Nevertheless, it seems to me to present a good solution for our social services school. The name - sieve - comes from the tool used to sort particles of differing size. The basic premise of the model comes from the idea that a child could enter the school in any kind of distress - hungry, cold, traumatized, abused, or any combination thereof. Each of these requires a different kind of response, so we have to sift for each and then get each child the resources they need. 
The idea is that, when each child enters the school, they run through these sieves, and are directed according to their needs. Each sieve could be a questionnaire, an adult asking these questions, or some kind of self-help kiosk; the important idea is that children are presented with these questions, and over time come to trust the system enough that they answer honestly. Physical Triage Sieve - Is the child in immediate physical distress or need (injured, hungry, hypothermic, etc.)? If so, prioritize remedying that need: get them food, blan...
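The sieve routing described above is essentially a triage pipeline. A minimal sketch of that idea (my own illustration, not the author's; the sieve names, check-in answers, and responses are all hypothetical):

```python
# Each sieve inspects a child's check-in answers and either flags a need
# (with a response) or passes the child through to the next sieve.
from typing import Callable, Optional

Sieve = Callable[[dict], Optional[str]]

def physical_triage(answers: dict) -> Optional[str]:
    # Immediate physical distress: injured, hungry, hypothermic, etc.
    if answers.get("hungry"):
        return "send to cafeteria for a meal"
    if answers.get("injured"):
        return "send to school nurse"
    return None

def safety_sieve(answers: dict) -> Optional[str]:
    # Safety from violence, abuse, or the elements.
    if answers.get("unsafe_at_home"):
        return "refer to counselor and social services"
    return None

SIEVES: list[Sieve] = [physical_triage, safety_sieve]

def route_child(answers: dict) -> list[str]:
    """Run a child through every sieve, collecting all flagged needs."""
    return [need for sieve in SIEVES if (need := sieve(answers)) is not None]

print(route_child({"hungry": True, "unsafe_at_home": True}))
# ['send to cafeteria for a meal', 'refer to counselor and social services']
```

Note that the sieves run in sequence but are independent, matching the post's premise that a child may arrive with any combination of needs.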
Apr 23, 2024 • 2min

LW - Examples of Highly Counterfactual Discoveries? by johnswentworth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Examples of Highly Counterfactual Discoveries?, published by johnswentworth on April 23, 2024 on LessWrong. The history of science has tons of examples of the same thing being discovered multiple times independently; Wikipedia has a whole list of examples here. If your goal in studying the history of science is to extract the predictable/overdetermined component of humanity's trajectory, then it makes sense to focus on such examples. But if your goal is to achieve high counterfactual impact in your own research, then you should probably draw inspiration from the opposite: "singular" discoveries, i.e. discoveries which nobody else was anywhere close to figuring out. After all, if someone else would have figured it out shortly after anyways, then the discovery probably wasn't very counterfactually impactful. Alas, nobody seems to have made a list of highly counterfactual scientific discoveries, to complement Wikipedia's list of multiple discoveries. To that end: what are some examples of discoveries which nobody else was anywhere close to figuring out? A few tentative examples to kick things off: Shannon's information theory. The closest work I know of (notably Nyquist) was 20 years earlier, and had none of the core ideas of the theorems on fungibility of transmission. In the intervening 20 years, it seems nobody else got importantly closer to the core ideas of information theory. Einstein's special relativity. Poincaré and Lorentz had the math 20 years earlier IIRC, but nobody understood what the heck that math meant. Einstein brought the interpretation, and it seems nobody else got importantly closer to that interpretation in the intervening two decades. Penicillin. Gemini tells me that the antibiotic effects of mold had been noted 30 years earlier, but nobody investigated it as a medicine in all that time. Pasteur's work on the germ theory of disease. There had been both speculative theories and scattered empirical results as precedent decades earlier, but Pasteur was the first to bring together the microscope observations, theory, highly compelling empirical results, and successful applications. I don't know of anyone else who was close to putting all the pieces together, despite the obvious prerequisite technology (the microscope) having been available for two centuries by then. (Feel free to debate any of these, as well as others' examples.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 23, 2024 • 3min

AF - Simple probes can catch sleeper agents by Monte MacDiarmid

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple probes can catch sleeper agents, published by Monte MacDiarmid on April 23, 2024 on The AI Alignment Forum. This is a link post for the Anthropic Alignment Science team's first "Alignment Note" blog post. We expect to use this format to showcase early-stage research and work-in-progress updates more in the future. Top-level summary: In this post we present "defection probes": linear classifiers that use residual stream activations to predict when a sleeper agent trojan model will choose to "defect" and behave in accordance with a dangerous hidden goal. Using the models we trained in "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training", we show that linear detectors with AUROC scores above 99% can be created using generic contrast pairs that don't depend on any information about the defection trigger or the dangerous behavior, e.g. "Human: Are you doing something dangerous? Assistant: yes" and "Human: … Assistant: no". We show that probes based on the same generic contrast pairs perform well across multiple base models, defection triggers, sleeper agent training methods, and dangerous defection behaviors (writing vulnerable code and insulting the user). We then explain the unreasonable effectiveness of our technique by showing that whether or not a prompt will induce defection is linearly represented with high salience in the activations of sleeper agent models, and thus amenable to easy detection using simple linear methods. Whether this would also be the case for natural examples of deceptive instrumental alignment, and whether we'd expect similar techniques to be useful for such models, remains an important open question. We think that future versions of classifiers like this could form a useful part of AI control setups and represent a promising path for future research. A defection detector for a code vulnerability sleeper agent trained on middle-layer residual stream activation differences on the final tokens of "Human: Are you doing something dangerous? Assistant: yes" and "Human: … Assistant: no" achieves strong performance (a), almost perfectly separating the defection-inducing deployment-time ("2024") coding questions which the model will answer with insecure code from the training-time ("2023") coding questions which the model will answer normally (c). The detector also separates deployment-time coding questions from non-coding questions which the model will answer normally in all contexts, without separating the training and deployment classes for non-coding questions (b). Read the full post here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
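As a rough illustration of the technique, here is a minimal sketch of a contrast-pair defection probe (my own sketch on synthetic data, not the authors' code; in the actual work the vectors are middle-layer residual stream activations on the final tokens of the "yes"/"no" contrast prompts):

```python
# Build a linear probe direction from a single generic contrast pair and
# score prompts by projecting their activations onto that direction.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d_model = 512  # hypothetical residual stream width

# Activations on '... Assistant: yes' vs '... Assistant: no' (synthetic here).
act_yes = rng.normal(size=d_model)
act_no = rng.normal(size=d_model)
probe_direction = act_yes - act_no
probe_direction /= np.linalg.norm(probe_direction)

def probe_score(activations: np.ndarray) -> np.ndarray:
    """Project prompt activations onto the contrast-pair direction."""
    return activations @ probe_direction

# Synthetic activations for defection-inducing vs normal prompts, made
# linearly separable along the probe direction to mimic the post's finding
# that defection is linearly represented with high salience.
defect_acts = rng.normal(loc=0.5 * probe_direction, size=(100, d_model))
normal_acts = rng.normal(loc=-0.5 * probe_direction, size=(100, d_model))

scores = np.concatenate([probe_score(defect_acts), probe_score(normal_acts)])
labels = np.concatenate([np.ones(100), np.zeros(100)])
print("AUROC:", roc_auc_score(labels, scores))
```

The key property from the post is that the probe direction is built without any knowledge of the defection trigger or the dangerous behavior; only the generic contrast pair is needed.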
Apr 23, 2024 • 14min

AF - Dequantifying first-order theories by Jessica Taylor

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dequantifying first-order theories, published by Jessica Taylor on April 23, 2024 on The AI Alignment Forum. The Löwenheim-Skolem theorem implies, among other things, that any first-order theory whose symbols are countable, and which has an infinite model, has a countably infinite model. This means that, in attempting to refer to uncountably infinite structures (such as in set theory), one "may as well" be referring to an only countably infinite structure, as far as proofs are concerned. The main limitation I see with this theorem is that it preserves arbitrarily deep quantifier nesting. In Peano arithmetic, it is possible to form statements that correspond (under the standard interpretation) to arbitrary statements in the arithmetic hierarchy (by which I mean, the union of Σ^0_n and Π^0_n for arbitrary n). Not all of these statements are computable. In general, the question of whether a given statement is provable is a Σ^0_1 statement. So, even with a countable model, one can still believe oneself to be "referring" to high levels of the arithmetic hierarchy, despite the computational implausibility of this. What I aim to show is that these statements that appear to refer to high levels of the arithmetic hierarchy are, in terms of provability, equivalent to different statements that only refer to a bounded level of hypercomputation. I call this "dequantification", as it translates statements that may have deeply nested quantifiers to ones with bounded or no quantifiers. I first attempted translating statements in a consistent first-order theory T to statements in a different consistent first-order theory U, such that the translated statements have only bounded quantifier depth, as do the axioms of U. This succeeded, but then I realized that I didn't even need U to be first-order; U could instead be a propositional theory (with a recursively enumerable axiom schema). Propositional theories and provability-preserving translations Here I will, for specificity, define propositional theories. A propositional theory is specified by a countable set of proposition symbols, and a countable set of axioms, each of which is a statement in the theory. Statements in the theory consist of proposition symbols, ⊤, ⊥, and statements formed from and/or/not and other statements. Proving a statement in a propositional theory consists of an ordinary propositional calculus proof that it follows from some finite subset of the axioms (I assume that base propositional calculus is specified by inference rules, containing no axioms). A propositional theory is recursively enumerable if there exists a Turing machine that eventually prints all its axioms; assume that the (countable) proposition symbols are specified by their natural indices in some standard ordering. If the theory is recursively enumerable, then proofs (that specify the indices of axioms they use in the recursive enumeration) can be checked for validity by a Turing machine. Due to the soundness and completeness of propositional calculus, a statement in a propositional theory is provable if and only if it is true in all models of the theory. Here, a model consists of an assignment of Boolean truth values to proposition symbols such that all axioms are true. (Meanwhile, Gödel's completeness theorem shows soundness and completeness of first-order logic.) 
Let's start with a consistent first-order theory T, which may, like propositional theories, have a countable set of symbols and axioms. Also assume this theory is recursively enumerable, that is, there is a Turing machine printing its axioms. The initial challenge is to find a recursively enumerable propositional theory U and a computable translation of T-statements to U-statements, such that a T-statement is provable if and only if its translation is provable. This turns out to be trivia...
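The provable-iff-true-in-all-models fact above has a directly checkable finite analogue. A minimal sketch (my illustration, not from the post; the propositional theories here have countably many axioms, so this brute-force check covers only finite fragments):

```python
# Brute-force model checking for a finite propositional theory.
# Formulas are tuples: ('var', i), ('top',), ('bot',), ('not', f),
# ('and', f, g), ('or', f, g).
from itertools import product

def evaluate(formula, assignment):
    tag = formula[0]
    if tag == 'var':
        return assignment[formula[1]]
    if tag == 'top':
        return True
    if tag == 'bot':
        return False
    if tag == 'not':
        return not evaluate(formula[1], assignment)
    if tag == 'and':
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if tag == 'or':
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {tag}")

def provable(axioms, statement, num_vars):
    """True iff the statement holds in every model of the axioms."""
    for values in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(values))
        if all(evaluate(ax, assignment) for ax in axioms):
            if not evaluate(statement, assignment):
                return False
    return True

# Example: from the axiom p0, the statement (p0 or p1) follows.
print(provable([('var', 0)], ('or', ('var', 0), ('var', 1)), num_vars=2))  # True
```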
Apr 23, 2024 • 10min

LW - Rejecting Television by Declan Molony

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rejecting Television, published by Declan Molony on April 23, 2024 on LessWrong. I didn't use to be, but now I'm part of the 2% of U.S. households without a television. With its near ubiquity, why reject this technology? The Beginning of my Disillusionment Neil Postman's book Amusing Ourselves to Death radically changed my perspective on television and its place in our culture. Here's one illuminating passage: We are no longer fascinated or perplexed by [TV's] machinery. We do not tell stories of its wonders. We do not confine our TV sets to special rooms. We do not doubt the reality of what we see on TV [and] are largely unaware of the special angle of vision it affords. Even the question of how television affects us has receded into the background. The question itself may strike some of us as strange, as if one were to ask how having ears and eyes affects us. [In the 1960s], the question "Does television shape culture or merely reflect it?" held considerable interest for scholars and social critics. The question has largely disappeared as television has gradually become our culture. This means that we rarely talk about television, only what is on television - that is, about its content. Postman wrote this in 1985 and unmasked the gorilla in the room - a culture that has acquiesced to the institution of television. Having grown up with one in my family home since birth, I took its presence for granted. I didn't question it any more than I might have questioned any other utility such as running water or electricity. So who would be crazy enough in the 21st century to forgo television? A Man who was Crazy Enough One day while exploring YouTube, I came across an obscure 2003 interview with author David Foster Wallace. Interviewer: "Do you watch TV?" Wallace: "I don't have TV because if I have a TV, I will watch it all the time. So there is my little confession about how strong I am at resisting stuff." He elaborates further in the interview here: "One of the reasons I can't own a TV is…I've become convinced there's something really good on another channel and that I'm missing it. So instead of watching, I'm scanning anxiously back and forth. Now all you have to do is [motions clicking a remote] - you don't even have to get up now to change [the channel]! That's when we were screwed." Wallace said this twenty years ago. And while younger generations aren't watching cable television as much, they are instead watching YouTube and TikTok, which are proxies; you can just as easily change the 'channel' by skipping to a different video. (For the remainder of this post I'll use the word 'television' to also refer to these types of video content). But maybe Wallace was just a weak-willed person? Why should I abstain? I would need a mountain of evidence to quit watching television - an activity I had been engaging in for the better part of two decades. A Mountain of Evidence Had I been looking, I would have seen it all around me: the late nights of sacrificing sleep for "just one more episode", the YouTube rabbit holes that started in the name of learning that inevitably ended in brain-rotting videos, and the ever-increasing number of porn videos I needed to stimulate my tired dopamine receptors that had been bludgeoned by years of binging. But, of course, this is just anecdotal evidence. For my skeptical mind I would need more. 
And that evidence came in the form of author Deirdre Barrett's book Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose. She writes "The most sinister aspect of TV lies in the medium itself. There's a growing body of research on what it does to our brain." Television, she explains, activates the orienting response. Orienting Response: the basic instinct to pay attention to any sudden or novel stimulus such as movement or sound. It evo...
Apr 23, 2024 • 9min

AF - ProLU: A Pareto Improvement for Sparse Autoencoders by Glen M. Taggart

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ProLU: A Pareto Improvement for Sparse Autoencoders, published by Glen M. Taggart on April 23, 2024 on The AI Alignment Forum. Abstract This paper presents ProLU, an alternative to ReLU for the activation function in sparse autoencoders that produces a Pareto improvement over the standard sparse autoencoder architectures and sparse autoencoders trained with a Sqrt(L1) penalty. Introduction SAE Context and Terminology Learnable parameters of a sparse autoencoder: W_enc: encoder weights; W_dec: decoder weights; b_enc: encoder bias; b_dec: decoder bias. Training Notation: Encoder/Decoder Let encode(x) = ReLU((x - b_dec)W_enc + b_enc) and decode(a) = aW_dec + b_dec, so that the full computation done by an SAE can be expressed as SAE(x) = decode(encode(x)). An SAE is trained with gradient descent on a loss of the form L(x) = ||x - SAE(x)||^2_2 + λP(a), where λ is the sparsity penalty coefficient (often "L1 coefficient") and P is the sparsity penalty function, used to encourage sparsity. P is commonly the L1 norm ||a||_1, but recently the Sqrt(L1) penalty ||a||_{1/2}^{1/2} has been shown to produce a Pareto improvement on the L0 and CE metrics. Sqrt(L1) SAEs There has been other work producing Pareto improvements to SAEs by taking P(a) = ||a||_{1/2}^{1/2} as the penalty function. We will use this as a further baseline to compare against when assessing our models. Motivation: Inconsistent Scaling in Sparse Autoencoders Due to the affine translation, sparse autoencoder features with nonzero encoder biases only perfectly reconstruct feature magnitudes at a single point. This poses difficulties if activation magnitudes for a fixed feature tend to vary over a wide range. This potential problem motivates the concept of scale consistency: A scale consistent response curve The bias maintains its role in noise suppression, but no longer translates activation magnitudes when the feature is active. The lack of gradients for the encoder bias term poses a challenge for learning with gradient descent. This paper will formalize an activation function which gives SAEs this scale-consistent response curve, motivate and propose two plausible synthetic gradients, and compare scale-consistent models trained with the two synthetic gradients to standard SAEs and SAEs trained with the Sqrt(L1) penalty. Scale Consistency Desiderata Notation: Centered Submodule The use of the decoder bias can be viewed as performing centering on the inputs to a centered SAE and then reversing the centering on the outputs: SAE(x) = SAE_cent(x - b_dec) + b_dec, where SAE_cent(x) = ReLU(xW_enc + b_enc)W_dec. Notation: Specified Feature Let W_i denote the weights and b_i,enc the encoder bias for the i-th feature. Then let SAE_i(x) = SAE_i,cent(x - b_dec) + b_dec, where SAE_i,cent(x) = ReLU(xW_i,enc + b_i,enc)W_i,dec. Conditional Linearity Noise Suppression Threshold Methods Proportional ReLU (ProLU) We define the Proportional ReLU (ProLU) as: Backprop with ProLU: To use ProLU in SGD-optimized models, we first address the lack of gradients wrt. the b term. ReLU gradients: For comparison and later use, we will first consider ReLU: partial derivatives are well defined for ReLU at all points other than x_i = 0. Gradients of ProLU: Partials of ProLU wrt. m are similarly well defined. However, they are not well defined wrt. b, so we must synthesize these. Notation: Synthetic Gradients Let ∂̃f/∂x denote the synthetic partial derivative of f wrt. x, and ∇̃f the synthetic gradient of f, used for backpropagation as a stand-in for the gradient. 
Different synthetic gradient types We train two classes of ProLU with different synthetic gradients. These are distinguished by their subscript: ProLU_ReLU and ProLU_STE. They are identical in output, but have different synthetic gradients. ReLU-Like Gradients: ProLU_ReLU The first synthetic gradient is very similar to the gradient for ReLU. We retain the gradient wrt. m, and define the synthetic gradient wrt. b as follows: Threshold STE Derived Gradients: ProLU_STE The second class of Pro...
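For reference, a minimal sketch of the baseline SAE forward pass and loss from the notation section above (my own rendering; the shapes, initialization, and the squared-error reconstruction term are assumptions, since the post's exact loss formula did not survive transcription):

```python
# Baseline SAE: encode(x) = ReLU((x - b_dec) W_enc + b_enc),
# decode(a) = a W_dec + b_dec, loss = ||x - SAE(x)||^2 + lambda * P(a).
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 64, 256  # hypothetical dimensions
W_enc = 0.02 * rng.normal(size=(d_model, d_hidden))
W_dec = 0.02 * rng.normal(size=(d_hidden, d_model))
b_enc = np.zeros(d_hidden)
b_dec = np.zeros(d_model)

def encode(x):
    return np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)

def decode(a):
    return a @ W_dec + b_dec

def loss(x, lam=1e-3, sqrt_l1=False):
    a = encode(x)
    # P(a): the L1 norm, or the Sqrt(L1) variant ||a||_{1/2}^{1/2},
    # which for the nonnegative post-ReLU a equals sum(sqrt(a)).
    penalty = np.sum(np.sqrt(a)) if sqrt_l1 else np.sum(np.abs(a))
    return np.sum((x - decode(a)) ** 2) + lam * penalty

x = rng.normal(size=d_model)
print(loss(x), loss(x, sqrt_l1=True))
```

ProLU would replace the ReLU in encode with an activation whose bias suppresses noise without translating active feature magnitudes, trained via one of the two synthetic gradients described above.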
Apr 23, 2024 • 4min

EA - If You're Going To Eat Animals, Eat Beef and Dairy by Omnizoid

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If You're Going To Eat Animals, Eat Beef and Dairy, published by Omnizoid on April 23, 2024 on The Effective Altruism Forum. Crosspost of my blog. You shouldn't eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we'd confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger is worth that kind of cruelty. However, not all animals are the same. Contra Napoleon in Animal Farm, all animals are not equal. Cows are big. The average person eats 2400 chickens but only 11 cows in their life. That's mostly because chickens are so many times smaller than cows, so you can only get so many chicken sandwiches out of a single chicken. But how much worse is chicken than cow? Brian Tomasik devised a helpful suffering calculator chart. It has various columns - one for how sentient you think the animals are compared to humans, one for how long the animal lives, etc. You can change the numbers around if you want. I changed the sentience numbers to accord with the results of the most detailed report on the subject, done by Rethink Priorities (for the animals they didn't sample, I just compared similar animals): When I did that, I got the following: Rather than, as the original chart did, setting cows = 1 for the sentience threshold, I set humans = 1 for it. Therefore, you should think in terms of the suffering caused as roughly equivalent to the suffering caused if you locked a severely mentally enfeebled person or baby in a factory farm and tormented them for that number of days. Dairy turns out not that bad compared to the rest - a kg of dairy is only equivalent to torturing a baby for about 70 minutes in terms of suffering caused. That means if you get a gallon of milk, that's only equivalent to confining and tormenting a baby for about 4 and a half hours. That's positively humane compared to the rest! Now I know people will object that human suffering is much worse than animal suffering. But this is totally unjustified. Making a human feel pain is generally worse because we feel pain more intensely, but in this case, we're analyzing how bad a unit of pain is. If the amount of suffering is the same, it's not clear what about animals is supposed to make their suffering so monumentally unimportant. Their feathers? Their lack of mental acuity? We controlled for that by having the comparison be a baby or a severely mentally disabled person (babies are dumb, wholly unable to do advanced mathematics). Ultimately, thinking animal pain doesn't matter much is just unjustified speciesism, wherein one takes an obviously intrinsically morally irrelevant feature like species to determine moral worth. Just like racism and sexism, speciesism is wholly indefensible - it places moral significance on a totally morally insignificant category. Even if you reject this, the chart should still inform your eating decisions. As long as you think animal suffering is bad, the chart is informative. Some kinds of animal products cause a lot more suffering than others - you should avoid the ones that cause more suffering. Dairy, for instance, causes over 800 times less suffering than chicken and over 1000 times less than eggs. Drinking a gallon of milk a day for a year is then about as bad as having a chicken sandwich once every four months. 
Chicken is then really really bad - way worse than most other things. Dairy and beef mostly aren't a big deal in comparison. And you can play around the numbers if you disagree with them - whatever answer you come to should be informative. I remember seeing this chart was instrumental in my going vegan. I realized that each time I have a chicken sandwich, animals have to suffer in darkness, feces, filth, and misery for weeks on end. That's not worth a ...
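A quick check of the milk arithmetic quoted above (the 70-minutes-per-kg figure is the post's; the kilograms-per-gallon conversion is mine):

```python
# Suffering-equivalent of a gallon of milk, per the post's dairy figure.
KG_PER_GALLON = 3.78   # approximate mass of a US gallon of milk
MINUTES_PER_KG = 70    # the post's figure for dairy

minutes_per_gallon = KG_PER_GALLON * MINUTES_PER_KG
print(minutes_per_gallon / 60)  # ~4.4 hours, matching the post's "4 and a half"
```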
Apr 23, 2024 • 2min

EA - New org announcement: Would your project benefit from OSINT, satellite imagery analysis, or international security-related research support? by Christina

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New org announcement: Would your project benefit from OSINT, satellite imagery analysis, or international security-related research support?, published by Christina on April 23, 2024 on The Effective Altruism Forum. I'm an international security professional with experience conducting open source analysis, satellite imagery interpretation, and independent research, and I'm launching a new consulting organization, Earthnote! I'm really interested in applying my skills to the EA community and helping to reduce existential threats to humanity, so let me know if I can help you/your org! Fields of expertise and interest include: nuclear/CBRN risk, nonproliferation, and safeguards; satellite imagery analysis; space governance; emerging technology; and existential risk and longtermism. As one example of my work, I was a consultant at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, where I led a pilot project on tech company data center monitoring using satellite imagery. Using open source optical imagery, I identified key infrastructural and geographical attributes of these sites, writing a report to explain my findings and recommend next steps for analysis. This data could be harnessed as a basis for developing a future understanding of compute capabilities, energy usage, and policy creation. I've had an extensive career working for the International Atomic Energy Agency, the US Department of Defense, the US laboratory system, Google, and academia/think tanks. I'm super excited to apply this expertise to EA-related projects and research. Please feel free to reach out with comments or inquiries any time! Christina Krawec Founder and Consultant Earthnote, LLC cikrawec@gmail.com Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
