The Nonlinear Library

The Nonlinear Fund
Apr 10, 2024 • 7min

LW - D&D.Sci: The Mad Tyrant's Pet Turtles [Evaluation and Ruleset] by abstractapplic

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: D&D.Sci: The Mad Tyrant's Pet Turtles [Evaluation and Ruleset], published by abstractapplic on April 10, 2024 on LessWrong. This is a followup to the D&D.Sci post I made ten days ago; if you haven't already read it, you should do so now before spoiling yourself. Here is the web interactive I built to let you evaluate your solution; below is an explanation of the rules used to generate the dataset (my full generation code is available here, in case you're curious about details I omitted). You'll probably want to test your answer before reading any further.

Ruleset

Turtle Types
There are three types of turtle present in the swamp: normal turtles, clone turtles, and vampire turtles. Clone turtles are magically-constructed beasts who are mostly identical. They always have six shell segments, bizarrely consistent physiology, and a weight of exactly 20.4lb. Harold is a clone turtle. Vampire turtles can be identified by their gray skin and fangs. They're mostly like regular turtles, but their flesh no longer obeys gravity, which has some important implications for your modelling exercise. Flint is a vampire turtle.

Turtle characteristics

Age
Most of the other factors are based on the hidden variable Age. The Age distribution is based on turtles having an Age/200 chance of dying every year. Additionally, turtles under the age of 20 are prevented from leaving their homes until maturity, meaning they will be absent from both your records and the Tyrant's menagerie.

Wrinkles
Every non-clone turtle has an [Age]% chance of getting a new wrinkle each year.

Scars
Every non-clone turtle has a 10% chance of getting a new scar each year.

Shell Segments
A non-clone turtle is born with 7 shell segments; each year, they have a 1 in [current number of shell segments] chance of getting a new one.

Color
Turtles are born green; they turn grayish-green at some point between the ages of 23 and 34, then turn greenish-gray at some point between the ages of 35 and 46.

Miscellaneous Abnormalities
About half of turtles sneak into the high-magic parts of the swamp at least once during their adolescence. This mutates them, producing min(1d8, 1d10, 1d10, 1d12) Miscellaneous Abnormalities. This factor is uncorrelated with Age in the dataset, since turtles in your sample have done all the sneaking out they're going to. (Whoever heard of a sneaky mutated turtle not being a teenager?)

Nostril Size
Nostril Size has nothing to do with anything (...aside from providing a weak and redundant piece of evidence about clone turtles).

Turtle Weight
The weight of a regular turtle is given by the sum of their flesh weight, shell weight, and mutation weight. (A vampire turtle only has shell weight; a clone turtle is always exactly 20.4lb.)

Flesh Weight
The unmutated flesh weight of a turtle is given by (20+[Age]+[Age]d6)/10 lb.

Shell Weight
The shell weight of a turtle is given by (5+2*[Shell Segments]+[Shell Segments]d4)/10 lb. (This means that shell weight is the only variable you should use when calculating the weight of a vampire turtle.)

Mutation Weight
A mutated turtle has 1d(20*[# of Abnormalities])/10 lb of extra weight. (This means each abnormality increases expected weight by about 1lb, and greatly increases expected variance.)
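As a concrete reading of the dice notation above, here is a minimal Python sketch of the stated weight formulas. The function names are ours for illustration; abstractapplic's actual generation code is linked in the post.

```python
import random

def d(sides: int) -> int:
    """Roll one die with the given number of sides."""
    return random.randint(1, sides)

def abnormalities() -> int:
    # About half of turtles mutate, gaining min(1d8, 1d10, 1d10, 1d12)
    # Miscellaneous Abnormalities; the rest have none.
    if random.random() < 0.5:
        return min(d(8), d(10), d(10), d(12))
    return 0

def flesh_weight(age: int) -> float:
    # Flesh Weight: (20 + [Age] + [Age]d6) / 10 lb
    return (20 + age + sum(d(6) for _ in range(age))) / 10

def shell_weight(segments: int) -> float:
    # Shell Weight: (5 + 2*[Shell Segments] + [Shell Segments]d4) / 10 lb
    return (5 + 2 * segments + sum(d(4) for _ in range(segments))) / 10

def mutation_weight(n_abnormalities: int) -> float:
    # Mutation Weight: 1d(20 * [# of Abnormalities]) / 10 lb of extra weight
    return d(20 * n_abnormalities) / 10 if n_abnormalities else 0.0

def turtle_weight(kind: str, age: int, segments: int, n_abnormalities: int) -> float:
    if kind == "clone":
        return 20.4  # clone turtles always weigh exactly 20.4lb
    if kind == "vampire":
        return shell_weight(segments)  # vampire flesh no longer obeys gravity
    return flesh_weight(age) + shell_weight(segments) + mutation_weight(n_abnormalities)
```

Note that the mean of 1d(20k)/10 is (20k+1)/20 ≈ k + 0.05, which is where the "about 1lb per abnormality" figure comes from.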
Strategy

The optimal[1] predictions and decisions are as follows:

Turtle       Average Weight (lb)   Optimal Prediction (lb)
Abigail      20.1                  22.5
Bertrand     17.3                  18.9
Chartreuse   22.7                  25.9
Dontanien    19.3                  21.0
Espera       16.6                  18.0
Flint        6.8                   7.3
Gunther      25.7                  30.6
Harold       20.4                  20.4
Irene        21.5                  23.9
Jacqueline   18.5                  20.2

Leaderboard

Player                                EV(gp)
Perfect Play (to within 0.1lb)        1723.17
gjm                                   1718.54
Malentropic Gizmo                     1718.39
aphyer                                1716.57
simon                                 1683.60
qwertyasdef                           1674.54
Yonge[2]                              1420.00
Just predicting 20lb for everything   809.65

Reflections

The intended theme of this game was modelling in the presence of as...
Apr 10, 2024 • 11min

EA - Summary: Mistakes in the Moral Mathematics of Existential Risk (David Thorstad) by Nicholas Kruus

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: Mistakes in the Moral Mathematics of Existential Risk (David Thorstad), published by Nicholas Kruus on April 10, 2024 on The Effective Altruism Forum. This post summarizes "Three Mistakes in the Moral Mathematics of Existential Risk," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad's blog, Reflective Altruism, has a five-part series on this paper.

Introduction

Many prominent figures in the effective altruism community argue existential risk mitigation offers astronomical value. Thorstad believes there are many philosophical ways to push back on this conclusion[1] and even mathematical ones. Thorstad argues leading models of existential risk mitigation neglect morally relevant parameters, mislocating debates and inflating existential risk reduction's value by many orders of magnitude. He broadly assumes we aren't in the time of perils (which he justifies in this paper) and treats extinction risks as only those that kill all humans.[2]

Mistake 1: Cumulative Risk

Existential risks recur many times throughout history, meaning they can be presented as a per-century risk repeated each century or as a cumulative risk of occurring during a total time interval (e.g., the cumulative risk of extinction before Earth becomes less habitable). Mistake 1: Expected value calculations of existential risk mitigation reduce the cumulative risk, not the per-century risk. Thorstad identifies two problems with this choice.

If humans live a long time, small reductions in cumulative risk require astronomical reductions in per-century risk. This is because the chance we survive for the total time interval in question depends on cumulative risk, and our cumulative survival chance must exceed our reduction in cumulative risk.

Reducing cumulative risks with our actions today requires changing the risk for many, many centuries to come. So, even if we can substantially shift the risk of extinction this century or even nearby ones, we'll likely have a hard time doing so for existential risk a thousand or million centuries from now.

For instance, if we want to create a meager one-in-a-hundred-million absolute reduction[3] in existential risk before Earth becomes less habitable,[4] the per-century risk must be nearly one-in-a-million or lower.[5] Many longtermists estimate this century's existential risk to be ~15-20% or higher,[6] in which case we'd need to drive the per-century risk down a hundred thousand times. Hence, many expected value calculations of existential risk mitigation demand vastly greater reductions in per-century risk than they initially seem to.

Mistake 2: Background Risk

Millett and Snyder-Beattie (MSB) offer one of the most cited papers discussing biorisk - biological extinction risk - featuring a favorable cost-effectiveness estimate. While Thorstad believes many complaints about MSB's model exist, he raises two. Mistake 2: Existential risk mitigation calculations (including MSB's model) ignore background risk. In MSB's model, the background risk is the risk of extinction from all non-biological sources. But, modifying this model to include background risk changes the estimated cost-effectiveness considerably.
Without background risk, a 1% relative reduction in biorisk has a meaningful impact on per-century risk: it discounts per-century risk by 1%. But, when you include non-biological background risk, the same reduction in biorisk changes the per-century risk far less: per-century risk becomes the discounted biorisk plus the full background risk. Since many longtermists believe the per-century risk is very high (~15-20% or higher)[6] and thus much greater than biorisk, this substantially reduces biorisk mitigation's estimated cost-effectiveness: For reference, GiveWell e...
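To make the moral mathematics concrete, here is a minimal Python sketch of both points. The habitability window and the 20%/1% risk split are our illustrative assumptions, not Thorstad's or MSB's exact parameters:

```python
# Mistake 1: a reduction in cumulative risk can never exceed the chance of
# surviving the whole interval, which the per-century risk drives down fast.
def survival_chance(per_century_risk: float, centuries: int) -> float:
    return (1 - per_century_risk) ** centuries

centuries = 10_000_000  # assumed habitable window of ~a billion years
for r in (1e-5, 1e-6, 2e-6):
    print(r, survival_chance(r, centuries))
# r=1e-5 -> ~4e-44; r=1e-6 -> ~4.5e-5; r=2e-6 -> ~2e-9
# Only around r ~ 1e-6 does survival stay above the desired 1e-8 cumulative
# reduction, matching the "nearly one-in-a-million or lower" figure.

# Mistake 2: a 1% relative cut to biorisk barely moves total per-century risk
# once non-biological background risk is included.
biorisk, background = 0.01, 0.19       # illustrative: ~20% total per-century risk
print(biorisk + background)            # 0.20 before the intervention
print(0.99 * biorisk + background)     # 0.1999 after: a 0.05% relative drop, not 1%
```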
Apr 10, 2024 • 3min

EA - I have started a local community in Zambia for students, graduates, and professionals based on the principles outlined in the 80,000 Hours book. In this thread, I will keep you updated on its progress. by Muloongo Stella Mwanahamuntu

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I have started a local community in Zambia for students, graduates, and professionals based on the principles outlined in the 80,000 Hours book. In this thread, I will keep you updated on its progress., published by Muloongo Stella Mwanahamuntu on April 10, 2024 on The Effective Altruism Forum. Geography has much to do with your economic status, including access to knowledge and opportunities. I am from Zambia and was privileged to stumble upon the Effective Altruism community, which aligns with SOME of my values and the kind of life I want to live. There are a lot of conversations I feel I can't participate in on this forum because of my geography and limited work/academic experience/network so far - I probably will someday. I am so motivated to do this because I never had any career guidance at any point in my life. After five years of working as a product design consultant, co-founding some businesses, and earning a master's degree, I still haven't resolved what I want to do with my career, so I am very happy to be building an EA community in my country while I engage with the 80,000 Hours book online and in person. I am also being mentored by Felix Lee, co-founder of ADPList, which is great because I can learn from his experiences going from being a product designer to his mission to solve many shortcomings of the current education system that hinder the choice to pursue potential career paths.

Some goals that I have and will track in this thread for accountability:
Grow the page's following on Instagram and TikTok to 1000.
Facilitate monthly online meetings where followers meet local professionals working in EA cause areas.
Connect with companies like Udemy, DataCamp, and Coursera to provide scholarships to the community, because most of us can't afford the few hundred dollars to pay for those courses.
THIS IS A BIG ONE - Organise one career fair for students and graduates.

On a more personal note, I look forward to becoming an effective communicator in the EA community. I also have my fingers crossed that I will become a human-centered design lecturer at a university in Zambia. Picture Credit: https://www.instagram.com/chamwa_tells_stories/ Check out the pages: https://www.instagram.com/star.careers/ https://web.facebook.com/star.careerszm/ https://www.tiktok.com/@star.careerszm Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 10, 2024 • 53min

LW - RTFB: On the New Proposed CAIP AI Bill by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RTFB: On the New Proposed CAIP AI Bill, published by Zvi on April 10, 2024 on LessWrong.

A New Bill Offer Has Arrived

Center for AI Policy proposes a concrete actual model bill for us to look at. Here was their announcement: WASHINGTON - April 9, 2024 - To ensure a future where artificial intelligence (AI) is safe for society, the Center for AI Policy (CAIP) today announced its proposal for the "Responsible Advanced Artificial Intelligence Act of 2024." This sweeping model legislation establishes a comprehensive framework for regulating advanced AI systems, championing public safety, and fostering technological innovation with a strong sense of ethical responsibility. "This model legislation is creating a safety net for the digital age," said Jason Green-Lowe, Executive Director of CAIP, "to ensure that exciting advancements in AI are not overwhelmed by the risks they pose." The "Responsible Advanced Artificial Intelligence Act of 2024" is model legislation that contains provisions for requiring that AI be developed safely, as well as requirements on permitting, hardware monitoring, civil liability reform, the formation of a dedicated federal government office, and instructions for emergency powers. The key provisions of the model legislation include:
1. Establishment of the Frontier Artificial Intelligence Systems Administration to regulate AI systems posing potential risks.
2. Definitions of critical terms such as "frontier AI system," "general-purpose AI," and risk classification levels.
3. Provisions for hardware monitoring, analysis, and reporting of AI systems.
4. Civil + criminal liability measures for non-compliance or misuse of AI systems.
5. Emergency powers for the administration to address imminent AI threats.
6. Whistleblower protection measures for reporting concerns or violations.
The model legislation intends to provide a regulatory framework for the responsible development and deployment of advanced AI systems, mitigating potential risks to public safety, national security, and ethical considerations. "As leading AI developers have acknowledged, private AI companies lack the right incentives to address this risk fully," said Jason Green-Lowe, Executive Director of CAIP. "Therefore, for advanced AI development to be safe, federal legislation must be passed to monitor and regulate the use of the modern capabilities of frontier AI and, where necessary, the government must be prepared to intervene rapidly in an AI-related emergency." Green-Lowe envisions a world where "AI is safe enough that we can enjoy its benefits without undermining humanity's future." The model legislation will mitigate potential risks while fostering an environment where technological innovation can flourish without compromising national security, public safety, or ethical standards. "CAIP is committed to collaborating with responsible stakeholders to develop effective legislation that governs the development and deployment of advanced AI systems. Our door is open." I discovered this via Cato's Will Duffield, whose statement was: Will Duffield: I know these AI folks are pretty new to policy, but this proposal is an outlandish, unprecedented, and abjectly unconstitutional system of prior restraint. To which my response was essentially: I bet he's from Cato or Reason. Yep, Cato. Sir, this is a Wendy's. Wolf.
We need people who will warn us when bills are unconstitutional, unworkable, unreasonable or simply deeply unwise, and who are well calibrated in their judgment and their speech on these questions. I want someone who will tell me 'Bill 1001 is unconstitutional and would get laughed out of court, Bill 1002 is of questionable constitutional muster in practice and unconstitutional in theory, we would throw out Bill 1003 but it will stand up these days because SCOTUS thinks the commerc...
Apr 10, 2024 • 33min

EA - Antimicrobial Surfaces For Pandemic Prevention? by Conrad K.

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Antimicrobial Surfaces For Pandemic Prevention?, published by Conrad K. on April 10, 2024 on The Effective Altruism Forum. The next big research agenda in biosecurity? Or a flawed technology dead on arrival? Spoiler: it's somewhere in between.

Summary

I spent ~15-20 hours conducting research, holding expert interviews, and thinking through whether we should be excited about antimicrobial surfaces for pandemic prevention and, if so, what the next steps for this technology should be. Overall, I'm reasonably confident (~75%-80%) about the following, but note it is highly oversimplified:

Antimicrobial surfaces inhibit the growth and spread of microorganisms. They can be categorised based on functional mechanism (antifouling, biocidal, hybrid), mode of action (chemically-functionalised, physical, biologically-functionalised, or composite), spectrum of effectiveness (targeted or broad-spectrum), and application (antimicrobial materials, pre-coating, or applied).
Research shows that antimicrobial surfaces can effectively kill microbes in the right conditions, but many unknowns remain regarding their efficacy outside lab settings, interactions between pathogens/surfaces/fomites, and use for mitigating pandemics.
Evidence is limited on antimicrobial surfaces preventing infections. Studies often have design issues, and knowledge gaps remain regarding transmission mechanisms.
Little is established about the effectiveness of antimicrobial surfaces for mitigating pandemics. More research is needed on pathogen transmission routes, surface use cases, and cost-effectiveness.
Downsides include surface degradation over time, potential toxicity, inducing antimicrobial resistance, regulatory barriers, and high costs for novel technologies.
Antimicrobial surfaces are beginning to see more real-world use, with potential applications beyond infection control. The market for antimicrobial surfaces is growing.
Key open questions remain about the fundamentals of fomite transmission, interactions between pathogens and surfaces, surface degradation, accessibility, supply chain considerations, and ideal use cases.
Some reasons to be excited include reducing fomite transmission, continual action without reapplication, and multipurpose benefits. However, high costs, regulatory hurdles, concerns about antimicrobial resistance, and many research uncertainties are reasons for caution.

Overall, I lean towards it being worthwhile to resolve foundational questions about antimicrobial surfaces through (i) further testing and modelling; (ii) producing an ontology of surfaces; and (iii) conducting a detailed scoping of potential use cases for pandemic prevention. Oxford Biosecurity Group will soon be running a project exploring producing an ontology of antimicrobial surfaces, and we're looking for a co-lead for this project. Reach out to us at contact@oxfordbiosecuritygroup.com if you're interested.

What Are Antimicrobial Surfaces?

Antimicrobial surfaces are surfaces that are designed to inhibit the growth and spread of microorganisms, including bacteria, viruses, fungi, and algae. One way they can be categorised is as follows:

Functional Mechanism
Antifouling: surfaces that prevent microbial attachment and growth. E.g. superwettable surfaces such as superhydrophobic modified aluminium or mussel-inspired superhydrophilic surfaces that work by either preventing microbial adhesion (hydrophobic) or creating a water barrier between microbes and the surfaces (hydrophilic).
Biocidal: surfaces that actively kill or inhibit microbes. E.g. metals such as copper, which release ions that kill microbes.
Hybrid: surfaces combining both antifouling and biocidal properties. E.g. honeycomb-like patterned surfaces which trap bacteria, both preventing further growth and killing them.

Mode of Action
Chemically-functionalised: u...
Apr 10, 2024 • 34min

AF - How I select alignment research projects by Ethan Perez

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I select alignment research projects, published by Ethan Perez on April 10, 2024 on The AI Alignment Forum. YouTube Video. Recently, I was interviewed by Henry Sleight and Mikita Balesni about how I select alignment research projects. Below is the slightly cleaned-up transcript for the YouTube video.

Introductions

Henry Sleight: How about you two introduce yourselves?

Ethan Perez: I'm Ethan. I'm a researcher at Anthropic and do a lot of external collaborations with other people, via the Astra Fellowship and SERI MATS. Currently my team is working on adversarial robustness, and we recently did the sleeper agents paper. So, basically looking at whether we can use RLHF or adversarial training or current state-of-the-art alignment safety training techniques to train away bad behavior. And we found that in some cases, the answer is no: that they don't train away hidden goals or backdoor behavior in models. That was a lot of my focus in the past six to twelve months.

Mikita Balesni: Hey, I'm Mikita. I work at Apollo. I'm a researcher doing evals for scheming. So trying to look for whether models can plan to do something bad later. Right now, I'm in Constellation for a month where I'm trying to collaborate with others to come up with ideas for next projects and what Apollo should do.

Henry Sleight: I'm Henry. I guess in theory I'm the glue between you two, but you also already know each other, so this is in some ways pointless. But I'm one of Ethan's Astra fellows working on adversarial robustness. Currently, our project is trying to come up with a good fine-tuning recipe for robustness. Currently working on API models for a sprint, then we'll move on to open models probably.

How Ethan Selects Research Projects

Henry Sleight: So I guess the topic for us to talk about today, that we've agreed on beforehand, is "how to select what research project you do?" What are the considerations, what does that process look like? And the rough remit of this conversation is that Ethan and Mikita presumably have good knowledge transfer to be doing, and I hope to make that go better. Great. Let's go. Mikita, where do you want to start?

Mikita Balesni: Ethan, could you tell a story of how you go about selecting a project?

Top-down vs Bottom-up Approach

Ethan Perez: In general, I think there's two modes for how I pick projects. So one would be thinking about a problem that I want to solve and then thinking about an approach that would make progress on the problem. So that's the top-down approach, and then there's a bottom-up approach, which is [thinking]: "seems like this technique or this thing is working, or there's something interesting here." And then following my nose on that. That's a bit results driven, where it seems like: I think a thing might work, I have some high-level idea of how it relates to the top-level motivation, but haven't super fleshed it out. But it seems like there's a lot of low-hanging fruit to pick. And then just pushing that and then maybe in parallel or after thinking through "what problem is this going to be useful for?"

Mikita Balesni: So at what point do you think about the theory of change for this?

Ethan Perez: For some projects it will be... I think just through, during the project. I mean, often the iteration cycle is within the course of a day or something.
So it's not necessarily a huge loss if it ends up that the direction isn't that important; sometimes it's a couple of hours or just a few messages to a model. Sometimes it's just helpful to have some empirical evidence to guide a conversation, if you're trying to pick someone's brain about the importance of a direction. You might think that it's difficult in some ways to evaluate whether models know they're being trained or tested, which is pretty relevant for various a...
Apr 9, 2024 • 21min

LW - Ophiology (or, how the Mamba architecture works) by Danielle Ensign

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ophiology (or, how the Mamba architecture works), published by Danielle Ensign on April 9, 2024 on LessWrong. The following post was made as part of Danielle's MATS work on doing circuit-based mech interp on Mamba, mentored by Adrià Garriga-Alonso. It's the first in a sequence of posts about finding an IOI circuit in Mamba/applying ACDC to Mamba. This introductory post was also made in collaboration with Gonçalo Paulo.

A new challenger arrives!

Why Mamba?

Promising Scaling
Mamba [1] is a type of recurrent neural network based on state-space models, and is being proposed as an alternative architecture to transformers. It is the result of years of capability research [2] [3] [4] and likely not the final iteration of architectures based on state-space models. In its current form, Mamba has been scaled up to 2.8B parameters on The Pile and on SlimPajama, having similar scaling laws when compared to Llama-like architectures.

[Figure: Scaling curves from the Mamba paper: Mamba scaling compared to Llama (Transformer++), previous state-space models (S3++), convolutions (Hyena), and a transformer-inspired RNN (RWKV).]

More recently, AI21 Labs [5] trained a 52B parameter MoE Mamba-Transformer hybrid called Jamba. At inference, this model has 12B active parameters and has benchmark scores comparable to Llama-2 70B and Mixtral.

[Figure: Jamba benchmark scores, from the Jamba paper. [5:1]]

Efficient Inference
One advantage of RNNs, and in particular of Mamba, is that the memory required to store the context is constant, as you only need to store the past state of the SSM and of the convolution layers, while it grows linearly for transformers. The same happens with the generation time, where predicting each token scales as O(1) instead of O(context length).

[Figure: Jamba throughput (tokens/second), from the Jamba paper. [5:2]]

What are state-space models?
The inspiration for Mamba (and similar models) is an established technique used in control theory called state-space models (SSMs). SSMs are normally used to represent linear systems that have p inputs, q outputs and n state variables. To keep the notation concise, we will consider an E-dimensional input x(t) ∈ R^E, an E-dimensional output y(t) ∈ R^E and an N-dimensional latent state h ∈ R^N. In the following, we will note the dimensions of new variables using the notation [X,Y]. In particular, in Mamba 2.8b, E=5120 and N=16. Specifically, we have the following:

h'(t) = A h(t) + B x(t)
y(t) = C h(t) + D x(t)

where A has dimensions [N,N], B has dimensions [N,E], C has dimensions [E,N], and D has dimensions [E,E]. This is an ordinary differential equation (ODE), where h'(t) is the derivative of h(t) with respect to time, t. This ODE can be solved in various ways, which will be described below. In state-space models, A is called the state matrix, B is called the input matrix, C is called the output matrix, and D is called the feedthrough matrix.

Solving the ODE
We can write the ODE from above as a recurrence, using discrete timesteps:

h_t = Ā h_{t-1} + B̄ x_t
y_t = C h_t + D x_t

where Ā and B̄ are our discretization matrices. Different ways of integrating the original ODE will give different Ā and B̄, but will still preserve this overall form. In the above, t corresponds to discrete time. In language modeling, t refers to the token position.
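To make the discrete recurrence concrete, here is a minimal NumPy sketch of a (non-selective) SSM layer with toy dimensions; the matrices here are random placeholders rather than Mamba's learned, input-dependent parameters:

```python
import numpy as np

N, E, T = 16, 8, 100           # state size, embedding size, sequence length
A_bar = np.eye(N) * 0.9        # toy discretized state matrix, shape [N, N]
B_bar = np.random.randn(N, E)  # toy discretized input matrix, shape [N, E]
C = np.random.randn(E, N)      # output matrix, shape [E, N]
D = np.random.randn(E, E)      # feedthrough matrix, shape [E, E]

x = np.random.randn(T, E)      # input sequence, shape [T, E]
h = np.zeros(N)                # recurrent state
ys = []
for t in range(T):
    h = A_bar @ h + B_bar @ x[t]   # h_t = Ā h_{t-1} + B̄ x_t
    ys.append(C @ h + D @ x[t])    # y_t = C h_t + D x_t
y = np.stack(ys)               # output sequence, shape [T, E]
```

Note that the loop carries only the fixed-size state h, which is exactly why memory stays constant in the sequence length and per-token generation is O(1).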
Euler method
The simplest way to numerically integrate an ODE is by using the Euler method, which consists in approximating the derivative by considering the ratio between a small variation in h and a small variation in time, h' = dh/dt ≈ Δh/Δt. This allows us to write:

(h_{t+1} - h_t)/Δt = A h_t + B x_t
h_{t+1} = Δt (A h_t + B x_t) + h_t

where the index t of h_t represents the discretized time. This is the same thing that is done when considering a character's position and velocity in a video game, for instance. If a character has a velocity v and a position x_0, to find the position after Δt time we can do x_1 = Δt v + x_0. In general:

x_t = Δt v_t + x_{t-1}
x_t = (...
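Written as code, the Euler scheme above gives the discretization matrices directly (a sketch for intuition only; the Mamba paper itself uses a zero-order-hold discretization rather than Euler):

```python
import numpy as np

def euler_discretize(A: np.ndarray, B: np.ndarray, dt: float):
    # From h_{t+1} = dt*(A h_t + B x_t) + h_t = (I + dt*A) h_t + (dt*B) x_t,
    # the Euler discretization matrices are A_bar = I + dt*A and B_bar = dt*B.
    A_bar = np.eye(A.shape[0]) + dt * A
    B_bar = dt * B
    return A_bar, B_bar
```

These A_bar and B_bar can be plugged straight into the recurrence sketched in the previous section.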
Apr 9, 2024 • 4min

LW - Conflict in Posthuman Literature by Martín Soto

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conflict in Posthuman Literature, published by Martín Soto on April 9, 2024 on LessWrong. Grant Snider created this comic (which became a meme). Richard Ngo extended it into posthuman = transhumanist literature. That's cool, but I'd have gone for different categories myself.[1] Here they are together with their explanations.

Top: Man vs Agency
(Other names: Superintelligence, Singularity, Self-improving technology, Embodied consequentialism.)
Because Nature creates Society creates Technology creates Agency. At each step Man becomes less in control, due to his increased computational boundedness relative to the other.

Middle: Man vs Realities
(Other names: Simulation, Partial existence, Solomonoff prior, Math.)
Because Man vs Self is the result of dissolving holistic individualism (no subagents in conflict) from Man vs Man. Man vs Reality is the result of dissolving the Self boundary altogether from Man vs Self. Man vs Realities is the result of dissolving the binary boundary between existence and non-existence from Man vs Reality. Or equivalently, the boundary between different physical instantiations of you (noticing you are your mathematical algorithm). At each step a personal identity boundary previously perceived as sharp is dissolved.[2]

Bottom: Man vs No Author
(Other names: Dust theory, Groundlessness, Meaninglessness, Relativism, Extreme functionalism, Philosophical ill-definedness, Complete breakdown of abstractions and idealizations.)
Because Man vs God thinks "the existence of idealization (=Platonic realm=ultimate meaning=unstoppable force)" is True. This corresponds to philosophical idealism. Man vs No God notices "the existence of idealization" is False. And scorns Man vs God's wishful beliefs. This corresponds to philosophical materialism. Man vs Author notices "the existence of idealization" is not a well-defined question (doesn't have a truth value). And voices this realization, scorning the still-idealistic undertone of Man vs No God, by presenting itself as mock-idealization (Author) inside the shaky boundaries (breaking the fourth wall) of a non-idealized medium (literature, language). This corresponds to the Vienna circle, Quine's Web of Belief, Carnap's attempt at metaphysical collapse and absolute language, an absolute and pragmatic grounding for sensorial reality. Man vs No Author notices that the realization of Man vs Author cannot really be expressed in any language, cannot be voiced, and we must remain silent. It notices there never was any "noticing". One might hypothesize it would scorn Man vs Author if it could, but it has no voice to do so. It is cessation of conflict, breakdown of literature. This corresponds to early Wittgenstein, or Rorty's Pan-Relationalism. At each step the implicit philosophical presumptions of the previous paradigm are revealed untenable.

The vertical gradient is also nice: The first row presents ever-more-advanced macroscopic events in reality, derived through physics as causal consequences. The second row presents ever-more-general realizations about our nature, derived through maths as acausal influence our actions have in reality.[3] The third row presents ever-more-destructive collapses of the implicit theoretical edifice we use to relate our nature with reality, derived through philosophy as different static impossibilities.
^ If I had to critique Richard's additions: Man vs Physics seems too literal (in sci-fi stories the only remaining obstacle is optimizing physics), and not a natural extension of the literary evolution in that row. Man vs Agency doesn't seem to me to capture the dance of boundaries that seems most interesting in that row. Man vs Simulator seems again a too literal translation of Man vs Author (changing the flavor of the setting rather than the underlying idea). ^ To see the Man vs Man t...
Apr 9, 2024 • 25min

LW - Medical Roundup #2 by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Medical Roundup #2, published by Zvi on April 9, 2024 on LessWrong. Previously: #1. It feels so long ago that Covid and health were my beat, and what everyone often thought about all day, rather than AI. Yet the beat goes on. With Scott Alexander at long last giving us what I expect to be effectively the semi-final words on the Rootclaim debate, it seemed time to do this again.

Bad News

I know no methodical way to find a good, let alone great, therapist. Cate Hall: One reason it's so hard to find a good therapist is that all the elite ones market themselves as coaches. As a commenter points out, therapists who can't make it also market as coaches or similar, so even if Cate's claim is true, it is still tough. My actual impression is that the elite therapists largely do not market themselves at all. They instead work on referrals and reputation. So you have to know someone who knows. They used to market, then they filled up and did not have to, so they stopped. Even if they do some marketing, seeing the marketing copy won't easily differentiate them from other therapists. There are many reasons why our usual internet approach of reviews is mostly useless here. Even with AI, I am guessing we currently lack enough data to give you good recommendations from feedback alone.

Good News, Everyone

American life expectancy rising again, was 77.5 years (+1.1) in 2022. Bryan Johnson, whose slogan is 'Don't Die,' continues his quest for eternal youth, seen here trying to restore his joints. Mike Solana interviews Bryan Johnson about his efforts here more generally. The plan is to not die via two hours of being studied every day, what he finds is the ideal diet, exercise and sleep, and other techniques and therapies including bursts of light and a few supplements. I wish this man the best of luck. I hope he finds the answers and does not die, and that this helps the rest of us also not die. Alas, I am not expecting much. His concept of 'rate of aging' does not strike me as how any of this is likely to work, nor does addressing joint health seem likely to much extend life or generalize. His techniques do not target any of the terminal aging issues. A lot of it seems clearly aimed at being healthy now, feeling and looking younger now. Which is great, but I do not expect it to buy much in the longer term. Also one must note that the accusations in the responses to the above-linked thread about his personal actions are not great. But I would not let that sully his efforts to not die or help others not die. I can't help but notice the parallel to AI safety. I see Johnson as doing lots of mundane health work, to make himself healthier now. Which is great, although if that's all it is then the full routine is obviously a bit much. Most people should do more of such things. The problem is that Johnson is expecting this to translate into defeating aging, which I very much do not expect. Gene therapy cures first case of congenital deafness. Woo-hoo! Imagine what else we could do with gene therapies if we were 'ethically' allowed to do so. It is a sign of the times that I expected much reaction to this to be hostile both on the 'how dare you mess with genetics' front and also the 'how dare you make someone not deaf' front.

The Battle of the Bulge

A 'vaccine-like' version of Wegovy is on the drawing board at Novo Nordisk (Stat+).
If you are convinced you need this permanently, it would be a lot cheaper and easier in this form, but this is the kind of thing you want to be able to reverse, especially as technology improves. Consider as a parallel: an IUD is great technology, but would be much worse if you could not later remove it. The battle can be won, also Tracy Morgan really was playing Tracy Morgan when he played Tracy Morgan. Page Six: Tracy Morgan says he 'gained 40 pounds' on weight-loss drugs: I ...
Apr 9, 2024 • 3min

EA - Sharing Reality with Walt Whitman [Video] by michel

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sharing Reality with Walt Whitman [Video], published by michel on April 9, 2024 on The Effective Altruism Forum. In 1860, Walt Whitman addressed future generations with his poem "Crossing Brooklyn Ferry". On the shores of Brooklyn, he feels the same reality as "men and women of a generation, or ever so many generations hence," and he knows it:

[...] I am with you,
Just as you feel when you look on the river and sky, so I felt,
Just as any of you is one of a living crowd, I was one of a crowd,
Just as you are refresh'd by the gladness of the river and the bright flow, I was refresh'd,
What thought you have of me now, I had as much of you - I laid in my stores in advance,
I consider'd long and seriously of you before you were born. [...]

I first heard this poem in Joe Carlsmith's essay "On future people, looking back on 21st century longtermism." I loved it. I happened to be going to New York a few weeks later, and I happen to enjoy making little videos. So, I made a video complementing Walt Whitman's poem with scenes from my Brooklyn visit, 160 years later. If you like this video or the poem, I recommend reading Joe Carlsmith's whole essay. Here's the section where Joe reacts to Walt Whitman's poem, with longtermism and the idea of "shared reality" in mind:

It feels like Whitman is living, and writing, with future people - including, in some sense, myself - very directly in mind. He's saying to his readers: I was alive. You too are alive. We are alive together, with mere time as the distance. I am speaking to you. You are listening to me. I am looking at you. You are looking at me. If the basic longtermist empirical narrative sketched above is correct, and our descendants go on to do profoundly good things on cosmic scales, I have some hope they might feel something like this sense of "shared reality" with longtermists in the centuries following the industrial revolution - as well as with many others, in different ways, throughout human history, who looked to the entire future, and thought of what might be possible. In particular, I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary - that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection."

Thanks to Joe Carlsmith for letting me use his audio, and for writing his essay. Thanks to Lara Thurnherr and Finn Hambley for early feedback on the video. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
