
The Nonlinear Library

Latest episodes

Sep 18, 2024 • 23min

EA - Is "superhuman" AI forecasting BS? Some experiments on the "539" bot from the Centre for AI Safety by titotal

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is "superhuman" AI forecasting BS? Some experiments on the "539" bot from the Centre for AI Safety, published by titotal on September 18, 2024 on The Effective Altruism Forum. Disclaimer: I am a computational physicist, and this investigation is outside of my immediate area of expertise. Feel free to peruse the experiments and take everything I say with appropriate levels of skepticism. Introduction: The Centre for AI Safety is a prominent AI safety research group doing technical AI research as well as regulatory activism. It's headed by Dan Hendrycks, who has a PhD in computer science from Berkeley and some notable contributions to AI research. Last week CAIS released a blog post, entitled "superhuman automated forecasting", announcing a forecasting bot developed by a team including Hendrycks, along with a technical report and a website, "five thirty nine", where users can try out the bot for themselves. The blog post makes several grandiose claims, claiming to rebut Nate Silver's claim that superhuman forecasting is 15-20 years away, and that: Our bot performs better than experienced human forecasters and performs roughly the same as (and sometimes even better than) crowds of experienced forecasters; since crowds are for the most part superhuman, so is FiveThirtyNine. He paired this with a Twitter post, declaring: We've created a demo of an AI that can predict the future at a superhuman level (on par with groups of human forecasters working together). Consequently I think AI forecasters will soon automate most prediction markets. The claim is this: via a chain of prompting, GPT-4o can be harnessed for superhuman prediction. Step 1 is to ask GPT to figure out the most relevant search terms for a forecasting question; those are fed into a web search to yield a number of relevant news articles and extract the information within. The contents of these news articles are then appended to a specially designed prompt which is fed back to GPT-4o. The prompt instructs it to boil down the articles into a list of arguments "for" and "against" the proposition and rate the strength of each, to analyse the results and give an initial numerical estimate, and then do one last sanity check and analysis before yielding a final percentage estimate. How do they know it works? Well, they claim to have run the bot on several Metaculus questions and achieved accuracy greater than both the crowd average and a test using the prompt of a competing model. Importantly, this was a retrodiction: they ran questions from last year, while restricting the bot's access to information since then, and then checked how many of the subsequent results came true. A claim of superhuman forecasting is quite impressive, and should ideally be backed up by impressive evidence. A previous paper trying similar techniques, which yielded less impressive claims, runs to 37 pages and demonstrates the authors doing their best to avoid any potential flaw or pitfall in the process (and I'm still not sure they succeeded). In contrast, the CAIS report is only 4 pages long, lacking pretty much all the relevant information one would need to properly assess the claim. You can read feedback from the Twitter replies, the Manifold question, LessWrong, and the EA Forum, which were all mostly skeptical and negative, bringing up a myriad of problems with the report.
This report united most rationalists and anti-rationalists in skepticism, although I will note that both AI Safety Memes and Kat Woods seemed to accept and spread the claims uncritically. The most important to highlight are these Twitter comments by the author of a much more rigorous paper cited in the report, claiming that the results did not replicate on his side, as well as this critical response by another AI forecasting institute. Some of the concerns: The retrodiction...
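For readers who want a concrete picture of the pipeline described in this excerpt, here is a minimal Python sketch of that style of prompt chain. It is not CAIS's published code: the OpenAI client usage is standard, but search_news is a stub standing in for whatever retrieval step FiveThirtyNine actually uses, and the prompt wording is illustrative only.

```python
# Minimal sketch of the prompt chain described above; not CAIS's actual code.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt to GPT-4o and return the text of the reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def search_news(queries: str) -> str:
    """Hypothetical stand-in for the news-retrieval step; the report does not
    specify the implementation, so plug in a real search/scraping API here."""
    raise NotImplementedError

def forecast(question: str) -> str:
    # Step 1: ask the model for the most relevant search terms for the question.
    queries = ask(f"List the most relevant web search queries for forecasting this question: {question}")
    # Step 2: retrieve relevant news articles for those queries.
    articles = search_news(queries)
    # Step 3: append the article contents to a structured prompt that asks for
    # rated arguments for/against, an initial estimate, a sanity check, and a
    # final percentage.
    prompt = (
        f"Question: {question}\n\nNews articles:\n{articles}\n\n"
        "1. List arguments for and against the proposition and rate the strength of each.\n"
        "2. Analyse the arguments and give an initial numerical estimate.\n"
        "3. Do one last sanity check, then output a final percentage."
    )
    return ask(prompt)
```

The real system is presumably more elaborate at every stage; the point is just that each step is an ordinary LLM call plus retrieval.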
Sep 18, 2024 • 1h 8min

LW - Monthly Roundup #22: September 2024 by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #22: September 2024, published by Zvi on September 18, 2024 on LessWrong. It's that time again for all the sufficiently interesting news that isn't otherwise fit to print, also known as the Monthly Roundup. Bad News Beware the failure mode in strategy and decisions that implicitly assumes competence, or wishes away difficulties, and remember to reverse all advice you hear. Stefan Schubert (quoting Tyler Cowen on raising people's ambitions often being very high value): I think lowering others' aspirations can also be high-return. I know of people who would have had a better life by now if someone could have persuaded them to pursue more realistic plans. Rob Miles: There's a specific failure mode which I don't have a name for, which is similar to "be too ambitious" but is closer to "have an unrealistic plan". The illustrative example I use is: Suppose by some strange circumstance you have to represent your country at Olympic gymnastics next week. One approach is to look at last year's gold, and try to do that routine. This will fail. You'll do better by finding one or two things you can actually do, and doing them well. There's a common failure of rationality which looks like "Figure out what strategy an ideal reasoner would use, then employ that strategy". It's often valuable to think about the optimal policy, but you must understand the difference between knowing the path, and walking the path. I do think that more often 'raise people's ambitions' is the right move, but you need to carry both cards around with you for different people in different situations. Theory that Starlink, by giving people good internet access, ruined Burning Man. Seems highly plausible. One person reported that they managed to leave the internet behind anyway, so they still got the Burning Man experience. Tyler Cowen essentially despairs of reducing regulations or the number of bureaucrats, because it's all embedded in a complex web of regulations and institutions and our businesses rely upon all that to be able to function. Otherwise business would be paralyzed. There are some exceptions; you can perhaps wholesale axe entire departments, like education. He suggests we focus on limiting regulations on new economic areas. He doesn't mention AI, but presumably that's a lot of what's motivating his views there. I agree that 'one does not simply' cut existing regulations in many cases, and that 'fire everyone and then it will all work out' is not a strategy (unless AI replaces them?), but also I think this is the kind of thing that can be the danger of having too much detailed knowledge of all the things that could go wrong. One should generalize the idea of eliminating entire departments. So yes, right now you need the FDA to approve your drug (one of Tyler's examples) but… what if you didn't? I would still expect, if a new President were indeed to do massive firings on rhetoric and hope, that the result would be a giant cluster****. La Guardia switches to listing flights by departure time rather than order of destination, which in my mind makes no sense in the context of flights, which frequently get delayed, where you might want to look for an earlier flight or know what the backups are if yours is cancelled or delayed or you miss it, and so on. It also gives you a sense of where one can and can't actually go, and when, from where you are.
For trains it makes more sense to sort by time, since you are so often not going to the train's final destination and might not even know what it is. I got a surprising amount of pushback about all that on Twitter; some people felt very strongly the other way, as if to list by name was violating some sacred value of accessibility or something. Anti-Social Media Elon Musk provides good data on his followers to help with things like poll calibration, reports 73%-27% lea...
Sep 18, 2024 • 9min

EA - AI Welfare Debate Week retrospective by Toby Tremlett

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Welfare Debate Week retrospective, published by Toby Tremlett on September 18, 2024 on The Effective Altruism Forum. I wrote this retrospective to be shared internally in CEA - but in the spirit of more open communication, I'm sharing it here as well. Note that this is a review of the event considered as a product, not a summary or review of the posts from the week. If you have any questions, or any additional feedback, that'd be appreciated! I'll be running another debate week soon, and feedback has already been very helpful in preparing for it. Also, feedback on the retro itself is appreciated - I'd ideally like to pre-register my retros and just have to fill in the graphs and conclusions once the event actually happens, so suggesting data we should measure/questions I should be asking would be very helpful for making better retro templates. How successful was the event? In my OKRs (Objectives and Key Results - AKA, my goals for the event), I wanted this event to: Have 50 participants, with "participant" being anyone taking an event-related action such as voting, commenting, or posting. We did an order of magnitude better than 50. Over 558 people voted during the week, and 27 authors wrote or co-wrote at least one post. Change people's minds. I wanted the equivalent of 25 people changing their minds by 25% of the debate slider. We did twice as well as I hoped here - 53 unique users made at least one mind change of 0.25 delta (representing 25% of the slider) or more. Therefore, on our explicit goals, this event was successful. But how successful was it based on our other, non-KR goals and hopes? Some other goals that we had for the event - either in the ideation phase, or while it was ongoing - were: Create more good content on a particularly important issue to EAs. Successful. Increase engagement. Seems unsuccessful. Bring in some new users. Not noticeably successful. Increase messaging. Not noticeably successful. In the next four sections, I examine each of these goals in turn. More good content We had 28 posts with the debate week tag, with 7 being at or above 50 karma. Of the 7, all but one (JWS's thoughtful critique of the debate's framing) were from authors I had directly spoken to or messaged about the event. Compared to Draft Amnesty Week (which led to posts from 42 authors, and 10 posts over 50 karma) this isn't that many - however, I think we should count these posts as ex ante more valuable because of their focus on a specific topic. Ex-post, it's hard to assess how valuable the posts were. None of the posts had very high karma (i.e. the highest was 77). However, I did curate one of the posts, and a couple of others were considered for curation. I would be interested to hear takes from readers about how valuable the posts were - did any of them change your mind, lead to a collaboration, or cause you to think more about the topic? Engagement How much engagement did the event get? In total, debate week posts got 127 hours of engagement during the debate week (or 11.6% of total engagement), and 181 hours from July 1-14 (debate week and the week after), 7.5% of that fortnight's engagement hours. Did it increase total daily hours of engagement? Note: Discussion of Manifest controversies happened in June, and led to higher engagement hours per day in the build up to the event.
Important dates: June 17: 244 comments, June 18: 349 comments, June 20: 33 comments, June 25: 38 comments. It doesn't look as if the debate week meaningfully increased daily engagement. The average daily engagement for the week after the event is actually higher, although the 3rd day of the event (July 3rd - the day I mentioned that the event was ongoing in the EA Digest) remains the highest hours of engagement between July 1st and the date I'm writing this, August 21st. Did it get us new us...
Sep 18, 2024 • 5min

EA - Material Innovation Initiative (MII) shuts down by Nate Crosser

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Material Innovation Initiative (MII) shuts down, published by Nate Crosser on September 18, 2024 on The Effective Altruism Forum. The "GFI of vegan materials" is shutting down after operating since 2019. They were an ACE-recommended charity at one point. No rationale is given in the announcement. I asked for more, and will update this post if they respond. Dear Valued Stakeholders, I am writing to you with mixed emotions to share some important news regarding the future of the Material Innovation Initiative (MII). After a thorough evaluation and much deliberation, the board of directors and the executive leadership team have made the difficult decision to wind down MII's operations. While this marks the end of our journey as an organization, we want to take this opportunity to celebrate our many accomplishments and the tremendous growth of the next-gen materials industry, as well as express our gratitude for your unwavering support over the past five years. A Legacy of Impact and Innovation Since our founding in 2019, MII has been at the forefront of transforming the next-gen materials industry. Our mission was clear: to accelerate the development of high-quality, high-performance, animal-free and environmentally preferred next-generation materials. We envisioned a world where the materials used in fashion, automotive, and home goods industries would protect human rights, mitigate climate change, spare animals' lives, and preserve our planet for future generations. Thanks to your support, we have made significant strides towards this vision: Catalyzing Investments: MII has been instrumental in inspiring over $2.31 billion in investments into next-gen materials, including $504 million in 2023 alone. These investments have driven innovation and growth across the sector, enabling the development of materials that meet performance, aesthetic, and sustainability needs at competitive prices. Research and Advocacy: Our pioneering research, such as the U.S. Consumer Research on next-gen materials, revealed that 92% of consumers are likely to purchase next-gen products, highlighting a significant market opportunity. Our State of the Industry reports have been vital resources for innovators, brands, and investors, saving them time and guiding strategic decision-making. Brand Collaborations: We have facilitated groundbreaking partnerships between next-gen material innovators and major brands. In 2023, we saw almost 400 collaborations between influential brands and next-gen material companies, showing the increasing interest from brands to incorporate next-gen materials into their collections. This also illustrates the tremendous potential of next-gen materials to disrupt the fashion, home goods and automotive industries. Global Influence and Advocacy: MII has been appointed to influential roles, such as serving on the New York City Mayor's Office task force to source sustainable materials. Our participation in global events have increased visibility for next-gen materials, reaching audiences across the world and bringing together stakeholders across the value chain to drive collective action. The Evolution of the Industry Since we began our journey in 2019, the landscape of the materials industry has changed dramatically. 
The concept of next-gen materials has gone from a niche idea to a critical component of sustainability strategies for leading global brands. Today, there are 141 companies dedicated to next-gen materials, up from just 102 in 2022, demonstrating the rapid growth and adoption within the industry. This increased innovation has brought down prices, improved quality, and expanded the range of available materials, making them viable alternatives to conventional animal and petrochemical-derived materials. The industry is now well-positioned to continue advancing towa...
Sep 18, 2024 • 35min

EA - Tithing: much more than you wanted to know by Vesa Hautala

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tithing: much more than you wanted to know, published by Vesa Hautala on September 18, 2024 on The Effective Altruism Forum. Summary This post explores the practice of tithing (religiously mandated giving of 10% of income to the church or other recipients) among Christians, including: 1. contemporary beliefs and practices (especially in the US) 2. questions about Biblical interpretation 3. wider theological themes related to Christian giving This piece is mainly written for a Christian audience but should be useful to anyone interested in the topic. Some key points US Protestants usually believe tithing should be practiced (about 70% think it's a Biblical commandment). However, only 4% of US Evangelicals donate 10% or more (I didn't find data for all Protestants, but the number is likely similar), yet 38% of Evangelicals believe they are giving one-tenth or more, so they vastly overestimate their giving (again, no data for all Protestants). There are different opinions on who the tithe can be paid to, with a local church being the most common answer. The Catholic Church does not teach tithing, Orthodox views are mixed, and the Church of England "challenges" its members to give 10%. The Torah has legislation on tithing that seems to command giving 20-30% of agricultural products and animals. In my view no New Testament passage sets a fixed percentage to give or provides exact instructions on how to split donations between the church and other charities. However, the NT has passages that promote radical generosity[1] and encourage significant giving to those in need, which suggests 10% may be too low an anchoring point for many Christians today. Introduction This [Substack] post is an abridged version of the article An In-Depth Look at Tithing published on the EA for Christians website. [Note, I've also included some additional content from the full version and some other small changes to this forum post.] Tithing is a contentious subject. Some Christians preach blessings on tithers and curses for non-tithers. Others used to believe tithing is a binding obligation but now vigorously advocate against it. If there is an obligation to give 10% to the church, this greatly affects the giving options of Christians. This post first discusses contemporary views and practices and then the main Bible passages used in relation to tithing. Finally, I will present some wider theological reflections on tithing and giving. A note on definitions: By "tithing" I mean mandatory giving of 10% of income to the church (or possibly other Christian ministries or other types of charity; there are different views about this). Also, for the sake of transparency, I want to state right at the beginning that I don't personally believe in a binding obligation to donate 10% to one's local church. However, even if you disagree, I believe you will find a lot of this post interesting and helpful for deepening your understanding of the arguments for and against tithing. Contemporary views and practices This section is going to be rather US-centric for a few reasons. The US very likely has the largest religious economy in the world and tithing is a part of the US religious landscape. There is more data available about tithing in the US than, for example, the UK. US Christians also seem to be generally more interested in the tithing question.
US Protestants According to a survey by Lifeway Research, 72% of US Protestant pastors believe tithing is a biblical commandment that applies today. In a similar survey, 77% of churchgoers said the same. People have different ideas about what "tithe" means, but in the survey of pastors, 73% said it's 10% of a person's income (gross or net). The number of people who actually donate 10% or more is much lower, though. The average giving among US adults who attend worship at leas...
Sep 18, 2024 • 25min

LW - Generative ML in chemistry is bottlenecked by synthesis by Abhishaike Mahajan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Generative ML in chemistry is bottlenecked by synthesis, published by Abhishaike Mahajan on September 18, 2024 on LessWrong. Introduction Every single time I design a protein - using ML or otherwise - I am confident that it is capable of being manufactured. I simply reach out to Twist Biosciences, have them create a plasmid that encodes for the amino acids that make up my protein, push that plasmid into a cell, and the cell will pump out the protein I created. Maybe the cell cannot efficiently create the protein. Maybe the protein sucks. Maybe it will fold in weird ways, isn't thermostable, or has some other undesirable characteristic. But the way the protein is created is simple, close-ended, cheap, and almost always possible to do. The same is not true of the rest of chemistry. For now, let's focus purely on small molecules, but this thesis applies even more so across all of chemistry. Of the 10^60 small molecules that are theorized to exist, most are likely extremely challenging to create. Cellular machinery to create arbitrary small molecules doesn't exist like it does for proteins, which are limited to the 20-amino-acid alphabet. While it is fully within the grasp of a team to create millions of de novo proteins, the same is not true for de novo molecules in general (de novo means 'designed from scratch'). Each chemical, for the most part, must go through its own custom design process. Because of this gap in 'ability-to-scale' for all of non-protein chemistry, generative models in chemistry are fundamentally bottlenecked by synthesis. This essay will discuss this more in-depth, starting from the ground up with the basics behind small molecules, why synthesis is hard, how the 'hardness' applies to ML, and two potential fixes. As is usually the case in my Argument posts, I'll also offer a steelman to this whole essay. To be clear, this essay will not present a fundamentally new idea. If anything, it's such an obvious point that I'd imagine nothing I'll write here will be new or interesting to people in the field. But I still think it's worth sketching out the argument for those who aren't familiar with it. What is a small molecule anyway? Typically organic compounds with a molecular weight under 900 daltons. While proteins are simply long chains composed of one-of-20 amino acids, small molecules display a higher degree of complexity. Unlike amino acids, which are limited to carbon, hydrogen, nitrogen, and oxygen, small molecules incorporate a much wider range of elements from across the periodic table. Fluorine, phosphorus, bromine, iodine, boron, chlorine, and sulfur have all found their way into FDA-approved drugs. This elemental variety gives small molecules more chemical flexibility but also makes their design and synthesis more complex. Again, while proteins benefit from a universal 'protein synthesizer' in the form of a ribosome, there is no such parallel amongst small molecules! People are certainly trying to make one, but there seems to be little progress. So, how is synthesis done in practice? For now, every atom, bond, and element of a small molecule must be carefully orchestrated through a grossly complicated, trial-and-error reaction process which often has dozens of separate steps.
The whole process usually also requires adjusting non-chemical parameters, such as the pH, temperature, and pressure of the surrounding medium in which the intermediate steps are done. And, finally, the process must also be efficient; the synthesis process must not only achieve the final desired end-product, but must also do so in a way that minimizes cost, time, and required resources. How hard is that to do? Historically, very hard. Consider erythromycin A, a common antibiotic. Erythromycin was isolated in 1949, a natural metabolic byproduct of Streptomyces erythreus, a soil mi...
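To make the working definition in this excerpt concrete (organic compounds under roughly 900 daltons, drawing on a wide range of elements), here is a small RDKit sketch that checks a molecule's weight and element composition. The 900 Da figure is the rule of thumb quoted above rather than a hard boundary, and aspirin is used only because its SMILES string is short; none of this code comes from the original essay.

```python
# Illustrative check of the "small molecule" rule of thumb quoted above
# (molecular weight under ~900 daltons), using RDKit. Aspirin is just a
# convenient example; substitute any SMILES string of interest.
from rdkit import Chem
from rdkit.Chem import Descriptors

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"  # aspirin
mol = Chem.MolFromSmiles(smiles)

weight = Descriptors.MolWt(mol)                           # molecular weight in daltons
elements = {atom.GetSymbol() for atom in mol.GetAtoms()}  # heavy atoms only; hydrogens are implicit

print(f"Molecular weight: {weight:.1f} Da")
print(f"Elements present: {sorted(elements)}")
print(f"Under the ~900 Da small-molecule cutoff: {weight < 900}")
```

Running it on aspirin gives a weight of about 180 Da, comfortably inside the small-molecule range.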
Sep 18, 2024 • 13min

EA - Sensitive assumptions in longtermist modeling by Owen Murphy

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sensitive assumptions in longtermist modeling, published by Owen Murphy on September 18, 2024 on The Effective Altruism Forum. {Epistemic Status: Repeating critiques from David Thorstad's excellent papers (link, link) and blog, with some additions of my own. The list is not intended to be representative and/or comprehensive for either critiques or rebuttals. Unattributed graphs are my own and more likely to contain errors.} I am someone generally sympathetic to philosophical longtermism and total utilitarianism, but like many effective altruists, I have often been skeptical about the relative value of actual longtermism-inspired interventions. Unfortunately, though, for a long time I was unable to express any specific, legible critiques of longtermism other than a semi-incredulous stare. Luckily, this condition has changed in the last several months since I started reading David Thorstad's excellent blog (and papers) critiquing longtermism.[1] His points cover a wide range of issues, but in this post, I would like to focus on a couple of crucial and plausibly incorrect modeling assumptions Thorstad notes in analyses of existential risk reduction, explain a few more critiques of my own, and cover some relevant counterarguments. Model assumptions noted by Thorstad 1. Baseline risk (blog post) When estimating the value of reducing existential risk, one essential - but non-obvious - component is the 'baseline risk', i.e., the total existential risk, including risks from sources not being intervened on.[2] To understand this, let's start with an equation for the expected life-years E[L] in the future, parameterized by a period existential risk (r), and fill it with respectable values:[3] Now, to understand the importance of baseline risk, let's start by examining an estimated E[L] under different levels of risk (without considering interventions): Here we can observe that the expected life-years in the future drops off substantially as the period existential risk (r) increases and that the decline (slope) is greater for smaller period risks than for larger ones. This finding might not seem especially significant, but if we use this same analysis to estimate the value of reducing period existential risk, we find that the value drops off in exactly the same way as baseline risk increases. Indeed, if we examine the graph above, we can see that differences in baseline risk (0.2% vs. 1.2%) can potentially dominate tenfold (1% vs. 0.1%) differences in absolute period existential risk (r) reduction. Takeaways from this: (1) There's less point in saving the world if it's just going to end anyway. Which is to say that pessimism about existential risk (i.e. higher risk) decreases the value of existential risk reduction because the saved future is riskier and therefore less valuable. (2) Individual existential risks cannot be evaluated in isolation. The value of existential risk reduction in one area (e.g., engineered pathogens) is substantially impacted by all other estimated sources of risk (e.g. asteroids, nuclear war, etc.). It is also potentially affected by any unknown risks, which seems especially concerning. 2. Future Population (blog post) When calculating the benefits of reduced existential risk, another key parameter choice is the estimate of future population size. 
In our model above, we used a superficially conservative estimate of 10 billion for the total future population every century. This might seem like a reasonable baseline given that the current global population is approximately 8 billion, but once we account for current and projected declines in global fertility, this assumption shifts from appearing conservative to appearing optimistic. United Nations modeling currently projects that global fertility will fall below replacement rate around 2050 and continue d...
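Since the equations and graphs from the original post do not survive in this transcript, the following is a rough Python sketch of the kind of model being discussed: expected future life-years under a constant population per century and a constant per-century existential risk r. The 100,000-century horizon and the exact functional form are assumptions made here for illustration; the post and Thorstad's papers may parameterize things differently.

```python
# Rough sketch of the expected-life-years model discussed above, assuming a
# constant population per century and a constant per-century existential risk r.
# The 100,000-century horizon is an arbitrary illustrative cutoff.

def expected_life_years(r: float, n_per_century: float = 10e9, centuries: int = 100_000) -> float:
    """E[L]: sum over future centuries of population times survival probability."""
    return sum(n_per_century * (1 - r) ** t for t in range(1, centuries + 1))

def value_of_reduction(r: float, reduction: float) -> float:
    """Extra expected life-years gained by lowering the period risk from r to r - reduction."""
    return expected_life_years(r - reduction) - expected_life_years(r)

# Baseline risk can swamp a tenfold difference in absolute risk reduction:
print(f"{value_of_reduction(r=0.012, reduction=0.01):.2e}")   # 1.2% baseline, 1-point reduction
print(f"{value_of_reduction(r=0.002, reduction=0.001):.2e}")  # 0.2% baseline, 0.1-point reduction
```

In this toy model the 0.1-point reduction at the lower baseline yields more expected life-years than the 1-point reduction at the higher baseline, which is the pattern the post describes.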
Sep 18, 2024 • 11min

LW - Skills from a year of Purposeful Rationality Practice by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Skills from a year of Purposeful Rationality Practice, published by Raemon on September 18, 2024 on LessWrong. A year ago, I started trying to deliberately practice skills that would "help people figure out the answers to confusing, important questions." I experimented with Thinking Physics questions, GPQA questions, Puzzle Games, Strategy Games, and a stupid twitchy reflex game I had struggled to beat for 8 years[1]. Then I went back to my day job and tried figuring stuff out there too. The most important skill I was trying to learn was Metastrategic Brainstorming - the skill of looking at a confusing, hopeless situation, and nonetheless brainstorming useful ways to get traction or avoid wasted motion. Normally, when you want to get good at something, it's great to stand on the shoulders of giants and copy all the existing techniques. But this is challenging if you're trying to solve important, confusing problems because there probably isn't (much) established wisdom on how to solve it. You may need to discover techniques that haven't been invented yet, or synthesize multiple approaches that haven't previously been combined. At the very least, you may need to find an existing technique buried in the internet somewhere, which hasn't been linked to your problem with easy-to-search keywords, without anyone to help you. In the process of doing this, I found a few skills that came up over and over again. I didn't invent the following skills, but I feel like I "won" them in some sense via a painstaking "throw myself into the deep end" method. I feel slightly wary of publishing them in a list here, because I think it was useful to me to have to figure out for myself that they were the right tool for the job. And they seem like kinda useful "entry level" techniques, that you're more likely to successfully discover for yourself. But, I think this is hard enough, and forcing people to discover everything for themselves seems unlikely to be worth it. The skills that seemed most general, in both practice and on my day job, are: 1. Taking breaks/naps 2. Working Memory facility 3. Patience 4. Knowing what confusion/deconfusion feels like 5. Actually Fucking Backchain 6. Asking "what is my goal?" 7. Having multiple plans There were other skills I already was tracking, like Noticing, or Focusing. There were also somewhat more classic "How to Solve It" style tools for breaking down problems. There are also a host of skills I need when translating this all into my day-job, like "setting reminders for myself" and "negotiating with coworkers." But the skills listed above feel like they stood out in some way as particularly general, and particularly relevant for "solve confusing problems." Taking breaks, or naps Difficult intellectual labor is exhausting. During the two weeks I was working on solving Thinking Physics problems, I worked for like 5 hours a day and then was completely fucked up in the evenings. Other researchers I've talked to report similar things. During my workshops, one of the most useful things I recommended to people was "actually go take a nap. If you don't think you can take a real nap because you can't sleep, go into a pitch black room and lie down for a while, and the worst case scenario is your brain will mull over the problem in a somewhat more spacious/relaxed way for a while."
Practical tips: Get yourself a sleeping mask, noise machine (I prefer a fan or air purifier), and access to a nearby space where you can rest. Leave your devices outside the room. Working Memory facility Often a topic feels overwhelming. This is often because it's just too complicated to grasp with your raw working memory. But, there are various tools (paper, spreadsheets, larger monitors, etc) that can improve this. And, you can develop the skill of noticing "okay this isn't fitting in my he...
Sep 17, 2024 • 1h 13min

EA - The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft) by Devin Kalish

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft), published by Devin Kalish on September 17, 2024 on The Effective Altruism Forum. What follows is a lightly edited version of the thesis I wrote for my Bioethics MA program. I'm hoping to do more with this in the future, including seeking publication and/or expanding it into a dissertation or short book. In its current state, I feel like it is in pretty rough shape. I hope it is useful and interesting for people as puzzled by this very niche philosophical worry as me, but I'm also looking for feedback on how I can improve it. There's no guarantee I will take it, or even do anything further with this piece, but I would still appreciate the feedback. I may or may not interact much in the comments section. I. Introduction: Duration is an essential component of many theories of wellbeing. While there are theories of wellbeing that are sufficiently discretized that time isn't so obviously relevant to them, like achievements, it is hard to deny that time matters to some parts of a moral patient's wellbeing. A five-minute headache is better than an hour-long headache, all else held equal. A love that lasts for decades provides more meaning to a life than one that lasts years or months, all else held equal. The fulfillment of a desire you have had for years matters more than the fulfillment of a desire you have merely had for minutes, all else held equal. However, in our day to day lives we encounter time in two ways, objectively and subjectively. What do we do when the two disagree? This problem reached my attention years ago when I was reflecting on the relationship between my own theoretical leaning, utilitarianism, and the idea of aggregating interests. Aggregation between lives is known for its counterintuitive implications and the rich discourse around this, but I am uncomfortable with aggregation within lives as well. Some of this is because I feel the problems of interpersonal aggregation remain in the intrapersonal case, but there was also a problem I hadn't seen any academic discussion of at the time - objective time seemed to map the objective span of wellbeing if you plot each moment of wellbeing out to aggregate, but it is subjective time we actually care about. Aggregation of these objective moments gives a good explanation of our normal intuitions about time and wellbeing, but it fails to explain our intuitions about time whenever these senses of it come apart. As I will attempt to motivate later, the intuition that it is subjective time that matters is very strong in cases where the two substantially differ. Indeed, although the distinction rarely appears in papers at all, the main way I have seen it brought up (for instance in "The Ethics of Artificial Intelligence[1]" by Nick Bostrom and Eliezer Yudkowsky) is merely to notice there is a difference, and to effectively just state that it is subjective time, of course, that we should care about. I have very rarely run into a treatment dedicated to the "why"; the closest I have seen is the writing of Jason Schukraft[2], with his justification for why it is subjective time that matters for Rethink Priorities' "Moral Weights" project.
His justification is similar to an answer I have heard in some form several times from defenders: we measure other values of consciousness subjectively, such as happiness and suffering, so why shouldn't we measure time subjectively as well? I believe that, without more elaboration, this explanation has the downside that it gives no attention to the idea that time matters because it tells us "how much" of an experience there actually is, and the downside that it seems irrelevant to any theory of wellbeing other than hedonism. It also, crucially, fails to engage with the question of what exactly subje...
Sep 17, 2024 • 11min

LW - I finally got ChatGPT to sound like me by lsusr

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I finally got ChatGPT to sound like me, published by lsusr on September 17, 2024 on LessWrong. I publish lots of words on the Internet. These words are fed into LLMs such as ChatGPT. I have published enough training data that the world's most powerful AIs have a simulation of me inside of them. I have basically uploaded a copy of myself into the cloud. The AI simulation of me gets smarter every time OpenAI and its competitors upgrade their AI. Cool, huh? Who doesn't want a simulation of themselves that thinks way faster and is networked deep into the world's most powerful superintelligences? Ever since GPT-3 came out, I've been trying to talk to the doppelgänger of me hidden inside. There are a few problems. First of all, GPT needs to know who I am with high fidelity. At first it just generated a stereotypical LessWrong blogger. That wasn't interesting to me. I didn't want a clone of a stereotype of my demographic group. I want a clone of me, with all my quirks and weirdness. For this to work, GPT would have to know who I am, i.e. where I exist in its vector space representation of the world. Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website. What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They're willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights. In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system. Now that ChatGPT knows what my writing is like, my next step in summoning my digital clone was to get it to write a blog post in my style. I tried simply asking ChatGPT variants of "write a blog post in the style of lsusr", but that doesn't work. I'm rigorous, unconventional and anti-dogmatic (that's not my subjective self-judgment; the AI said it itself), whereas ChatGPT has a tendency to regress to the mean writer in its training corpus. When I tell ChatGPT to "write like lsusr", it writes a blog post that's 10% me and 90% generic drivel. To correct for this bias, I told ChatGPT to write a post that's more me than me. I tell it to think of "lsusr" in its vector space, multiply that by 10, and then write what 10lsusr would write. I need you to write a post like lsusr, but more lsusr than lsusr. I want it so over-the-top lsusr that it satirises lsusr. Consider everything that makes lsusr lsusr, and then make it more extreme.[1] ChatGPT is not yet a good enough writer that it can ghostwrite for me. But I feel it is good enough that it can satirize me. The rest of this post is AI-generated content. How to Become the Most Rational Person on Earth in 30 Days or Less, Guaranteed Becoming the most rational person on Earth isn't about learning facts or memorizing formulas. It's about transforming how you process reality itself.
Rationality isn't cold, emotionless logic. It's the purest form of alignment between your mind and the universe - a cognitive interface for navigating a world of infinite complexity without being drowned by it. This is a process of radical self-improvement. If followed rigorously, you will emerge transformed, not merely as a person who "thinks better" but as one who perceives better. There will be no going back. Day 1-3: The Crucible of I...
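For anyone who wants to try the same trick, here is a minimal sketch of sending the exaggeration prompt quoted above through the OpenAI Python client. The prompt text comes from the post; the API usage and the "gpt-4o" model name are a reconstruction, since the post describes working in the ChatGPT interface rather than through the API.

```python
# Minimal reconstruction of the "more lsusr than lsusr" prompt described above.
# The post used the ChatGPT web interface; this shows one way to send the same
# instruction through the OpenAI Python client instead.
from openai import OpenAI

client = OpenAI()

exaggeration_prompt = (
    "I need you to write a post like lsusr, but more lsusr than lsusr. "
    "I want it so over-the-top lsusr that it satirises lsusr. "
    "Consider everything that makes lsusr lsusr, and then make it more extreme."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": exaggeration_prompt}],
)
print(response.choices[0].message.content)
```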
