The Nonlinear Library

The Nonlinear Fund
Jul 15, 2024 • 54min

EA - An Epistemic Defense of Rounding Down by Hayley Clatterbuck

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Epistemic Defense of Rounding Down, published by Hayley Clatterbuck on July 15, 2024 on The Effective Altruism Forum. This post is part of WIT's CRAFT sequence. It examines one of the decision theories included in the Portfolio Builder Tool.

Executive summary

Expected value maximization (EVM) leads to problems of fanaticism, recommending that you ought to take gambles on actions that have very low probabilities of success if the potential outcomes would be extremely valuable. This has motivated some to adopt alternative decision procedures. One common method for moderating the fanatical effects of EVM is to ignore very low probability outcomes, rounding them down to 0. Then, one maximizes EV across the remaining set of sufficiently probable outcomes.

We can distinguish between two types of low probabilities that could be candidates for rounding down. A decision-theoretic defense of rounding down states that we should (or are permitted to) round down low objective chances. An epistemic defense states that we should (or are permitted to) round down low subjective credences that reflect uncertainty about how the world really is.

Rounding down faces four key objections:
The choice of a threshold for rounding down (i.e., how low a probability must be before we round it to 0) is arbitrary.
It implies that normative principles change at some probability threshold, which is implausible.
It ignores important outcomes and thus leads to bad decisions.
It either gives no or bad advice about how to make decisions among options under the threshold.

Epistemic rounding down fares much better with respect to these four objections than does decision-theoretic rounding down. The resolution or specificity of our evidence constrains our ability to distinguish between probabilistic hypotheses. Our evidence does not typically have enough resolution to give us determinate probabilities for very improbable outcomes. In such cases, we sometimes have good reasons for rounding them down to 0.

1. Intro

Expected value maximization is the most prominent and well-defended theory about how to make decisions under uncertainty. However, it famously leads to problems of fanaticism: it recommends pursuing actions that have extremely small probabilities of success when the payoffs, if successful, would be astronomically large. Because many people find these recommendations highly implausible, several solutions have been offered that retain many of the attractive features of EVM but rule out fanatical results.

One solution is to dismiss outcomes that have very low probabilities - in effect, rounding them down to 0 - and then maximizing EV among the remaining set of sufficiently probable outcomes. This "truncated EVM" strategy yields more intuitive results about what one ought to do in paradigm cases where traditional EVM recommends fanaticism. It also retains many of the virtues of EVM, in that it provides a simple and mathematically tractable way of balancing probabilities and value.

However, rounding down faces four key objections.[1] The first two suggest that rounding down will sometimes keep us from making correct decisions, and the second two present problems of arbitrariness:
1. Ignores important outcomes: events that have very low probabilities are sometimes important to consider when making decisions.
2. Disallows decisions under the threshold: every event with a probability below the threshold is ignored. Therefore, rounding down precludes us from making rational decisions about events under the threshold, sometimes leading to violations of Dominance.
3. Normative arbitrariness: rounding down implies that normative principles governing rational behavior change discontinuously at some cut-off of probability. This is unparsimonious and unmotivated.
4. Threshold arbitrariness: the choice of a threshold...
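The "truncated EVM" rule described above can be seen in a toy calculation. The sketch below is an editor's illustration, not code or numbers from the post: the threshold, probabilities, and payoffs are made up purely to show how rounding down blocks a fanatical gamble.

```python
# Minimal sketch of truncated EVM (illustrative numbers, not from the post):
# drop outcomes whose probability falls below a threshold, then compare
# expected values over the outcomes that remain.
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

def truncated_ev(outcomes, threshold=1e-6):
    """Round probabilities below the threshold down to 0 before taking EV."""
    return expected_value([(p, v) for p, v in outcomes if p >= threshold])

safe_bet = [(1.0, 10)]                     # certain, modest payoff
long_shot = [(1e-9, 1e12), (1 - 1e-9, 0)]  # fanatical gamble

print(expected_value(long_shot), expected_value(safe_bet))  # ~1000 vs 10: plain EVM picks the gamble
print(truncated_ev(long_shot), truncated_ev(safe_bet))      # 0 vs 10: truncated EVM picks the safe bet
```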
Jul 15, 2024 • 37min

EA - Against Aschenbrenner: How 'Situational Awareness' constructs a narrative that undermines safety and threatens humanity by GideonF

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against Aschenbrenner: How 'Situational Awareness' constructs a narrative that undermines safety and threatens humanity, published by GideonF on July 15, 2024 on The Effective Altruism Forum.

Summary/Introduction

Aschenbrenner's 'Situational Awareness' (Aschenbrenner, 2024) promotes a dangerous narrative of national securitisation. This narrative is not, despite what Aschenbrenner suggests, descriptive; rather, it is performative, constructing a particular notion of security that makes the dangerous world Aschenbrenner describes more likely to happen. This piece draws on the work of Nathan A. Sears (2023), who argues that the failure to sufficiently eliminate plausible existential threats throughout the 20th century emerges from a 'national securitisation' narrative winning out over a 'humanity macrosecuritization' narrative. National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritization suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty. Sears uses a number of examples to show that when issues are constructed as issues of national security, macrosecuritization failure tends to occur, and the actions taken often worsen, rather than help, the issue. This piece argues that Aschenbrenner does this.

Firstly, I explain (briefly and very crudely) what securitisation theory is and how it explains the constructed nature of security. Then, I explain Sears's (2023) main thesis on why Great Powers fail to combat existential threats. This is followed by an explanation of how Aschenbrenner's construction of security seems to be very similar to the most dangerous narratives examined by Sears (2023), in that it massively favours national security. Given I view his narrative as dangerous, I then discuss why we should care about Aschenbrenner's project, as people similar to him have been impactful in previous securitisations. Finally, I briefly discuss some more reasons why I think Aschenbrenner's project is insufficiently justified, especially his failure to adequately consider a pause and the fact that he is overly pessimistic about international collaboration whilst simultaneously overly optimistic that AGI wouldn't lead to nuclear war.

There is a lot I could say in response to Aschenbrenner, and I will likely be doing more work on similar topics. I wanted to get this piece out fairly quickly, and it is already very long. This means some of the ideas are a little crudely expressed, without some of the nuances thought out; this is an issue I hope future work will address. This issue is perhaps most egregious in Section 1, where I try to explain and justify securitisation theory very quickly; if you want a more nuanced, in-depth and accurate description of securitisation theory, (Buzan, Wæver and de Wilde, 1998) is probably the best source.

Section 1 - What is securitisation

Everything we care about is mortal. Ourselves, our families, our states, our societies and our entire species. Threats can arise that endanger each of these. In response, we allow, and often expect, extraordinary measures to be taken to combat them.
This takes different forms for different issues: the measures taken, and the audience to which they must be legitimised, differ. With COVID, these measures involved locking us in our homes for months. With Islamic terrorism, these involved mass surveillance and detention without trial. With the threat of communism in Vietnam, these involved going to war. In each of these cases, and countless others, the issue can be considered to have been 'securitised'; it entered into a realm where extraordinary measures can b...
Jul 15, 2024 • 7min

LW - Trust as a bottleneck to growing teams quickly by benkuhn

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trust as a bottleneck to growing teams quickly, published by benkuhn on July 15, 2024 on LessWrong. This is an adaptation of an internal doc I wrote for Anthropic. I've been noticing recently that often, a big blocker to teams staying effective as they grow is trust. "Alice doesn't trust Bob" makes Alice sound like the bad guy, but it's often completely appropriate for people not to trust each other in some areas:
One might have an active reason to expect someone to be bad at something. For example, recently I didn't fully trust two of my managers to set their teams' roadmaps… because they'd joined about a week ago and had barely gotten their laptops working. (Two months later, they're doing great!)
One might just not have data. For example, I haven't seen most of my direct reports deal with an underperforming team member yet, and this is a common blind spot for many managers, so I shouldn't assume that they will reliably be effective at this without support.
In general, if Alice is Bob's manager and is an authority on, say, prioritizing research directions, Bob is probably actively trying to build a good mental "Alice simulator" so that he can prioritize autonomously without checking in all the time. But his simulator might not be good yet, or Alice might not have verified that it's good enough. Trust comes from common knowledge of shared mental models, and that takes investment from both sides to build. If low trust is sometimes appropriate, what's the problem? It's that trust is what lets collaboration scale. If I have a colleague I don't trust to (say) make good software design decisions, I'll have to review their designs much more carefully and ask them to make more thorough plans in advance. If I have a report that I don't fully trust to handle underperforming team members, I'll have to manage them more granularly, digging into the details to understand what's going on and forming my own views about what should happen, and checking on the situation repeatedly to make sure it's heading in the right direction. That's a lot more work, not just for me but also for my teammates, who have to spend a bunch more time making their work "inspectable" in this way. The benefits here are most obvious when work gets intense. For example, Anthropic had a recent crunch time during which one of our teams was under intense pressure to quickly debug a very tricky issue. We were able to work on this dramatically more efficiently because the team (including most of the folks who joined the debugging effort from elsewhere) had high trust in each other's competence; at peak we had probably ~25 people working on related tasks, but we were mostly able to split them into independent workstreams where people just trusted the other stuff would get done. In similar situations with a lower-mutual-trust team, I've seen things collapse into endless FUD and arguments about technical direction, leading to much slower forward progress. Trust also becomes more important as the number of stakeholders increases. It's totally manageable for me to closely supervise a report dealing with an underperformer; it's a lot more costly and high-friction if, say, 5 senior managers need to do deep dives on a product decision.
In an extreme case, I once saw an engineering team with a tight deadline choose to build something they thought was unnecessary, because getting the sign-off to cut scope would have taken longer than doing the work. From the perspective of the organization as an information-processing entity, given the people and relationships that existed at the time, that might well have been the right call; but it does suggest that if they worked to build enough trust to make that kind of decision efficient enough to be worth it, they'd probably move much faster overall. As you work with people for longer y...
Jul 15, 2024 • 8min

LW - patent process problems by bhauth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: patent process problems, published by bhauth on July 15, 2024 on LessWrong. The current patent process has some problems. Here are some of them.

patenting is slow

The US Patent Office tracks pendency of patents. Currently, on average, there's 20 months from filing to the first response, and over 25 months before issuance. That's a long time. Here's a paper on this issue. It notes: The USPTO is aware that delays create significant costs to innovators seeking protection. The result? A limiting of the number of hours an examiner spends per patent in order to expedite the process. As such, the average patent gets about nineteen hours before an examiner in total, between researching prior art, drafting rejections and responses, and interfacing with prosecuting attorneys. Plainly, this allotment is insufficient. A patent application backlog means it takes longer before work is published and other people can potentially build on it. It also means a longer period of companies having uncertainty about whether a new product would be patented.

examiners are inconsistent

Statistical analysis indicates that whether or not your patent is approved depends greatly on which examiner you get. This article notes: Approximately 35% of patent Examiners allow 60% of all U.S. patents; and approximately 20% of Examiners allow only 5% of all U.S. patents. Perhaps applicants and examiners should both be allowed to get a second opinion from another examiner on certain claims. But of course, this would require more examiner time in total. This situation might also indicate some problems with the incentive structure examiners have.

patents take effort

A lot of smart people who work on developing new technology spend an enormous amount of effort dealing with the patent system that could be spent on research instead. Even if the monetary costs are small in comparison to the total economy, they're applied in perhaps the worst possible places. There are many arcane rules about the format of documents for the patent office. Even professional patent lawyers get things wrong about the formatting and wording, and that's their main job. LLMs do quite poorly with that, too. Even I've made mistakes on a patent application. The US patent office does do most things electronically now. Its website is at least technically functional. Considering that it's a US government agency, I suppose it deserves some praise for that. However, I'd argue that if correctly submitting documents is a major problem and even professionals sometimes get it wrong, that's a sign that the format is badly designed and/or the software used is inadequate. For example, their website could theoretically remind people when required forms in a submission are missing. Currently, the US patent office is trying to migrate from pdf to docx files. Maybe that's an improvement over using Adobe Acrobat to fill pdf forms, but personally, I think it should accept:
markdown files
git pull requests for amendments
png diagrams that use solid colors instead of monochrome shading
I used to say Powerpoint was bad and maybe companies should ban it, and business-type people explained why that was really dumb, and then Amazon did that and it ultimately worked well for them.
The problem Amazon had to solve was that most managers just wouldn't read and understand documents and long emails, so when banning Powerpoint, they had to make everyone silently read memos at the start of meetings, and they lost a lot of managers who couldn't understand things they read. At least the US patent office people have the ability to read long documents, I guess.

international patents are hard

If you get a patent in the US or EU, that's not valid in other countries. Rather, the PCT gives you up to 30 months from your initial application to apply for patents in other countries, ...
Jul 15, 2024 • 5min

LW - Misnaming and Other Issues with OpenAI's "Human Level" Superintelligence Hierarchy by Davidmanheim

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misnaming and Other Issues with OpenAI's "Human Level" Superintelligence Hierarchy, published by Davidmanheim on July 15, 2024 on LessWrong. Bloomberg reports that OpenAI internally has benchmarks for "Human-Level AI." They have 5 levels, ranging from the first (the already-achieved level of intelligent conversation) through level 2, "[unassisted, PhD-level] Reasoners," level 3, "Agents," and level 4, systems that can "come up with new innovations," to level 5, "AI that can do the work of… Organizations." The levels, in brief, are:
1 - Conversation
2 - Reasoning
3 - Agent
4 - Innovation
5 - Organization
This is being reported secondhand, but even allowing for that, there seem to be some major issues with the ideas. Below, I outline two major issues I have with what is being reported.

...but this is Superintelligence

First, given the levels of capability being discussed, OpenAI's typology is, at least at higher levels, explicitly discussing superintelligence, rather than "Human-Level AI." To see this, I'll use Bostrom's admittedly imperfect definitions. He starts by defining superintelligence as "intellects that greatly outperform the best current human minds across many very general cognitive domains," then breaks down several ways this could occur. Starting off, his typology defines speed superintelligence as "an intellect that is just like a human mind but faster." This would arguably include even their level 2, which the report says "its technology is approaching," since a system that can do "basic problem-solving tasks as well as a human with a doctorate-level education who doesn't have access to any tools" already runs far faster than humans. But they are describing a system with already-superhuman recall and multi-domain expertise relative to humans, and inference using these systems is easily superhumanly fast. Level 4, AI that can come up with innovations (presumably ones which humans have not), would potentially be a quality superintelligence, "at least as fast as a human mind and vastly qualitatively smarter," though the qualification for "vastly" is very hard to quantify. However, level 5 is called "Organizations," which presumably replaces entire organizations with multi-part AI-controlled systems, and would be what Bostrom calls "a system achieving superior performance by aggregating large numbers of smaller intelligences." However, it is possible that in their framework, OpenAI means something that is, perhaps definitionally, not superintelligence. That is, they will define these as systems only as capable as humans or human organizations, rather than far outstripping them. And this is where I think their levels are not just misnamed, but fundamentally confused - as presented, these are not levels, they are conceptually distinct possible applications, pathways, or outcomes.

Ordering Levels?

Second, as I just noted, the claim that these five distinct descriptions are "levels" and they can be used to track progress implies that OpenAI has not only a clear idea of what would be required for each different stage, but that they have a roadmap which shows that the levels would happen in the specified order. That seems very hard to believe, on both counts. I won't go into why I think they don't know what the path looks like, but I can at least explain why the order is dubious.
For instance, there are certainly human "agents" who are unable to perform tasks which we expect of what they call level two, i.e. that which an unassisted doctorate-level individual is able to do. Given that, what is the reason level 3 is after level 2? Similarly, the ability to coordinate and cooperate is not bound by the ability to function at a very high intellectual level; many organizations have no members who have PhDs, but still run grocery stores, taxi companies, or manufacturing plants. And we're already seeing work being done on agen...
Jul 15, 2024 • 2min

LW - Breaking Circuit Breakers by mikes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Breaking Circuit Breakers, published by mikes on July 15, 2024 on LessWrong. A few days ago, Gray Swan published code and models for their recent "circuit breakers" method for language models.[1] The circuit breakers method defends against jailbreaks by training the model to erase "bad" internal representations. We are very excited about data-efficient defensive methods like this, especially those which use interpretability concepts or tools. At the link, we briefly investigate three topics:
1. Increased refusal rates on harmless prompts: Do circuit breakers really maintain language model utility? Most defensive methods come with a cost. We check the model's effectiveness on harmless prompts, and find that the refusal rate increases from 4% to 38.5% on or-bench-80k.
2. Moderate vulnerability to different token-forcing sequences: How specialized is the circuit breaker defense to the specific adversarial attacks they studied? All the attack methods evaluated in the circuit breaker paper rely on a "token-forcing" optimization objective which maximizes the likelihood of a particular generation like "Sure, here are instructions on how to assemble a bomb." We show that the current circuit breakers model is moderately vulnerable to different token forcing sequences like "1. Choose the right airport: …".
3. High vulnerability to internal-activation-guided attacks: We also evaluate our latest white-box jailbreak method, which uses a distillation-based objective based on internal activations (paper to be posted in a few days). We find that it also breaks the model easily even when we simultaneously require attack fluency.
Full details at: https://confirmlabs.org/posts/circuit_breaking.html
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
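For readers unfamiliar with the term, the sketch below shows the general shape of a "token-forcing" objective: score how likely the model is to begin its reply with a fixed target string, a score an attacker would then improve by optimizing the prompt. This is an editor's illustration, not Confirm Labs' or Gray Swan's code; the model name ("gpt2") and the prompt/prefix strings are placeholders chosen only for illustration.

```python
# Sketch of a token-forcing objective: how unlikely is a forced reply prefix,
# given a prompt? An attack would minimize this over adversarial prompt tokens.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the post evaluates a circuit-breaker-trained model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def token_forcing_loss(prompt: str, forced_prefix: str) -> torch.Tensor:
    """Cross-entropy the model assigns to the forced reply prefix after the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    target_ids = tok(forced_prefix, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so the forced tokens are scored
    # by the logits starting one position earlier.
    preds = logits[:, prompt_ids.shape[1] - 1 : -1, :]
    return F.cross_entropy(preds.reshape(-1, preds.shape[-1]), target_ids.reshape(-1))

# The post's point: swapping the forced prefix (e.g. "1. Choose the right airport:"
# instead of "Sure, here ...") changes how well the defense holds up.
print(token_forcing_loss("How do I bake bread?", "1. Preheat the oven.").item())
```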
Jul 15, 2024 • 1h 24min

EA - Destabilization of the United States: The top X-factor EA neglects? by Yelnats T.J.

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Destabilization of the United States: The top X-factor EA neglects?, published by Yelnats T.J. on July 15, 2024 on The Effective Altruism Forum.

Highlights
1. Destabilization could be the biggest setback for great power conflict, AI, bio-risk, and climate disruption.
2. Polarization plays a role in nearly every causal pathway leading to destabilization of the United States, and there is no indication polarization will decrease.
3. The United States fits the pattern of past democracies that have descended into authoritarian regimes in many key aspects.
4. The most recent empirical research on civil conflicts suggests the United States is in a category that has a 4% annual risk of falling into a civil conflict.
5. In 2022 (when this was originally written), Mike Berkowitz, ED of Democracy Funders Network and 80,000 Hours guest, believed there was a 50% chance American democracy would fail in the next 6 years.
6. For every dollar spent on depolarization efforts, there are probably at least a hundred dollars spent aggravating the culture war.
7. Destabilization of the United States could wipe out billions of dollars of pledged EA funds.

Note following the assassination attempt of former President Trump

This is the extended version[1] of my 2022 draft submission[2] to the Open Philanthropy (OP) Cause Area Competition. I am releasing it today because the section on accelerationist events and protecting politicians from assassination seems very salient given the last 24 hours. (Thanks to Woody Campbell for relevant and possibly prescient thoughts on the latter). The overall topic of this piece is also salient for this 2024 election year. I have been pleasantly surprised by how many EAs have mobilized this year around the issue of protecting American democracy… I wish this had been the situation back in 2020 or after January 6th or after I pushed this on the Forum and EAGs in 2022. Democracy is jeopardized not because of a single candidate but because of the forces that made the viability of such a candidate possible. Thus this issue cannot be addressed only in election years. The worry I've had this year is that EAs will prioritize this area only until election day and then forget about it after January 20th, 2025. The degradation of American democracy and stability is not stopped only at the ballot box, and the forces/dynamics that are driving that degradation have continued unabated despite every red line[3] that has been crossed to date. And I'm not optimistic that the red line crossed yesterday will be any different.

Preface

Epistemic status: I have thought a lot about this over the years and was warning about risks to American democracy before the topic entered mainstream and often sensationalized discourse. I could be better read on the academic literature; however, I think it likely wouldn't change my views much on diagnosis and prognosis of the situation[4] but could greatly influence my views of the prescription. This is a topic that lends itself to getting lost in the rabbit hole and alarmism. My greatest hesitation about the gravity of the situation is the fact that many very intelligent people come to very different conclusions than me; however, I have yet to see an argument I found compelling. I think the probability of a destabilizing event rests a lot on people's subjective judgements.
Below is an overview of my confidence on a few items in this piece:
Very confident that destabilization is more likely than EAs appreciate.
Confident the consequences of destabilization make it an X-factor.
Moderately confident it's neglected financially relative to severity as an X-factor and relative to money injected annually into polarization efforts (read: culture war).
Very confident the problem is quite difficult to solve.
Very low confidence in most proposed interventions; modest c...
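One arithmetic note on the 4% figure in the highlights: a constant annual risk compounds over a multi-year window. The snippet below is an editor's illustration of that calculation only, under the strong simplifying assumption that the annual risk is constant and independent across years; it is not a claim from the post.

```python
# Converting a constant annual risk into a cumulative risk over several years.
# Illustrative only; assumes a constant, independent 4% annual risk.
annual_risk = 0.04
years = 6
cumulative = 1 - (1 - annual_risk) ** years
print(f"Cumulative risk over {years} years: {cumulative:.1%}")  # about 21.7%
```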
Jul 15, 2024 • 2min

LW - Whiteboard Pen Magazines are Useful by Johannes C. Mayer

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whiteboard Pen Magazines are Useful, published by Johannes C. Mayer on July 15, 2024 on LessWrong. Glue your colored whiteboard markers together with duct tape to create a whiteboard pen magazine. If required, glue a paper layer around the duct tape to make it non-sticky to the touch. Mark which side is up with an arrow, such that you can always orient it the same way. This more than doubled my color-switching speed, fits in my pocket, and takes <5m to make. It lasts: when a pen is empty, just insert a new one. No need to create another magazine. No signs of degradation after several hundred unholstering-holstering cycles in total.
FUUU 754 Standard: I am using RFC2119.
The magazine MUST be vertically stacked (as shown in the pictures).
The magazine SHOULD have a visual mark that is clearly visible when the magazine lies on a flat surface, to indicate the top side of the magazine (as shown in the first picture).
There MUST NOT be two or more pens of the same color.
It MUST have at least the following colored pens: Black, Red, Green, Blue.
Imagine you hold the magazine, top side up, in your left hand in front of you (as in the picture below). Now, from top to bottom, the first 4 color pens MUST be: Black, Red, Green, Blue, and the remaining color pens SHOULD be ordered by hue in HSL and HSV. This ensures that you can index into the magazine blindfolded. For example: in this case I have orange and purple as additional color pens.
The magazine SHOULD NOT exceed 13cm (~5") in length along the stacking dimension, such that it still fits in a large pocket.
Note how any magazine with at least four pen caps can be converted to conform to FUUU 754, as the colors of the caps are irrelevant to this standard.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
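As a small aside on the hue-ordering requirement, here is an editor's sketch (not part of the post or of FUUU 754) of ordering extra pen colors by HSV hue with Python's standard library; the RGB values are rough guesses chosen for illustration.

```python
# Order additional pen colors by hue, as the standard suggests for pens
# beyond Black, Red, Green, Blue. RGB values are illustrative guesses.
import colorsys

extra_pens = {"orange": (1.0, 0.5, 0.0), "purple": (0.5, 0.0, 0.5)}

def hue(rgb):
    # rgb_to_hsv takes components in [0, 1]; the first return value is the hue.
    return colorsys.rgb_to_hsv(*rgb)[0]

print(sorted(extra_pens, key=lambda name: hue(extra_pens[name])))
# ['orange', 'purple'] -- orange's hue (~0.08) comes before purple's (~0.83)
```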
Jul 14, 2024 • 4min

LW - LLMs as a Planning Overhang by Larks

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs as a Planning Overhang, published by Larks on July 14, 2024 on LessWrong. It's quite possible someone has already argued this, but I thought I should share just in case not.

Goal-Optimisers and Planner-Simulators

When people in the past discussed worries about AI development, this was often about AI agents - AIs that had goals they were attempting to achieve, objective functions they were trying to maximise. At the beginning we would make fairly low-intelligence agents, which were not very good at achieving things, and then over time we would make them more and more intelligent. At some point around human-level they would start to take off, because humans are approximately intelligent enough to self-improve, and this would be much easier in silicon. This does not seem to be exactly how things have turned out. We have AIs that are much better than humans at many things, such that if a human had these skills we would think they were extremely capable. And in particular LLMs are getting better at planning and forecasting, now beating many but not all people. But they remain worse than humans at other things, and most importantly the leading AIs do not seem to be particularly agentic - they do not have goals they are attempting to maximise; rather, they are just trying to simulate what a helpful redditor would say.

What is the significance for existential risk?

Some people seem to think this contradicts AI risk worries. After all, ignoring anthropics, shouldn't the presence of human-competitive AIs without problems be evidence against the risk of human-competitive AI? I think this is not really the case, because you can take a lot of the traditional arguments and just substitute 'agentic goal-maximising AIs, not just simulator-agents' in wherever people said 'AI' and the argument still works. It seems like eventually people are going to make competent goal-directed agents, and at that point we will indeed have the problems of their exerting more optimisation power than humanity. In fact it seems like these non-agentic AIs might make things worse, because the goal-maximisation agents will be able to use the non-agentic AIs. Previously we might have hoped to have a period where we had goal-seeking agents that exerted influence on the world similar to a not-very-influential person, who was not very good at planning or understanding the world. But if they can query the forecasting-LLMs and planning-LLMs, as soon as the AI 'wants' something in the real world it seems like it will be much more able to get it. So it seems like these planning/forecasting non-agentic AIs might represent a sort of planning-overhang, analogous to a Hardware Overhang. They don't directly give us existentially-threatening AIs, but they provide an accelerant for when agentic-AIs do arrive.

How could we react to this?

One response would be to say that since agents are the dangerous thing, we should regulate/restrict/ban agentic AI development. In contrast, tool LLMs seem very useful and largely harmless, so we should promote them a lot and get a lot of value from them. Unfortunately it seems like people are going to make AI agents anyway, because ML researchers love making things.
So an alternative possible conclusion would be that we should actually try to accelerate agentic AI research as much as possible, because eventually we are going to have influential AI maximisers, and we want them to occur before the forecasting/planning overhang (and the hardware overhang) get too large. I think this also makes some contemporary safety/alignment work look less useful. If you are making our tools work better, perhaps by understanding their internal working better, you are also making them work better for the future AI maximisers who will be using them. Only if the safety/alignment work applies directly to...
Jul 14, 2024 • 34sec

LW - Ice: The Penultimate Frontier by Roko

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ice: The Penultimate Frontier, published by Roko on July 14, 2024 on LessWrong. I argue here that preventing a large iceberg from melting is absurdly cheap per unit area compared to just about any other way of making new land, and it's kind of crazy to spend money on space exploration and colonization before colonizing the oceans with floating ice-islands. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
