The Nonlinear Library

The Nonlinear Fund
May 9, 2024 • 6min

EA - AI stocks could crash. And this could have implications for AI safety. by Benjamin Todd

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI stocks could crash. And this could have implications for AI safety., published by Benjamin Todd on May 9, 2024 on The Effective Altruism Forum. Just as the 2022 crypto crash had many downstream effects for effective altruism, so could a future crash in AI stocks have several negative (though hopefully less severe) effects on AI safety. Why might AI stocks crash? The most obvious reason AI stocks might crash is that stocks often crash. Nvidia's price fell 60% just in 2022, along with other AI companies. It also fell more than 50% in 2020 at the start of the COVID outbreak, and in 2018. So, we should expect there's a good chance it falls 50% again in the coming years. Nvidia's volatility is about 60%, which means - even assuming efficient markets - it has about a 15% chance of falling more than 50% in a year.[1] And more speculatively, booms and busts seem more likely for stocks that have gone up a ton, and when new technologies are being introduced. That's what we saw with the introduction of the internet and the dot-com bubble, as well as with crypto.[2] (Here are two attempts to construct economic models for why. This phenomenon also seems related to the existence of momentum in financial prices, as well as bubbles in general.) Further, as I argued, current spending on AI chips requires revenues from AI software to reach hundreds of billions within a couple of years, and (at current trends) approach a trillion by 2030. There's plenty of scope to not hit that trajectory, which could cause a sell-off. Note the question isn't just whether the current and next generation of AI models are useful (they definitely are), but rather: Are they so useful their value can be measured in the trillions? Do they have a viable business model that lets them capture enough of that value? Will they get there fast enough relative to market expectations? My own take is that the market is still underpricing the long-term impact of AI (which is why about half my equity exposure is in AI companies, especially chip makers), and I also think it's quite plausible that AI software will be generating more than a trillion dollars of revenue by 2030. But it also seems like there's a good chance that short-term deployment isn't this fast, and the market gets disappointed on the way. If AI revenues merely failed to double in a year, that could be enough to prompt a sell-off. I think this could happen even if capabilities keep advancing (e.g. maybe because real-world deployment is slow), though a slowdown in AI capabilities and a new "AI winter" would also most likely cause a crash. A crash could also be caused by a broader economic recession, a rise in interest rates, or anything that causes investors to become more risk-averse - like a crash elsewhere in the market or a geopolitical issue. The end of a stock bubble often has no obvious trigger. At some point, the stock of buyers gets depleted, prices start to move down, and that causes others to sell, and so on. Why does this matter? A crash in AI stocks could cause a modest lengthening of AI timelines, by reducing investment capital. For example, startups that aren't yet generating revenue could find it hard to raise from VCs and fail. A crash in AI stocks (depending on its cause) might also tell us that market expectations for the near-term deployment of AI have declined.
This means it's important to take the possibility of a crash into account when forecasting AI, and in particular to be cautious about extrapolating growth rates in investment from the last year or so indefinitely forward. Perhaps more importantly, just like the 2022 crypto crash, an AI crash could have implications for people working on AI safety. First, the wealth of many donors to AI safety is pretty correlated with AI stocks. For instance, as far as I can tell, Good Ventures sti...
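The "about a 15% chance" figure above can be sanity-checked with a simple back-of-the-envelope model. The sketch below assumes annual log-returns are normally distributed with zero drift (a lognormal price model); the post's own footnote may derive the number differently, so treat this as an illustration rather than the author's calculation.

```python
# Rough check of "60% volatility implies roughly a 15% chance of a >50% fall in a year".
# Assumption (ours, not the post's): lognormal prices with zero drift over one year.
from math import log, sqrt
from scipy.stats import norm

sigma = 0.60    # Nvidia's annualized volatility, as stated in the post
horizon = 1.0   # years

# P(price ends the year below 50% of today's level)
p_fall = norm.cdf(log(0.5) / (sigma * sqrt(horizon)))
print(f"P(>50% fall within a year) ≈ {p_fall:.0%}")  # ≈ 12%
```

Under a slightly different drift assumption (zero expected simple return rather than zero log drift) the same calculation gives roughly 20%, so "about 15%" sits comfortably in that range.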
May 9, 2024 • 9min

AF - Visualizing neural network planning by Nevan Wichers

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visualizing neural network planning, published by Nevan Wichers on May 9, 2024 on The AI Alignment Forum. TLDR We develop a technique to try to detect whether an NN is doing planning internally. We apply the decoder to the intermediate representations of the network to see if it's representing the states it's planning through internally. We successfully reveal intermediate states in a simple Game of Life model, but find no evidence of planning in an AlphaZero chess model. We think the idea won't work in its current state for real-world NNs because they use higher-level, abstract representations for planning that our current technique cannot decode. Please comment if you have ideas that may work for detecting more abstract ways the NN could be planning. Idea and motivation To make safe ML, it's important to know if the network is performing mesa-optimization, and if so, what optimization process it's using. In this post, I'll focus on a particular form of mesa-optimization: internal planning. This involves the model searching through possible future states and selecting the ones that best satisfy an internal goal. If the network is doing internal planning, then it's important that the goal it's planning for is aligned with human values. An interpretability technique that could identify which states it's searching through would be very useful for safety. If the NN is doing planning, it might represent the states it's considering in that plan. For example, if predicting the next move in chess, it may represent possible moves it's considering in its hidden representations. We assume that the NN is given a representation of the environment as input and that the first layer of the NN encodes the information into a hidden representation. Then the network has hidden layers and finally a decoder to compute the final output. The encoder and decoder are trained as an autoencoder, so the decoder can reconstruct the environment state from the encoder output. Language models are an example of this, where the encoder is the embedding lookup. Our hypothesis is that the NN may use the same representation format for states it's considering in its plan as it does for the encoder's output. Our idea is to apply the decoder to the hidden representations at different layers. If our hypothesis is correct, this will recover the states it considers in its plan. This is similar to the Logit Lens for LLMs, but we're applying it here to investigate mesa-optimization. A potential pitfall is that the NN uses a slightly different representation for the states it considers during planning than for the encoder output. In this case, the decoder won't be able to reconstruct the environment state it's considering very well. To overcome this, we train the decoder to output realistic-looking environment states given the hidden representations by training it like the generator in a GAN. Note that the decoder isn't trained on ground truth environment states, because we don't know which states the NN is considering in its plan. Game of Life proof of concept (code) We consider an NN trained to predict the number of living cells after the Nth time step of the Game of Life (GoL). We chose the GoL because it has simple rules, and the NN will probably have to predict the intermediate states to get the final cell count.
This NN won't do planning, but it may represent the intermediate states of the GoL in its hidden states. We use an LSTM architecture with an encoder to encode the initial GoL state, and a "count cells NN" to output the number of living cells after the final LSTM output. Note that training the NN to predict the number of alive cells at the final state makes this more difficult for our method than training the network to predict the final state since it's less obvious that the network will predict t...
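For readers who want a concrete picture of the probe described in this summary, here is a minimal PyTorch sketch. The architecture, layer sizes, and names are illustrative assumptions rather than the authors' code; the point is simply to reuse the autoencoder's decoder on every intermediate hidden representation, Logit-Lens style, and inspect whether the outputs look like environment states the network might be planning through.

```python
# Minimal sketch of the decoder probe (not the authors' implementation).
import torch
import torch.nn as nn

class ToyDecoderProbeModel(nn.Module):
    """Encoder -> hidden layers -> decoder, with the encoder/decoder trained as an autoencoder."""

    def __init__(self, state_dim: int = 64, hidden_dim: int = 128, n_layers: int = 4):
        super().__init__()
        self.encoder = nn.Linear(state_dim, hidden_dim)   # environment state -> hidden representation
        self.layers = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim) for _ in range(n_layers))
        self.decoder = nn.Linear(hidden_dim, state_dim)   # hidden representation -> environment state

    def probe(self, state: torch.Tensor) -> list:
        """Apply the decoder to every intermediate hidden representation."""
        h = torch.relu(self.encoder(state))
        decoded_per_layer = []
        for layer in self.layers:
            h = torch.relu(layer(h))
            # Does this decode to a plausible (possibly future) environment state?
            decoded_per_layer.append(self.decoder(h))
        return decoded_per_layer

model = ToyDecoderProbeModel()
candidate_states = model.probe(torch.randn(1, 64))  # one decoded candidate state per hidden layer
```

The post's additional step, training the decoder like a GAN generator so that slightly off-distribution planning representations still decode to realistic states, is not shown here.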
May 9, 2024 • 7min

LW - some thoughts on LessOnline by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: some thoughts on LessOnline, published by Raemon on May 9, 2024 on LessWrong. I mostly wrote this for facebook, but it ended up being a whole-ass post so I figured I'd put it here too. I'm helping run "LessOnline: A Festival of Writers Who Are Wrong On the Internet (But Striving To Be Less So)". I'm incentivized to say nice things about the event. So, grain of salt and all. But, some thoughts, which roughly break down into: The vibe: preserving cozy/spaciousness of a small retreat at a larger festival The audience: "Reunion for the Extended Family Blogosphere, both readers and writers." Manifest, and Summer Camp ... I. The Vibe I've been trying to explain the vibe I expect and it's tricksy. I think the vibe will be something like "CFAR Reunion meets Manifest." But a lot of people haven't been to a CFAR Reunion or to Manifest. I might also describe it like "the thing the very first EA Summit (before EA Global) was like, before it became EA Global and got big." But very few people went to that either. Basically: I think this will do a pretty decent job of having the feel of a smaller (~60 person), cozy retreat, but while being more like 200 - 400 people. Lightcone has run several ~60 person private retreats, which succeeded in being a really spacious intellectual environment, with a pretty high hit rate for meeting new people who you might want to end up having a several hour conversation with. Realistically, with a larger event there'll be at least some loss of "cozy/spaciousness", and a somewhat lower hit rate for people you want to talk to with the open invites. But, I think Lightcone has learned a lot about how to create a really nice vibe. We've built our venue, Lighthaven, with "warm, delightful, focused intellectual conversation" as a primary priority. Whiteboards everywhere, lots of nooks and a fractal layout that makes it often feel like you're in a secluded private conversation by a firepit, even though hundreds of other people are nearby (often at another secluded private conversation with _their_ own firepit!) (It's sort of weird that this kind of venue is extremely rare. Many events are at hotels, which feel vaguely stifling and corporate. And the nice spacious retreat centers we've used don't score well on the whiteboard front, and surprisingly not even that well on "lots of nooks") ... Large events tend to use "Swap Card" for causing people to meet each other. I do find Swap Card really good for nailing down a lot of short meetings. But it somehow ends up with a vibe of ruthless efficiency - lots of back-to-back 30 minute meetings, instead of a feeling of organic discovery. The profile feels like a "job fair professional" sort of thing. Instead we're having a "Names, Faces, and Conversations" document, where people write in a giant google doc about what questions and ideas are currently alive for them. People are encouraged to comment inline if they have thoughts, and +1 if they'd be into chatting about it. Some of this hopefully turns into 1-1 conversations, and if more people are interested it can organically grow into "hey let's hold a small impromptu group discussion about that in the Garden Nook" ... We'll also have a bunch of stuff that's just plain fun. We're planning a puzzle hunt that spans the event, and a dance concert led by the Fooming Shoggoths, with many songs that didn't make it onto their April 1st album.
And the venue itself just lends itself to a feeling of whimsy and discovery. ... Another thing we're doing is encouraging people to bring their kids, and providing a day care to make that easier. I want this event to feel like something you can bring your whole life/self to. By default these sorts of events tend to not be very kid friendly. ... ... ... II. The Audience So that was a lot of words about The Vibe. The second question is "who a...
May 9, 2024 • 57min

LW - Dating Roundup #3: Third Time's the Charm by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dating Roundup #3: Third Time's the Charm, published by Zvi on May 9, 2024 on LessWrong. The first speculated on why you're still single. We failed to settle the issue. A lot of you were indeed still single. So the debate continues. The second gave more potential reasons, starting with the suspicion that you are not even trying, and also many ways you are likely trying wrong. The definition of insanity is trying the same thing over again expecting different results. Another definition of insanity is dating in 2024. Can't quit now. You're Single Because Dating Apps Keep Getting Worse A guide to taking the perfect dating app photo. This area of your life is important, so if you intend to take dating apps seriously then you should take photo optimization seriously, and of course you can then also use the photos for other things. I love the 'possibly' evil here. Misha Gurevich: possibly evil idea: Dating app that trawls social media and websites and creates a database of individuals regardless of if they opt in or not, including as many photos and contact information as can be found. Obviously this would be kind of a privacy violation and a lot of people would hate it. but I imagine a solid subset of singles who are lonely but HATE the app experience would be grateful to be found this way. No big deal, all we are doing is taking all the data about private citizens on the web and presenting it to any stranger who wants it in easy form as if you might want to date them. Or stalk them. Or do anything else, really. And you thought AI training data was getting out of hand before. All right, so let's consider the good, or at least not obviously evil, version of this. There is no need to fill out an intentional profile, or engage in specific actions, other than opting in. We gather all the information off the public web. We use AI to amalgamate all the data, assemble in-depth profiles and models of all the people. If it thinks there is a plausible match, then it sets it up. Since we are in danger of getting high on the creepiness meter, let's say the woman gets to select who gets contacted first, then if both want to match in succession you put them in contact. Ideally you'd also use AI to facilitate in various other ways, let people say what they actually want in natural language, let the AI ask follow-up questions to find potential matches or do checks first (e.g. 'I would say yes if you can confirm that he…') and so on. There is definitely not enough deep work being done trying to overturn the system. Bumble gives up its one weird trick, goes back to men messaging first. Melissa Chen: The evolution of Bumble: Sick of men inboxing women ("the patriarchy is so creepy and icky!") Starts dating app to reverse the natural order (women now make the first move! So empowering! So brave & stunning!) Women complain it's exhausting Reinstate the natural law Hardcore Siege: It's such a ridiculous headline. I have never gotten an opener on Bumble besides "hey", women never actually work to start a conversation or have a good opener, they're literally just re-approving the ability of the man to start the conversation. Outa: Anyone that's used it would tell you that 99% of the time they would just leave a "hey" or "." Casey Handmer: AFAIK no one has yet made a dating app where the cost of sending messages is increased if you're a creep.
This would be technologically easy to do, and would let the market solve the problem. Several interesting things here. 1. Many 'women never actually initiated the conversation' responses. Women say 'hey' to bypass the requirement almost all the time. That is not obviously useless as a secondary approval, but it presumably is not worth the bother. 2. This was among women who self-selected into the app with mandatory female openers, so yeah, women really really...
May 8, 2024 • 11min

EA - Potential Pitfalls in University EA Community Building by jessica mccurdy

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential Pitfalls in University EA Community Building, published by jessica mccurdy on May 8, 2024 on The Effective Altruism Forum. TL;DR: This is a written version of a talk given at EAG Bay Area 2023. It claims university EA community building can be incredibly impactful, but there are important pitfalls to avoid, such as being overly zealous, overly open, or overly exclusionary. These pitfalls can turn away talented people and create epistemic issues in the group. By understanding these failure modes, focusing on truth-seeking discussions, and being intentional about group culture, university groups can expose promising students to important ideas and help them flourish. Introduction Community building at universities can be incredibly impactful, but important pitfalls can make this work less effective or even net negative. These pitfalls can turn off the kind of talented people that we want in the EA community, and it's challenging to tell if you're falling into them. This post is based on a talk I gave at EAG Bay Area in early 2023[1]. If you are a new group organizer or interested in becoming one, you might want to check out this advice post. This talk was made specifically for university groups, but I believe many of these pitfalls transfer to other groups. Note that I didn't edit this post much and may not be able to respond in depth to comments now. I have been in the EA university group ecosystem for almost 7 years now. While I wish I had more rigorous data and a better idea of the effect sizes, this post is based on anecdotes from years of working with group organizers. Over the past years, I think I went from being extremely encouraging of students doing university community building and selling it as a default option for students, to becoming much more aware of risks and concerns and hence writing this talk. I think I probably over-updated on the risks and concerns, and this led me to be less outwardly enthusiastic about the value of CB over the past year. I think that was a mistake, and I am looking forward to revitalizing the space to a happy medium. But that is a post for another day. Why University Community Building Can Be Impactful Before discussing the pitfalls, I want to emphasize that I do think community building at universities can be quite high leverage. University groups can help talented people go on to have effective careers. Students are at a time in their lives when they're thinking about their priorities and how to make a change in the world. They're making lifelong friendships. They have a flexibility that people at other life stages often lack. There is also some empirical evidence supporting the value of university groups. The longtermist capacity building team at Open Philanthropy ran a mass survey. One of their findings was that a significant portion of people working on projects they're excited about attributed a lot of value to their university EA groups. Common Pitfalls in University Group Organizing While university groups can be impactful, there are several pitfalls that organizers should be aware of. In this section, I'll introduce some fictional characters that illustrate these failure modes. While the examples are simplified, I believe they capture real dynamics that can arise.
Pitfall 1: Being Overly Zealous One common pitfall is being overly zealous or salesy when trying to convince others of EA ideas. This can come across as not genuinely engaging with people's arguments or concerns. Consider this example: Skeptical Serena asks, "Can we actually predict the downstream consequences of our actions in the long run? Doesn't that make RCTs not useful?" Zealous Zack[2] responds confidently, "That's a good point but even 20-year studies show this is working. There's a lot of research that has gone into it. So, it really d...
May 8, 2024 • 40min

EA - Shrimp Paste and Animal Welfare by Aaron Boddy

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shrimp Paste and Animal Welfare, published by Aaron Boddy on May 8, 2024 on The Effective Altruism Forum. Shrimp Welfare Project (SWP) produced this report because we believe it could have significant informational value to the movement, rather than because we anticipate SWP directly working on a shrimp paste intervention in the future. We think a new project focused on shrimp paste could potentially be very high impact and would be excited to collaborate with organizations working in this space. This report was written by Alethea Faye Cendaña and Trinh Lien-Huong, with feedback provided by Shannon Davis, Aaron Boddy, Michael St Jules, Ren Ryba, and Zuzana Šperlová. We are grateful to all the stakeholders who took the time to offer their thoughts for this study. We would like to thank the external reviewers for their valuable insights and contributions. All errors and shortcomings are our own. Executive Summary Acetes shrimps are among the most - if not the most - utilized species for food globally. Harvested from the coastal regions of Asian countries, these shrimps are not only cooked into various dishes but also crucial in the production of shrimp paste: a salty, tangy condiment served in many Southeast Asian dishes. Despite the enormous scale of production, there's scarce information about the shrimp paste industry and its impact on Acetes shrimp welfare. Thus, it is crucial to explore and understand the industry to identify and implement welfare interventions effectively. This report provides an overview of the shrimp paste industry and highlights welfare issues and challenges to determine opportunities for welfare interventions. We take Vietnam, one of the largest producers, as a case study to gain a deeper understanding of the industry. Shrimp paste is an integral condiment for Southeast Asian cuisine, valued for its unique umami flavor and nutritional properties, including vital omega-3 fatty acids. The production process primarily involves sun-drying, grinding, and fermenting Acetes shrimps, which varies in duration and technique across countries. While deeply rooted in cultural heritage and social practices, shrimp paste is also a significant commercial product, with villages selling locally and countries like the US, UK, Canada, and Australia importing from top-producing countries like Thailand and Indonesia. The industry provides livelihood opportunities for small coastal communities, where activities are divided among families with roles in catching, trading, and processing. Manufacturing facilities with capital, usually located near coasts, employ manual labor to mix, ferment, and cook the shrimp paste before packaging it for commercial sale. All producers face significant issues in raw material supply due to their reliance on natural shrimp stocks. These stocks are subject to fluctuations and environmental threats. Moreover, there are additional challenges related to food wastage, as well as food hygiene and safety concerns in both traditional and commercial production methods. Acetes shrimps are likely to endure significant suffering throughout the capture, retrieval, and processing stages in shrimp paste production. They experience injury, exhaustion, and suffocation during capture, often dying from hypoxia due to overcrowding or suffocation when removed from water.
Additionally, during processing, they could suffer from osmotic shock, dehydration, and stress due to salting, grinding, and sun-drying, which are methods used to prepare them for paste production. Our Vietnam case study explores the fishing practices, the economic and cultural importance of shrimp paste, and the operational dynamics and challenges of small communities and large companies involved in the industry. We propose interventions to alleviate the suffering of Acetes shrimps, such...
May 8, 2024 • 1min

EA - FTX Has Billions More Than Needed to Pay Bankruptcy Victims by AnonymousTurtle

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Has Billions More Than Needed to Pay Bankruptcy Victims, published by AnonymousTurtle on May 8, 2024 on The Effective Altruism Forum. Once it finishes selling all of its assets, the company will have as much as $16.3 billion in cash to distribute, according to a company statement. It owes customers and other non-governmental creditors about $11 billion. Depending on the type of claim they hold in the case, some creditors could recover as much as 142% of what they are owed. The vast majority of customers, however, will likely get paid 118% of what they had on the FTX platform the day the company entered Chapter 11 bankruptcy. Earlier this year, the company had about $6.4 billion in cash. The increase is due mostly to a general spike in prices for various cryptocurrencies, including Solana, a token heavily backed by convicted fraudster and FTX founder Sam Bankman-Fried. The company has also sold dozens of other assets, including various venture-capital projects like a stake in the artificial-intelligence company Anthropic. Most FTX account holders will get their money back after bankruptcy. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
May 8, 2024 • 16min

LW - Designing for a single purpose by Itay Dreyfus

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Designing for a single purpose, published by Itay Dreyfus on May 8, 2024 on LessWrong. If you've ever been to Amsterdam, you've probably visited, or at least heard about, the famous cookie store that sells only one cookie. I mean, not a piece, but a single flavor. I'm talking about Van Stapele Koekmakerij of course - where you can get one of the world's most delicious chocolate chip cookies. If you don't arrive at opening hour, you're likely to find a long queue extending from the store's doorstep down the street where it resides. When I visited the city a few years ago, I watched the sensation myself: a nervous crowd waited as the rumor of 'out of stock' cookies spread across the line. The store, despite becoming a landmark for tourists, stands for an idea that seems to be forgotten in our culture: crafting for a single purpose. In the tech scene I'm coming from, and which you might be too, this approach is often perceived as singular, and not in a positive sense. We've been taught to go big or go home - raise millions in funding, build a big company, hire more and more employees, and hope for the desired exit. Anything less is considered a kind of failure. From a personal perspective, I've seen this attitude in almost every branding session I ran with startup founders. Again and again, they struggled to distill their primary focus. Moreover, when discussing competitors, it often seemed their startup competed in every possible field. In a way, that fear of committing reflects the human nature of FOMO - deliberately giving up on something(s) and experiencing the potential loss of other benefits. This mindset has also seeped into our collective body of work, especially in software. A product, which often starts as a weird small creature, gradually evolves into a multi-arm octopus, which sadly became the norm for VCware.[1] And so we've been left with bloated, bigger, and… worse software. The idea of maintaining a small scope in product has already appeared in my writing in various forms; in niche product design I explored the effect of growth on design; and in defense of Twitter, I wrote about the bloated era of incumbent culture. But in between there seems to be a different attitude that not many choose to embrace, which, like in Van Stapele's case, seeks a real purpose. Going back to basics as a way to find purpose In a tweet posted a few months ago, Jeff Sheldon described his renewed approach to photography after getting a new camera. It opened my eyes: I'm not a professional photographer, and never have been. But my beloved Canon 700D still serves me often while traveling. Besides learning about ISO and shutter speed settings, being familiar with the mechanics of a DSLR camera has also introduced me to the practice of shooting photos in RAW format, which means capturing photos at the highest quality level. But the super heavy file format marks only the start of the process in modern photography. The rest belongs to the post-processing act: the daunting work of polishing, enhancing, and fixing images. When I returned from vacation, I hoped to edit my captures. Then I noticed something weird. When comparing my photos to some stunning photos I saw online, it seemed like my camera output wasn't as good as those shared photos.
Doubting my gear, I then noticed something I should probably have known: it wasn't about the camera, but the editing. I realized professionally made photos were overly edited, often detached from their original conditions. It appeared that what you see isn't what you get. I wondered, has photography become an art of photo manipulation? To respectful photographers, this might appear like a false accusation. The time spent sitting in front of the photo editor is at the heart of the craft for many camera enthusiasts. After all, that's why a camera is set to sh...
May 8, 2024 • 41min

AF - Can Kauffman's NK Boolean networks make humans swarm? by Yori Ong

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can Kauffman's NK Boolean networks make humans swarm?, published by Yori Ong on May 8, 2024 on The AI Alignment Forum. With this article, I intend to initiate a discussion with the community on a remarkable (thought) experiment and its implications. The experiment is to conceptualize Stuart Kauffman's NK Boolean networks as a digital social communication network, which introduces a thus far unrealized method for strategic information transmission. From this premise, I deduce that such a technology would enable people to 'swarm', i.e.: engage in self-organized collective behavior without central control. Its realization could result in a powerful tool for bringing about large-scale behavior change. The concept provides a tangible connection between network topology, common knowledge and cooperation, which can improve our understanding of the logic behind prosocial behavior and morality. It also presents us with the question of how the development of such a technology should be pursued and how the underlying ideas can be applied to the alignment of AI with human values. The intention behind sharing these ideas is to test whether they are correct, to create common knowledge of unexplored possibilities, and to seek concrete opportunities to move forward. This article is a more freely written form of a paper I recently submitted to the arXiv, which can be found here. Introduction Random NK Boolean networks were first introduced by Stuart Kauffman in 1969 to model gene regulatory systems.[1] The model consists of N automata which are either switched ON (1) or OFF (0). The next state of each automaton is determined by a random Boolean function that takes the current state of K other automata as input, resulting in a dynamic network underpinned by a semi-regular and directed graph. It can be applied to model gene regulation, in which the activation of some genes leads to the activation or suppression of others, but also to physical systems, in which a configuration of spins acting on another spin will determine whether it flips up or down. NK Boolean networks evolve deterministically: each following state can be computed based on its preceding state. Since the total number of possible states of the network is finite (although potentially very large), the network must eventually return to a previously visited state, resulting in cyclic behavior. The possible instances of Boolean networks can be divided into an ordered and a chaotic regime, which is mainly determined by the number of inputs for each node, K. In the ordered regime, the behavior of the network eventually gets trapped in cycles (attractors) that are relatively short and few in number. When a network in the ordered phase is perturbed by an externally induced 'bit-flip', the network eventually returns to the same or slightly altered ordered behavior. If the connectivity K is increased beyond a certain critical threshold, the network's behavior transitions from ordered to chaotic. States of the network become part of many long cycles, and minute external perturbations can easily change the course of the network state's evolution onto a different track. This is popularly called the 'butterfly effect'. It has been extensively demonstrated that human behavior is not just determined by our 'own' decisions.
Both offline and online social networks determine the input we receive, and causally influence the choices we make and the opinions we adopt autonomously.[2] However, social networks are not regular, social ties are often reciprocal rather than directed, and people are not automata. NK Boolean networks are therefore not very suitable for modeling an existing reality. What is nevertheless possible in the digital age is to conceptualize and realize online communication networks based on their logic: just give N people a 'lightbulb app...
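For readers who want to see the dynamics described above concretely, here is a minimal, self-contained simulator of a random NK Boolean network, following the standard definition summarized in this episode (N binary nodes, each updated by a fixed random Boolean function of K inputs). Parameter values and names are illustrative and not taken from the paper.

```python
# Minimal random NK Boolean network: deterministic synchronous updates until an attractor is reached.
import random

def make_nk_network(n: int, k: int, seed: int = 0):
    """Wire up K random inputs and one random Boolean truth table per node."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]                     # K distinct inputs per node
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]  # random Boolean functions
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from the current state."""
    new_state = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for src in ins:                 # pack the K input bits into an index into the truth table
            idx = (idx << 1) | state[src]
        new_state.append(table[idx])
    return new_state

# Because the state space is finite and the update rule is deterministic, every trajectory
# eventually revisits a state and then cycles forever (an attractor), as described above.
n, k = 12, 2                            # K = 2 sits in Kauffman's ordered/critical regime
inputs, tables = make_nk_network(n, k)
init_rng = random.Random(1)
state = [init_rng.randint(0, 1) for _ in range(n)]
seen = {}
t = 0
while tuple(state) not in seen:
    seen[tuple(state)] = t
    state = step(state, inputs, tables)
    t += 1
print(f"Attractor of length {t - seen[tuple(state)]} reached after {seen[tuple(state)]} transient steps")
```

Raising k toward the chaotic regime (e.g. k = 5) typically produces far longer cycles and much greater sensitivity to single bit-flips, which is the ordered-versus-chaotic distinction the post relies on.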
May 8, 2024 • 15min

EA - Deep Honesty by Aletheophile

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deep Honesty, published by Aletheophile on May 8, 2024 on The Effective Altruism Forum. Most people avoid saying literally false things, especially if those could be audited, like making up facts or credentials. The reasons for this are both moral and pragmatic - being caught out looks really bad, and sustaining lies is quite hard, especially over time. Let's call the habit of not saying things you know to be false 'shallow honesty'[1]. Often when people are shallowly honest, they still choose what true things they say in a kind of locally act-consequentialist way, to try to bring about some outcome. Maybe something they want for themselves (e.g. convincing their friends to see a particular movie), or something they truly believe is good (e.g. causing their friend to vote for the candidate they think will be better for the country). Either way, if you think someone is being merely shallowly honest, you can only shallowly trust them: you might be confident that they aren't literally lying, but you still have to do a bit of reverse engineering to figure out what they actually believe or intend. This post is about an alternative: deep honesty, and the deep trust that can follow. Deep honesty is the opposite of managing the other party's reactions for them. Deep honesty means explaining what you actually believe, rather than trying to persuade others of some course of action. Instead, you adopt a sincerely cooperative stance in choosing which information to share, and trust them to come to their own responses. In this post, we've leaned into the things that seem good to us about deep honesty. Writing while being in touch with that makes it seem easier to convey the core idea. We've tried to outline what we see as disadvantages of deep honesty, but we're still probably a bit partial. We would love to see discussion of the idea, including critical takes (either that our concepts are not useful ones, or that this is less something to be emulated than we imply). 
The rest of this post will be:
- Some examples of where deep and shallow honesty diverge
- Why and when you might want deep honesty
- Various disclaimers about what deep honesty is not
- A look at some difficult cases for deep honesty
- What deep honesty might look like in practice

Examples of shallow (versus deep) honesty:
- Writing a very optimistic funding application which doesn't mention your personal concerns about the project, as opposed to being upfront about what you think the weaknesses are
- Telling an official at border control that you're visiting America to 'see some friends', rather than explaining that you're also going to some kind of philanthropically funded conference about AI risk
- Searching for and using whichever messaging makes audiences most concerned about AI risk, instead of whatever best explains your concerns
- Saying that you totally disagree with the ideology of an extremist group, and not that they are actually right about some important controversial topics, in a way that doesn't justify their actions
- Reassuring your manager about all the things that are going well and privately trying to fix all the problems before they grow, instead of telling your manager what's going wrong and giving them an opportunity to make an informed decision about what to do
- Rejecting someone from a programme with a note explaining that it was very competitive, rather than explaining what you perceived to be their weaknesses and shortcomings for the role
- Telling yourself that you're doing something for utilitarian reasons, instead of acknowledging that you also have a pretty weird kludge of motivations which definitely includes being recognised and appreciated by your peers
- When a friend asks how you are, smoothly changing the topic because you don't want them worrying about you, rather than opening up about private difficulties, or ...
