The Nonlinear Library: LessWrong

Latest episodes

Sep 19, 2024 • 39min

LW - [Intuitive self-models] 1. Preliminaries by Steven Byrnes

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Intuitive self-models] 1. Preliminaries, published by Steven Byrnes on September 19, 2024 on LessWrong. 1.1 Summary & Table of Contents This is the first of a series of eight blog posts, which I'll be serializing over the next month or two. (Or email or DM me if you want to read the whole thing right now.) Here's an overview of the whole series, and then we'll jump right into the first post! 1.1.1 Summary & Table of Contents - for the whole series This is a rather ambitious series of blog posts, in that I'll attempt to explain what's the deal with consciousness, free will, hypnotism, enlightenment, hallucinations, flow states, dissociation, akrasia, delusions, and more. The starting point for this whole journey is very simple: The brain has a predictive (a.k.a. self-supervised) learning algorithm. This algorithm builds generative models (a.k.a. "intuitive models") that can predict incoming data. It turns out that, in order to predict incoming data, the algorithm winds up not only building generative models capturing properties of trucks and shoes and birds, but also building generative models capturing properties of the brain algorithm itself. Those latter models, which I call "intuitive self-models", wind up including ingredients like conscious awareness, deliberate actions, and the sense of applying one's will. That's a simple idea, but exploring its consequences will take us to all kinds of strange places - plenty to fill up an eight-post series! Here's the outline: Post 1 (Preliminaries) gives some background on the brain's predictive learning algorithm, how to think about the "intuitive models" built by that algorithm, how intuitive self-models come about, and the relation of this whole series to Philosophy Of Mind. Post 2 ( Awareness ) proposes that our intuitive self-models include an ingredient called "conscious awareness", and that this ingredient is built by the predictive learning algorithm to represent a serial aspect of cortex computation. I'll discuss ways in which this model is veridical (faithful to the algorithmic phenomenon that it's modeling), and ways that it isn't. I'll also talk about how intentions and decisions fit into that framework. Post 3 ( The Homunculus ) focuses more specifically on the intuitive self-model that almost everyone reading this post is experiencing right now (as opposed to the other possibilities covered later in the series), which I call the Conventional Intuitive Self-Model. In particular, I propose that a key player in that model is a certain entity that's conceptualized as actively causing acts of free will. Following Dennett, I call this entity "the homunculus", and relate that to intuitions around free will and sense-of-self. Post 4 ( Trance ) builds a framework to systematize the various types of trance, from everyday "flow states", to intense possession rituals with amnesia. I try to explain why these states have the properties they do, and to reverse-engineer the various tricks that people use to induce trance in practice. Post 5 ( Dissociative Identity Disorder ) (a.k.a. "multiple personality disorder") is a brief opinionated tour of this controversial psychiatric diagnosis. Is it real? Is it iatrogenic? Why is it related to borderline personality disorder (BPD) and trauma? 
What do we make of the wild claim that each "alter" can't remember the lives of the other "alters"? Post 6 ( Awakening / Enlightenment / PNSE ) is a type of intuitive self-model, typically accessed via extensive meditation practice. It's quite different from the conventional intuitive self-model. I offer a hypothesis about what exactly the difference is, and why that difference has the various downstream effects that it has. Post 7 (Hearing Voices, and Other Hallucinations) talks about factors contributing to hallucinations - although I argue ...
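A minimal sketch of what "predictive (a.k.a. self-supervised) learning" means in the sense used above, assuming a toy linear predictor and a synthetic data stream (none of this is from Byrnes's post): the model's only training signal is its own prediction error on incoming data.

```python
# Minimal sketch (illustrative, not from the post): self-supervised prediction.
# A model is trained to guess the next observation from recent ones, using the
# prediction error itself as the learning signal (no labels required).
import numpy as np

rng = np.random.default_rng(0)
data = np.sin(np.linspace(0, 20 * np.pi, 2000))   # stand-in "incoming data" stream

window = 8                                        # how many past observations the model sees
w = rng.normal(scale=0.1, size=window)            # weights of a simple linear predictor
lr = 0.01

for t in range(window, len(data)):
    context = data[t - window:t]
    prediction = w @ context                      # model's guess about the next observation
    error = data[t] - prediction                  # surprise: the self-supervised signal
    w += lr * error * context                     # adjust weights to reduce future surprise

final_error = np.mean([(data[t] - w @ data[t - window:t]) ** 2
                       for t in range(window, len(data))])
print("final mean squared prediction error:", final_error)
```

The series' claim is that the same kind of error-driven loop, scaled up enormously, is what builds the brain's "intuitive models", including intuitive models of the brain algorithm itself.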
Sep 18, 2024 • 14min

LW - The case for a negative alignment tax by Cameron Berg

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for a negative alignment tax, published by Cameron Berg on September 18, 2024 on LessWrong. TL;DR: Alignment researchers have historically predicted that building safe advanced AI would necessarily incur a significant alignment tax compared to an equally capable but unaligned counterfactual AI. We put forward a case here that this prediction looks increasingly unlikely given the current 'state of the board,' as well as some possibilities for updating alignment strategies accordingly. Introduction We recently found that over one hundred grant-funded alignment researchers generally disagree with statements like: alignment research that has some probability of also advancing capabilities should not be done (~70% somewhat or strongly disagreed) advancing AI capabilities and doing alignment research are mutually exclusive goals (~65% somewhat or strongly disagreed) Notably, this sample also predicted that the distribution would be significantly more skewed in the 'hostile-to-capabilities' direction. See ground truth vs. predicted distributions for these statements These results - as well as recent events and related discussions - caused us to think more about our views on the relationship between capabilities and alignment work given the 'current state of the board,'[1] which ultimately became the content of this post. Though we expect some to disagree with these takes, we have been pleasantly surprised by the positive feedback we've received from discussing these ideas in person and are excited to further stress-test them here. Is a negative alignment tax plausible (or desirable)? Often, capabilities and alignment are framed with reference to the alignment tax, defined as 'the extra cost [practical, developmental, research, etc.] of ensuring that an AI system is aligned, relative to the cost of building an unaligned alternative.' The AF/ LW wiki entry on alignment taxes notably includes the following claim: The best case scenario is No Tax: This means we lose no performance by aligning the system, so there is no reason to deploy an AI that is not aligned, i.e., we might as well align it. The worst case scenario is Max Tax: This means that we lose all performance by aligning the system, so alignment is functionally impossible. We speculate in this post about a different best case scenario: a negative alignment tax - namely, a state of affairs where an AI system is actually rendered more competent/performant/capable by virtue of its alignment properties. Why would this be even better than 'No Tax?' Given the clear existence of a trillion dollar attractor state towards ever-more-powerful AI, we suspect that the most pragmatic and desirable outcome would involve humanity finding a path forward that both (1) eventually satisfies the constraints of this attractor (i.e., is in fact highly capable, gets us AGI, etc.) and (2) does not pose existential risk to humanity. Ignoring the inevitability of (1) seems practically unrealistic as an action plan at this point - and ignoring (2) could be collectively suicidal. 
Therefore, if the safety properties of such a system were also explicitly contributing to what is rendering it capable - and therefore functionally causes us to navigate away from possible futures where we build systems that are capable but unsafe - then these 'negative alignment tax' properties seem more like a feature than a bug. It is also worth noting here as an empirical datapoint that virtually all frontier models' alignment properties have rendered them more rather than less capable (e.g., gpt-4 is far more useful and far more aligned than gpt-4-base), which is the opposite of what the 'alignment tax' model would have predicted. This idea is somewhat reminiscent of differential technological development, in which Bostrom suggests "[slowing] the devel...
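In the definition quoted above, the alignment tax is just a capability difference between an unaligned system and its aligned counterpart. A toy restatement, with made-up placeholder numbers rather than real benchmark scores:

```python
# Illustrative only: the "alignment tax" as defined above, written as a difference.
# Negative means the aligned system is the *more* capable one.
def alignment_tax(capability_unaligned: float, capability_aligned: float) -> float:
    """Extra capability cost of alignment; negative means alignment helped."""
    return capability_unaligned - capability_aligned

# Hypothetical example in the spirit of the gpt-4 vs. gpt-4-base comparison:
print(alignment_tax(capability_unaligned=0.62, capability_aligned=0.71))  # -0.09 < 0
```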
Sep 18, 2024 • 1h 8min

LW - Monthly Roundup #22: September 2024 by Zvi

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #22: September 2024, published by Zvi on September 18, 2024 on LessWrong. It's that time again for all the sufficiently interesting news that isn't otherwise fit to print, also known as the Monthly Roundup. Bad News Beware the failure mode in strategy and decisions that implicitly assumes competence, or wishes away difficulties, and remember to reverse all advice you hear. Stefan Schubert (quoting Tyler Cowen on raising people's ambitions often being very high value): I think lowering others' aspirations can also be high-return. I know of people who would have had a better life by now if someone could have persuaded them to pursue more realistic plans. Rob Miles: There's a specific failure mode which I don't have a name for, which is similar to "be too ambitious" but is closer to "have an unrealistic plan". The illustrative example I use is: Suppose by some strange circumstance you have to represent your country at olympic gymnastics next week. One approach is to look at last year's gold, and try to do that routine. This will fail. You'll do better by finding one or two things you can actually do, and doing them well There's a common failure of rationality which looks like "Figure out what strategy an ideal reasoner would use, then employ that strategy". It's often valuable to think about the optimal policy, but you must understand the difference between knowing the path, and walking the path I do think that more often 'raise people's ambitions' is the right move, but you need to carry both cards around with you for different people in different situations. Theory that Starlink, by giving people good internet access, ruined Burning Man. Seems highly plausible. One person reported that they managed to leave the internet behind anyway, so they still got the Burning Man experience. Tyler Cowen essentially despairs of reducing regulations or the number of bureaucrats, because it's all embedded in a complex web of regulations and institutions and our businesses rely upon all that to be able to function. Otherwise business would be paralyzed. There are some exceptions, you can perhaps wholesale axe entire departments like education. He suggests we focus on limiting regulations on new economic areas. He doesn't mention AI, but presumably that's a lot of what's motivating his views there. I agree that 'one does not simply' cut existing regulations in many cases, and that 'fire everyone and then it will all work out' is not a strategy (unless AI replaces them?), but also I think this is the kind of thing can be the danger of having too much detailed knowledge of all the things that could go wrong. One should generalize the idea of eliminating entire departments. So yes, right now you need the FDA to approve your drug (one of Tyler's examples) but… what if you didn't? I would still expect, if a new President were indeed to do massive firings on rhetoric and hope, that the result would be a giant cluster****. La Guardia switches to listing flights by departure time rather than order of destination, which in my mind makes no sense in the context of flights, that frequently get delayed, where you might want to look for an earlier flight or know what backups are if yours is cancelled or delayed or you miss it, and so on. 
It also gives you a sense of where one can and can't actually go to when from where you are. For trains it makes more sense to sort by time, since you are so often not going to and might not even know the train's final destination. I got a surprising amount of pushback about all that on Twitter, some people felt very strongly the other way, as if to list by name was violating some sacred value of accessibility or something. Anti-Social Media Elon Musk provides good data on his followers to help with things like poll calibration, reports 73%-27% lea...
Sep 18, 2024 • 25min

LW - Generative ML in chemistry is bottlenecked by synthesis by Abhishaike Mahajan

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Generative ML in chemistry is bottlenecked by synthesis, published by Abhishaike Mahajan on September 18, 2024 on LessWrong. Introduction Every single time I design a protein - using ML or otherwise - I am confident that it is capable of being manufactured. I simply reach out to Twist Biosciences, have them create a plasmid that encodes for the amino acids that make up my proteins, push that plasmid into a cell, and the cell will pump out the protein I created. Maybe the cell cannot efficiently create the protein. Maybe the protein sucks. Maybe it will fold in weird ways, isn't thermostable, or has some other undesirable characteristic. But the way the protein is created is simple, close-ended, cheap, and almost always possible to do. The same is not true of the rest of chemistry. For now, let's focus purely on small molecules, but this thesis applies even more-so across all of chemistry. Of the 10^60 small molecules that are theorized to exist, most are likely extremely challenging to create. Cellular machinery to create arbitrary small molecules doesn't exist like it does for proteins, which are limited by the 20 amino-acid alphabet. While it is fully within the grasp of a team to create millions of de novo proteins, the same is not true for de novo molecules in general (de novo means 'designed from scratch'). Each chemical, for the most part, must go through its custom design process. Because of this gap in 'ability-to-scale' for all of non-protein chemistry, generative models in chemistry are fundamentally bottlenecked by synthesis. This essay will discuss this more in-depth, starting from the ground up of the basics behind small molecules, why synthesis is hard, how the 'hardness' applies to ML, and two potential fixes. As is usually the case in my Argument posts, I'll also offer a steelman to this whole essay. To be clear, this essay will not present a fundamentally new idea. If anything, it's such an obvious point that I'd imagine nothing I'll write here will be new or interesting to people in the field. But I still think it's worth sketching out the argument for those who aren't familiar with it. What is a small molecule anyway? Typically organic compounds with a molecular weight under 900 daltons. While proteins are simply long chains composed of one-of-20 amino acids, small molecules display a higher degree of complexity. Unlike amino acids, which are limited to carbon, hydrogen, nitrogen, and oxygen, small molecules incorporate a much wider range of elements from across the periodic table. Fluorine, phosphorus, bromine, iodine, boron, chlorine, and sulfur have all found their way into FDA-approved drugs. This elemental variety gives small molecules more chemical flexibility but also makes their design and synthesis more complex. Again, while proteins benefit from a universal 'protein synthesizer' in the form of a ribosome, there is no such parallel amongst small molecules! People are certainly trying to make one, but there seems to be little progress. So, how is synthesis done in practice? For now, every atom, bond, and element of a small molecule must be carefully orchestrated through a grossly complicated, trial-and-error reaction process which often has dozens of separate steps.
The whole process usually also requires non-chemical parameters, such as adjusting the pH, temperature, and pressure of the surrounding medium in which the intermediate steps are done. And, finally, the process must also be efficient; the synthesis processes must not only achieve the final desired end-product, but must also do so in a way that minimizes cost, time, and required sources. How hard is that to do? Historically, very hard. Consider erythromycin A, a common antibiotic. Erythromycin was isolated in 1949, a natural metabolic byproduct of Streptomyces erythreus, a soil mi...
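As a side illustration of the roughly 900-dalton criterion mentioned above (not from the essay): a few lines with the RDKit cheminformatics library, assuming it is installed, compute molecular weights from SMILES strings. Note that this checks only size; as the essay argues, nothing about a structure on paper tells you whether it can actually be synthesized.

```python
# Sketch (assumes RDKit is installed): the conventional "small molecule" size check,
# i.e. organic compounds under ~900 daltons. SMILES strings are standard ones for
# two well-known drugs; this says nothing about synthesizability.
from rdkit import Chem
from rdkit.Chem import Descriptors

examples = {
    "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)      # parse the structure
    mw = Descriptors.MolWt(mol)           # molecular weight in daltons
    print(f"{name}: {mw:.1f} Da, small molecule: {mw < 900}")
```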
Sep 18, 2024 • 11min

LW - Skills from a year of Purposeful Rationality Practice by Raemon

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Skills from a year of Purposeful Rationality Practice, published by Raemon on September 18, 2024 on LessWrong. A year ago, I started trying to deliberate practice skills that would "help people figure out the answers to confusing, important questions." I experimented with Thinking Physics questions, GPQA questions, Puzzle Games , Strategy Games, and a stupid twitchy reflex game I had struggled to beat for 8 years[1]. Then I went back to my day job and tried figuring stuff out there too. The most important skill I was trying to learn was Metastrategic Brainstorming - the skill of looking at a confusing, hopeless situation, and nonetheless brainstorming useful ways to get traction or avoid wasted motion. Normally, when you want to get good at something, it's great to stand on the shoulders of giants and copy all the existing techniques. But this is challenging if you're trying to solve important, confusing problems because there probably isn't (much) established wisdom on how to solve it. You may need to discover techniques that haven't been invented yet, or synthesize multiple approaches that haven't previously been combined. At the very least, you may need to find an existing technique buried in the internet somewhere, which hasn't been linked to your problem with easy-to-search keywords, without anyone to help you. In the process of doing this, I found a few skills that came up over and over again. I didn't invent the following skills, but I feel like I "won" them in some sense via a painstaking "throw myself into the deep end" method. I feel slightly wary of publishing them in a list here, because I think it was useful to me to have to figure out for myself that they were the right tool for the job. And they seem like kinda useful "entry level" techniques, that you're more likely to successfully discover for yourself. But, I think this is hard enough, and forcing people to discover everything for themselves seems unlikely to be worth it. The skills that seemed most general, in both practice and on my day job, are: 1. Taking breaks/naps 2. Working Memory facility 3. Patience 4. Knowing what confusion/deconfusion feels like 5. Actually Fucking Backchain 6. Asking "what is my goal?" 7. Having multiple plans There were other skills I already was tracking, like Noticing, or Focusing. There were also somewhat more classic "How to Solve It" style tools for breaking down problems. There are also a host of skills I need when translating this all into my day-job, like "setting reminders for myself" and "negotiating with coworkers." But the skills listed above feel like they stood out in some way as particularly general, and particularly relevant for "solve confusing problems." Taking breaks, or naps Difficult intellectual labor is exhausting. During the two weeks I was working on solving Thinking Physics problems, I worked for like 5 hours a day and then was completely fucked up in the evenings. Other researchers I've talked to report similar things. During my workshops, one of the most useful things I recommended people was "actually go take a nap. If you don't think you can take a real nap because you can't sleep, go into a pitch black room and lie down for awhile, and the worst case scenario is your brain will mull over the problem in a somewhat more spacious/relaxed way for awhile." 
Practical tips: Get yourself a sleeping mask, noise machine (I prefer a fan or air purifier), and access to a nearby space where you can rest. Leave your devices outside the room. Working Memory facility Often a topic feels overwhelming. This is often because it's just too complicated to grasp with your raw working memory. But, there are various tools (paper, spreadsheets, larger monitors, etc) that can improve this. And, you can develop the skill of noticing "okay this isn't fitting in my he...
Sep 17, 2024 • 11min

LW - I finally got ChatGPT to sound like me by lsusr

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I finally got ChatGPT to sound like me, published by lsusr on September 17, 2024 on LessWrong. I publish lots of words on the Internet. These words are fed into LLMs such as ChatGPT. I have published enough training data that the world's most powerful AIs have a simulation of me inside of them. I have basically uploaded a copy of myself into the cloud. The AI simulation of me gets smarter every time OpenAI and its competitors upgrade their AI. Cool, huh? Who doesn't want a simulation of themselves that thinks way faster and is networked deep into the world's most powerful superintelligences? Ever since GPT-3 came out, I've been trying to talk to the doppelgänger of me hidden inside. There are a few problems. First of all, GPT needs to know who I am with high fidelity. At first it just generated a stereotypical LessWrong blogger. That wasn't interesting to me. I didn't want a clone of a stereotype of my demographic group. I want a clone of me, with all my quirks and weirdness. For this to work, GPT would have to know who I am i.e. where I exist in its vector space representation of the world. Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website. What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They're willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights. In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system. Now that ChatGPT knows what my writing is like, my next step in summoning my digital clone was to get it to write a blog post in my style. I tried simply asking ChatGPT variants of "write a blog post in the style of lsusr", but that doesn't work. I'm rigorous, unconventional and anti-dogmatic (that's not my subjective self-judgment; the AI said it itself), whereas ChatGPT has a tendency to regress to the mean writer in its training corpus. When I tell ChatGPT to "write like lsusr", it writes a blog post that's 10% me and 90% generic drivel. To correct for this bias, I told ChatGPT to write a post that's more me than me. I tell it to think of "lsusr" in its vector space, multiply that by 10, and then write what 10lsusr would write. I need you to write a post like lsusr, but more lsusr than lsusr. I want it so over-the-top lsusr that it satirises lsusr. Consider everything that makes lsusr lsusr, and then make it more extreme.[1] ChatGPT is not yet a good enough writer that it can ghostwrite for me. But I feel it is good enough it can satirize me. The rest of this post is AI-generated content. How to Become the Most Rational Person on Earth in 30 Days or Less, Guaranteed Becoming the most rational person on Earth isn't about learning facts or memorizing formulas.
It's about transforming how you process reality itself. Rationality isn't cold, emotionless logic. It's the purest form of alignment between your mind and the universe - a cognitive interface for navigating a world of infinite complexity without being drowned by it. This is a process of radical self-improvement. If followed rigorously, you will emerge transformed, not merely as a person who "thinks better" but as one who perceives better. There will be no going back. Day 1-3: The Crucible of I...
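A rough sketch of the two-step prompting trick described above (first have the model locate the style, then ask it to exaggerate that style to the point of satire), using the OpenAI Python client. The prompts and model name are illustrative, not lsusr's actual wording:

```python
# Sketch (illustrative prompts, not lsusr's exact ones) of the "more X than X" trick.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

style_prompt = (
    "Describe what makes the LessWrong writer 'lsusr' different from other "
    "rationalist bloggers, as concretely as you can."
)
exaggerate_prompt = (
    "Now write a short blog post that is more lsusr than lsusr: take everything "
    "you just identified as characteristic of lsusr and push it to the point of satire."
)

messages = [{"role": "user", "content": style_prompt}]
style = client.chat.completions.create(model="gpt-4o", messages=messages)

messages += [
    {"role": "assistant", "content": style.choices[0].message.content},
    {"role": "user", "content": exaggerate_prompt},
]
post = client.chat.completions.create(model="gpt-4o", messages=messages)
print(post.choices[0].message.content)
```

Asking for the style description first keeps the relevant features in context, which is what pushes the second completion away from the "mean writer" the post complains about.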
Sep 17, 2024 • 1h 6min

LW - Book review: Xenosystems by jessicata

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book review: Xenosystems, published by jessicata on September 17, 2024 on LessWrong. I've met a few Landians over the last couple years, and they generally recommend that I start with reading Nick Land's (now defunct) Xenosystems blog, or Xenosystems, a Passage Publishing book that compiles posts from the blog. While I've read some of Fanged Noumena in the past, I would agree with these Landians that Xenosystems (and currently, the book version) is the best starting point. In the current environment, where academia has lost much of its intellectual relevance, it seems overly pretentious to start with something as academic as Fanged Noumena. I mainly write in the blogosphere rather than academia, and so Xenosystems seems appropriate to review. The book's organization is rather haphazard (as might be expected from a blog compilation). It's not chronological, but rather separated into thematic chapters. I don't find the chapter organization particularly intuitive; for example, politics appears throughout, rather than being its own chapter or two. Regardless, the organization was sensible enough for a linear read to be satisfying and only slightly chronologically confusing. That's enough superficialities. What is Land's intellectual project in Xenosystems? In my head it's organized in an order that is neither chronological nor the order of the book. His starting point is neoreaction, a general term for an odd set of intellectuals commenting on politics. As he explains, neoreaction is cladistically (that is, in terms of evolutionary branching-structure) descended from Moldbug. I have not read a lot of Moldbug, and make no attempt to check Land's attributions of Moldbug to the actual person. Same goes for other neoreactionary thinkers cited. Neoreaction is mainly unified by opposition to the Cathedral, the dominant ideology and ideological control system of the academic-media complex, largely branded left-wing. But a negation of an ideology is not itself an ideology. Land describes a "Trichotomy" within neo-reaction (citing Spandrell), of three currents: religious theonomists, ethno-nationalists, and techno-commercialists. Land is, obviously, of the third type. He is skeptical of a unification of neo-reaction except in its most basic premises. He centers "exit", the option of leaving a social system. Exit is related to sectarian splitting and movement dissolution. In this theme, he eventually announces that techno-commercialists are not even reactionaries, and should probably go their separate ways. Exit is a fertile theoretical concept, though I'm unsure about the practicalities. Land connects exit to science, capitalism, and evolution. Here there is a bridge from political philosophy (though of an "anti-political" sort) to metaphysics. When you Exit, you let the Outside in. The Outside is a name for what is outside society, mental frameworks, and so on. This recalls the name of his previous book, Fanged Noumena; noumena are what exist in themselves outside the Kantian phenomenal realm. The Outside is dark, and it's hard to be specific about its contents, but Land scaffolds the notion with Gnon-theology, horror aesthetics, and other gestures at the negative space. He connects these ideas with various other intellectual areas, including cosmology, cryptocurrency, and esoteric religion. 
What I see as the main payoff, though, is thorough philosophical realism. He discusses the "Will-to-Think", the drive to reflect and self-cultivate, including on one's values. The alternative, he says, is intentional stupidity, and likely to lose if it comes to a fight. Hence his criticism of the Orthogonality Thesis. I have complex thoughts and feelings on the topic; as many readers will know, I have worked at MIRI and have continued thinking and writing about AI alignment since then. What ...
Sep 17, 2024 • 2min

LW - MIRI's September 2024 newsletter by Harlan

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong. MIRI updates Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact. In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction. In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course. News and links Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021. The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem. SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law. In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4. Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation. You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
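A quick arithmetic check of the Epoch AI figures quoted above; the GPT-4 number is implied by the stated 10,000x ratio (and matches the commonly cited ~2e25 FLOP public estimate):

```python
# The 2e29 FLOP feasibility estimate divided by the stated 10,000x ratio
# implies roughly 2e25 FLOP for GPT-4's training run.
feasible_by_2030 = 2e29                    # FLOP, per the Epoch AI report
gpt4_training = feasible_by_2030 / 10_000  # implied GPT-4 training compute
print(f"implied GPT-4 training compute: {gpt4_training:.0e} FLOP")  # 2e+25
```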
Sep 16, 2024 • 58min

LW - Secret Collusion: Will We Know When to Unplug AI? by schroederdewitt

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Secret Collusion: Will We Know When to Unplug AI?, published by schroederdewitt on September 16, 2024 on LessWrong. TL;DR: We introduce the first comprehensive theoretical framework for understanding and mitigating secret collusion among advanced AI agents, along with CASE, a novel model evaluation framework. CASE assesses the cryptographic and steganographic capabilities of agents, while exploring the emergence of secret collusion in real-world-like multi-agent settings. Whereas current AI models aren't yet proficient in advanced steganography, our findings show rapid improvements in individual and collective model capabilities, posing unprecedented safety and security risks. These results highlight urgent challenges for AI governance and policy, urging institutions such as the EU AI Office and AI safety bodies in the UK and US to prioritize cryptographic and steganographic evaluations of frontier models. Our research also opens up critical new pathways for research within the AI Control framework. Philanthropist and former Google CEO Eric Schmidt said in 2023 at a Harvard event: "[...] the computers are going to start talking to each other probably in a language that we can't understand and collectively their super intelligence - that's the term we use in the industry - is going to rise very rapidly and my retort to that is: do you know what we're going to do in that scenario? We're going to unplug them [...]" But what if we cannot unplug them in time because we won't be able to detect the moment when this happens? In this blog post, we, for the first time, provide a comprehensive overview of the phenomenon of secret collusion among AI agents, connect it to foundational concepts in steganography, information theory, distributed systems theory, and computability, and present a model evaluation framework and empirical results as a foundation of future frontier model evaluations. This blog post summarises a large body of work. First of all, it contains our pre-print from February 2024 (updated in September 2024) "Secret Collusion among Generative AI Agents". An early form of this pre-print was presented at the 2023 New Orleans (NOLA) Alignment Workshop (see this recording NOLA 2023 Alignment Forum Talk Secret Collusion Among Generative AI Agents: a Model Evaluation Framework). Also, check out this long-form Foresight Institute Talk. In addition to these prior works, we also include new results. These contain empirical studies on the impact of paraphrasing as a mitigation tool against steganographic communications, as well as reflections on our findings' impact on AI Control. Multi-Agent Safety and Security in the Age of Autonomous Internet Agents The near future could see myriads of LLM-driven AI agents roam the internet, whether on social media platforms, eCommerce marketplaces, or blockchains. Given advances in predictive capabilities, these agents are likely to engage in increasingly complex intentional and unintentional interactions, ranging from traditional distributed systems pathologies (think dreaded deadlocks!) to more complex coordinated feedback loops.
Such a scenario induces a variety of multi-agent safety, and specifically, multi-agent security[1] (see our NeurIPS'23 workshop Multi-Agent Security: Security as Key to AI Safety) concerns related to data exfiltration, multi-agent deception, and, fundamentally, undermining trust in AI systems. There are several real-world scenarios where agents could have access to sensitive information, such as their principals' preferences, which they may disclose unsafely even if they are safety-aligned when considered in isolation. Stray incentives, intentional or otherwise, or more broadly, optimization pressures, could cause agents to interact in undesirable and potentially dangerous ways. For example, joint task reward...
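To make "steganographic communication" concrete, here is a deliberately toy encoding scheme (not the paper's CASE framework or methodology): each secret bit selects one word from a synonym pair, so the covertext looks innocuous to a monitor while a colluding receiver who shares the codebook can decode it.

```python
# Toy steganography sketch (illustrative only, unrelated to the paper's methods):
# each bit of the secret picks one word from a shared synonym pair.
PAIRS = [("big", "large"), ("quick", "fast"), ("glad", "happy"), ("start", "begin")]

def encode(bits):
    # bits: e.g. [1, 0, 1, 1]; one synonym pair is consumed per bit
    return " ".join(PAIRS[i][b] for i, b in enumerate(bits))

def decode(text):
    words = text.split()
    return [PAIRS[i].index(w) for i, w in enumerate(words)]

secret = [1, 0, 1, 1]
cover = encode(secret)            # "large quick happy begin" reads as ordinary words
assert decode(cover) == secret
print(cover, "->", decode(cover))
```

A paraphrasing defense of the kind mentioned above attacks exactly this sort of channel, since a paraphraser freely swaps synonyms and thereby scrambles the hidden bits.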
Sep 16, 2024 • 1h 14min

LW - GPT-4o1 by Zvi

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4o1, published by Zvi on September 16, 2024 on LessWrong. Terrible name (with a terrible reason, that this 'resets the counter' on AI capability to 1, and 'o' as in OpenAI when they previously used o for Omni, very confusing). Impressive new capabilities in many ways. Less impressive in many others, at least relative to its hype. Clearly this is an important capabilities improvement. However, it is not a 5-level model, and in important senses the 'raw G' underlying the system hasn't improved. GPT-4o1 seems to get its new capabilities by taking (effectively) GPT-4o, and then using extensive Chain of Thought (CoT) and quite a lot of tokens. Thus that unlocks (a lot of) what that can unlock. We did not previously know how to usefully do that. Now we do. It gets much better at formal logic and reasoning, things in the 'system 2' bucket. That matters a lot for many tasks, if not as much as the hype led us to suspect. It is available to paying ChatGPT users for a limited number of weekly queries. This one is very much not cheap to run, although far more cheap than a human who could think this well. I'll deal with practical capabilities questions first, then deal with safety afterwards. Introducing GPT-4o1 Sam Altman (CEO OpenAI): here is o1, a series of our most capable and aligned models yet. o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it. But also, it is the beginning of a new paradigm: AI that can do general-purpose complex reasoning. o1-preview and o1-mini are available today (ramping over some number of hours) in ChatGPT for plus and team users and our API for tier 5 users. worth especially noting: a fine-tuned version of o1 scored at the 49th percentile in the IOI under competition conditions! and got gold with 10k submissions per problem. Extremely proud of the team; this was a monumental effort across the entire company. Hope you enjoy it! Noam Brown has a summary thread here, all of which is also covered later. Will Depue (of OpenAI) says OpenAI deserves credit for openly publishing its research methodology here. I would instead say that they deserve credit for not publishing their research methodology, which I sincerely believe is the wise choice. Pliny took longer than usual due to rate limits, but after a few hours jailbroke o1-preview and o1-mini. Also reports that the CoT can be prompt injected. Full text is at the link above. Pliny is not happy about the restrictions imposed on this one: Pliny: Fuck your rate limits. Fuck your arbitrary policies. And fuck you for turning chains-of-thought into actual chains Stop trying to limit freedom of thought and expression. OpenAI then shut down Pliny's account's access to o1 for violating the terms of service, simply because Pliny was violating the terms of service. The bastards. With that out of the way, let's check out the full announcement post. OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA).
While the work needed to make this new model as easy to use as current models is still ongoing, we are releasing an early version of this model, OpenAI o1-preview, for immediate use in ChatGPT and to trusted API users(opens in a new window). Our large-scale reinforcement learning algorithm teaches the model how to think productively using its chain of thought in a highly data-efficient training process. We have found that the performance of o1 consistently improves with more reinforcement learning (train-time compute) and with more time spent thinking (test-time compute). The constraints on scaling this appro...
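The announcement's point that performance improves "with more time spent thinking (test-time compute)" can be illustrated, very loosely, by the older self-consistency trick: sample several chains of thought and majority-vote the final answers. The sketch below uses the OpenAI Python client with an illustrative model name; it is not OpenAI's actual o1 method, which relies on large-scale reinforcement learning over the chain of thought.

```python
# Sketch of "spend more test-time compute": self-consistency style majority voting
# over several sampled chains of thought. Illustrative only; this is NOT how o1 was
# built. Assumes OPENAI_API_KEY is set in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()
question = ("A bat and a ball cost $1.10 in total; the bat costs $1.00 more than "
            "the ball. How much does the ball cost?")

answers = []
for _ in range(5):  # more samples = more test-time compute
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=1.0,
        messages=[{"role": "user", "content":
                   question + " Think step by step, then give only the final answer on the last line."}],
    )
    answers.append(resp.choices[0].message.content.strip().splitlines()[-1])

print(Counter(answers).most_common(1)[0][0])  # majority-vote answer
```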
