The Nonlinear Library: LessWrong

The Nonlinear Fund
Aug 20, 2024 • 1h 28min

LW - Guide to SB 1047 by Zvi

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Guide to SB 1047, published by Zvi on August 20, 2024 on LessWrong.

We now likely know the final form of California's SB 1047. There have been many changes to the bill as it worked its way to this point. Many changes, including some that were just announced, I see as strict improvements. Anthropic was behind many of the last set of amendments at the Appropriations Committee. In keeping with their "Support if Amended" letter, there are a few big compromises that weaken the upside protections of the bill somewhat in order to address objections and potential downsides.

The primary goal of this post is to answer the question: What would SB 1047 do? I offer two versions: Short and long. The short version summarizes what the bill does, at the cost of being a bit lossy. The long version is based on a full RTFB: I am reading the entire bill, once again. In between those two I will summarize the recent changes to the bill, and provide some practical ways to understand what the bill does. After, I will address various arguments and objections, reasonable and otherwise. My conclusion: This is by far the best light-touch bill we are ever going to get.

Short Version (tl;dr): What Does SB 1047 Do in Practical Terms?

This section is intentionally simplified, but in practical terms I believe this covers the parts that matter. For full details see later sections. First, I will echo the One Thing To Know. If you do not train either a model that requires $100 million or more in compute, or fine-tune such an expensive model using $10 million or more in your own additional compute (or operate and rent out a very large computer cluster)? Then this law does not apply to you, at all. This cannot later be changed without passing another law. (There is a tiny exception: Some whistleblower protections still apply. That's it.) Also, the standard required is now reasonable care, the default standard in common law. No one ever has to 'prove' anything, nor need they fully prevent all harms.

With that out of the way, here is what the bill does in practical terms. IF AND ONLY IF you wish to train a model using $100 million or more in compute (including your fine-tuning costs):

1. You must create a reasonable safety and security plan (SSP) such that your model does not pose an unreasonable risk of causing or materially enabling critical harm: mass casualties or incidents causing $500 million or more in damages.
2. That SSP must explain what you will do, how you will do it, and why. It must have objective evaluation criteria for determining compliance. It must include cybersecurity protocols to prevent the model from being unintentionally stolen.
3. You must publish a redacted copy of your SSP, an assessment of the risk of catastrophic harms from your model, and get a yearly audit.
4. You must adhere to your own SSP and publish the results of your safety tests.
5. You must be able to shut down all copies under your control, if necessary.
6. The quality of your SSP and whether you followed it will be considered in whether you used reasonable care.
7. If you violate these rules, fail to use reasonable care, and harm results, the Attorney General can fine you in proportion to training costs, plus damages for the actual harm.
8. If you fail to take reasonable care, injunctive relief can be sought. The quality of your SSP, and whether or not you complied with it, shall be considered when asking whether you acted reasonably.
9. Fine-tunes that spend $10 million or more are the responsibility of the fine-tuner.
10. Fine-tunes spending less than that are the responsibility of the original developer.

Compute clusters need to do standard KYC when renting out tons of compute. Whistleblowers get protections. They will attempt to establish a 'CalCompute' public compute cluster. You can also read this summary of h...
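For readers who want the "One Thing To Know" even more concretely, here is a minimal sketch of the applicability test as the short version above describes it. The function and parameter names are hypothetical, the thresholds are treated as plain dollar comparisons, and none of this reflects the bill's actual statutory definitions or constitutes legal advice.

```python
# Hypothetical helper mirroring the post's short-version thresholds; not the bill's
# actual statutory definitions and not legal advice.
TRAINING_THRESHOLD_USD = 100_000_000   # $100 million or more in training compute
FINETUNE_THRESHOLD_USD = 10_000_000    # $10 million or more fine-tuning a covered model

def sb1047_applies(training_compute_usd=0, finetune_of_covered_model_usd=0,
                   rents_out_large_cluster=False):
    """Rough applicability check per the summary above (ignoring whistleblower rules)."""
    return (
        training_compute_usd >= TRAINING_THRESHOLD_USD
        or finetune_of_covered_model_usd >= FINETUNE_THRESHOLD_USD
        or rents_out_large_cluster
    )

# A $30M training run with a $2M fine-tune is out of scope entirely
# (aside from the small whistleblower-protection exception noted above).
print(sb1047_applies(training_compute_usd=30_000_000,
                     finetune_of_covered_model_usd=2_000_000))  # False
```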
Aug 20, 2024 • 18min

LW - Thiel on AI and Racing with China by Ben Pace

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thiel on AI & Racing with China, published by Ben Pace on August 20, 2024 on LessWrong.

This post is a transcript of part of a podcast with Peter Thiel, touching on topics of AI, China, extinction, Effective Altruists, and apocalyptic narratives, published on August 16th 2024. If you're interested in reading the quotes, just skip straight to them; the introduction is not required reading.

Introduction

Peter Thiel is probably known by most readers, but briefly: he is a venture capitalist, the first outside investor in Facebook, cofounder of Paypal and Palantir, and wrote Zero to One (a book I have found very helpful for thinking about building great companies). He has also been one of the primary proponents of the Great Stagnation hypothesis (along with Tyler Cowen). More local to the LessWrong scene, Thiel was an early funder of MIRI and a speaker at the first Effective Altruism summit in 2013. He funded Leverage Research for many years, and also a lot of anti-aging research, and the seasteading initiative, and his Thiel Fellowship included a number of people who are around the LessWrong scene. I do not believe he has been active around this scene much in the last ~decade. He rarely appears to express a lot of positions about society, and I am curious to hear them when he does. In 2019 I published the transcript of another longform interview of his here with Eric Weinstein.

Last week another longform interview with him came out, and I got the sense again that even though we disagree on many things, conversation with him would be worthwhile and interesting. Then about 3 hours in he started talking more directly about subjects that I think actively about and some conflicts around AI, so I've quoted the relevant parts below. His interviewer, Joe Rogan, is a very successful comedian and podcaster. He's not someone who I would go to for insights about AI. I think of him as standing in for a well-intentioned average person, for better or for worse, although he is a little more knowledgeable and a little more intelligent and a lot more curious than the average person. The average Joe. I believe he is talking in good faith to the person before him, with curiosity, and making points that seem natural to many.

Artificial Intelligence

Discussion focused on the AI race and China, starting at 2:56:40. The opening monologue by Rogan is skippable.

Rogan: If you look at this mad rush for artificial intelligence - like, they're literally building nuclear reactors to power AI.

Thiel: Well, they're talking about it.

Rogan: Okay. That's because they know they're gonna need enormous amounts of power to do it. Once it's online, and it keeps getting better and better, where does that go? That goes to a sort of artificial life-form. I think either we become that thing, or we integrate with that thing and become cyborgs, or that thing takes over. And that thing becomes the primary life force of the universe. And I think that biological life, we look at like life, because we know what life is, but I think it's very possible that digital life or created life might be a superior life form. Far superior. [...] I love people, I think people are awesome. I am a fan of people. But if I had to look logically, I would assume that we are on the way out.
And that the only way forward, really, to make an enormous leap in terms of the integration of society and technology and understanding our place in the universe, is for us to transcend our physical limitations that are essentially based on primate biology, and these primate desires for status (like being the captain), or for control of resources, all of these things - we assume these things are standard, and that they have to exist in intelligent species. I think they only have to exist in intelligent species that have biological limitations. I think in...
Aug 20, 2024 • 38min

LW - Limitations on Formal Verification for AI Safety by Andrew Dickson

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Limitations on Formal Verification for AI Safety, published by Andrew Dickson on August 20, 2024 on LessWrong.

In the past two years there has been increased interest in formal verification-based approaches to AI safety. Formal verification is a sub-field of computer science that studies how guarantees may be derived by deduction on fully-specified rule-sets and symbol systems. By contrast, the real world is a messy place that can rarely be straightforwardly represented in a reductionist way. In particular, physics, chemistry and biology are all complex sciences which do not have anything like complete symbolic rule sets. Additionally, even if we had such rules for the natural sciences, it would be very difficult for any software system to obtain sufficiently accurate models and data about initial conditions for a prover to succeed in deriving strong guarantees for AI systems operating in the real world. Practical limitations like these on formal verification have been well understood for decades by engineers and applied mathematicians building real-world software systems, which makes it puzzling that they have mostly been dismissed by leading researchers advocating for the use of formal verification in AI safety so far. This paper will focus in on several such limitations and use them to argue that we should be extremely skeptical of claims that formal verification-based approaches will provide strong guarantees against major AI threats in the near term.

What do we Mean by Formal Verification for AI Safety?

Some examples of the kinds of threats researchers hope formal verification will help with come from the paper "Provably Safe Systems: The Only Path to Controllable AGI" [1] by Max Tegmark and Steve Omohundro (emphasis mine):

Several groups are working to identify the greatest human existential risks from AGI. For example, the Center for AI Safety recently published 'An Overview of Catastrophic AI Risks' which discusses a wide range of risks including bioterrorism, automated warfare, rogue power seeking AI, etc. Provably safe systems could counteract each of the risks they describe. These authors describe a concrete bioterrorism scenario in section 2.4: a terrorist group wants to use AGI to release a deadly virus over a highly populated area. They use an AGI to design the DNA and shell of a pathogenic virus and the steps to manufacture it. They hire a chemistry lab to synthesize the DNA and integrate it into the protein shell. They use AGI controlled drones to disperse the virus and social media AGIs to spread their message after the attack. Today, groups are working on mechanisms to prevent the synthesis of dangerous DNA. But provably safe infrastructure could stop this kind of attack at every stage: biochemical design AI would not synthesize designs unless they were provably safe for humans, data center GPUs would not execute AI programs unless they were certified safe, chip manufacturing plants would not sell GPUs without provable safety checks, DNA synthesis machines would not operate without a proof of safety, drone control systems would not allow drones to fly without proofs of safety, and armies of persuasive bots would not be able to manipulate media without proof of humanness.
[1] The above quote contains a number of very strong claims about the possibility of formally or mathematically provable guarantees around software systems deployed in the physical world - for example, the claim that we could have safety proofs about the real-world good behavior of DNA synthesis machines, or drones. From a practical standpoint, our default stance towards such claims should be skepticism, since we do not have proofs of this sort for any of the technologies we interact with in the real world today. For example, DNA synthesis machines exist today and do not come with f...
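To make the contrast concrete, here is a minimal sketch, assuming the open-source z3-solver Python bindings, of the kind of guarantee formal verification genuinely delivers when a property is fully specified in symbols. The clamp example is purely illustrative and not from the paper; the point is that no comparable symbolic specification exists for a claim like "safe for humans".

```python
# A minimal sketch of "deriving guarantees by deduction on a fully-specified symbol
# system". The clamp property below is fully specified, so Z3 can prove it for every
# integer input. Nothing analogous exists for "this DNA design is safe for humans":
# there is no symbolic rule set to deduce from, which is the gap the post points at.
from z3 import Int, If, And, prove

x = Int("x")

def clamp(v, lo, hi):
    # Symbolic clamp: returns lo if v < lo, hi if v > hi, else v.
    return If(v < lo, lo, If(v > hi, hi, v))

# Claim: for every integer x, clamp(x, 0, 100) lies in [0, 100].
claim = And(clamp(x, 0, 100) >= 0, clamp(x, 0, 100) <= 100)
prove(claim)  # prints "proved": a deductive guarantee over an infinite input space
```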
Aug 20, 2024 • 16min

LW - Untrustworthy models: a frame for scheming evaluations by Olli Järviniemi

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Untrustworthy models: a frame for scheming evaluations, published by Olli Järviniemi on August 20, 2024 on LessWrong.

In this post, I'll talk about trustworthiness, a concept similar to scheming and deceptive alignment. I prefer this concept, as I find it A) is directly important in practice, B) more readily distinguishes between capability for scheming and in fact scheming, and C) is clearer how to evaluate for. I'll also distinguish between different types of untrustworthy models - active planners, sleeper agents and opportunists - that are often conflated. I think when people talk about evaluating models for something like scheming, they'd often be better off talking about trustworthiness instead, and I think it's overall a more applicable frame for designing evaluations and decision making. Many of the ideas here are from Buck Shlegeris and Ryan Greenblatt; thanks to them and Cody Rushing for feedback.

Suppose an AI company has developed a new model, and wants to use it to perform large quantities of ML research (internal deployment). However, the company is worried about the model being deceptively aligned[1] and trying to take catastrophic actions, such as hacking the data center or copying its weights to a remote server. What can the company do? This is a textbook example of the type of problem AI control tries to solve, and a basic strategy looks like this: Every time you are using the new model and it takes some action, use another model to check the action is OK before letting it be executed. If the action is not OK, it's not executed and humans are alerted about suspicious activity. (If you have never heard of AI control before, you might want to try playing the Bash Control game to get familiar with the concept. In the game, your aim is to download and execute a malicious program without GPT-3.5 catching you.)

Let's call our characters Smart and Weak: Smart is the new model doing ML research while maybe trying to escape, while Weak is a model trained to monitor Smart. The reason for these names will become apparent soon. There are many ways this control strategy could fail. Here are some:[2]

1. Smart came up with an extremely subtle way to escape that Weak is just too dumb to notice.
2. Smart successfully jailbreaks (via e.g. a suffix attack) Weak to not alert humans about its escape attempt.
   1. Or: Weak happens to have a backdoor, which Smart finds and exploits.
3. Smart successfully convinces Weak (via natural language) that its actions are not malicious.[3]
4. Smart's escape plan is unlike anything we have trained Weak on. While Weak "would be capable of" telling that Smart is escaping, our training didn't specify well enough what we wanted, so Weak generalizes incorrectly out-of-distribution and doesn't alert humans.
5. The company has way too limited of a budget for human alerts, so while Weak does rank Smart's actions in the top 0.012% suspiciousness, it's not enough to meet the 0.01% threshold for alerting humans.
6. Weak believes that not alerting humans in this case would allow Smart to escape, and believes that this leads to better outcomes in light of its preferences, and thus for instrumental reasons deliberately does not alert humans, indeed causing Smart to escape.

Failure 6 is very different from the others!
Whereas failures 1 to 5 are broadly about "Weak is just not a capable/good enough monitor to do what we wanted", failure 6 is about "Weak knows perfectly well Smart is escaping, but betrays humans and doesn't tell them about this". This issue is most apparent if you were to use a copy of the AI lab's new model to monitor itself (so Smart and Weak are the same model[4]): If you don't trust the new model to do research without trying to escape, you probably can't trust it to alert humans about escape attempts (by itself, no less!) either...
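A minimal sketch of the basic control strategy described in this excerpt, with the function names and the 0.01% audit budget as illustrative assumptions rather than any lab's real pipeline:

```python
# Untrusted "Smart" proposes actions, weaker "Weak" scores each one, and only actions
# above a suspiciousness threshold (calibrated to a small human-audit budget) get
# escalated to humans before anything runs.
def run_with_monitoring(smart, weak, execute, alert_humans, tasks, threshold):
    """smart(task) -> proposed action; weak(action) -> suspiciousness score in [0, 1]."""
    for task in tasks:
        action = smart(task)   # untrusted model proposes an action
        score = weak(action)   # trusted-but-weaker monitor rates it
        if score >= threshold:
            # Failure mode 5 lives here: a threshold set too tightly lets a
            # 99.988th-percentile escape attempt through unescalated.
            # Failure mode 6 lives inside `weak`: a scheming monitor can simply
            # return low scores for actions it wants to let through.
            alert_humans(task, action, score)
        else:
            execute(action)    # action passes the monitor
```

The trustworthiness question in the post is precisely whether `weak` can be relied on not to exhibit failure mode 6.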
Aug 20, 2024 • 13min

LW - LLM Applications I Want To See by sarahconstantin

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLM Applications I Want To See, published by sarahconstantin on August 20, 2024 on LessWrong.

I'm convinced that people who are interested in large language models (LLMs) are overwhelmingly focused on general-purpose "performance" at the expense of exploring useful (or fun) applications. As I'm working on a personal project, I've been learning my way around HuggingFace, which is a hosting platform, set of libraries, and almost-social-network for the open-source AI community. It's fascinating, and worth exploring even if you're not going to be developing foundation models from scratch yourself; if you simply want to use the latest models, build apps around them, or adapt them slightly to your own purposes, HuggingFace seems like the clear place to go. You can look at trending models, and trending public "spaces", aka cloud-hosted instances of models that users can test out, and get a sense of where the "energy" is.

And what I see is that almost all the "energy" in LLMs is on general-purpose models, competing on general-purpose question-answering benchmarks, sometimes specialized to particular languages, or to math or coding. "How can I get something that behaves basically like ChatGPT or Claude or Gemini, but gets fewer things wrong, and ideally requires less computing power and gets the answer faster?" is an important question, but it's far from the only interesting one! If I really search I can find "interesting" specialized applications like "predicts a writer's OCEAN personality scores based on a text sample" or "uses abliteration to produce a wholly uncensored chatbot that will indeed tell you how to make a pipe bomb" but mostly…it's general-purpose models. Not applications for specific uses that I might actually try.

And some applications seem to be eager to go to the most creepy and inhumane use cases. No, I don't want little kids talking to a chatbot toy, especially. No, I don't want a necklace or pair of glasses with a chatbot I can talk to. (In public? Imagine the noise pollution!) No, I certainly don't want a bot writing emails for me! Even the stuff I found potentially cool (an AI diary that analyzes your writing and gives you personalized advice) ended up being, in practice, so preachy that I canceled my subscription.

In the short term, of course, the most economically valuable thing to do with LLMs is duplicating human labor, so it makes sense that the priority application is autogenerated code. But the most creative and interesting potential applications go beyond "doing things humans can already do, but cheaper" to do things that humans can't do at all on comparable scale.

A Personalized Information Environment

To some extent, social media, search, and recommendation engines were supposed to enable us to get the "content" we want. And mostly, to the extent that's turned out to be a disappointment, people complain that getting exactly what you want is counterproductive - filter bubbles, superstimuli, etc. But I find that we actually have incredibly crude tools for getting what we want. We can follow or unfollow, block or mute people; we can upvote and downvote pieces of content and hope "the algorithm" feeds us similar results; we can mute particular words or tags. But what we can't do, yet, is define a "quality" we're looking for, or a "genre" or a "vibe", and filter by that criterion.
The old tagging systems (on Tumblr or AO3 or Delicious, or back when hashtags were used unironically on Twitter) were the closest approximation to customizable selectivity, and they're still pretty crude. We can do a lot better now.

Personalized Content Filter

This is a browser extension. You teach the LLM, by highlighting and saving examples, what you consider to be "unwanted" content that you'd prefer not to see. The model learns a classifier to sort all text in yo...
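One way the filter idea could work under the hood is sketched below, assuming some sentence-embedding function `embed` (a stand-in, not any specific library's API); a real browser extension would run this classifier over the text nodes of each page.

```python
# A minimal sketch of a personalized "unwanted content" filter: the user saves a few
# highlighted examples, and new text is hidden when it is semantically close to any
# of them. `embed` is an assumed callable mapping a string to a 1-D numpy vector.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class PersonalContentFilter:
    def __init__(self, embed, threshold=0.75):
        self.embed = embed              # callable: str -> 1-D numpy vector
        self.threshold = threshold
        self.unwanted_vectors = []      # embeddings of user-highlighted examples

    def add_unwanted_example(self, text):
        self.unwanted_vectors.append(self.embed(text))

    def is_unwanted(self, text):
        if not self.unwanted_vectors:
            return False
        v = self.embed(text)
        # Hide text that closely resembles any saved "unwanted" example.
        return max(cosine(v, u) for u in self.unwanted_vectors) >= self.threshold
```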
Aug 19, 2024 • 21min

LW - A primer on why computational predictive toxicology is hard by Abhishaike Mahajan

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A primer on why computational predictive toxicology is hard, published by Abhishaike Mahajan on August 19, 2024 on LessWrong. Introduction There are now (claimed) foundation models for protein sequences, DNA sequences, RNA sequences, molecules, scRNA-seq, chromatin accessibility, pathology slides, medical images, electronic health records, and clinical free-text. It's a dizzying rate of progress. But there's a few problems in biology that, interestingly enough, have evaded a similar level of ML progress, despite there seemingly being all the necessary conditions to achieve it. Toxicology is one of those problems. This isn't a new insight, it was called out in one of Derek Lowe's posts, where he said: There are no existing AI/ML systems that mitigate clinical failure risks due to target choice or toxicology. He also repeats it in a more recent post: '…the most badly needed improvements in drug discovery are in the exact areas that are most resistant to AI and machine learning techniques. By which I mean target selection and predictive toxicology.' Pat Walters also goes into the subject with much more depth, emphasizing how difficult the whole field is. As someone who isn't familiar at all with the area of predictive toxicology, that immediately felt strange. Why such little progress? It can't be that hard, right? Unlike drug development, where you're trying to precisely hit some key molecular mechanism, assessing toxicity almost feels…brutish in nature. Something that's as clear as day, easy to spot out with eyes, easier still to do with a computer trained to look for it. Of course, there will be some stragglers that leak through this filtering, but it should be minimal. Obviously a hard problem in its own right, but why isn't it close to being solved? What's up with this field? Some background One may naturally assume that there is a well-established definition of toxicity, a standard blanket definition to delineate between things that are and aren't toxic. While there are terms such as LD50, LC50, EC50, and IC50, used to explain the degree by which something is toxic, they are an immense oversimplification. When we say a substance is "toxic," there's usually a lot of follow-up questions. Is it toxic at any dose? Only above a certain threshold? Is it toxic for everyone, or just for certain susceptible individuals (as we'll discuss later)? The relationship between dose and toxicity is not always linear, and can vary depending on the route of exposure, the duration of exposure, and individual susceptibility factors. A dose that causes no adverse effects when consumed orally might be highly toxic if inhaled or injected. And a dose that is well-tolerated with acute exposure might cause serious harm over longer periods of chronic exposure. The very definition of an "adverse effect" resulting from toxicity is not always clear-cut either. Some drug side effects, like mild nausea or headache, might be considered acceptable trade-offs for therapeutic benefit. But others, like liver failure or birth defects, would be considered unacceptable at any dose. This is particularly true when it comes to environmental chemicals, where the effects may be subtler and the exposure levels more variable. Is a chemical that causes a small decrease in IQ scores toxic? 
What about one that slightly increases the risk of cancer over a lifetime (20+ years)? And this is one of the major problems with applying predictive toxicology at all - defining what is and isn't toxic is hard! One may assume the FDA has clear stances on all these, but even they approach it from a 'vibe-based' perspective. They simply collate the data from in-vitro studies, animal studies, and human clinical trials, and arrive at an approval/no-approval conclusion that is, very often, at odds with some portion of the medical comm...
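A small illustration of why single potency numbers like EC50 oversimplify, using the standard Hill (sigmoidal) dose-response model with made-up values: two compounds can share an EC50 yet differ sharply in low-dose risk.

```python
# Illustrative only; not real toxicology data.
def hill_response(dose, ec50, hill_coefficient, emax=1.0):
    """Fractional effect under the Hill (sigmoidal dose-response) model."""
    return emax * dose**hill_coefficient / (ec50**hill_coefficient + dose**hill_coefficient)

# Same EC50 (10 units), different steepness:
for dose in (0.1, 1.0, 10.0, 100.0):
    shallow = hill_response(dose, ec50=10.0, hill_coefficient=0.5)
    steep = hill_response(dose, ec50=10.0, hill_coefficient=4.0)
    print(f"dose={dose:>6}: shallow slope -> {shallow:.3f}, steep slope -> {steep:.3f}")

# At 1/100th of the EC50, the shallow-slope compound already produces roughly 9% of
# the maximal effect while the steep-slope one produces essentially none: identical
# "EC50" labels, very different low-dose behavior.
```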
Aug 19, 2024 • 12min

LW - Interdictor Ship by lsusr

Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interdictor Ship, published by lsusr on August 19, 2024 on LessWrong. The standard operating procedure against a primitive forest-dwelling enemy is to retreat to orbit and then bombard the planet until there isn't a forest anymore. However, the only reason the Galactic Empire was in the Alpha Centauri A system in the first place was because of the fragile mineral resources underneath that forest. Dropping tungsten rods at hypersonic speeds would risk destroying the only thing of value on Pandora. Alien aborigines armed with bows and arrows damaged Imperial legitimacy across the galaxy. It was like losing a battle to Ewoks. The Emperor's solution had been to hire an Ewok. Mitth'raw'nuruodo was a dwarf by Na'vi standards, but the blue alien stood a head above most humans. Originally hired as a translator, the Imperials on Pandora quickly noticed that the only patrols who came back alive were those that followed Mitth'raw'nuruodo's advice. Pretty soon, the moon was pacified and Mitth'raw'nuruodo was its de facto king. Nobody liked the idea of an alien being in control of such a strategically-valuable moon. To get rid of him, they promoted Mitth'raw'nuruodo to Admiral. In space, many parsecs away from Pandora, the humans under Mitth'raw'nuruodo's command couldn't pronounce "Mitth'raw'nuruodo". That was fine, thought Mitth'raw'nuruodo. Everyone just called him "Thrawn". Amateurs talk strategy. Professionals talk logistics. The Imperial Navy Defense Acquisitions Board (INDAB) originally met on Coruscant, but was moved to the Life Star for security reasons. Idle chitchat usually preceded the important negotiations. "What I don't get is why we call it the 'Life Star'," said Chief Bast, "This thing blows up planets. Shouldn't it be called the 'Death Star'?" "Do you want us to look like the bad guys?" said General Tagge, "The Department of Defense isn't called the 'Department of War'. The Department of Justice isn't called the 'Department of Incarceration'. The Department of Education isn't called the 'Department of Child Indoctrination'. Calling this megastructure the 'Life Star' buys us legitimacy for the low, low price of zero Galactic Credits." "But won't people call us out on our Bantha fodder when we call things the opposite of what they really are?" said Chief Bast. "Humans don't. Aliens sometimes make a fuss about it," General Tagge said, "No offense, Admiral." "None taken," said Thrawn. "Speaking of which, I've read your recent report," said General Tagge. He projected the Aurebesh symbols where everyone could see, "I forwarded the report to everyone here, but since nobody (except me) ever reads their meeting briefings, why don't you give us the quick summary." "Of course," Thrawn stood up, "I have two theses. First of all, the Life Star is a tremendous waste of credits. This weapon's only possible use is against a peer adversary or a super-peer adversary. We control two thirds of the galaxy. We have no peer or super-peer adversaries. The Emperor's pet project consumes massive resources while doing nothing to advance our military objectives." "The Life Star killed all the Rebel scum on Alderaan," said Grand Moff Tarkin. 
"I have always considered you a rational agent," said Thrawn, "I am very curious how you, the commander of the Life Star, came to the conclusion that destroying Alderaan was the best way of advancing Imperial interests." "If you have a problem with my methods then you can bring it to me in private," said Tarkin, "Your second thesis is the topic I hoped to discuss." Thrawn pressed a button and the Aurebesh words were replaced with different Aurebesh words. They continued to go unread. "Rebel terrorists have recently equipped their starfighters with hyperdrives. They can strike anywhere, and will choose the weakest targets. Our current grand strategy is ...
Aug 19, 2024 • 4min

LW - Decision Theory in Space by lsusr

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Decision Theory in Space, published by lsusr on August 19, 2024 on LessWrong. "Since you are reluctant to provide us with the location of the Rebel base," said Grand Moff Tarkin, "I have chosen to test this station's destructive power on your home planet of Alderaan." "No. Alderaan is peaceful. We have no weapons there. It is a loyal planet under Imperial control. Striking Alderaan would destroy your own resources and foment rebellion. Destroying it is irrational," said Princess Leia, perfectly calm. "Nonsense," said Tarkin, "That is a naïve understanding of decision theory. I am a causal decision theorist, but I acknowledge the value of precommitments. I therefore precommit to destroying Alderaan unless you reveal to me the location of the Rebel base. This is not an irrational act if you capitulate to me." "But it is an irrational act if I do not capitulate to you," said Leia, "I am a functional decision theorist. The algorithm I use to select my decision accounts for the fact that you are modelling my mind. You are a rational agent. You only threaten me because you expect me to succumb to your blackmail. Because of that I will not succumb to your blackmail." "I'm going to do it," said Tarkin. "Sure you are," said Leia. "I'm really going to blow up the planet," said Tarkin. "Be my guest," said Leia, with a smile, "Aim for the continent Anaander. Its inhabitants always annoyed me. We'll see who has the last laugh." "I'm really really going to do it," said Tarkin. "I grow tired of saying this, so it'll be the last time. Just blow up the planet already. I have an execution I'm late for…." Leia's voice trailed off. She was suddenly aware of the deep, mechanical breathing behind her. Kshhhhhhh. Kuuuuuuo. Kshhhhhhh. Kuuuuuuo. Everyone in the Life Star command center turned to face the cyborg space wizard samurai monk in black armor. Kshhhhhhh. Kuuuuuuo. Kshhhhhhh. Kuuuuuuo. Vader's cloak fluttered and a couple indicator lights on his life support system blinked, but no muscles or actuators moved. A semi-mechanical voice in the uncanny valley spoke from Vader's mask. "Chief Gunnery Officer Tenn Graneet, you may fire when ready." "Commander Tenn Graneet, belay that order," said Tarkin. The Chief Gunnery Officer held his hand above his control panel, touching nothing. He looked rapidly back-and-forth between Tarkin and Vader. Tarkin turned angrily to face Vader. "Are you insane?" Tarkin hissed. Vader ignored the question and looked at Leia. "Where is the Rebel base?" Leia's eyes were wide with horror and her mouth was wide with a silent scream. She clenched her teeth and stared at the floor. "Tatooine. They're on Tatooine," Leia said. "Chief Gunnery Officer Tenn Graneet, you may fire when ready," said Vader. "What‽" exclaimed Tarkin. Graneet lifted the clear cover off of the authorization lever. He moved his hand as slowly as he could. "Commander Tenn Graneet, belay that order," said Tarkin. "Commander Tenn Graneet, ignore all orders you receive from the Grand Moff," said Vader. "Commander Tenn Graneet, I am your commanding officer. Ignore all orders from 'Lord' Vader. If you continue to disobey my orders, you will be court martialed," said Tarkin. Graneet continued the process of authorizing the firing team. Tarkin drew his blaster pistol and held it to Graneet's head.
"Stop or I will shoot you in the head right now," said Tarkin. Bkzzzzzzzzzzzzz. Tarkin felt the heat of Vader's red lightsaber a centimeter from his neck. The next seconds felt like slow motion. Graneet paused. Then Graneet continued the firing activation sequence. Tarkin pulled the trigger. Click. Nothing came out of the blaster's emitter. Vader didn't even bother to watch his order get carried out. He just turned around, deactivated his lightsaber, and strode out of the command center. Vader's cape billowed...
Aug 19, 2024 • 13min

LW - Quick look: applications of chaos theory by Elizabeth

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick look: applications of chaos theory, published by Elizabeth on August 19, 2024 on LessWrong.

Introduction

Recently we (Elizabeth Van Nostrand and Alex Altair) started a project investigating chaos theory as an example of field formation.[1] The number one question you get when you tell people you are studying the history of chaos theory is "does that matter in any way?".[2] Books and articles will list applications, but the same few seem to come up a lot, and when you dig in, application often means "wrote some papers about it" rather than "achieved commercial success". In this post we checked a few commonly cited applications to see if they pan out. We didn't do deep dives to prove the mathematical dependencies, just sanity checks. Our findings: Big Chaos has a very good PR team, but the hype isn't unmerited either. Most of the commonly touted applications never received wide usage, but chaos was at least instrumental in several important applications that are barely mentioned on wikipedia. And it was as important for weather as you think it is.

Applications

Cryptography and random number generators - Strong No (Alex)

The wikipedia page for Chaos theory has a prominent section on cryptography. This sounds plausible; you certainly want your encryption algorithm to display sensitive dependence on initial conditions in the sense that changing a bit of your input randomizes the bits of your output. Similarly, one could imagine using the sequence of states of a chaotic system as a random number generator. However a quick google search makes me (Alex) think this is not a serious application. I've seen it claimed[3] that one of the earliest pseudo-random number generators used the logistic map, but I was unable to find a primary reference to this from a quick search. Some random number generators use physical entropy from outside the computer (rather than a pseudo-random mathematical computation). There are some proposals to do this by taking measurements from a physical chaotic system, such as an electronic circuit or lasers. This seems to be backward, and not actually used in practice. The idea is somewhat roasted in the Springer volume "Open Problems in Mathematics and Computational Science" 2014, chapter "True Random Number Generators" by Mario Stipčević and Çetin Kaya Koç. Other sources that caused me to doubt the genuine application of chaos to crypto include this Crypto StackExchange question, and my friend who has done cryptography research professionally. As a final false positive example, a use of lava lamps as a source of randomness once gained some publicity. Though this was patented under an explicit reference to chaotic systems, it was only used to generate a random seed, which doesn't really make use of the chaotic dynamics. It sounds to me like it's just a novelty, and off-the-shelf crypto libraries would have been just fine.

Anesthesia, Fetal Monitoring, and Approximate Entropy - No (Elizabeth)

Approximate Entropy (ApEn) is a measurement designed to assess how regular and predictable a system is, a simplification of Kolmogorov-Sinai entropy. ApEn was originally invented for analyzing medical data, such as brain waves under anesthesia or fetal heart rate. It has several descendants, including Sample Entropy; for purposes of this article I'm going to refer to them all as ApEn.
Researchers have since applied the hammer of ApEn and its children to many nails, but as far as I (Elizabeth) can tell it has never reached widespread usage. ApEn's original application was real time fetal heart monitoring; however as far as I can tell it never achieved commercial success and modern doctors use simpler algorithms to evaluate fetal monitoring data. ApEn has also been extensively investigated for monitoring brain waves under anesthesia. However commercially avail...
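For the curious, here is what the ApEn computation looks like in practice; this is an illustrative reimplementation of Pincus's definition in plain numpy, not the code used in any clinical monitor.

```python
# Approximate Entropy (ApEn): count how often patterns of length m that match within
# tolerance r keep matching at length m+1. Lower values mean a more regular signal.
import numpy as np

def approximate_entropy(signal, m=2, r=None):
    u = np.asarray(signal, dtype=float)
    n = len(u)
    if r is None:
        r = 0.2 * u.std()          # a common rule of thumb for the tolerance

    def phi(m):
        # All length-m "template" vectors from the series.
        templates = np.array([u[i:i + m] for i in range(n - m + 1)])
        # For each template, the fraction of templates within Chebyshev distance r.
        counts = [
            np.mean(np.max(np.abs(templates - t), axis=1) <= r)
            for t in templates
        ]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

# A regular sine wave scores lower (more predictable) than white noise:
t = np.linspace(0, 8 * np.pi, 400)
print(approximate_entropy(np.sin(t)))             # small
print(approximate_entropy(np.random.randn(400)))  # noticeably larger
```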
Aug 19, 2024 • 7min

LW - Why you should be using a retinoid by GeneSmith

Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you should be using a retinoid, published by GeneSmith on August 19, 2024 on LessWrong.

If you want the 60 second version of this post that just tells you what to do, click here to skip to the summary. There is a cheap, widely available, extremely effective treatment for skin aging that has been around for decades, and almost no one outside of dermatologists and beauty bloggers seems to know about it. It's called a retinoid. I first learned about their existence a few months ago after looking in the mirror one day and noticing I was starting to get permanent wrinkles around my mouth. Naturally, I wondered if there was anything I could do to fix them. An ex of mine was a skincare addict and had perhaps the nicest skin of anyone I have ever met. I texted her to ask for advice and she recommended I use a retinoid. Since I didn't know what those were or how they worked, I watched a YouTube video. Thus began my 3 month journey down the rabbit hole of skin care product reviews and progress videos. In this post I'll summarize what I've learned.

What are retinoids?

Retinoids are a family of medications derived from vitamin A. In the same way that Ozempic was originally developed as an anti-diabetes drug and later turned out to have a broader set of benefits, retinoids were originally developed to treat acne but turned out to do far more than clear up breakouts. These effects can be summed up as "improving almost everything about skin". If we had medications that worked as well for other organs as retinoids work for skin, people would probably live well into their hundreds. It's actually kind of remarkable just how well retinoids work. Exactly HOW retinoids work is a little difficult to describe because they seem to do so many different things. Here's a brief list:

- Retinoids increase collagen production
- They decrease degradation of collagen within the skin
- They protect the extracellular matrix by reducing the activity of metalloproteinases
- They thicken the epidermis, which tends to thin as we age
- They increase the formation of blood vessels, which makes the skin's color look nicer and speeds wound healing
- They increase the levels of fibronectin and tropoelastin, which makes for firmer, bouncier skin

These things just sound kind of vague and boring until you start to look at people who have used retinoids for a long time. Here's a screenshot of "Melissa55" on YouTube, a woman in her late 60s who has been using Retin-A (the first available retinoid) for 28 years. That's already pretty remarkable on its own (most people in their late 60s do not look like Melissa), but what's even MORE remarkable is that retinoids can actually REVERSE skin aging after it has taken place. Here's a couple of before and after pictures of various people who used topical retinoids in a study done back in the 90s. This is in addition to their intended use of reducing acne, where they perform quite well. Retinoids don't ALWAYS yield these kinds of results. You can find many pictures online where people essentially look the same after using them. And you can even find the occasional person whose acne got WORSE with use (though this seems to be pretty rare). But the vast majority of people see significant visible improvement in the appearance of their skin, and these benefits only increase with time.

Ok, I'm sold. Where do I get a retinoid?
The easiest thing to do here is to just buy adapalene on Amazon. Adapalene is an over-the-counter retinoid which seems to work quite well and generally be well tolerated. You can get enough to apply it to your face every night for about $10-15 per month. The most potent retinoid is tretinoin, which is the one all the dermatologists recommend. It's the best-studied ingredient for anti-aging, seems to penetrate the skin better and reach deeper layers, and overall seems m...
