
The Nonlinear Library: LessWrong

Latest episodes

Sep 11, 2024 • 2min

LW - Formalizing the Informal (event invite) by abramdemski

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Formalizing the Informal (event invite), published by abramdemski on September 11, 2024 on LessWrong.

Formalizing the Informal

One way to view MIRI's Agent Foundations research is that it saw the biggest problem in AI safety as "human preferences are informal, but we need to somehow get formal guarantees about them" -- and so, in response, it set out to make a formal-informal bridge. Recently, I've been thinking about how we might formally represent the difference between formal and informal. My prompt is something like: if we assume that classical probability theory applies to "fully formal" propositions, how can we generalize it to handle "informal" stuff?

I'm going to lead a discussion on this tomorrow, Wednesday Sept. 11, at 11am EDT (8am Pacific, 4pm UK).

Discord Event link (might not work for most people): https://discord.com/events/1237103274591649933/1282859362125352960

Zoom link (should work for everyone): https://us06web.zoom.us/j/6274543940?pwd=TGZpY3NSTUVYNHZySUdCQUQ5ZmxQQT09

You can support my work on Patreon. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Sep 10, 2024 • 1h 3min

LW - AI #80: Never Have I Ever by Zvi

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #80: Never Have I Ever, published by Zvi on September 10, 2024 on LessWrong.

(This was supposed to be on Thursday but I forgot to cross-post)

Will AI ever make art? Fully do your coding? Take all the jobs? Kill all the humans? Most of the time, the question comes down to a general disagreement about AI capabilities. How high on a 'technological Richter scale' will AI go? If you feel the AGI and think capabilities will greatly improve, then AI will also be able to do any particular other thing, and arguments that it cannot are almost always extremely poor. However, if frontier AI capabilities level off soon, then it is an open question how far we can get that to go in practice.

A lot of frustration comes from people implicitly making the claim that general AI capabilities will level off soon, usually without noticing they are doing that. At its most extreme, this is treating AI as if it will only ever be able to do exactly the things it can already do. Then, when it can do a new thing, you add exactly that new thing. Realize this, and a lot of things make a lot more sense, and are a lot less infuriating. There are also continuous obvious warning signs of what is to come, that everyone keeps ignoring, but I'm used to that. The boat count will increment until morale improves.

The most infuriating thing that is unrelated to that was DOJ going after Nvidia. It sure looked like the accusation was that Nvidia was too good at making GPUs. If you dig into the details, you do see accusations of what would be legitimately illegal anti-competitive behavior, in which case Nvidia should be made to stop doing that. But one cannot shake the feeling that the core accusation is still probably too much winning via making too good a product. The nerve of that Jensen.

Table of Contents

1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Sorry, what was the question?
4. Language Models Don't Offer Mundane Utility. A principal-agent problem?
5. Fun With Image Generation. AI supposedly making art, claims AI never will.
6. Copyright Confrontation. OpenAI asks for a mix of forgiveness and permission.
7. Deepfaketown and Botpocalypse Soon. How to fool the humans.
8. They Took Our Jobs. First it came for the unproductive, and the call centers.
9. Time of the Season. If no one else is working hard, why should Claude?
10. Get Involved. DeepMind frontier safety, Patel thumbnail competition.
11. Introducing. Beijing AI Safety and Governance, Daylight Computer, Honeycomb.
12. In Other AI News. Bigger context windows, bigger funding rounds.
13. Quiet Speculations. I don't want to live in a world without slack.
14. A Matter of Antitrust. DOJ goes after Nvidia.
15. The Quest for Sane Regulations. A few SB 1047 support letters.
16. The Week in Audio. Dario Amodei, Dwarkesh Patel, Anca Dragan.
17. Rhetorical Innovation. People feel strongly about safety. They're against it.
18. The Cosmos Institute. Philosophy for the age of AI.
19. The Alignment Checklist. What will it take?
20. People Are Worried About AI Killing Everyone. Predicting worries doesn't work.
21. Other People Are Not As Worried About AI Killing Everyone. What happened?
22. Five Boats and a Helicopter. It's probably nothing.
23. Pick Up the Phone. Chinese students talk about AI, safety and regulation.
24. The Lighter Side.
Do we have your attention now?

Language Models Offer Mundane Utility

Prompting suggestion reminder, perhaps:

Rohan Paul: Simply adding "Repeat the question before answering it." somehow makes the models answer the trick question correctly. Probable explanations: Repeating the question in the model's context, significantly increasing the likelihood of the model detecting any potential "gotchas." One hypothesis is that maybe it puts the model into more of a completion mode vs answering from a c...
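To make the tip concrete, here is a minimal sketch of how one might compare answers with and without the prepended instruction, assuming the OpenAI Python client; the model name and the sample trick question are placeholders, not anything from the post:

```python
# Minimal sketch: compare answers with and without the "repeat the question" instruction.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(question: str, repeat_first: bool) -> str:
    prompt = question
    if repeat_first:
        # The whole trick: prepend one instruction so the model restates the question first.
        prompt = "Repeat the question before answering it.\n\n" + question
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

trick = "A farmer has 10 sheep. All but 7 run away. How many are left?"  # placeholder trick question
print("Without repeat:", ask(trick, repeat_first=False))
print("With repeat:   ", ask(trick, repeat_first=True))
```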
Sep 10, 2024 • 32min

LW - Economics Roundup #3 by Zvi

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economics Roundup #3, published by Zvi on September 10, 2024 on LessWrong.

I am posting this now largely because it is the right place to get in discussion of unrealized capital gains taxes and other campaign proposals, but also there is always plenty of other stuff going on. As always, remember that there are plenty of really stupid proposals always coming from all sides. I'm not spending as much time talking about why it's awful to, for example, impose gigantic tariffs on everything, because if you are reading this I presume you already know.

The Biggest Economics Problem

The problem, perhaps, in a nutshell:

Tess: like 10% of people understand how markets work and about 10% deeply desire and believe in a future that's drastically better than the present but you need both of these to do anything useful and they're extremely anticorrelated so we're probably all fucked.

In my world the two are correlated. If you care about improving the world, you invest in learning about markets. Alas, in most places, that is not true.

The problem, in a nutshell, attempt number two:

Robin Hanson: There are two key facts near this:

1. Government, law, and social norms in fact interfere greatly in many real markets.
2. Economists have many ways to understand "market failure" deviations from supply and demand, and the interventions that make sense for each such failure.

Economists' big error is: claiming that fact #2 is the main explanation for fact #1. This strong impression is given by most introductory econ textbooks, and accompanying lectures, which are the main channels by which economists influence the world. As a result, when considering actual interventions in markets, the first instinct of economists and their students is to search for nearby plausible market failures which might explain interventions there. Upon finding a match, they then typically quit and declare this as the best explanation of the actual interventions.

Yep. There are often market failures, and a lot of the time it will be very obvious why the government is intervening (e.g. 'so people don't steal other people's stuff'), but if you see a government intervention that does not have an obvious explanation, your first thought should not be to assume the policy is there to sensibly correct a market failure.

No Good Very Bad Capital Gains Tax Proposals

Kamala Harris endorses Biden's no-good-very-bad 44.6% capital gains tax rate proposal, including the cataclysmic 25% tax on unrealized capital gains, via confirming she supports all Biden budget proposals. Which is not the same as calling for it on the campaign trail, but is still support. She later pared back the proposed topline rate to 33%, which is still a big jump, and I don't see anything there about her pulling back on the unrealized capital gains tax.

Technically speaking, the proposal for those with a net worth over $100 million is an annual minimum 25% tax on your net annual income, realized and unrealized including the theoretical 'value' of fully illiquid assets, with taxes on unrealized gains counting as prepayments against future realized gains (including allowing refunds if you ultimately make less).
Also, there is a 'deferral' option on your illiquid assets if you are insufficiently liquid, but that carries a 'deferral charge' up to 10%, which I presume will usually be correct to take given the cost of not compounding. All of this seems like a huge unforced error, as the people who know how bad this is care quite a lot, offered without much consideration. It effectively invokes what I dub Deadpool's Law, which to quote Cassandra Nova is: You don't f***ing matter. The most direct 'you' is a combination of anyone who cares about startups, successful private businesses or creation of value, and anyone with a rudimentary understanding of economics. The broa...
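As a rough illustration of the prepayment mechanic described in the excerpt above, here is a toy sketch; only the 25% rate comes from the proposal as described, and the numbers and crediting details are simplified assumptions of mine, not the bill text:

```python
# Toy sketch of the prepayment mechanic described above: tax paid on unrealized gains
# is credited against the tax later owed when those gains are realized.
# The 25% rate comes from the proposal as described; everything else is made up.
RATE = 0.25

prepayments = 0.0  # running credit from taxes already paid on unrealized gains

def annual_tax(realized: float, unrealized: float) -> float:
    """Return tax due this year under the sketch, updating the prepayment credit."""
    global prepayments
    tax_on_realized = RATE * max(realized, 0.0)
    tax_on_unrealized = RATE * max(unrealized, 0.0)
    # Use accumulated prepayments to offset tax on gains realized this year.
    credit = min(prepayments, tax_on_realized)
    prepayments += tax_on_unrealized - credit
    return tax_on_realized - credit + tax_on_unrealized

# Year 1: a $40M paper gain, nothing sold -> $10M tax, banked as a prepayment.
print(annual_tax(realized=0, unrealized=40e6))   # 10,000,000.0
# Year 2: the gain is realized -> the earlier $10M prepayment offsets the bill.
print(annual_tax(realized=40e6, unrealized=0))   # 0.0
```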
Sep 10, 2024 • 6min

LW - The Best Lay Argument is not a Simple English Yud Essay by J Bostock

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Lay Argument is not a Simple English Yud Essay, published by J Bostock on September 10, 2024 on LessWrong.

Epistemic status: these are my own opinions on AI risk communication, based primarily on my own instincts on the subject and discussions with people less involved with rationality than myself. Communication is highly subjective and I have not rigorously A/B tested messaging. I am even less confident in the quality of my responses than in the correctness of my critique. If they turn out to be true, these thoughts can probably be applied to all sorts of communication beyond AI risk.

Lots of work has gone into trying to explain AI risk to laypersons. Overall, I think it's been great, but there's a particular trap that I've seen people fall into a few times. I'd summarize it as simplifying and shortening the text of an argument without enough thought for the information content. It comes in three forms. One is forgetting to adapt concepts for someone with a far inferential distance; another is forgetting to filter for the important information; the third is rewording an argument so much you fail to sound like a human being at all. I'm going to critique three examples which I think typify these:

Failure to Adapt Concepts

I got this from the summaries of AI risk arguments written by Katja Grace and Nathan Young here. I'm making the assumption that these summaries are supposed to be accessible to laypersons, since most of them seem written that way. This one stands out as not having been optimized on the concept level. This argument was below-average in effectiveness when tested.

I expect most people's reaction to point 2 would be "I understand all those words individually, but not together". It's a huge dump of conceptual information all at once which successfully points to the concept in the mind of someone who already understands it, but is unlikely to introduce that concept to someone's mind. Here's an attempt to do better:

1. So far, humans have mostly developed technology by understanding the systems which the technology depends on.
2. AI systems developed today are instead created by machine learning. This means that the computer learns to produce certain desired outputs, but humans do not tell the system how it should produce the outputs. We often have no idea how or why an AI behaves in the way that it does.
3. Since we don't understand how or why an AI works a certain way, it could easily behave in unpredictable and unwanted ways.
4. If the AI is powerful, then the consequences of unwanted behaviour could be catastrophic.

And here's Claude's, just for fun:

1. Up until now, humans have created new technologies by understanding how they work.
2. The AI systems made in 2024 are different. Instead of being carefully built piece by piece, they're created by repeatedly tweaking random systems until they do what we want. This means the people who make these AIs don't fully understand how they work on the inside.
3. When we use systems that we don't fully understand, we're more likely to run into unexpected problems or side effects.
4. If these not-fully-understood AI systems become very powerful, any unexpected problems could potentially be really big and harmful.

I think it gets points 1 and 3 better than me, but 2 and 4 worse. Either way, I think we can improve upon the summary.
Failure to Filter Information

When you condense an argument down, you make it shorter. This is obvious. What is not always as obvious is that this means you have to throw out information to make the core point clearer. Sometimes the information that gets kept is distracting. Here's an example from a poster a friend of mine made for Pause AI:

When I showed this to my partner, they said "This is very confusing, it makes it look like an AGI is an AI which makes a chess AI". Making more AI...
Sep 9, 2024 • 7min

LW - My takes on SB-1047 by leogao

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My takes on SB-1047, published by leogao on September 9, 2024 on LessWrong.

I recently decided to sign a letter of support for SB 1047. Before deciding whether to do so, I felt it was important for me to develop an independent opinion on whether the bill was good, as opposed to deferring to the opinions of those around me, so I read through the full text of SB 1047. After forming my opinion, I checked my understanding of tort law basics (definitions of "reasonable care" and "materially contribute") with a law professor who was recommended to me by one of the SB 1047 sponsors, but who was not directly involved in the drafting or lobbying for the bill. Ideally I would have wanted to consult with a completely independent lawyer, but this would have been prohibitively expensive and difficult on a tight timeline. This post outlines my current understanding. It is not legal advice.

My main impression of the final version of SB 1047 is that it is quite mild. Its obligations only cover models trained with $100M+ of compute, or finetuned with $10M+ of compute. [1]

If a developer is training a covered model, they have to write an SSP that explains why they believe it is not possible to use the model (or a post-train/finetune of the model costing less than $10M) to cause critical harm. This would involve running evals, doing red teaming, etc. The SSP also has to describe what circumstances would cause the developer to decide to shut down training and any copies of the model that the developer controls, and how they will ensure that they can actually do so if needed. Finally, a redacted copy of the SSP must be made available to the public (and an unredacted copy filed with the Attorney General). This doesn't seem super burdensome, and is very similar to what labs are already doing voluntarily, but it seems good to codify these things because otherwise labs could stop doing them in the future. Also, current SSPs don't make hard commitments about when to actually stop training, so it would be good to have that.

If a critical harm happens, then the question for determining penalties is whether the developer met their duty to exercise "reasonable care" to prevent models from "materially contributing" to the critical harm. This is determined by looking at how good the SSP was (both in an absolute sense and when compared to other developers) and how closely it was adhered to in practice. Reasonable care is a well-established concept in tort law that basically means you did a cost benefit analysis that a reasonable person would have done. Importantly, it doesn't mean the developer has to be absolutely certain that nothing bad can happen. For example, suppose you release an open source model after doing dangerous capabilities evals to make sure it can't make a bioweapon, but then a few years later a breakthrough in scaffolding methods happens and someone makes a bioweapon using your model - as long as you were thorough in your dangerous capabilities evals you would not be liable, because it would not have been reasonable for you to anticipate that someone would make a breakthrough that invalidates your evaluations.
Also, if mitigating the risk would be too costly, and the benefit of releasing the model far outweighs the risks of release, this is also a valid reason not to mitigate the risk under the standard of reasonable care (e.g. the benefits of driving a car at a normal speed far outweigh the costs of car accidents; so reasonable care doesn't require driving at 2 mph to fully mitigate the risk of car accidents).

My personal opinion is I think the reasonable care standard is too weak to prevent AI from killing everyone. However, this also means that I think people opposing the current version of the bill because of the reasonable care requireme...
Sep 9, 2024 • 13min

LW - Fun With CellxGene by sarahconstantin

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fun With CellxGene, published by sarahconstantin on September 9, 2024 on LessWrong.

For this week's post, I thought I'd mess around a bit with the CellxGene tool provided by the Chan Zuckerberg Initiative. It's based on a big dataset of individual cells, classified by tissue, cell type, and disease state, and their gene expression profiles (single-cell RNA counts). You can automatically compare how gene expression looks different between sick and healthy individuals, for a variety of diseases, and drill down into which cells/tissues are different and how. It's a fascinating toy and a great way to generate hypotheses. Here, I'll do it for Alzheimer's, comparing 138,438 Alzheimer's brain cells to 9,203,998 normal/healthy brain cells to see what the most "differentially expressed" genes are, and what that might tell us about how the disease works.

Top Hits

LINC01609
1.6x overexpressed in Alzheimer's, d = 4.203

This is a non-protein coding RNA. Typically most expressed in the testis. In CellxGene's healthy brain cells, it's expressed only in activated microglia and astrocytes; but in the Alzheimer's brain, it's expressed in roughly half of all types of cells. Like many long non-coding RNAs, its function is unknown.

SLC26A3
10.6x overexpressed in Alzheimer's, d = 3.310

This is a chloride anion exchanger, a membrane protein that transports chloride ions across the cell membrane. It's most heavily expressed in the colon, where it controls the resorption of fluid from the intestines. Defects in this gene are associated with congenital diarrhea, as the body is unable to maintain the right osmotic concentration and loses water in the stool. But we're interested in SLC26A3 in the brain, not in the intestine. In the healthy brain, once again, it's only expressed in activated astrocytes and microglia; in the Alzheimer's brain it's expressed in large numbers of all cell types. CellxGene classifies it as one of the top "markers" for mature astrocytes and mature microglial cells, with a specificity of 1.00. Other researchers have observed the upregulation of SLC26A3 in Alzheimer's, e.g. as part of a pattern of "gliovascular" alteration around the clusters of astrocytes and endothelial cells that control the blood-brain barrier.1

A gliovascular unit is the place a blood vessel meets the brain. The vessel is surrounded by astrocytes and microglia, which control what goes in and out of the bloodstream, clearing excess glutamate and misfolded proteins. Under prolonged stress, these astrocytes in gliovascular units become reactive, and ultimately the blood-brain barrier breaks down. In Alzheimer's disease, the blood vessels get narrower, fragment, and break.2 Activated astrocytes no longer connect as tightly to the surface of the vessels with their "endfeet", compromising the BBB, while activated microglia engulf the endfeet, exacerbating the effect.3 What actually happens if you have more chloride anion exchange in the cells of a gliovascular unit? Is it causal for any Alzheimer's pathology? That, I don't think we know.

RASGEF1B
5.5x overexpressed in Alzheimer's, d = 3.267

This is a widely expressed cytoplasmic protein that allows the protein Ras to be "switched on", sending intracellular signals that lead to cell growth, differentiation, and survival.4
Once again, in the healthy brain it is only expressed in activated astrocytes and microglia, while in the Alzheimer's brain it's expressed everywhere. CellxGene classifies it as the top "marker" for mature astrocytes and mature microglial cells, with a specificity of 1.00. In normal circumstances, astrocytes and microglia can grow and proliferate, but most neurons do not. Ras activity increases in conditions of neural stress or injury, as part of the body's attempt to promote cell survival and neurite regeneration. So it makes sense that we...
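For readers who want to try a similar sick-versus-healthy comparison programmatically, here is a minimal sketch assuming an AnnData (.h5ad) export of brain cells with a "disease" label, analyzed with scanpy rather than the CellxGene web interface itself; the file name and label values are placeholder assumptions:

```python
# Minimal sketch of a disease-vs-normal differential expression comparison with scanpy.
# Assumes a brain .h5ad export with an obs column "disease" containing
# "Alzheimer disease" and "normal"; the file name and labels are placeholders.
import scanpy as sc

adata = sc.read_h5ad("brain_cells.h5ad")  # placeholder path

# Standard normalization so expression values are comparable across cells.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Rank genes by how differently they are expressed in Alzheimer's vs. normal cells.
sc.tl.rank_genes_groups(
    adata,
    groupby="disease",
    groups=["Alzheimer disease"],
    reference="normal",
    method="wilcoxon",
)

# Show the top "differentially expressed" genes, analogous to the Top Hits above.
print(sc.get.rank_genes_groups_df(adata, group="Alzheimer disease").head(10))
```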
Sep 9, 2024 • 3min

LW - Pollsters Should Publish Question Translations by jefftk

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pollsters Should Publish Question Translations, published by jefftk on September 9, 2024 on LessWrong.

Pew recently commissioned Gallup to run a poll in Israel on attitudes to social media censorship. They found high support for banning various kinds of speech: The fourth question, especially, is disturbing: 59% want to ban publicly expressing sympathy for civilians in Gaza? Since the polling was conducted in Hebrew and Arabic, however, this got me wondering whether translation might be contributing. I went to check, and Pew did publish the questions:

Q123c. Thinking about the war between Israel and Hamas, do you think people should or should not be allowed to post each of the following on social media sites? c. Posts that express sympathy for civilians in Gaza

A friend of a friend, Robert Herr, guessed they might have used סימפטיה ("simpatia"): I don't *know* because they're not transparent, but I *suspect* they used סימפטיה for sympathy, which is a false friend, because - like in German - in Hebrew it doesn't primarily mean a feeling of compassion or commiseration, but a feeling of affection or support or approval

I don't speak Hebrew (or German) so I checked with Claude:

jefftk: What does סימפטיה mean?

claude: סימפטיה (simpatia) is a Hebrew word that means "sympathy" or "empathy" in English. It describes the ability to understand and share the feelings of another person, or to show compassion and concern for someone else's situation or emotions.

jefftk: Would it be the right word to use when translating "Q123c. Thinking about the war between Israel and Hamas, do you think people should or should not be allowed to post each of the following on social media sites? c. Posts that express sympathy for civilians in Gaza"?

claude: You're right to be thoughtful about the translation here, as nuance is important in survey questions. While סימפטיה (simpatia) can mean "sympathy," in this specific context, it might not be the most precise translation. Let me explain why and suggest an alternative:

1. Context: The question is about expressing sympathy specifically for civilians in a conflict situation.
2. Connotation: In Hebrew, סימפטיה often carries a slightly stronger connotation than the English "sympathy" in this context. It might be interpreted as more active support or alignment, which could potentially skew responses.
3. Better alternative: A more neutral and accurate translation for this survey question might use the phrase הזדהות עם (hizdahut im), which means "identification with" or "empathy for."

While it's great that they do publish the original English, without also publishing the actual questions they asked the results are much less useful than they could be. If it turns out they used הזדהות עם then I'm much more concerned than if they used סימפטיה, though support for censorship is worrying regardless.

So: if you're polling people in another language, please publish the translations you used!

Comment via: facebook, mastodon

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Sep 9, 2024 • 14min

LW - On Fables and Nuanced Charts by Niko McCarty

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Fables and Nuanced Charts, published by Niko McCarty on September 9, 2024 on LessWrong.

Written by Spencer Greenberg & Amber Dawn Ace for Asimov Press.

In 1994, the U.S. Congress passed the largest crime bill in U.S. history, called the Violent Crime Control and Law Enforcement Act. The bill allocated billions of dollars to build more prisons and hire 100,000 new police officers, among other things. In the years following the bill's passage, violent crime rates in the U.S. dropped drastically, from around 750 offenses per 100,000 people in 1990 to under 400 in 2018. But can we infer, as this chart seems to ask us to, that the bill caused the drop in crime?

As it turns out, this chart wasn't put together by sociologists or political scientists who've studied violent crime. Rather, we - a mathematician and a writer - devised it to make a point: Although charts seem to reflect reality, they often convey narratives that are misleading or entirely false. Upon seeing that violent crime dipped after 1990, we looked up major events that happened right around that time - selecting one, the 1994 Crime Bill, and slapping it on the graph. There are other events we could have stuck on the graph just as easily that would likely have invited you to construct a completely different causal story. In other words, the bill and the data in the graph are real, but the story is manufactured. Perhaps the 1994 Crime Bill really did cause the drop in violent crime, or perhaps the causality goes the other way: the spike in violent crime motivated politicians to pass the act in the first place. (Note that the act was passed slightly after the violent crime rate peaked!)

Charts are a concise way not only to show data but also to tell a story. Such stories, however, reflect the interpretations of a chart's creators and are often accepted by the viewer without skepticism. As Noah Smith and many others have argued, charts contain hidden assumptions that can drastically change the story they tell. This has important consequences for science, which, in its ideal form, attempts to report findings as objectively as possible. When a single chart can be the explanatory linchpin for years of scientific effort, unveiling a data visualization's hidden assumptions becomes an essential skill for determining what's really true. As physicist Richard Feynman once said: In science, "the first principle is that you must not fool yourself, and you are the easiest person to fool." What we mean to say is - don't be fooled by charts.

Misleading Charts

Bad actors have long used data visualizations to deliberately manipulate and mislead. How to Lie with Statistics, a classic book from 1954, describes tricks that unscrupulous actors use to distort the truth without fabricating results, such as by truncating the y-axis of a chart to make an effect look much larger than it is or by cherry-picking data. Drug companies and special interest groups have employed these techniques for decades to win public support. Merck, for example, was accused of publishing misleading data about the anti-inflammatory drug Vioxx to hide the fact that it could cause heart attacks and strokes, ultimately resulting in a multi-billion dollar settlement.
But even when no one is intentionally trying to mislead or manipulate, charts designed to make information clear can still lead to erroneous conclusions. Just consider the U.S. maternal mortality statistics, which seem to show maternal deaths rising from 0.4 deaths per 100,000 women in 2003 to close to 1 per 100,000 in 2020. This graph is worrisome, particularly if you or your partner is pregnant (or expect to be). Why are so many more expectant and new mothers dying? Is there some new danger? Is the healthcare system getting worse? Coverage in Scientific American, NPR, and elsewhere suggested t...
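To see how much the truncated y-axis trick mentioned above can distort a chart's story, here is a small illustrative sketch assuming matplotlib; the numbers are invented for demonstration and are not from the article:

```python
# Illustrative sketch: the same made-up data plotted with a full y-axis and a truncated one.
# The truncated panel makes a ~2% change look dramatic; the data are invented for demonstration.
import matplotlib.pyplot as plt

years = [2016, 2017, 2018, 2019, 2020]
values = [100, 100.5, 101, 101.5, 102]  # a mild ~2% rise

fig, (ax_full, ax_trunc) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.plot(years, values, marker="o")
ax_full.set_ylim(0, 110)               # honest baseline at zero
ax_full.set_title("Full y-axis: barely any change")

ax_trunc.plot(years, values, marker="o")
ax_trunc.set_ylim(99.9, 102.1)         # truncated axis exaggerates the slope
ax_trunc.set_title("Truncated y-axis: looks like a surge")

plt.tight_layout()
plt.show()
```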
Sep 8, 2024 • 5min

LW - My Number 1 Epistemology Book Recommendation: Inventing Temperature by adamShimi

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Number 1 Epistemology Book Recommendation: Inventing Temperature, published by adamShimi on September 8, 2024 on LessWrong.

In my last post, I wrote that no resource out there exactly captured my model of epistemology, which is why I wanted to share a half-baked version of it. But I do have one book which I always recommend to people who want to learn more about epistemology: Inventing Temperature by Hasok Chang. To be very clear, my recommendation is not just to get the good ideas from this book (of which there are many) from a book review or summary - it's to actually read the book, the old-school way, one word at a time.

Why? Because this book teaches you the right feel, the right vibe for thinking about epistemology. It punctures the bubble of sterile nonsense that so easily passes for "how science works" in most people's education, such as the "scientific method". And it does so by demonstrating how one actually makes progress in epistemology: by thinking, yes, but also by paying close attention to what actually happened.

It works first because the book is steeped in history, here the history of thermometry (the measurement of temperature). By default, beware anything that is only philosophy of science, without any basis in history - this is definitively ungrounded bullshit. Not only is Chang leveraging history, he also has an advantage over most of the literature in History and Philosophy of Science: early thermometry is truly not that complex technically or mathematically. Except for the last historical chapter, where details of the Carnot cycle get in the way, most of the book describes straightforward questions that anyone can understand, and both experiments and mathematics are at a modern high-school level. As such, I know that any educated person can read this book, and follow the history part.

Last but not least, thermometry provides a great opportunity to show what happens at the beginning, before all the frames and techniques and epistemic infrastructure are set up. Another source of oversimplification in people's models of epistemology (including my own before I started digging into the history) is that we moderns mostly learn well-framed and cleaned up science: when we learn Classical Mechanics, we don't just learn it as Newton created it, but we benefit from progress in notations, mathematics, and even the whole structure of physics (with the emphasis on energy over forces). This, I surmise, has the unfortunate consequence of making even practicing scientists feel like science and epistemology is cleaner than it truly is. Sure, we get that data is messy, and that there are many pitfalls, but for many, the foundations have been established before, and so they work in a well-defined setting.

But at the start of thermometry, as in the start of every epistemological enterprise, there was almost nothing to rely on. For example, if you want to synchronize different temperature measuring devices (not even thermometers yet, because no scale), a natural idea is to find fixed points: phenomena which always happen at the same temperature. But then… if you don't even have a thermometer, how can you know that fixed points are actually fixed?
And even if you can do that, what if your tentative fixed points (like the boiling point of water) are not one very specific phenomenon, but a much more complex one with multiple phases, over which the temperature does vary? These are the kind of questions you need to deal with when you start from nothing, and Chang explores the ingenuity of the early thermometricians in teasing imperfect answers out of nature, iterating on them, and then fixing the foundations under their feet. That is, they didn't think really hard and get everything right before starting, they started anyway, and through various strategies, schemes and tricks,...
Sep 7, 2024 • 12min

LW - That Alien Message - The Animation by Writer

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: That Alien Message - The Animation, published by Writer on September 7, 2024 on LessWrong.

Our new video is an adaptation of That Alien Message, by @Eliezer Yudkowsky. This time, the text has been significantly adapted, so I include it below. The author of the adaptation is Arthur Frost. Eliezer has reviewed the adaptation.

Part 1

Picture a world just like ours, except the people are a fair bit smarter: in this world, Einstein isn't one in a million, he's one in a thousand. In fact, here he is now. He's made all the same discoveries, but they're not quite as unusual: there have been lots of other discoveries. Anyway, he's out one night with a friend looking up at the stars when something odd happens. [visual: stars get brighter and dimmer, one per second. The two people on the hill look at each other, confused]

The stars are flickering. And it's just not a hallucination. Everyone's seeing it. And so everyone immediately freaks out and panics! Ah, just kidding, the people of this world are smarter than ours. What they do is try to work together and figure out what's going on. It turns out that exactly one star seems to shift in brightness every 1.005 seconds. Except, the stars are light years away, so actually the shifts must have happened a long time ago, and somehow they've all been perfectly timed to reach Earth specifically every 1.005 seconds. If you look at the stars from a high-orbit satellite (which of course this planet has) then the flickering looks a little out of sync. So whatever this is, it's directed at Earth.

Nobody can find a pattern in the position of the stars, but it's one at a time getting either much dimmer or much brighter by the same amount and, well, that looks a bit like binary. So loads of people think 'huh, maybe it's a code!'. But a lot of other people wonder, 'Who would be trying to send a message to Earth by shifting the brightness of stars across the galaxy? There must be an easier way to talk to us?' But it seems like there must be some intelligence behind it, so the data gets gathered and put on the internet. Some people wonder if maybe it's somehow dangerous, but, well, whoever is making the stars change brightness probably has easier ways to destroy humanity.

And so the great analysis begins. Half the planet's physicists, mathematicians, cryptographers, precocious kids, crossword enthusiasts, whoever, they're all trying to work out what this means, they're trying to crack the code. And as they do, the stars keep flickering, exactly one every 1.005 seconds. There are some obvious patterns [visual: display the code, probably someone lining up different wrappings and finding one that makes the pattern look less noisy]: it seems like the numbers come in groups of 32, which in turn come from four groups of 8. Some chunks are much more common. [visual: chunks of 8 getting matched across the text, sorted into uneven piles perhaps]

By the way, they do all this just in the first five hours, because like I said, people here are smart. Their civilisation is… a bit more on top of things. And so they are very ready to respond when, after five hours and 16,384 winking stars, it seems like the message begins to repeat itself, or, almost repeat itself, it's just slightly different this time. And it keeps going.
[slow zoom out on code going from one line to two, showing only a few differences between the new line and the previous line] Some people start thinking maybe we're seeing the next row of a picture, pixel by pixel. Only, the designers of this image format - whoever they are - use four primary colours instead of three [visual of 32-chunk getting broken into four 8-chunks]. And the picture seems less chaotic if we assume they do binary slightly differently to us. [probably someone gesturing at a diagram of how to get numbers from binary repres...
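As a toy illustration of the decoding step described above (grouping the bit stream into 32-bit chunks read as four 8-bit "colour" channels), here is a minimal sketch; the bit stream is a made-up placeholder, not the message from the story:

```python
# Toy sketch: read a stream of star-flicker bits as 32-bit pixels made of four 8-bit channels.
# The bits here are a made-up placeholder, not the actual message from the story.
from typing import List, Tuple

def decode_pixels(bits: List[int]) -> List[Tuple[int, int, int, int]]:
    """Group the bit stream into 32-bit chunks and split each into four 8-bit values."""
    pixels = []
    for i in range(0, len(bits) - len(bits) % 32, 32):
        chunk = bits[i:i + 32]
        channels = []
        for j in range(0, 32, 8):
            byte = chunk[j:j + 8]
            # Interpret each group of 8 flickers as a binary number (brighter = 1, dimmer = 0).
            value = int("".join(str(b) for b in byte), 2)
            channels.append(value)
        pixels.append(tuple(channels))
    return pixels

# 64 placeholder bits -> two "pixels" of four channels each.
example_bits = [0, 1] * 32
print(decode_pixels(example_bits))
```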
