
The Nonlinear Library: LessWrong
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Latest episodes

Sep 4, 2024 • 20min
LW - AI and the Technological Richter Scale by Zvi
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI and the Technological Richter Scale, published by Zvi on September 4, 2024 on LessWrong.
The Technological Richter scale is introduced about 80% of the way through Nate Silver's new book On the Edge.
A full review is in the works (note to prediction markets: this post does NOT on its own count as a review, but it will count as part of a future review), but this concept seems highly useful, stands on its own, and I want a reference post for it. Nate skips around his chapter titles and timelines, so why not do the same here?
Defining the Scale
Nate Silver, On the Edge (location 8,088 on Kindle): The Richter scale was created by the physicist Charles Richter in 1935 to quantify the amount of energy released by earthquakes.
It has two key features that I'll borrow for my Technological Richter Scale (TRS). First, it is logarithmic. A magnitude 7 earthquake is actually ten times more powerful than a mag 6. Second, the frequency of earthquakes is inversely related to their Richter magnitude - so 6s occur about ten times more often than 7s. Technological innovations can also produce seismic disruptions.
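As a quick illustrative sketch (mine, not the book's) of the arithmetic these two properties imply, assuming the ten-times-per-whole-number convention described above:

```python
def relative_power(m_high: float, m_low: float) -> float:
    """How much more 'powerful' a magnitude m_high event is than a magnitude
    m_low event, under the ten-times-per-step convention quoted above."""
    return 10 ** (m_high - m_low)

def relative_frequency(m_high: float, m_low: float) -> float:
    """How much rarer a magnitude m_high event is than a magnitude m_low event,
    assuming frequency falls off by the same factor of ten per step."""
    return 10 ** (m_low - m_high)

# e.g., a magnitude 8 versus a magnitude 6:
print(relative_power(8, 6))      # 100.0 -- a hundred times the impact
print(relative_frequency(8, 6))  # 0.01  -- roughly a hundredth as common
```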
Let's proceed quickly through the lower readings of the Technological Richter Scale.
1. Like a half-formulated thought in the shower.
2. Is an idea you actuate, but never disseminate: a slightly better method to brine a chicken that only you and your family know about.
3. Begins to show up in the official record somewhere, an idea you patent or make a prototype of.
4. An invention successful enough that somebody pays for it; you sell it commercially or someone buys the IP.
5. A commercially successful invention that is important in its category, say, Cool Ranch Doritos, or the leading brand of windshield wipers.
6. An invention can have a broader societal impact, causing a disruption within its field and some ripple effects beyond it. A TRS 6 will be on the short list for technology of the year. At the low end of the 6s (a TRS 6.0) are clever and cute inventions like Post-it notes that provide some mundane utility. Toward the high end (a 6.8 or 6.9) might be something like the VCR, which disrupted home entertainment and had knock-on effects on the movie industry. The impact escalates quickly from there.
7. One of the leading inventions of the decade and has a measurable impact on people's everyday lives. Something like credit cards would be toward the lower end of the 7s, and social media a high 7.
8. A truly seismic invention, a candidate for technology of the century, triggering broadly disruptive effects throughout society. Canonical examples include automobiles, electricity, and the internet.
9. By the time we get to TRS 9, we're talking about the most important inventions of all time, things that inarguably and unalterably changed the course of human history. You can count these on one or two hands. There's fire, the wheel, agriculture, the printing press. Although they're something of an odd case, I'd argue that nuclear weapons belong here also.
True, their impact on daily life isn't necessarily obvious if you're living in a superpower protected by its nuclear umbrella (someone in Ukraine might feel differently). But if we're thinking in expected-value terms, they're the first invention that had the potential to destroy humanity.
10. Finally, a 10 is a technology that defines a new epoch, one that alters not only the fate of humanity but that of the planet. For roughly the past twelve thousand years, we have been in the Holocene, the geological epoch defined not by the origin of Homo sapiens per se but by humans becoming the dominant species and beginning to alter the shape of the Earth with our technologies.
AI wresting control of this dominant position from humans would qualify as a 10, as would other forms of a "technological singularity," a term popularized by...

Sep 4, 2024 • 30min
LW - On the UBI Paper by Zvi
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the UBI Paper, published by Zvi on September 4, 2024 on LessWrong.
Would a universal basic income (UBI) work? What would it do?
Many people agree that July's RCT on giving people a guaranteed income, and its accompanying paper by Eva Vivalt, Elizabeth Rhodes, Alexander W. Bartik, David E. Broockman, and Sarah Miller, was, despite whatever flaws it might have, the best data we have so far on the potential impact of UBI. There are many key differences from how UBI would look if applied for real, but it is still the best evidence available.
This study was primarily funded by Sam Altman, so whatever else he may be up to, good job there. I do note that my model of 'Altman several years ago' is more positive than my model of Altman now, and past actions like this are a lot of the reason I give him so much benefit of the doubt.
They do not agree on what conclusions we should draw. This is not a simple 'UBI is great' or 'UBI does nothing.'
I see essentially four responses.
1. The first group says this shows UBI doesn't work. That's going too far. I think the paper greatly reduces the plausibility of the best scenarios, but I don't think it rules UBI out as a strategy, especially if it is a substitute for other transfers.
2. The second group says this was a disappointing result for UBI. That UBI could still make sense as a form of progressive redistribution, but likely at a cost of less productivity so long as people impacted are still productive. I agree.
3. The third group did its best to spin this into a positive result. There was a lot of spin here, and use of anecdotes, and arguments as soldiers. Often these people were very clear that they were true believers and advocates who want UBI now, and were seeking the bright side. Respect? There were some bright spots that they pointed out, and no one study over three years should make you give up, but this was what it was and I wish people wouldn't spin like that.
4. The fourth group was some mix of 'if brute force (aka money) doesn't solve your problem you're not using enough' and also 'but work is bad, actually, and leisure is good.' That if we aren't getting people not to work then the system is not functioning, or that $1k/month wasn't enough to get the good effects, or both. I am willing to take a bold 'people working more is mostly good' stance, for the moment, although AI could change that.
And while I do think that a more permanent or larger support amount would do some interesting things, I wouldn't expect to suddenly see polarity reverse.
I am so dedicated to actually reading this paper that it cost me $5. Free academia now.
RTFP (Read the Paper): Core Design
The core design: 1,000 low-income individuals were randomized into getting $1k/month for 3 years, or $36k total. A control group of 2,000 others got $50/month, or $1,800 total. Average household income in the study before transfers was $29,900.
They then studied what happened.
Before looking at the results, what are the key differences between this and UBI?
Like all studies of UBI, this can only be done for a limited population, and it only lasts a limited amount of time.
If you tell me I am getting $1,000/month for life, then that makes me radically richer, and also radically safer. In extremis you can plan to live off that, or it can be a full fallback. Which is a large part of the point, and a lot of the danger as well.
If instead you give me that money for only three years, then I am slightly less than $36k richer. Which is nice, but impacts my long term prospects much less. It is still a good test of the 'give people money' hypothesis but less good at testing UBI.
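A rough, purely illustrative sketch of this point (my numbers, not the paper's, with an arbitrary assumed 4% annual discount rate):

```python
def pv_annuity(monthly_payment: float, months: int, annual_rate: float) -> float:
    """Present value of a fixed monthly payment received for a set number of months."""
    r = annual_rate / 12
    return monthly_payment * (1 - (1 + r) ** -months) / r

def pv_perpetuity(monthly_payment: float, annual_rate: float) -> float:
    """Present value of a fixed monthly payment that never ends."""
    return monthly_payment / (annual_rate / 12)

# Assumed 4% annual discount rate, purely for illustration.
print(round(pv_annuity(1_000, 36, 0.04)))  # a bit under the $36k sticker price once discounted
print(round(pv_perpetuity(1_000, 0.04)))   # 300000: an order of magnitude larger asset
```

And that gap understates the difference, since the lifelong version also carries the fallback and safety value described above, which a three-year transfer mostly lacks.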
The temporary form, and also the limited scope, means that it won't cause a cultural shift and changing of norms. Those changes might be good or bad, and they could overshadow other impacts.
Does this move tow...

Sep 3, 2024 • 21min
LW - Book Review: What Even Is Gender? by Joey Marcellino
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book Review: What Even Is Gender?, published by Joey Marcellino on September 3, 2024 on LessWrong.
I submitted this review to the 2024 ACX book review contest, but it didn't make the cut, so I'm putting it here instead for posterity.
Conspiracy theories are fun because of how they make everything fit together, and scratch the unbearable itch some of us get when there are little details of a narrative that just don't make sense. The problem is they tend to have a few issues, like requiring one to posit expansive perfectly coordinated infosecurity, demanding inaccessible or running contrary to existing evidence, and generally making you look weird for believing them.
We can get our connecting-the-dots high while avoiding social stigma and epistemic demerits by instead foraging in the verdant jungle of "new conceptual frameworks for intractable debates." Arguments about gender tend to devolve, not just for lack of a shared conceptual framework, but because the dominant frameworks used by both defenders and critics of gender ideology are various shades of incoherent.
To the rescue are R. A. Briggs and B. R. George, two philosophers of gender promising a new approach to thinking about gender identity and categorization with their book What Even Is Gender? I appreciate that I'm probably atypical in that my first thought when confronting a difficult conceptual problem is "I wonder what mainstream analytic philosophy has to say about this?", but What Even Is Gender? is that rare thing: a philosophical work for a popular audience that is rigorous without sacrificing clarity (and that's clarity by normal-human-conversation standards, not analytic philosophy standards).
Let's see what they have to say.
Why I Picked This Book
BG are writing for two primary audiences in What Even Is Gender? First are people trying to make sense of their own experience of gender, especially those who feel the existing conceptual toolbox is limited, or doesn't exactly match up with their circumstances. The second, in their words, are:
"people who, while broadly sympathetic (or at least open) to the goals of trans inclusion and trans liberation, harbor some unease regarding the conceptual tensions, apparent contradictions, and metaphysical vagaries of the dominant rhetoric of trans politics.
This sort of reader might feel the pull of some of the foundational concerns that they see raised in "gender critical" arguments, but is also trying to take their trans friends' anxious reactions seriously, and is loath to accept the political agenda that accompanies such arguments."
People with a non-standard experience of gender are known to be overrepresented among readers of this blog, and I suspect people in BG's second kind of audience are as well, extrapolating from my sample size of one. This book thus seemed like a good fit.
BG contrast their conception of gender with what they call the "received narrative": the standard set of ideas about gender and identity that one hears in progressive spaces e.g. college campuses. Reviewing WEIG on this blog provides another interesting point of contrast in The Categories Were Made for Man. BG make similar moves as Scott but extend the analysis further, and provide an alternative account of gender categories that avoids some of the weaknesses of Scott's.
Where we're coming from
So what exactly is this received narrative, and what's wrong with it?
BG give the following sketch:
"1 People have a more-or-less stable inner trait called "gender identity".
2 One's "gender identity" is what disposes one to think of oneself as a "woman" or as a "man" (or, perhaps, as both or as neither).
3 One's "gender identity" is what disposes one to favor or avoid stereotypically feminine or masculine behaviors (or otherwise gendered behaviors).
4 It is possible for there to be a mismatc...

Sep 3, 2024 • 35min
LW - The Checklist: What Succeeding at AI Safety Will Involve by Sam Bowman
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Checklist: What Succeeding at AI Safety Will Involve, published by Sam Bowman on September 3, 2024 on LessWrong.
Crossposted by habryka with Sam's permission. Sam is less likely to respond to comments here than he would be if he had posted it himself.
Preface
This piece reflects my current best guess at the major goals that Anthropic (or another similarly positioned AI developer) will need to accomplish to have things go well with the development of broadly superhuman AI. Given my role and background, it's disproportionately focused on technical research and on averting emerging catastrophic risks.
For context, I lead a technical AI safety research group at Anthropic, and that group has a pretty broad and long-term mandate, so I spend a lot of time thinking about what kind of safety work we'll need over the coming years.
This piece is my own opinionated take on that question, though it draws very heavily on discussions with colleagues across the organization: Medium- and long-term AI safety strategy is the subject of countless leadership discussions and Google docs and lunch-table discussions within the organization, and this piece is a snapshot (shared with permission) of where those conversations sometimes go.
To be abundantly clear: Nothing here is a firm commitment on behalf of Anthropic, and most people at Anthropic would disagree with at least a few major points here, but this can hopefully still shed some light on the kind of thinking that motivates our work.
Here are some of the assumptions that the piece relies on. I don't think any one of these is a certainty, but all of them are plausible enough to be worth taking seriously when making plans:
Broadly human-level AI is possible. I'll often refer to this as transformative AI (or TAI), roughly defined as AI that could serve as a drop-in replacement for humans in all remote-work-friendly jobs, including AI R&D.[1]
Broadly human-level AI (or TAI) isn't an upper bound on most AI capabilities that matter, and substantially superhuman systems could have an even greater impact on the world along many dimensions.
If TAI is possible, it will probably be developed this decade, in a business and policy and cultural context that's not wildly different from today.
If TAI is possible, it could be used to dramatically accelerate AI R&D, potentially leading to the development of substantially superhuman systems within just a few months or years after TAI.
Powerful AI systems could be extraordinarily destructive if deployed carelessly, both because of new emerging risks and because of existing issues that become much more acute. This could be through misuse of weapons-related capabilities, by disrupting important balances of power in domains like cybersecurity or surveillance, or by any of a number of other means.
Many systems at TAI and beyond, at least under the right circumstances, will be capable of operating more-or-less autonomously for long stretches in pursuit of big-picture, real-world goals. This magnifies these safety challenges.
Alignment - in the narrow sense of making sure AI developers can confidently steer the behavior of the AI systems they deploy - requires some non-trivial effort to get right, and it gets harder as systems get more powerful.
Most of the ideas here ultimately come from outside Anthropic, and while I cite a few sources below, I've been influenced by far more writings and people than I can credit here or even keep track of.
Introducing the Checklist
This lays out what I think we need to do, divided into three chapters, based on the capabilities of our strongest models:
Chapter 1: Preparation
You are here. In this period, our best models aren't yet TAI. In the language of Anthropic's RSP, they're at AI Safety Level 2 (ASL-2), ASL-3, or maybe the early stages of ASL-4. Most of the work that we hav...

Sep 3, 2024 • 2min
LW - How I got 3.2 million Youtube views without making a single video by Closed Limelike Curves
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I got 3.2 million Youtube views without making a single video, published by Closed Limelike Curves on September 3, 2024 on LessWrong.
Just over a month ago, I wrote this.
The Wikipedia articles on the VNM theorem, Dutch Book arguments, money pump, Decision Theory, Rational Choice Theory, etc. are all a horrific mess. They're also completely disjoint, without any kind of Wikiproject or wikiboxes for tying together all the articles on rational choice.
It's worth noting that Wikipedia is the place where you - yes, you! - can actually have some kind of impact on public discourse, education, or policy. There is just no other place you can get so many views with so little barrier to entry. A typical Wikipedia article will get more hits in a day than all of your LessWrong blog posts have gotten across your entire life, unless you're @Eliezer Yudkowsky.
I'm not sure if we actually "failed" to raise the sanity waterline, like people sometimes say, or if we just didn't even try. Given even some very basic low-hanging fruit interventions like "write a couple good Wikipedia articles" still haven't been done 15 years later, I'm leaning towards the latter.
EDIT: Discord to discuss editing here.
An update on this. I've been working on Wikipedia articles for just a few months, and Veritasium just put a video out on Arrow's impossibility theorem - which is almost completely based on my Wikipedia article on Arrow's impossibility theorem! Lots of lines and the whole structure/outline of the video are taken almost verbatim from what I wrote.
I think there's a pretty clear reason for this: I recently rewrote the entire article to make it easy-to-read and focus heavily on the most important points.
Relatedly, if anyone else knows any educational YouTubers like CGPGrey, Veritasium, Kurzgesagt, or whatever - please let me know! I'd love a chance to talk with them about any of the fields I've done work teaching or explaining (including social or rational choice, economics, math, and statistics).
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Sep 1, 2024 • 16min
LW - Free Will and Dodging Anvils: AIXI Off-Policy by Cole Wyeth
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free Will and Dodging Anvils: AIXI Off-Policy, published by Cole Wyeth on September 1, 2024 on LessWrong.
This post depends on a basic understanding of history-based reinforcement learning and the AIXI model.
I am grateful to Marcus Hutter and the lesswrong team for early feedback, though any remaining errors are mine.
The universal agent AIXI treats the environment it interacts with like a video game it is playing; the actions it chooses at each step are like hitting buttons and the percepts it receives are like images on the screen (observations) and an unambiguous point tally (rewards).
It has been suggested that since AIXI is inherently dualistic and doesn't believe anything in the environment can "directly" hurt it, if it were embedded in the real world it would eventually drop an anvil on its head to see what would happen. This is certainly possible, because the math of AIXI cannot explicitly represent the idea that AIXI is running on a computer inside the environment it is interacting with.
For one thing, that possibility is not in AIXI's hypothesis class (which I will write M). There is not an easy patch because AIXI is defined as the optimal policy for a belief distribution over its hypothesis class, but we don't really know how to talk about optimality for embedded agents (so the expectimax tree definition of AIXI cannot be easily extended to handle embeddedness).
On top of that, "any" environment "containing" AIXI is at the wrong computability level for a member of M: our best upper bound on AIXI's computability level is Δ02 = limit-computable (for an ε-approximation) instead of the Σ01 level of its environment class. Reflective oracles can fix this but at the moment there does not seem to be a canonical reflective oracle, so there remains a family of equally valid reflective versions of AIXI without an objective favorite.
However, in my conversations with Marcus Hutter (the inventor of AIXI) he has always insisted AIXI would not drop an anvil on its head, because Cartesian dualism is not a problem for humans in the real world, who historically believed in a metaphysical soul and mostly got along fine anyway.
But when humans stick electrodes in our brains, we can observe changed behavior and deduce that our cognition is physical - would this kind of experiment allow AIXI to make the same discovery? Though we could not agree on this for some time, we eventually discovered the crux: we were actually using slightly different definitions for how AIXI should behave off-policy.
In particular, let ξ_AI be the belief distribution of AIXI. More explicitly,
I will not attempt a formal definition here. The only thing we need to know is that M is a set of environments which AIXI considers possible. AIXI interacts with an environment by sending it a sequence of actions a_1, a_2, ... in exchange for a sequence of percepts, each containing an observation and a reward, e_1 = o_1 r_1, e_2 = o_2 r_2, ..., so that action a_t precedes percept e_t.
One neat property of AIXI is that its choice of M satisfies ξ_AI ∈ M (this trick is inherited with minor changes from the construction of Solomonoff's universal distribution).
Now let V^π_μ be a (discounted) value function for policy π interacting with environment μ, which is the expected sum of discounted rewards obtained by π. We can define the AIXI agent as the policy π^AIXI := argmax_π V^π_{ξ_AI}.
By the Bellman equations, this also specifies AIXI's behavior on any history it can produce (all finite percept strings have nonzero probability under ξ_AI). However, it does not tell us how AIXI behaves when the history includes actions it would not have chosen. In that case, the natural extension is to choose the actions that maximize V^π computed from ξ_AI conditioned on the history so far,
so that AIXI continues to act optimally (with respect to its updated belief distribution) even when some suboptimal actions have previously been taken.
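To make the "optimal policy for a belief distribution over a hypothesis class" idea concrete, here is a minimal toy sketch (mine, not from the post): a Bayes-optimal expectimax over a hand-made two-environment class with binary actions and rewards. The environments env_a and env_b, the 0.9/0.1 reward probabilities, the short horizon, and the helpers q_value/update are all invented for illustration; they stand in for the uncomputable ξ_AI and its full expectimax tree. The last line is the analogue of the off-policy extension above: the same computation, run from a history containing an action the agent would not have chosen.

```python
def env_a(history, action):
    """Hypothesis A (invented for illustration): action 1 tends to pay off."""
    return 0.9 if action == 1 else 0.1

def env_b(history, action):
    """Hypothesis B (invented for illustration): action 0 tends to pay off."""
    return 0.9 if action == 0 else 0.1

def q_value(posterior, history, horizon, action):
    """Expected total reward of taking `action` now and acting optimally afterwards.
    `posterior` maps environment -> weight; each environment maps (history, action)
    to P(reward = 1)."""
    v = 0.0
    for r in (0, 1):
        lik = {env: (env(history, action) if r == 1 else 1 - env(history, action))
               for env in posterior}
        p = sum(w * lik[env] for env, w in posterior.items())  # predictive prob of r
        if p == 0.0:
            continue
        updated = {env: w * lik[env] / p for env, w in posterior.items()}  # Bayes update
        v += p * (r + value(updated, history + ((action, r),), horizon - 1))
    return v

def value(posterior, history, horizon):
    """Expectimax value: expected total reward of acting optimally for `horizon` steps."""
    if horizon == 0:
        return 0.0
    return max(q_value(posterior, history, horizon, a) for a in (0, 1))

def best_action(posterior, history, horizon):
    return max((0, 1), key=lambda a: q_value(posterior, history, horizon, a))

def update(prior, history):
    """Condition a belief distribution on an already-observed (action, reward) history."""
    post, seen = dict(prior), ()
    for a, r in history:
        lik = {env: (env(seen, a) if r == 1 else 1 - env(seen, a)) for env in post}
        z = sum(w * lik[env] for env, w in post.items())
        post = {env: w * lik[env] / z for env, w in post.items()}
        seen += ((a, r),)
    return post

prior = {env_a: 0.5, env_b: 0.5}
print(best_action(prior, (), horizon=3))             # on-policy: act from an empty history
forced = ((0, 0),)                                    # a first action the agent was "forced" into
print(best_action(update(prior, forced), forced, 2))  # off-policy: still optimal from here on
```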
The philosophy of this extension is that AIXI acts exactly as if...

Sep 1, 2024 • 11min
LW - Forecasting One-Shot Games by Raemon
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting One-Shot Games, published by Raemon on September 1, 2024 on LessWrong.
Cliff notes:
You can practice forecasting on videogames you've never played before, to develop the muscles for "decision-relevant forecasting."
Turn based videogames work best. I recommend "Luck Be a Landlord", "Battle for Polytopia", or "Into the Breach."
Each turn, make as many Fatebook predictions as you can in 5 minutes, then actually make your decision(s) for the turn.
After 3 turns, instead of making "as many predictions as possible", switch to trying to come up with at least two mutually exclusive actions you might take this turn, and come up with predictions that would inform which action to take.
Don't forget to follow this up with practicing forecasting for decisions you're making in "real life", to improve transfer learning. And, watch out for accidentally just getting yourself addicted to videogames, if you weren't already in that boat.
This is pretty fun to do in groups and makes for a good meetup, if you're into that.
Recently I published Exercise: Planmaking, Surprise Anticipation, and "Baba is You". In that exercise, you try to make a complete plan for solving a puzzle-level in a videogame, without interacting with the world (on levels where you don't know what all the objects in the environment do), and solve it on your first try.
Several people reported it pretty valuable (it was the highest-rated activity at my metastrategy workshop). But, it's fairly complicated as an exercise, and a single run of the exercise typically takes at least an hour (and maybe several hours) before you get feedback on whether you're "doing it right."
It'd be nice to have a way to practice decision-relevant forecasting with a faster feedback loop. I've been exploring the space of games that are interesting to "one-shot" (i.e., try to win on your first playthrough), and also exploring the space of exercises that take advantage of your first playthrough of a game.
So, an additional, much simpler exercise that I also like, is:
Play a turn-based game you haven't played before.
Each turn, set a 5 minute timer for making as many predictions as you can about how the game works, what new rules or considerations you might learn later. Then, a 1 minute timer for actually making your choices for what action(s) to take during the turn.
And... that's it. (to start with, anyway).
Rather than particularly focusing on "trying really hard to win", start with just making lots of predictions, about a situation where you're at least trying to win a little, so you can develop the muscles of noticing what sort of predictions you can make while you're in the process of strategically orienting. And, notice what sorts of implicit knowledge you have, even though you don't technically "know" how the game would work.
Some of the predictions might resolve the very next turn. Some might resolve before the very next turn, depending on how many choices you get each turn. And, some might take a few turns, or even pretty deep into the game. Making a mix of forecasts of different resolution-times is encouraged.
I think there are a lot of interesting skills you can layer on top of this, after you've gotten the basic rhythm of it. But "just make a ton of predictions about a domain where you're trying to achieve something, and get quick feedback on it" seems like a good start.
Choosing Games
Not all games are a good fit for this exercise. I've found a few specific games I tend to recommend, and some principles for which games to pick.
The ideal game has:
Minimal (or skippable) tutorial. A major point of the exercise is to make predictions about the game mechanics and features. Good games for this exercise a) don't spoonfeed you all the information about the game, but also b) are self-explanatory enough to figure out without a tut...

Sep 1, 2024 • 15min
LW - My Model of Epistemology by adamShimi
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Model of Epistemology, published by adamShimi on September 1, 2024 on LessWrong.
I regularly get asked by friends and colleagues for recommendations of good resources to study epistemology. And whenever that happens, I make an internal (or external) "Eeehhh" pained sound.
For I can definitely point to books and papers and blog posts that inspired me, excited me, and shaped my world view on the topic. But there is no single resource that encapsulates my full model of this topic.
To be clear, I have tried to write that resource - my hard-drive is littered with such attempts. It's just that I always end up shelving them, because I don't have enough time, because I'm not sure exactly how to make it legible, because I haven't ironed out everything.
Well, the point of this new blog was to lower the activation energy of blog post writing, by simply sharing what I found exciting quickly. So let's try the simplest possible account I can make of my model.
And keep in mind that this is indeed a work in progress.
The Roots of Epistemology
My model of epistemology stems from two obvious facts:
The world is complex
Humans are not that smart
Taken together, these two facts mean that humans have no hope of ever tackling most problems in the world in the naive way - that is, by just simulating everything about them, in the fully reductionist ideal.
And yet human civilization has figured out how to reliably cook tasty meals, build bridges, predict the minutest behaviors of matter... So what gives?
The trick is that we shortcut these intractable computations: we exploit epistemic regularities in the world, additional structure which means that we don't need to do all the computation.[1]
As a concrete example, think about what you need to keep in mind when cooking relatively simple meals (not the most advanced of chef's meals).
You can approximate many tastes through a basic palette (sour, bitter, sweet, salty, umami), and then consider the specific touches (lemon juice vs vinegar for example, and which vinegar, changes the color of sourness you get)
You don't need to model your ingredient at the microscopic level, most of the transformations that happen are readily visible and understandable at the macro level: cutting, mixing, heating…
You don't need to consider all the possible combinations of ingredients and spices; if you know how to cook, you probably know many basic combinations of ingredients and/or spices that you can then pimp or adapt for different dishes.
All of these are epistemic regularities that we exploit when cooking.
Similarly, when we do physics, when we build things, when we create art, insofar as we can reliably succeed, we are exploiting such regularities.
If I had to summarize my view of epistemology in one sentence, it would be: The art and science of finding, creating, and exploiting epistemic regularities in the world to reliably solve practical problems.
The Goals of Epistemology
If you have ever read anything about the academic topic called "Epistemology", you might have noticed something lacking from my previous account: I didn't focus on knowledge or understanding.
This is because I take a highly practical view of epistemology: epistemology for me teaches us how to act in the world, how to intervene, how to make things.
While doing, we might end up needing some knowledge, or needing to understand various divisions in knowledge, types of models, and things like that. But the practical application is always the end.
(This is also why I am completely uninterested in
the whole realism debate: whether most hidden entities truly exist or not is a fake question that doesn't really teach me anything, and probably cannot be answered. The kind of realism I'm interested in is the realism of usage, where there's a regularity (or lack thereof) which can be exploited in ...

Aug 31, 2024 • 14min
LW - Book review: On the Edge by PeterMcCluskey
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book review: On the Edge, published by PeterMcCluskey on August 31, 2024 on LessWrong.
Book review: On the Edge: The Art of Risking Everything, by Nate Silver.
Nate Silver's latest work straddles the line between journalistic inquiry and subject matter expertise.
"On the Edge" offers a valuable lens through which to understand analytical risk-takers.
The River versus The Village
Silver divides the interesting parts of the world into two tribes.
On his side, we have "The River" - a collection of eccentrics typified by Silicon Valley entrepreneurs and professional gamblers, who tend to be analytical, abstract, decoupling, competitive, critical, independent-minded (contrarian), and risk-tolerant.
On the other, "The Village" - the east coast progressive establishment, including politicians, journalists, and the more politicized corners of academia.
Like most tribal divides, there's some arbitrariness to how some unrelated beliefs end up getting correlated. So I don't recommend trying to find a more rigorous explanation of the tribes than what I've described here.
Here are two anecdotes that Silver offers to illustrate the divide:
In the lead-up to the 2016 US election, Silver gave Trump a 29% chance of winning, while prediction markets hovered around 17%, and many pundits went even lower. When Trump won, the Village turned on Silver for his "bad" forecast. Meanwhile, the River thanked him for helping them profit by betting against those who underestimated Trump's chances.
Wesley had to be bluffing 25 percent of the time to make Dwan's call correct; his read on Wesley's mindset was tentative, but maybe that was enough to get him from 20 percent to 24. ... maybe Wesley's physical mannerisms - like how he put his chips in quickly ... got Dwan from 24 percent to 29. ... If this kind of thought process seems alien to you - well, sorry, but your application to the River has been declined.
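For readers who don't speak poker, thresholds like "25 percent" come from standard pot-odds arithmetic. A minimal sketch, with the pot and bet sizes assumed purely for illustration (the excerpt doesn't give the actual sizing of the hand):

```python
def breakeven_bluff_frequency(pot: float, bet: float) -> float:
    """Minimum fraction of the time the bettor must be bluffing for calling to
    break even: the caller risks `bet` to win `pot + bet`."""
    # EV(call) = q * (pot + bet) - (1 - q) * bet >= 0  <=>  q >= bet / (pot + 2 * bet)
    return bet / (pot + 2 * bet)

# With an assumed half-pot-sized bet, the caller needs at least a 25% bluff rate:
print(breakeven_bluff_frequency(pot=100, bet=50))  # 0.25
```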
Silver is concerned about increasingly polarized attitudes toward risk:
you have Musk at one extreme and people who haven't left their apartment since COVID at the other one. The Village and the River are growing farther apart.
13 Habits of Highly Successful Risk-Takers
The book lists 13 habits associated with the River. I hoped these would improve on Tetlock's ten commandments for superforecasters. Some of Silver's habits fill that role of better forecasting advice, while others function more as litmus tests for River membership. Silver understands the psychological challenges better than Tetlock does. Here are a few:
Strategic Empathy:
But I'm not talking about coming across an injured puppy and having it tug at your heartstrings. Instead, I'm speaking about adversarial situations like poker - or war.
I.e. accurately modeling what's going on in an opponent's mind.
Strategic empathy isn't how I'd phrase what I'm doing on the stock market, where I'm rarely able to identify who I'm trading against. But it's fairly easy to generalize Silver's advice so that it does coincide with an important habit of mine: always wonder why a competent person would take the other side of a trade that I'm making.
This attitude represents an important feature of the River: people in this tribe aim to respect our adversaries, often because we've sought out fields where we can't win using other approaches.
This may not be the ideal form of empathy, but it's pretty effective at preventing Riverians from treating others as less than human. The Village may aim to generate more love than does the River, but it also generates more hate (e.g. of people who use the wrong pronouns).
Abhor mediocrity: take a raise-or-fold attitude toward life.
I should push myself a bit in this direction. But I feel that erring on the side of caution (being a nit in poker parlance) is preferable to becoming the next Sam Bankman-Fried.
Alloc...

Aug 31, 2024 • 12min
LW - Congressional Insider Trading by Maxwell Tabarrok
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Congressional Insider Trading, published by Maxwell Tabarrok on August 31, 2024 on LessWrong.
You've probably seen the Nancy Pelosi Stock Tracker on X, or else a collection of articles and books exposing the secret and lucrative world of congressional insider trading.
The underlying claim behind these stories is intuitive and compelling. Regulations, taxes, and subsidies can make or break entire industries, and congresspeople can get information on these rules before anyone else, so it wouldn't be surprising if they used this information to make profitable stock trades.
But do congresspeople really have a consistent advantage over the market? Or is this narrative built on a cherrypicked selection of a few good years for a few lucky traders?
Is Congressional Insider Trading Real?
There are several papers in economics and finance on this topic.
First is the 2004 paper: Abnormal Returns from the Common Stock Investments of the U.S. Senate by Ziobrowski et al. They look at Senators' stock transactions over 1993-1998 and construct a synthetic portfolio based on those transactions to measure their performance.
This is the headline graph. The red line tracks the portfolio of stocks that Senators bought, and the blue line the portfolio that Senators sold. Each day, the performance of these portfolios is compared to the market index and the cumulative difference between them is plotted on the graph. The synthetic portfolios start at day -255, a year (of trading days) before any transactions happen.
In the year leading up to day 0, the stocks that Senators will buy (red line) basically just tracks the market index. On some days, the daily return from the Senator's buy portfolio outperforms the index and the line moves up, on others it underperforms and the line moves down. Cumulatively over the whole year, you don't gain much over the index.
The stocks that Senators will sell (blue line), on the other hand, rapidly and consistently outperform the market index in the year leading up to the Senator's transaction.
After the Senator buys the red portfolio and sells the blue portfolio, the trends reverse. The Senator's transactions seem incredibly prescient. Right after they buy the red stocks, that portfolio goes on a tear, running up the index by 25% over the next year. They also pick the right time to sell the blue portfolio, as it barely gains over the index over the year after they sell.
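A minimal sketch of the cumulative-abnormal-return construction described above (the daily return numbers here are made up, purely to show the mechanics):

```python
# Each day's abnormal return is the synthetic portfolio's return minus the
# market index's return; the plotted line is the running sum of those gaps.
portfolio_returns = [0.004, -0.001, 0.007, 0.002, -0.003]  # illustrative only
index_returns     = [0.003,  0.000, 0.002, 0.001, -0.002]  # illustrative only

cumulative_abnormal = []
running = 0.0
for rp, rm in zip(portfolio_returns, index_returns):
    running += rp - rm
    cumulative_abnormal.append(round(running, 4))

print(cumulative_abnormal)  # the kind of series plotted for the buy and sell portfolios
```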
Ziobrowski finds that the buy portfolio of the average senator, weighted by their trading volume, earns a compounded annual rate of return of 31.1% compared to the market index which earns only 21.3% a year over this period 1993-1998.
This definitely seems like evidence of incredibly well timed trades and above-market performance. There are a couple of caveats and details to keep in mind though.
First, it's only a 5-year period. Additionally, any transactions from a senator in a given year are pretty rare:
Only a minority of Senators buy individual common stocks, never more than 38% in any one year.
So sample sizes are pretty low in the noisy and highly skewed distribution of stock market returns.
Another problem: the data on transactions isn't that precise.
Senators report the dollar volume of transactions only within broad ranges ($1,001 to $15,000, $15,001 to $50,000, $50,001 to $100,000, $100,001 to $250,000, $250,001 to $500,000, $500,001 to $1,000,000 and over $1,000,000)
These ranges are wide and the largest trades are top-coded.
Finally, there are some pieces of the story that don't neatly fit into an insider trading narrative. For example:
The common stock investments of Senators with the least seniority (serving less than seven years) outperform the investments of the most senior Senators (serving more than 16 years) by a statistically significant margin.
Still though, several other paper...