

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Jul 4, 2024 • 10min
LW - Introduction to French AI Policy by Lucie Philippon
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introduction to French AI Policy, published by Lucie Philippon on July 4, 2024 on LessWrong.
This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragement.
Disclaimer: The French policy landscape is in rapid flux after President Macron called a snap legislative election, with rounds of voting on 30 June and 7 July. The situation is still unfolding, and the state of French AI policy may be significantly altered.
At various AI governance events, I noticed that most people had a very unclear vision of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would mean for the next AI Safety Summit in France.
The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions and how they influence international AI policy efforts.
My knowledge comes from hanging around AI safety circles in France for a year and a half, and working since January with the French Government on AI Governance. Therefore, I'm confident in the facts, but less in the interpretations, as I'm no policy expert myself.
Generative Artificial Intelligence Committee
The first major development in AI policy in France was the creation of a committee advising the government on Generative AI questions. This committee was created in September 2023 by former Prime Minister Elisabeth Borne.[1]
The goals of the committee were:
Strengthening AI training programs to develop more AI talent in France
Investing in AI to promote French innovation on the international stage
Defining appropriate regulation for different sectors to protect against abuses.
This committee was composed of notable academics and industry figures in the French AI field. Here is a list of its notable members:
Co-chairs:
Philippe Aghion, an influential French economist specializing in innovation.
He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and disruptive technologies.
Anne Bouverot, chair of the board of directors of ENS, the most prestigious scientific college in France. She was later appointed to lead the organization of the next AI Safety Summit.
She is mainly concerned about the risks of bias and discrimination from AI systems, as well as risks of concentration of power.
Notable members:
Joëlle Barral, scientific director at Google
Nozha Boujemaa, co-chair of the OECD AI expert group and Digital Trust Officer at Decathlon
Yann LeCun, VP and Chief AI Scientist at Meta, generative AI expert
He is a notable skeptic of catastrophic risks from AI
Arthur Mensch, founder of Mistral
He is a notable skeptic of catastrophic risks from AI
Cédric O, consultant, former Secretary of State for Digital Affairs
He invested in Mistral and worked to loosen the regulations on general systems in the EU AI Act.
Martin Tisné, board member of Partnership on AI
He will lead the "AI for good" track of the next Summit.
See the full list of members in the announcement: Comité de l'intelligence artificielle générative.
"AI: Our Ambition for France"
In March 2024, the committee published a report highlighting 25 recommendations to the French government regarding AI. An official English version is available.
The report makes recommendations on how to make France competitive and a leader in AI, by investing in training, R&D and compute.
The report does not anticipate future developments and treats the current capabilities of AI as a fixed point to work with. It does not consider the future capabilities of AI models and is overly dismissive of AI risks.
Some highlights from the report:
It dismisses most risks from AI, including catastrophic risks, saying that concerns are overblown. It compares fear of...

Jul 3, 2024 • 11min
EA - 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) by Raemon
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly), published by Raemon on July 3, 2024 on The Effective Altruism Forum.
I haven't shared this post with other relevant parties - my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but, I prefer that discussion to be public.
I think 80,000 hours should remove OpenAI from its job board, and similar EA job placement services should do the same.
(I personally believe 80k shouldn't advertise Anthropic jobs either, but I think the case for that is somewhat less clear)
I think OpenAI has demonstrated a level of manipulativeness, recklessness, and failure to prioritize meaningful existential safety work, that makes me think EA orgs should not be going out of their way to give them free resources. (It might make sense for some individuals to work there, but this shouldn't be a thing 80k or other orgs are systematically funneling talent into)
There plausibly should be some kind of path to get back into good standing with the AI Risk community, although it feels difficult to imagine how to navigate that, given how adversarial OpenAI's use of NDAs was, and how difficult that makes it to trust future commitments.
The things that seem most significant to me:
They promised the superalignment team 20% of their compute-at-the-time (which AFAICT wasn't even a large fraction of their compute over the coming years), but didn't provide anywhere close to that, and then disbanded the team when Leike left.
Their widespread use of non-disparagement agreements, with non-disclosure clauses, which generally makes it hard to form accurate impressions about what's going on at the organization.
Helen Toner's description of how Sam Altman wasn't forthright with the board. (i.e. "The board was not informed about ChatGPT in advance and learned about ChatGPT on Twitter. Altman failed to inform the board that he owned the OpenAI startup fund despite claiming to be an independent board member, giving false information about the company's formal safety processes on multiple occasions.
And relating to her research paper, that Altman in the paper's wake started lying to other board members in order to push Toner off the board.")
Hearing from multiple ex-OpenAI employees that OpenAI safety culture did not seem on track to handle AGI. Some of these are public (Leike, Kokotajlo), others were in private.
This is before getting into more open-ended arguments like "it sure looks to me like OpenAI substantially contributed to the world's current AI racing" and "we should generally have a quite high bar for believing that the people running a for-profit entity building transformative AI are doing good, instead of causing vast harm, or at best, being a successful for-profit company that doesn't especially warrant help from EAs."
I am generally wary of AI labs (i.e. Anthropic and Deepmind), and think EAs should be less optimistic about working at large AI orgs, even in safety roles. But, I think OpenAI has demonstrably messed up, badly enough, publicly enough, in enough ways that it feels particularly wrong to me for EA orgs to continue to give them free marketing and resources.
I'm mentioning 80k specifically because I think their job board seemed like the largest funnel of EA talent, and because it seemed better to pick a specific org than a vague "EA should collectively do something." (see: EA should taboo "EA should"). I do think other orgs that advise people on jobs or give platforms to organizations (i.e. the organization fair at EA Global) should also delist OpenAI.
My overall take is something like: it is probably good to maintain some kind of intellectual/diplomatic/trade relationships with OpenAI, but bad to continue ...

Jul 3, 2024 • 12min
EA - The Value of Consciousness as a Pivotal Question by Derek Shiller
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Value of Consciousness as a Pivotal Question, published by Derek Shiller on July 3, 2024 on The Effective Altruism Forum.
Context
Longtermists point out that the scale of our potential for impact is far greater if we are able to influence the course of a long future, as we could change the circumstances of a tremendous number of lives.
One potential avenue for long-term influence involves spreading values that persist and shape the futures that our descendants choose to build. There is some reason to expect that future moral values will be stable. Many groups have preferences about the world beyond their backyard. They should work to ensure that their values are shared by those who can help bring them about. Changes in the values that future groups support will lead to changes in the protections for the things we care about.
If our values concern how our descendants will act, then we should aim to create institutions that promote those values. If we are successful in promoting those values, we should expect our descendants to appreciate and protect those institutional choices.
What values should we work to shape so that the future is as good as it might be? Many of humanity's values would be difficult to sway. Some moral questions, however, might be open to change in the coming decades. It is plausible that there are some questions that we haven't previously faced and for which we have no vested interest. We may be pressed to establish policies and precedents or commit to indifference through inaction.
The right policies and precedents could conceivably allow our values to persist indefinitely. These issues are important to get right, even if we're not yet sure what to think about them.
Controversy
Foremost among important soon-to-be-broached moral questions, I propose, is the moral value that we attribute to phenomenal consciousness (having a 'what-its-like' and a subjective perspective). Or, more particularly, whether mental lives can matter in the absence of phenomenal consciousness in anything like the way they do when supplemented with conscious experiences.
What we decide about the value of phenomenal consciousness in the coming few centuries may not make a difference to our survival as a species, but it seems likely to have a huge effect on how the future plays out.
To get a grip on the problem, consider the case of an artificial creature that is otherwise like a normal person but who lacks phenomenally conscious experiences. Would it be wrong to cause them harm?
Kagan (2019, 28) offers a thought experiment along these lines:
Whatever you feel about this thought experiment, I believe that most people in that situation would feel compelled to grant the robots basic rights.
The significance of consciousness has recently become a popular topic in academic philosophy, particularly in the philosophy of AI, and opinions among professionals are divided. It is striking how greatly opinions differ: where some hold that phenomenal consciousness plays little role in explaining why our lives have value, others hold that phenomenal consciousness is absolutely necessary for having any intrinsic value whatsoever.
One reason to doubt that phenomenal consciousness is necessary for value stems from skepticism that proposed analyses of consciousness describe structures of fundamental importance.
Suppose that the global workspace theory of consciousness is true - to be conscious is to have a certain information architecture involving a central public repository - why should that structure be so important as to ground value? What about other information architectures that function in modestly different ways? The pattern doesn't seem all that important when considered by itself.
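To make "an information architecture involving a central public repository" a bit more concrete, here is a deliberately crude toy sketch of a global-workspace-style loop. It is not any theorist's formal model, and every name in it is invented for illustration: specialist modules propose content with a salience score, the most salient proposal wins the workspace, and its content is broadcast back to every module.

```python
from dataclasses import dataclass, field

@dataclass
class GlobalWorkspace:
    """Toy 'central public repository': specialist modules compete for access,
    and the winning content is broadcast back to all of them."""
    modules: dict = field(default_factory=dict)  # name -> fn(last_broadcast) -> (salience, content)
    broadcast: object = None                     # the currently 'globally available' content

    def step(self):
        # Each module proposes content, conditioned on the previous broadcast.
        proposals = [module(self.broadcast) for module in self.modules.values()]
        # The most salient proposal wins the workspace and is broadcast globally.
        _, self.broadcast = max(proposals, key=lambda p: p[0])
        return self.broadcast

# Usage: two toy modules; the more salient percept ends up in the workspace.
ws = GlobalWorkspace(modules={
    "vision": lambda prev: (0.9, "red light ahead"),
    "audition": lambda prev: (0.4, "distant siren"),
})
print(ws.step())  # -> "red light ahead"
```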
If we set aside our preconceptions of consciousness, we wouldn't recognize that architecture as having...

Jul 3, 2024 • 40min
EA - Will disagreement about AI rights lead to societal conflict? by Lucius Caviola
Lucius Caviola discusses the ethical implications of AI rights, highlighting the potential for societal conflict over whether AIs can be sentient and what rights they deserve. People may form strong emotional bonds with human-like AIs, leading to disagreements on granting rights. The debate carries risks of national and global conflicts, requiring a delicate balance to prevent digital suffering and human disempowerment.

Jul 3, 2024 • 36min
EA - Digital Minds: Importance and Key Research Questions by Andreas Mogensen
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Digital Minds: Importance and Key Research Questions, published by Andreas Mogensen on July 3, 2024 on The Effective Altruism Forum.
by Andreas Mogensen, Bradford Saad, and Patrick Butlin
1. Introduction
This post summarizes why we think that digital minds might be very important for how well the future goes, as well as some of the key research topics we think it might be especially valuable to work on as a result.
We begin by summarizing the case for thinking that digital minds could be important. This is largely a synthesis of points that have already been raised elsewhere, so readers who are already familiar with the topic might want to skip ahead to section 3, where we outline what we see as some of the highest-priority open research questions.
2. Importance
Let's define a digital mind as a conscious individual whose psychological states are due to the activity of an inorganic computational substrate as opposed to a squishy brain made up of neurons, glia, and the like.[1] By 'conscious', we mean 'phenomenally conscious.' An individual is phenomenally conscious if and only if there is something it is like to be that individual - something it feels like to inhabit their skin, exoskeleton, chassis, or what-have-you.
In the sense intended here, there is something it is like to be having the kind of visual or auditory experience you're probably having now, to feel a pain in your foot, or to be dreaming, but there is nothing it is like to be in dreamless sleep.
Digital minds obviously have an air of science fiction about them. If certain theories of consciousness are true (e.g., Block 2009; Godfrey-Smith 2016), digital minds are impossible. However, other theories suggest that they are possible (e.g. Tye 1995, Chalmers 1996), and many others are silent on the matter.
While the authors of this post disagree about the plausibility of these various theories, we agree that the philosophical position is too uncertain to warrant setting aside the possibility of digital minds.[2]
Even granting that digital minds are possible in principle, it's unlikely that current systems are conscious. A recent expert report co-authored by philosophers, neuroscientists, and AI researchers (including one of the authors of this post) concludes that the current evidence "does not suggest that any existing AI system is a strong candidate for consciousness" (Butlin et al. 2023: 6). Still, some residual uncertainty seems to be warranted - and obviously completely consistent with denying that any current system is a "strong candidate". Chalmers (2023) suggests it may be reasonable to give a probability in the ballpark of 5-10% to the hypothesis that current large language models could be conscious. Moreover, the current rate of progress in artificial intelligence gives us good reason to take seriously the possibility that digital minds will arrive soon.
Systems appearing in the next decade might exhibit a range of markers of consciousness, and Chalmers suggests the probability that we'll have digital minds within this time-frame might rise to at least 25%.[3] Similarly, Butlin et al. (2023) conclude that if we grant the assumption that consciousness can be realized by implementing the right computations, then "conscious AI systems could realistically be built in the near term."[4]
It's possible that digital minds might arrive but exist as mere curiosities. Perhaps the kind of architectures that give rise to phenomenal consciousness will have little or no commercial value. We think it's reasonable to be highly uncertain on this point (see Butlin et al. 2023: §4.2 for discussion).
Still, it's worth noting that some influential AI researchers have been pursuing projects that aim to increase AI capabilities by building systems that exhibit markers of consciousness, like a global workspace (Goyal and Bengi...

Jul 3, 2024 • 13min
LW - 3C's: A Recipe For Mathing Concepts by johnswentworth
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3C's: A Recipe For Mathing Concepts, published by johnswentworth on July 3, 2024 on LessWrong.
Opening Example: Teleology
When people say "the heart's purpose is to pump blood" or "a pencil's function is to write", what does that mean physically? What are "purpose" or "function", not merely in intuitive terms, but in terms of math and physics? That's the core question of what philosophers call teleology - the study of "telos", i.e. purpose or function or goal.
This post is about a particular way of approaching conceptual/philosophical questions, especially for finding "True Names" - i.e. mathematical operationalizations of concepts which are sufficiently robust to hold up under optimization pressure. We're going to apply the method to teleology as an example. We'll outline the general approach in abstract later; for now, try to pay attention to the sequence of questions we ask in the context of teleology.
Cognition
We start from the subjective view: set aside (temporarily) the question of what "purpose" or "function" mean physically. Instead, first ask what it means for me to view a heart as "having the purpose of pumping blood", or ascribe the "function of writing" to a pencil. What does it mean to model things as having purpose or function?
Proposed answer: when I ascribe purpose or function to something, I model it as having been optimized (in the sense usually used on LessWrong) to do something. That's basically the standard answer among philosophers, modulo expressing the idea in terms of the LessWrong notion of optimization.
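For readers who want one concrete way to cash out "having been optimized": a common LessWrong-flavored operationalization is Yudkowsky's "optimization power", measured in bits as how small a slice of possible outcomes scores at least as well as the outcome actually achieved. The sketch below assumes a uniform measure over a finite outcome set, and is not necessarily the formalization the author has in mind.

```python
import math

def optimization_power_bits(achieved_utility: float, possible_utilities: list[float]) -> float:
    """Bits of apparent optimization: -log2 of the fraction of possible outcomes
    scoring at least as well as the achieved outcome (uniform measure assumed)."""
    at_least_as_good = sum(u >= achieved_utility for u in possible_utilities)
    return -math.log2(at_least_as_good / len(possible_utilities))

# Toy example: if only 1 of 1024 equally likely outcomes is this good or better,
# the outcome looks like ~10 bits of optimization.
print(optimization_power_bits(9.0, [9.0] + [0.0] * 1023))  # -> 10.0
```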
(From there, philosophers typically ask about "original teleology" - i.e. a hammer has been optimized by a human, and the human has itself been optimized by evolution, but where does that chain ground out? What optimization process was not itself produced by another optimization process? And then the obvious answer is "evolution", and philosophers debate whether all teleology grounds out in evolution-like phenomena.
But we're going to go in a different direction, and ask entirely different questions.)
Convergence
Next: I notice that there's an awful lot of convergence in what things different people model as having been optimized, and what different people model things as having been optimized for.
Notably, this convergence occurs even when people don't actually know about the optimization process - for instance, humans correctly guessed millennia ago that living organisms had been heavily optimized somehow, even though those humans were totally wrong about what process optimized all those organisms; they thought it was some human-like-but-more-capable designer, and only later figured out evolution.
Why the convergence?
Our everyday experience implies that there is some property of e.g. a heron such that many different people can look at the heron, convergently realize that the heron has been optimized for something, and even converge to some degree on which things the heron (or the parts of the heron) have been optimized for - for instance, that the heron's heart has been optimized to pump blood.
(Not necessarily perfect convergence, not necessarily everyone, but any convergence beyond random chance is a surprise to be explained if we're starting from a subjective account.) Crucially, it's a property of the heron, and maybe of the heron's immediate surroundings, not of the heron's whole ancestral environment - because people can convergently figure out that the heron has been optimized just by observing the heron in its usual habitat.
So now we arrive at the second big question: what are the patterns out in the world which different people convergently recognize as hallmarks of having-been-optimized? What is it about herons, for instance, which makes it clear that they've been optimized, even before we know all the details of the optimizati...

Jul 3, 2024 • 4min
LW - List of Collective Intelligence Projects by Chipmonk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Collective Intelligence Projects, published by Chipmonk on July 3, 2024 on LessWrong.
During the last Foresight Intelligent Cooperation Workshop I got very curious about what collective intelligence tools currently exist. A list:
Pol.is: "Input Crowd, Output Meaning"
Inspired Twitter/X community notes
People: Colin Megill, et al.
Collective Intelligence Project
vibe: democratic AI, "How AI and Democracy Can Fix Each Other"
People: Divya Siddharth, Saffron Huang, et al.
AI Objectives Institute
Talk to the City: "an open-source LLM interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and arranges similar arguments into clusters."
AI Objectives Institute works closely with the Taiwanese government.
Other projects in development.
People: Colleen McKenzie, Değer Turan, et al.
Meaning Alignment Institute
vibe: democratic AI, kinda.
I think they think that if you can help individuals make wiser decisions, at scale, then this converges to be equivalent with solving outer alignment.
Remesh
Similar to pol.is AFAIK? I haven't played with it.
People: Andrew Konya, et al.
Loomio: "a flexible decision-making tool that helps you create a more engaged and collaborative culture, build trust and coordinate action"
Deliberative Technology for Alignment paper
They also discuss other tools for this use like Discord, Snapshot, Dembrane
People: Andrew Konya, Deger Turan, Aviv Ovadya, Lina Qui, Daanish Masood, Flynn Devine, Lisa Schirch, Isabella Roberts, and Deliberative Alignment Forum
Someone in the know told me to only read sections 4 and 5 of this paper
Plurality Institute
People: David Bloomin, Rose Bloomin, et al.
Also working on some de-escalator bots for essentially Reddit comment wars
Lots of crypto projects
Quadratic voting
Gitcoin
Metagov: "a laboratory for digital governance"
Soulbound tokens
Various voting and aggregation systems, liquid democracy
Decidim
Decide Madrid
Consider.it
Stanford Online Deliberation Platform
Lightcone Chord (in development)
Brief description
People: Jacob Lagerros (LessWrong)
All of the prediction markets
Manifold, Kalshi, Metaculus, PredictIt, etc.
Midjourney has a Collective Intelligence Team now according to Ivan Vendrov's website. I couldn't find any other information online.
What about small group collective intelligence tools?
Most of the examples above are for large group collective intelligence (which I'm defining as ~300 people or much larger). But what about small groups? Are there tools that will help me coordinate with 30 friends? Or just one friend? I'm mostly unaware of any recent innovations for small group collective intelligence tools. Do you know of any?
Nexae (in development)
"Nexae Systems builds sociotechnical infrastructure to enable the creation of new types of businesses and organizations."
double crux bot
I'm surprised I haven't heard of many other LLM-facilitated communication tools
Medium group (~30-300 people) projects:
Jason Benn's unconference tools, eg Idea Ranker.
Other lists
@exgenesis's short tweet thread. A couple of things there that I haven't listed here.
Plurality Institute's (WIP) map of related orgs, etc.
Know of any I should add?
Opportunities
RFP: Interoperable Deliberative Tools | interop, $200k. Oops this closed before I published this post.
Metagov is running https://metagov.org/projects/ai-palace, which seems similar.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 3, 2024 • 6min
EA - Seven Philanthropic Wins: The Stories That Inspired Open Phil's Offices by Open Philanthropy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seven Philanthropic Wins: The Stories That Inspired Open Phil's Offices, published by Open Philanthropy on July 3, 2024 on The Effective Altruism Forum.
Since our early days, we've studied the history of philanthropy to understand what great giving looks like. The lessons we learned made us more ambitious and broadened our view of philanthropy's potential.
The rooms in our San Francisco office pay tribute to this legacy. Seven of them are named after philanthropic "wins" - remarkable feats made possible by philanthropic funders. In this post, we'll share the story behind each win.
Green Revolution
During the second half of the twentieth century, the Green Revolution dramatically increased agricultural production in developing countries like Mexico and India. At a time of rapid population growth, this boost in production reduced hunger, helped to avert famine, and stimulated national economies.
The Rockefeller Foundation played a key role by supporting early research by Norman Borlaug and others to enhance agricultural productivity. Applications of this research - developed in collaboration with governments, private companies, and the Ford Foundation - sparked the Green Revolution, which is estimated to have saved a billion people from starvation.
Read more about the Rockefeller Foundation's role in the Green Revolution in Political Geography.
The Pill
In 1960, the FDA approved "the pill", an oral contraceptive that revolutionized women's reproductive health by providing a user-controlled family planning option. This groundbreaking development was largely funded by Katharine McCormick, a women's rights advocate and one of MIT's first female graduates.
In the early 1950s, McCormick collaborated with Margaret Sanger, the founder of Planned Parenthood, to finance critical early-stage research that led to the creation of the pill. Today, the birth control pill stands as one of the most common and convenient methods of contraception, empowering generations of women to decide when to start a family.
For a comprehensive history of the pill, try Jonathan Eig's The Birth of the Pill.
Sesame Street
In 1967, the Carnegie Corporation funded a feasibility study on educational TV programming for children, which led to the creation of the Children's Television Workshop and Sesame Street. Sesame Street became one of the most successful television ventures ever, broadcast in more than 150 countries and the winner of more than 200 Emmy awards.
Research monitoring the learning progress of Sesame Street viewers has demonstrated significant advances in early literacy.
A deeper look into how philanthropy helped to launch Sesame Street is available here.
Nunn-Lugar
The Nunn-Lugar Act (1991), also known as the Cooperative Threat Reduction Program, was enacted in response to the collapse of the USSR and the dangers posed by dispersed weapons of mass destruction. US Senators Sam Nunn and Richard Lugar led the initiative, focusing on the disarmament and securing of nuclear, chemical, and biological weapons from former Soviet states. In the course of this work, thousands of nuclear weapons were deactivated or destroyed.
The act's inception and success were largely aided by the strategic philanthropy of the Carnegie Corporation and the MacArthur Foundation, which funded research at Brookings on the "cooperative security" approach to nuclear disarmament and de-escalation.
Learn more about the Nunn-Lugar Act and its connection to philanthropy in this paper.
Marriage Equality
The Supreme Court's landmark ruling in Obergefell v. Hodges granted same-sex couples the right to marry, marking the culmination of decades of advocacy and a sizable cultural shift toward acceptance.
Philanthropic funders - including the Gill Foundation and Freedom to Marry, an organization initially funded by the Evelyn and Wa...

Jul 3, 2024 • 14min
LW - How ARENA course material gets made by CallumMcDougall
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How ARENA course material gets made, published by CallumMcDougall on July 3, 2024 on LessWrong.
TL;DR
In this post, I describe my methodology for building new material for ARENA. I'll mostly be referring to the exercises on IOI, Superposition and Function Vectors as case studies. I expect this to be useful for people who are interested in designing material for ARENA or ARENA-like courses, as well as people who are interested in pedagogy or ML paper replications.
The process has 3 steps:
1. Start with something concrete
2. First pass: replicate, and understand
3. Second pass: exercise-ify
Summary
I'm mostly basing this on the following 3 sets of exercises:
Indirect Object Identification - these exercises focus on the IOI paper (from Conmy et al). The goal is to have people understand what exploratory analysis of transformers looks like, and introduce the key ideas of the circuits agenda.
Superposition & SAEs - these exercises focus on understanding superposition and the agenda of dictionary learning (specifically sparse autoencoders). Most of the exercises explore Anthropic's Toy Models of Superposition paper, except for the last 2 sections which explore sparse autoencoders (firstly by applying them to the toy model setup, secondly by exploring a sparse autoencoder trained on a language model). (See the toy SAE sketch after this list.)
Function Vectors - these exercises focus on the Function Vectors paper by David Bau et al, although they also make connections with related work such as Alex Turner's GPT2-XL steering vector work. These exercises were interesting because they also had the secondary goal of being an introduction to the nnsight library, in much the same way that the intro to mech interp exercises were also an introduction to TransformerLens.
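Since the Superposition & SAEs exercises above culminate in training a sparse autoencoder, here is a minimal sketch of the kind of model involved. This is a toy version for orientation only, not the ARENA solution code, and the dimensions and sparsity coefficient are placeholder values.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstruct activations through an overcomplete ReLU bottleneck,
    with an L1 penalty pushing most hidden features to zero."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        recon = self.decoder(features)             # reconstruction of the input activations
        return recon, features

# Toy training step on random "activations" (stand-ins for a model's residual stream).
sae = SparseAutoencoder(d_model=64, d_hidden=512)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(128, 64)
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()  # MSE + L1 sparsity
loss.backward()
opt.step()
```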
The steps I go through are listed below. I'm indexing from zero because I'm a software engineer so of course I am. The steps assume you already have an idea of what exercises you want to create; in Appendix (1) you can read some thoughts on what makes for a good exercise set.
1. Start with something concrete
When creating material, you don't want to be starting from scratch. It's useful to have source code available to browse - bonus points if that takes the form of a Colab or something which is self-contained and has easily visible output.
IOI - this was Neel's "Exploratory Analysis Demo" exercises. The rest of the exercises came from replicating the paper directly.
Superposition - this was Anthropic's Colab notebook (although the final version went quite far beyond this). The very last section (SAEs on transformers) was based on Neel Nanda's demo Colab.
Function Vectors - I started with the NDIF demo notebook, to show how some basic nnsight syntax worked. As for replicating the actual function vectors paper, unlike the other 2 examples I was mostly just working from the paper directly. It helped that I was collaborating with some of this paper's authors, so I was able to ask them some questions to clarify aspects of the paper.
2. First-pass: replicate, and understand
The first thing I did in each of these cases was to go through the material I started with and make sure I understood what was going on. Paper replication is a deep enough topic for its own series of blog posts (many already exist), although I'll emphasise that I'm not usually talking about full paper replication here, because ideally you'll be starting from something a bit further along, be that a Colab, a different tutorial, or something else.
And even when you are just working directly from a paper, you shouldn't make the replication any harder for yourself than you need to. If there's code you can take from somewhere else, then do.
My replication usually takes the form of working through a notebook in VSCode. I'll either start from scratch, or from a downloaded Colab if I'm using one as a ...

Jul 3, 2024 • 36min
LW - Economics Roundup #2 by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economics Roundup #2, published by Zvi on July 3, 2024 on LessWrong.
Previously: Economics Roundup #1
Let's take advantage of the normality while we have it. In all senses.
Insane Tax Proposals
There is Trump's proposal to replace income taxes with tariffs, but he is not alone.
So here is your periodic reminder, since this is not actually new at core: Biden's proposed budgets include completely insane tax regimes that would cripple our economic dynamism and growth if enacted. As in, for high net worth individuals, taxing unrealized capital gains at 25% and realized capital gains, such as those you are forced to take to pay your unrealized capital gains tax, at 44.6% plus state taxes.
Austen Allred explains how this plausibly destroys the entire startup ecosystem.
Which I know is confusing because in other contexts he also talks about how other laws (such as SB 1047) that would in no way apply to startups would also destroy the startup ecosystem. But in this case he is right.
Austen Allred: It's difficult to describe how insane a 25% tax on unrealized capital gains is.
Not a one-time 25% hit. It's compounding, annually taking 25% of every dollar of potential increase before it can grow.
Not an exaggeration to say it could single-handedly crush the economy.
An example to show how insane this is: You're a founder and you start a company. You own… let's say 30% of it.
Everything is booming, you raise a round that values the company at $500 million.
You now personally owe $37.5 million in taxes.
This year. In cash.
Now there are investors who want to invest in the company, but you can't just raise $37.5 million in cash overnight.
So what happens?
Well, you simply decide not to have a company worth a few hundred million dollars.
Oh well, that's only a handful of companies right?
Well, as an investor, the only way the entire ecosystem works is if a few companies become worth hundreds of millions.
Without that, venture capital no longer works. Investment is gone.
Y Combinator no longer works.
No more funding, mass layoffs, companies shutting down crushes the revenue of those that are still around.
Economic armageddon. We've seen how these spirals work, and it's really bad for everyone.
Just because bad policy only targets rich people doesn't mean it can't kill the economy or make it good policy.
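As a quick check on the arithmetic in Allred's example - using the example's own stipulations of a 30% stake, a $500 million valuation, a negligible cost basis, and the proposed 25% rate - plus a rough illustration of the compounding drag he describes (the 20%/year growth rate and 20-year horizon below are purely illustrative, not from the post):

```python
# Allred's headline figure: 25% of the founder's unrealized gain, due in year one.
stake, valuation, rate = 0.30, 500e6, 0.25
unrealized_gain = stake * valuation  # ~$150M paper gain (zero cost basis assumed)
print(f"Year-one tax bill: ${unrealized_gain * rate / 1e6:.1f}M")  # -> $37.5M

# Rough compounding drag: if 25% of each year's gain is taxed away (and paid out
# of the asset itself), wealth compounds at roughly 75% of the pre-tax growth rate.
growth, years = 0.20, 20  # illustrative numbers
untaxed = (1 + growth) ** years
taxed = (1 + growth * (1 - rate)) ** years
print(f"After {years} years: {untaxed:.0f}x untaxed vs {taxed:.0f}x with the annual levy")
```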
I do think they are attempting to deal with this via another idea he thought was crazy, the 'nine annual payments' for the first year's tax and 'five annual payments' for the subsequent tax. So the theory would be that the first year you 'only' owe 3.5%. Then the second year you owe another 3.5% of the old gain and 5% of the next year's gain.
That is less horrendous, but still super horrendous, especially if the taxes do not go away if the asset values subsequently decline, risking putting you into infinite debt.
This is only the beginning. They are even worse than Warren's proposed wealth taxes, because the acute effects and forcing function here are so bad. At the time this was far worse than the various stupid and destructive economic policies Trump was proposing, although he has recently stepped it up to the point where that is unclear.
The good news is that these policies are for now complete political non-starters. Never will a single Republican vote for this, and many Democrats know better. I would like to think the same thing in reverse, as well.
Also, this is probably unconstitutional in the actually-thrown-out-by-SCOTUS sense, not only in the violates-the-literal-constitution sense.
But yes, it is rather terrifying what would happen if they had the kind of majorities that could enact things like this. On either side.
Why didn't the super high taxes in the 1950s kill growth? Taxes for most people were not actually that high, the super-high marginal rates like 91% kicked in...


