

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Aug 2, 2024 • 2min
EA - Sequence overview: Welfare and moral weights by MichaelStJules
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sequence overview: Welfare and moral weights, published by MichaelStJules on August 2, 2024 on The Effective Altruism Forum.
EAs and EA organizations may be making important conceptual or methodological errors in prioritization between moral patients. In this sequence, I illustrate and address several:
1. Types of subjective welfare: I review types of subjective welfare, interpersonal comparisons with them, and common ground between them.
2. Solution to the two envelopes problem for moral weights: The welfare concepts we value directly are human-based, so we should normalize nonhuman welfare by human welfare. This would result in greater priority for nonhumans (see the toy numeric illustration below).
3. Which animals realize which types of subjective welfare?: I argue that many nonhuman animals may have access to (simple versions of) types of subjective welfare people may expect to require language or higher self-awareness. This would support further prioritizing them.
4. Increasingly vague interpersonal welfare comparisons: I illustrate that interpersonal welfare comparisons can be vague, and become more vague the more different two beings are.
5. Gradations of moral weight: I build a model for moral weight assignments given vagueness and gradations in capacities. I explore whether other moral patients could have greater moral weights than humans through (more sophisticated) capacities we don't have.
For more detailed summaries, see the individual posts.
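As a toy numeric illustration of the two envelopes problem in item 2 (my own hypothetical numbers, not taken from the sequence): suppose we give 50% credence each to two theories of a chicken's welfare capacity relative to a human's, and compute the expected moral weight under each choice of normalization.

# Toy illustration (hypothetical numbers, not from the sequence) of the
# two envelopes problem for moral weights.
credence = 0.5
ratios_chicken_per_human = [0.01, 1.0]  # chicken welfare as a fraction of human welfare, under each theory

# Normalizing by human welfare: expected chicken weight, in "humans".
e_chicken_in_humans = sum(credence * r for r in ratios_chicken_per_human)

# Normalizing by chicken welfare: expected human weight, in "chickens", then inverted.
e_human_in_chickens = sum(credence * (1.0 / r) for r in ratios_chicken_per_human)
implied_chicken_in_humans = 1.0 / e_human_in_chickens

print(e_chicken_in_humans)        # ~0.505 humans per chicken
print(implied_chicken_in_humans)  # ~0.0198 humans per chicken -- the two normalizations disagree

Fixing human welfare as the unit, as the post argues we should, yields the larger figure, hence greater priority for nonhumans.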
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 2, 2024 • 31min
AF - The 'strong' feature hypothesis could be wrong by lewis smith
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'strong' feature hypothesis could be wrong, published by lewis smith on August 2, 2024 on The AI Alignment Forum.
NB. I am on the Google Deepmind language model interpretability team. But the arguments/views in this post are my own, and shouldn't be read as a team position.
"It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an "ideal" ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout"
Elhage et al., Toy Models of Superposition
Recently, much attention in the field of mechanistic interpretability, which tries to explain the behavior of neural networks in terms of interactions between lower-level components, has been focussed on extracting features from the representation space of a model. The predominant methodology for this has used variations on the sparse autoencoder, in a series of papers inspired by Elhage et al.'s model of superposition. It's been conventionally understood that there are two key theories underlying this agenda. The first is the 'linear representation hypothesis' (LRH): the hypothesis that neural networks represent many intermediates or variables of the computation (such as the 'features of the input' in the opening quote) as linear directions in its representation space, or atoms[1].
And second, the theory that the network is capable of representing more of these 'atoms' than it has dimensions in its representation space, via superposition (the superposition hypothesis).
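To make the methodology concrete, here is a minimal sketch of the kind of sparse autoencoder these papers train, assuming PyTorch and a stand-in batch of activations; it is a toy version for illustration, not the code from any of the cited papers.

# Minimal sparse autoencoder sketch: an overcomplete dictionary (d_dict >> d_model)
# with a sparsity penalty, so it can in principle recover more "atoms" than the
# representation space has dimensions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, x: torch.Tensor):
        codes = torch.relu(self.encoder(x))  # non-negative feature activations
        recon = self.decoder(codes)          # reconstruction from the dictionary
        return recon, codes

def sae_loss(x, recon, codes, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that pushes most codes to zero.
    return ((recon - x) ** 2).mean() + l1_coeff * codes.abs().sum(dim=-1).mean()

# Usage on stand-in activations (d_model=512, dictionary 8x wider).
sae = SparseAutoencoder(d_model=512, d_dict=4096)
acts = torch.randn(64, 512)  # in practice: activations collected from the model under study
recon, codes = sae(acts)
sae_loss(acts, recon, codes).backward()

The decoder rows then serve as candidate 'atoms', i.e. directions in representation space.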
While superposition is a relatively uncomplicated hypothesis, I think the LRH is worth examining in more detail. It is frequently stated quite vaguely, and I think there are several possible formulations of this hypothesis, with varying degrees of plausibility, that are worth carefully distinguishing between. For example, the linear representation hypothesis is often stated as 'networks represent features of the input as directions in representation space'.
Here are two importantly different ways to parse this:
1. (Weak LRH) some or many features used by neural networks are represented as atoms in representation space
2. (Strong LRH) all (or the vast majority of) features used by neural networks are represented by atoms.
I would say the weak LRH is now well supported by considerable empirical evidence. The strong form is much more speculative: confirming the existence of many linear representations does not necessarily provide strong evidence for the strong hypothesis.
Both the weak and the strong forms of the hypothesis can still have considerable variation, depending on what we understand by a feature and the proportion of the model we expect to yield to analysis, but I think that the distinction between just a weak and strong form is clear enough to work with.
I think that in addition to the acknowledged assumptions of the LRH and superposition hypothesis, much work on SAEs in practice assumes that each atom in the network will represent a "simple feature" or a "feature of the input". The features that these atoms represent are assumed to be 'monosemantic': they will all stand for features which are human interpretable in isolation. I will call this the monosemanticity assumption.
This is difficult to state precisely, but we might formulate it as the theory that every represented variable will have a single meaning in a good description of a model. This is not a straightforward assumption, because the notion of a single meaning is itself imprecise. While various more or less reasonable definitions for features are discussed in the pioneering work of Elhage et al., these assumptions have different implications.
For instance, if one thinks of 'feat...

Aug 2, 2024 • 8min
EA - Join us at EAGxIndia 2024 in Bengaluru, on 19-20 October, 2024! by Arthur Malone
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join us at EAGxIndia 2024 in Bengaluru, on 19-20 October, 2024!, published by Arthur Malone on August 2, 2024 on The Effective Altruism Forum.
Apply Here!
EAGxIndia 2024 will be held at The Conrad, Bengaluru, Karnataka, India. The conference aims to help attendees in the region find and accelerate their path to positive impact, using EA principles to foster connections and growth opportunities.
In 2023, 163 attendees participated in the first EAGxIndia, with 70% from India and others from the UK, US, Singapore, Indonesia, the Netherlands, Germany, and Brazil. The event enriched the local community with domestic and international connections, catalyzed movement growth, and led to multiple reports of attendees transitioning into new impactful careers as a direct result of the conference. We are excited to build on these successes with a second, larger conference in just a few months!
Basics of EAGx and effective altruism:
EAGx (Effective Altruism Global X) is a series of regionally focused events uniting the global EA community, supported by the Centre for Effective Altruism (CEA) and organized by local teams. The EA community focuses on various cause areas, emphasizing effectiveness and evidence-based decision-making. Key areas include:
Global health and development: Alleviating poverty and improving health in low-income countries through interventions such as malaria prevention, deworming, and direct cash transfers.
Animal welfare: Reducing animal suffering in factory farming by promoting plant-based diets, animal rights, and humane treatment.
Long-term future and existential risks: Mitigating risks to humanity's future, such as those posed by advanced AI, biosecurity, and nuclear weapons.
Effective altruism community building: Strengthening the EA movement by fostering community, supporting EA-aligned careers, and enhancing impact.
Policy and advocacy: Promoting global well-being through public policy and governance, focusing on open government data, evidence-based policy, and capacity-building for officials.
Why attend?
EAGx events have a strong track record of fostering new connections within the effective altruism community. Past attendees have gained valuable information and networks, leading to jobs, internships, and development programs.
Connect with a passionate community: Engage with like-minded individuals committed to making a positive impact.
Learn and upskill: Participate in workshops, talks, and interviews with EA thought leaders and practitioners working on groundbreaking ideas.
Get actionable insights and strategies: Receive practical advice on optimizing charitable donations, exploring impactful careers, and learning the latest EA research.
Why India & Bengaluru?
India offers unique opportunities for directly impactful interventions in areas like poverty alleviation, public health, and education. The country also hosts a large population of educated and passionate individuals looking for ways to improve their own country, the world, and the future. This combination of problems to solve and a deep talent pool to empower makes India an ideal location for investment in the local EA community.
Bengaluru, known as the Silicon Valley of India, is a hub of entrepreneurship and innovation. It boasts a rich academic and research landscape, ideal for impactful discussions and initiatives. Bengaluru also hosts one of the longest running Indian EA groups which has been conducting meetups and fostering connections since 2017.
The EAGxIndia 2024 venue, the Conrad Hotel, is centrally located with a view of Ulsoor Lake and offers easy access to key areas like M.G. Road and Indira Nagar, making it a perfect site for networking and collaboration.
Should you apply?
EAGxIndia 2024 welcomes both those newly engaged with effective altruism and seasoned participants, particula...

Aug 1, 2024 • 17min
LW - The need for multi-agent experiments by Martín Soto
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The need for multi-agent experiments, published by Martín Soto on August 1, 2024 on LessWrong.
TL;DR: Let's start iterating on experiments that approximate real, society-scale multi-AI deployment
Epistemic status: These ideas seem like my most prominent delta with the average AI Safety researcher, have stood the test of time, and are shared by others I intellectually respect. Please attack them fiercely!
Multi-polar risks
Some authors have already written about multi-polar AI failure. I especially like how Andrew Critch has tried to sketch concrete stories for it.
But, without even considering concrete stories yet, I think there's a good a priori argument in favor of worrying about multi-polar failures:
We care about the future of society. Certain AI agents will be introduced, and we think they could reduce our control over the trajectory of this system. The way in which this could happen can be divided into two steps:
1. The agents (with certain properties) are introduced in certain positions
2. Given the agents' properties and positions, they interact with each other and the rest of the system, possibly leading to big changes
So in order to better control the outcome, it seems worth it to try to understand and manage both steps, instead of limiting ourselves to (1), which is what the alignment community has traditionally done.
Of course, this is just one, very abstract argument, which we should update based on observations and more detailed technical understanding. But it makes me think the burden of proof is on multi-agent skeptics to explain why (2) is not important.
Many have taken on that burden. The most common reason to dismiss the importance of (2) is expecting a centralized intelligence explosion, a fast and unipolar software takeoff, like Yudkowsky's FOOM. Proponents usually argue that the intelligences we are likely to train will, after meeting a sharp threshold of capabilities, quickly bootstrap themselves to capabilities drastically above those of any other existing agent or ensemble of agents.
And that these capabilities will allow them to gain near-complete strategic advantage and control over the future. In this scenario, all the action is happening inside a single agent, and so you should only care about shaping its properties (or delaying its existence).
I tentatively expect more of a decentralized hardware singularity[1] than centralized software FOOM. But there's a weaker claim in which I'm more confident: we shouldn't right now be near-certain of a centralized FOOM.[2] I expect this to be the main crux with many multi-agent skeptics, and won't argue for it here (but rather in an upcoming post).
Even given a decentralized singularity, one can argue that the most leveraged way for us to improve multi-agent interactions is by ensuring that individual agents possess certain properties (like honesty or transparency), or that at least we have enough technical expertise to shape them on the go. I completely agree that this is the natural first thing to look at.
But I think focusing on multi-agent interactions directly is a strong second, and a lot of marginal value might lie there given how neglected they've been until now (more below). I do think many multi-agent interventions will require certain amounts of single-agent alignment technology. This will of course be a crux with alignment pessimists.
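To make the kind of multi-agent experiment being advocated slightly more concrete, here is a toy sketch (my own illustration, not the author's proposal): hold the agents' individual properties from step (1) fixed, vary only the interaction structure from step (2), and measure a system-level outcome.

# Toy sketch: identical agents under two different interaction structures.
from dataclasses import dataclass
import random

@dataclass
class Agent:
    name: str
    cooperativeness: float  # stand-in for a single-agent property like honesty

def run_system(agents, pairing, rounds=1000, seed=0):
    rng = random.Random(seed)
    welfare = 0.0
    for _ in range(rounds):
        a, b = pairing(agents, rng)
        # Toy interaction: cooperation pays off only when it is mutual.
        welfare += 1.0 if (a.cooperativeness > 0.5 and b.cooperativeness > 0.5) else -1.0
    return welfare

# Step (1): fix the agents and their properties.
agents = [Agent(f"agent_{i}", cooperativeness=0.4 + 0.2 * (i % 4)) for i in range(8)]

# Step (2): vary only how they are matched up to interact.
random_pairing = lambda ags, rng: rng.sample(ags, 2)
assortative_pairing = lambda ags, rng: sorted(rng.sample(ags, 4), key=lambda a: a.cooperativeness)[-2:]

print(run_system(agents, random_pairing))       # system-level welfare under one structure
print(run_system(agents, assortative_pairing))  # very different welfare under another

Even in this trivially simple setup, the same individual properties yield different system-level outcomes, which is exactly the step-(2) question the post argues deserves direct attention.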
Finally, for this work to be counterfactually useful it's also required that AI itself (in decision-maker or researcher positions) won't iteratively solve the problem by default. Here, I do think we have some reasons to expect (65%) that intelligent enough AIs aligned with their principals don't automatically solve catastrophic conflict. In those worlds, early interventions can make a big difference setting the right incentives for future agents, or providi...

Aug 1, 2024 • 3min
LW - Dragon Agnosticism by jefftk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dragon Agnosticism, published by jefftk on August 1, 2024 on LessWrong.
I'm agnostic on the existence of dragons. I don't usually talk about this, because people might misinterpret me as actually being a covert dragon-believer, but I wanted to give some background for why I disagree with calls for people to publicly assert the non-existence of dragons.
Before I do that, though, it's clear that horrible acts have been committed in the name of dragons. Many dragon-believers publicly or privately endorse this reprehensible history. Regardless of whether dragons do in fact exist, the repercussions of that history continue to have serious and unfair downstream effects on our society.
Given that history, the easy thing to do would be to loudly and publicly assert that dragons don't exist. But while a world in which dragons don't exist would be preferable, that a claim has inconvenient or harmful consequences isn't evidence of its truth or falsehood.
Another option would be to look into whether dragons exist and make up my mind; people on both sides are happy to show me evidence. If after weighing the evidence I were convinced they didn't exist, that would be excellent news about the world. It would also be something I could proudly write about: I checked, you don't need to keep worrying about dragons.
But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates.
Given the taboos around dragon-belief, I could face strong professional and social consequences.
One option would be to look into it, and only let people know what I found if I were convinced dragons didn't exist. Unfortunately, this combines very poorly with collaborative truth-seeking. Imagine a hundred well-intentioned people look into whether there are dragons. They look in different places and make different errors. There are a lot of things that could be confused for dragons, or things dragons could be confused for, so this is a noisy process.
Unless the evidence is overwhelming in one direction or another, some will come to believe that there are dragons, while others will believe that there are not.
While humanity is not perfect at uncovering the truth in confusing situations, our strategy that best approaches the truth is for people to report back what they've found, and have open discussion of the evidence. Perhaps some evidence Pat finds is very convincing to them, but then Sam shows how they've been misinterpreting it. But this all falls apart when the thoughtful people who find one outcome generally stay quiet.
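A tiny simulation (my own illustration, not from the post) of the failure mode just described: if investigators publish only when they conclude dragons don't exist, the published record looks the same whichever world we are in.

# Each investigator gets a noisy signal about the true answer, but only
# "no dragons" conclusions get published.
import random

def published_consensus(true_signal, n=100, noise=2.0, seed=0):
    rng = random.Random(seed)
    findings = [true_signal + rng.gauss(0, noise) for _ in range(n)]
    published = [f for f in findings if f < 0]  # "dragons exist" findings stay quiet
    return sum(published) / len(published)

print(published_consensus(true_signal=-1.0))  # world without dragons: published consensus is negative
print(published_consensus(true_signal=+1.0))  # world with dragons: published consensus is still negative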
I really don't want to contribute to this pattern that makes it hard to learn what's actually true, so in general I don't want whether I share what I've learned to be downstream from what I learn.
Overall, then, I've decided to remain agnostic on the existence of dragons. I would reconsider if it seemed to be a sufficiently important question, in which case I might be willing to run the risk of turning into a dragon-believer and letting the dragon question take over my life: I'm still open to arguments that whether dragons exist is actually highly consequential.
But with my current understanding of the costs and benefits on this question I will continue not engaging, publicly or privately, with evidence or arguments on whether there are dragons.
Note: This post is not actually about dragons, but instead about how I think about a wide range of taboo topics.
Thanks for listening. To help us out wit...

Aug 1, 2024 • 2min
EA - Take the 2024 EA Forum user survey by Sarah Cheng
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the 2024 EA Forum user survey, published by Sarah Cheng on August 1, 2024 on The Effective Altruism Forum.
One year ago, the EA Forum team ran a survey for EA Forum users. The results have been very helpful for us to clarify our strategy, understand our impact, and decide what to prioritize working on over the past 12 months.
From today through August 20[1], we're running an updated version of the survey for 2024. All questions are optional, as is a new section about other CEA programs.
We estimate the survey will take you 7-15 minutes to complete (less if you skip questions, more if you write particularly detailed answers). Your answers will be saved in your browser, so you can pause and return to it later.
We appreciate you taking the time to fill out the survey, and we read every response. Even if you do not use the Forum, your response is helpful for us to compare different populations (and you will see a significantly shorter survey).
All of us who work on the Forum do so because we want to have a positive impact, and it's important that we have a clear understanding of what (if any) work on the Forum fulfills that goal. We spend our time on other projects as well[2], and your responses will inform what our team works on in the next 12 months.
[1] By default, we plan to close the survey on Tuesday August 20, but we may decide to leave it open longer.
[2] Let us know if you would be interested in partnering with us!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 1, 2024 • 1h 55min
LW - AI #75: Math is Easier by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #75: Math is Easier, published by Zvi on August 1, 2024 on LessWrong.
Google DeepMind got a silver medal at the IMO, only one point short of the gold. That's really exciting.
We continuously have people saying 'AI progress is stalling, it's all a bubble' and things like that, and I always find it remarkable how little curiosity or patience such people are willing to exhibit. Meanwhile GPT-4o-Mini seems excellent, OpenAI is launching proper search integration, by far the best open weights model got released, we got an improved MidJourney 6.1, and that's all in the last two weeks.
Whether or not GPT-5-level models get here in 2024, and whether or not it arrives on a given schedule, make no mistake. It's happening.
This week also had a lot of discourse and events around SB 1047 that I failed to avoid, resulting in not one but four sections devoted to it.
Dan Hendrycks was baselessly attacked - by billionaires with massive conflicts of interest that they admit are driving their actions - as having a conflict of interest because he had advisor shares in an evals startup rather than having earned the millions he could have easily earned building AI capabilities. So Dan gave up those advisor shares, for no compensation, to remove all doubt. Timothy Lee gave us what is clearly the best skeptical take on SB 1047 so far.
And Anthropic sent a 'support if amended' letter on the bill, with some curious details. All this comes as we are on the cusp of the final opportunity for the bill to be revised - so my guess is I will soon have a post going over whatever the final version turns out to be and presenting closing arguments.
Meanwhile Sam Altman tried to reframe broken promises while writing a jingoistic op-ed in the Washington Post, but says he is going to do some good things too. And much more.
Oh, and also AB 3211 unanimously passed the California assembly, and would effectively among other things ban all existing LLMs. I presume we're not crazy enough to let it pass, but I made a detailed analysis to help make sure of it.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. They're just not that into you.
4. Language Models Don't Offer Mundane Utility. Baba is you and deeply confused.
5. Math is Easier. Google DeepMind claims an IMO silver medal, mostly.
6. Llama Llama Any Good. The rankings are in as are a few use cases.
7. Search for the GPT. Alpha tests begin of SearchGPT, which is what you think it is.
8. Tech Company Will Use Your Data to Train Its AIs. Unless you opt out. Again.
9. Fun With Image Generation. MidJourney 6.1 is available.
10. Deepfaketown and Botpocalypse Soon. Supply rises to match existing demand.
11. The Art of the Jailbreak. A YouTube video that (for now) jailbreaks GPT-4o-voice.
12. Janus on the 405. High weirdness continues behind the scenes.
13. They Took Our Jobs. If that is even possible.
14. Get Involved. Akrose has listings, OpenPhil has a RFP, US AISI is hiring.
15. Introducing. A friend in venture capital is a friend indeed.
16. In Other AI News. Projections of when it's incrementally happening.
17. Quiet Speculations. Reports of OpenAI's imminent demise, except, um, no.
18. The Quest for Sane Regulations. Nick Whitaker has some remarkably good ideas.
19. Death and/or Taxes. A little window into insane American anti-innovation policy.
20. SB 1047 (1). The ultimate answer to the baseless attacks on Dan Hendrycks.
21. SB 1047 (2). Timothy Lee analyzes current version of SB 1047, has concerns.
22. SB 1047 (3): Oh Anthropic. They wrote themselves an unexpected letter.
23. What Anthropic's Letter Actually Proposes. Number three may surprise you.
24. Open Weights Are Unsafe And Nothing Can Fix This. Who wants to ban what?
25. The Week in Audio. Vitalik Buterin, Kelsey Piper, Patrick McKenzie.
26. Rheto...

Aug 1, 2024 • 17min
AF - The need for multi-agent experiments by Martín Soto
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The need for multi-agent experiments, published by Martín Soto on August 1, 2024 on The AI Alignment Forum.
TL;DR: Let's start iterating on experiments that approximate real, society-scale multi-AI deployment
Epistemic status: These ideas seem like my most prominent delta with the average AI Safety researcher, have stood the test of time, and are shared by others I intellectually respect. Please attack them fiercely!
Multi-polar risks
Some authors have already written about multi-polar AI failure. I especially like how Andrew Critch has tried to sketch concrete stories for it.
But, without even considering concrete stories yet, I think there's a good a priori argument in favor of worrying about multi-polar failures:
We care about the future of society. Certain AI agents will be introduced, and we think they could reduce our control over the trajectory of this system. The way in which this could happen can be divided into two steps:
1. The agents (with certain properties) are introduced in certain positions
2. Given the agents' properties and positions, they interact with each other and the rest of the system, possibly leading to big changes
So in order to better control the outcome, it seems worth it to try to understand and manage both steps, instead of limiting ourselves to (1), which is what the alignment community has traditionally done.
Of course, this is just one, very abstract argument, which we should update based on observations and more detailed technical understanding. But it makes me think the burden of proof is on multi-agent skeptics to explain why (2) is not important.
Many have taken on that burden. The most common reason to dismiss the importance of (2) is expecting a centralized intelligence explosion, a fast and unipolar software takeoff, like Yudkowsky's FOOM. Proponents usually argue that the intelligences we are likely to train will, after meeting a sharp threshold of capabilities, quickly bootstrap themselves to capabilities drastically above those of any other existing agent or ensemble of agents.
And that these capabilities will allow them to gain near-complete strategic advantage and control over the future. In this scenario, all the action is happening inside a single agent, and so you should only care about shaping its properties (or delaying its existence).
I tentatively expect more of a decentralized hardware singularity[1] than centralized software FOOM. But there's a weaker claim in which I'm more confident: we shouldn't right now be near-certain of a centralized FOOM.[2] I expect this to be the main crux with many multi-agent skeptics, and won't argue for it here (but rather in an upcoming post).
Even given a decentralized singularity, one can argue that the most leveraged way for us to improve multi-agent interactions is by ensuring that individual agents possess certain properties (like honesty or transparency), or that at least we have enough technical expertise to shape them on the go. I completely agree that this is the natural first thing to look at.
But I think focusing on multi-agent interactions directly is a strong second, and a lot of marginal value might lie there given how neglected they've been until now (more
below). I do think many multi-agent interventions will require certain amounts of single-agent alignment technology. This will of course be a crux with alignment pessimists.
Finally, for this work to be counterfactually useful it's also required that AI itself (in decision-maker or researcher positions) won't iteratively solve the problem by default. Here, I do think we have some reasons to expect (65%) that intelligent enough AIs aligned with their principals don't automatically solve catastrophic conflict. In those worlds, early interventions can make a big difference setting the right incentives for future agent...

Aug 1, 2024 • 31min
EA - #194 - Defensive acceleration and how to regulate AI when you fear government (Vitalik Buterin on the 80,000 Hours Podcast) by 80000 Hours
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #194 - Defensive acceleration and how to regulate AI when you fear government (Vitalik Buterin on the 80,000 Hours Podcast), published by 80000 Hours on August 1, 2024 on The Effective Altruism Forum.
We just published an interview: Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government. Listen on Spotify, watch on Youtube or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
… if you're a power that is an island and that goes by sea, then you're more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance.
If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes.
Vitalik Buterin
Can 'effective accelerationists' and AI 'doomers' agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay "My techno-optimism," which both camps agreed was basically reasonable.
Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.
Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.
But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.
The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future - speeding up technology to get its benefits, but preferentially developing 'defensive' technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.
What sorts of things is he talking about? In the area of disease prevention it's most easy to see: disinfecting indoor air, rapid-turnaround vaccine platforms, and nasal spray vaccines that prevent disease transmission all make us safer against pandemics without generating any apparent new threats of their own. (And they might eliminate the common cold to boot!)
Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply here. You don't need a business idea yet - just the hustle to start a technology company. But you'll need to act fast and apply by August 2, 2024.
Vitalik explains how he mentally breaks down defensive technologies into four broad categories:
Defence against big physical things like tanks.
Defence against small physical things like diseases.
Defence against unambiguously hostile information like fraud.
Defence against ambiguously hostile information like possible misinformation.
The philosophy of defensive acceleration has a strong basis in history. Mountain or island countries that are hard to invade, like Switzerland or Britain, tend to have more individual freedom and higher quality of life than the Mongolian steppes - where "your entire mindset is around kill or be killed, conquer or be conquered": a mindset Vitalik calls "the breeding ground for dystopian governance."
Defensive acceleration arguably goes back to ancient China, where the Mohists focused ...

Aug 1, 2024 • 5min
LW - Ambiguity in Prediction Market Resolution is Still Harmful by aphyer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ambiguity in Prediction Market Resolution is Still Harmful, published by aphyer on August 1, 2024 on LessWrong.
A brief followup to this post in light of recent events.
Free and Fair Elections
Polymarket has an open market 'Venezuela Presidential Election Winner'. Its description is as follows:
The 2024 Venezuela presidential election is scheduled to take place on July 28, 2024.
This market will resolve to "Yes" if Nicolás Maduro wins. Otherwise, this market will resolve to "No."
This market includes any potential second round. If the result of this election isn't known by December 31, 2024, 11:59 PM ET, the market will resolve to "No."
In the case of a two-round election, if this candidate is eliminated before the second round this market may immediately resolve to "No".
The primary resolution source for this market will be official information from Venezuela, however a consensus of credible reporting will also suffice.
Can you see any ambiguity in this specification? Any way in which, in a nation whose 2018 elections "[did] not in any way fulfill minimal conditions for free and credible elections" according to the UN, there could end up being ambiguity in how this market should resolve?
If so, I have bad news and worse news.
The bad news is that Polymarket could not, and so this market is currently in a disputed-outcome state after Maduro's government announced an almost-certainly-faked election win, while the opposition announced somewhat-more-credible figures in which they won.
The worse news is that $3,546,397 has been bet on that market as of this writing.
How should that market resolve? I am not certain! Commenters on the market have...ah...strong views in both directions. And the description of the market does not make it entirely clear. If I were in charge of resolving this market I would probably resolve it to Maduro, just off the phrase about the 'primary resolution source'.
However, I don't think that's unambiguous, and I would feel much happier if the market had begun with a wording that made it clear how a scenario like this would be treated.
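One way to see what clearer wording would look like: the rule could state an explicit precedence between sources, so a disputed case like this one has a predetermined answer. A hypothetical sketch (this is not Polymarket's actual rule, and the precedence chosen here is only an example):

# Hypothetical resolution rule with an explicit precedence between sources.
from enum import Enum
from typing import Optional

class Outcome(Enum):
    YES = "Yes"
    NO = "No"

def resolve(official_says_maduro_won: bool,
            credible_consensus_says_maduro_won: Optional[bool]) -> Outcome:
    # Example precedence: an independent consensus of credible reporting,
    # when one exists, overrides the official source; otherwise defer to it.
    if credible_consensus_says_maduro_won is not None:
        return Outcome.YES if credible_consensus_says_maduro_won else Outcome.NO
    return Outcome.YES if official_says_maduro_won else Outcome.NO

# The disputed case: the official source says yes, credible reporting says no.
print(resolve(official_says_maduro_won=True, credible_consensus_says_maduro_won=False))  # Outcome.NO

With the opposite precedence written down, the market would just as unambiguously resolve to Maduro; the point is that either choice, stated up front, avoids a $3.5M dispute.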
How did other markets do?
I've given Manifold a hard time on similar issues in the past, but they actually did a lot better here. There is a 'Who will win Venezuela's 2024 presidential election' market, but it's clear that it "Resolves to the person the CNE declares as winner of the 2024 presidential elections in Venezuela" (which would be Maduro). There are a variety of "Who will be the president of Venezuela on [DATE]" markets, which have the potential to be ambiguous but at least should be better.
Metaculus did (in my opinion) a bit better than Polymarket but worse than Manifold on the wording, with a market that resolves "based on the official results released by the National Electoral Council of Venezuela or other credible sources," a description which, ah, seems to assume something about the credibility of the CNE. Nevertheless, they've resolved it to Maduro (I think correctly given that wording).
On the other hand, neither of these markets had $3.5M bet on them. So.
What does this mean for prediction markets?
This is really nowhere near as bad as this can get:
Venezuelan elections are not all that important to the world (sorry, Venezuelans), and I don't think they get all that much interest compared to other elections, or other events in general. (Polymarket has $3.5M on the Venezuelan election. It has $459M on the US election, $68M on the US Democratic VP nominee, and both $2.4M on 'most medals in the Paris Olympics' and $2.2M on 'most gold medals in the Paris Olympics').
Venezuela's corruption is well-known. I don't think anyone seriously believes Maduro legitimately won the election. I don't think it was hard to realize in advance that something like this was a credible outcome. There is really very littl...


