

The Nonlinear Library: LessWrong
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Aug 8, 2024 • 4min
LW - Leaving MIRI, Seeking Funding by abramdemski
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaving MIRI, Seeking Funding, published by abramdemski on August 8, 2024 on LessWrong.
This is slightly old news at this point, but: as part of MIRI's recent strategy pivot, they've eliminated the Agent Foundations research team. I've been out of a job for a little over a month now. Much of my research time in the first half of the year was eaten up by engaging with the decision process that resulted in this, and later, applying to grants and looking for jobs.
I haven't secured funding yet, but for my own sanity & happiness, I am (mostly) taking a break from worrying about that, and getting back to thinking about the most important things.
However, in an effort to try the obvious, I have set up a Patreon where you can fund my work directly. I don't expect it to become my main source of income, but if it does, that could be a pretty good scenario for me; it would be much nicer to get money directly from a bunch of people who think my work is good and important, as opposed to having to justify my work regularly in grant applications.
What I'm (probably) Doing Going Forward
I've been told by several people within MIRI and outside of MIRI that it seems better for me to do roughly what I've been doing, rather than pivot to something else. As such, I mainly expect to continue doing Agent Foundations research.
I think of my main research program as the Tiling Agents program. You can think of this as the question of when agents will preserve certain desirable properties (such as safety-relevant properties) when given the opportunity to self-modify. Another way to think about it is the slightly broader question: when can one intelligence trust another? The bottleneck for avoiding harmful self-modifications is self-trust; so getting tiling results is mainly a matter of finding conditions for trust.
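As a rough sketch of why self-trust is the bottleneck, here is the standard Löbian obstacle (a gloss for illustration, not a formalism taken from this research program):

```latex
% Sketch: agent A reasons in a theory T and takes an action a only if it can
% prove the action safe, i.e. only if  T \vdash \mathrm{Safe}(a).
% For A to approve a successor A' that reasons in the same theory T, it seems
% to need a blanket self-trust schema, one instance per candidate action a:
\[
  T \vdash \Box_T \ulcorner \mathrm{Safe}(a) \urcorner \;\rightarrow\; \mathrm{Safe}(a)
\]
% But by Löb's theorem, T proves \Box_T(\varphi) \rightarrow \varphi only when
% T already proves \varphi, so this schema would force T to prove every action
% safe outright. Tiling results therefore look for weaker trust conditions
% that avoid this collapse.
```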
The search for tiling results has two main motivations:
AI-AI tiling, for the purpose of finding conditions under which AI systems will want to preserve safety-relevant properties.
Human-AI tiling, for the purpose of understanding when we can justifiably trust AI systems.
While I see this as the biggest priority, I also expect to continue a broader project of deconfusion. The bottleneck to progress in AI safety continues to be our confusion about many of the relevant concepts, such as human values.
I'm also still interested in doing some work on accelerating AI safety research using modern AI.
Thoughts on Public vs Private Research
Some work that is worth doing should be done in a non-public, or even highly secretive, way.[1] However, my experience at MIRI has given me a somewhat burned-out feeling about doing highly secretive work. It is hard to see how secretive work can have a positive impact on the future (although the story for public work is also fraught). At MIRI, there was always the idea that if we came up with something sufficiently good, something would happen... although what exactly would happen was unclear, at least to me.
Secretive research also lacks feedback loops that public research has. My impression is that this slows down the research significantly (contrary to some views at MIRI).
In any case, I personally hope to make my research more open and accessible going forward, although this may depend on my future employer. This means writing more on LessWrong and the Alignment Forum, and perhaps writing academic papers.
As part of this, I hope to hold more of my research video calls as publicly-accessible discussions. I've been experimenting with this a little bit and I feel it has been going well so far.
[1] Roughly, I mean dangerous AI capabilities work, although the "capabilities vs safety" dichotomy is somewhat fraught.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 8, 2024 • 1h 17min
LW - AI #76: Six Short Stories About OpenAI by Zvi
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #76: Six Short Stories About OpenAI, published by Zvi on August 8, 2024 on LessWrong.
If you're looking for audio of my posts, you're in luck. Thanks to multiple volunteers you have two options.
1. Option one is Askwho, who uses a Substack. You can get an ElevenLabs-quality production, with a voice that makes me smile. For Apple Podcasts that means you can add them here, Spotify here, Pocket Casts here, RSS here.
2. Alternatively, for a more traditional AI treatment in podcast form, you can listen via Apple Podcasts, Spotify, Pocket Casts, and RSS.
These should be permanent links so you can incorporate those into 'wherever you get your podcasts.' I use Castbox myself, it works but it's not special.
If you're looking forward to next week's AI #77, I am going on a two-part trip this week. First I'll be going to Steamboat in Colorado to give a talk, then I'll be swinging by Washington, DC on Wednesday, although outside of that morning my time there will be limited. My goal is still to get #77 released before Shabbat dinner, we'll see if that works. Some topics may of course get pushed a bit.
It's crazy how many of this week's developments are from OpenAI. You've got their voice mode alpha, JSON formatting, answering the letter from several senators, sitting on watermarking for a year, endorsement of three bills before Congress and also them losing a cofounder to Anthropic and potentially another one via sabbatical.
Also Google found to be a monopolist, we have the prompts for Apple Intelligence and other neat stuff like that.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Surveys without the pesky humans.
4. Language Models Don't Offer Mundane Utility. Ask a silly question.
5. Activate Voice Mode. When I know more, dear readers, so will you.
6. Apple Intelligence. We have its system prompts. They're highly normal.
7. Antitrust Antitrust. Google found to be an illegal monopolist.
8. Copyright Confrontation. Nvidia takes notes on scraping YouTube videos.
9. Fun With Image Generation. The days of Verify Me seem numbered.
10. Deepfaketown and Botpocalypse Soon. OpenAI built a watermarking system.
11. They Took Our Jobs. We have met the enemy, and he is us. For now.
12. Chipping Up. If you want a chip export ban, you have to enforce it.
13. Get Involved. Safeguard AI.
14. Introducing. JSONs, METR, Gemma, Rendernet, Thrive.
15. In Other AI News. Google more or less buys out Character.ai.
16. Quiet Speculations. Llama-4 only ten times more expensive than Llama-3?
17. The Quest for Sane Regulations. More on SB 1047 but nothing new yet.
18. That's Not a Good Idea. S. 2770 on deepfakes, and the EU AI Act.
19. The Week in Audio. They keep getting longer.
20. Exact Words. Three bills endorsed by OpenAI. We figure out why.
21. Openly Evil AI. OpenAI replies to the questions from Senators.
22. Goodbye to OpenAI. One cofounder leaves, another takes a break.
23. Rhetorical Innovation. Guardian really will print anything.
24. Open Weights Are Unsafe and Nothing Can Fix This. Possible fix?
25. Aligning a Smarter Than Human Intelligence is Difficult. What do we want?
26. People Are Worried About AI Killing Everyone. Janus tried to warn us.
27. Other People Are Not As Worried About AI Killing Everyone. Disbelief.
28. The Lighter Side. So much to draw upon these days.
Language Models Offer Mundane Utility
Predict the results of social science survey experiments, with (r = 0.85, adj r = 0.91) across 70 studies, and (r = 0.90, adj r = 0.94) for the unpublished studies. If these weren't surveys I would be highly suspicious, because this would imply the results could reliably replicate at all. If it's only surveys, sure, I suppose surveys should replicate.
This suggests that we mostly do not actually need the surveys, we can get close (...

Aug 8, 2024 • 16min
LW - [LDSL#0] Some epistemological conundrums by tailcalled
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [LDSL#0] Some epistemological conundrums, published by tailcalled on August 8, 2024 on LessWrong.
This post is also available on my Substack.
When you deal with statistical science, causal inference, measurement, philosophy, rationalism, discourse, and similar topics, several different questions pop up, and I think I've discovered that there's a shared answer behind a lot of the questions I have been thinking about. In this post, I will briefly present the questions, and then in a followup post I will try to give my answer to them.
Why are people so insistent about outliers?
A common statistical method is to assume an outcome is due to a mixture of observed factors and unobserved factors, and then model how much of an effect the observed factors have, and attribute all remaining variation to unobserved factors. And then one makes claims about the effects of the observed factors.
But some people then pick an outlier and demand an explanation for that outlier, rather than just accepting the general statistical finding.
In fact, aren't outliers almost by definition anti-informative? No model is perfect, so there are always going to be cases we can't model. By insisting on explaining all those rare cases, we're basically throwing away the signal we can model.
A similar point applies to reading the news. Almost by definition, the news is about uncommon stuff like terrorist attacks, rather than common stuff like heart disease. Doesn't reading such things invert your perception, such that you end up focusing on exactly the least relevant things?
Why isn't factor analysis considered the main research tool?
Typically if you have a ton of variables, you can perform a factor analysis, which identifies a small set of latent factors that explain a huge chunk of the variation across those variables. If you are used to performing factor analysis, this feels like a great way to get an overview of the subject matter. After all, what could be better than knowing the main dimensions of variation?
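As a toy illustration of the procedure described above (a sketch using scikit-learn, not anything from the post):

```python
# Simulate 12 observed variables driven by 2 latent factors, then recover the
# factor structure with an off-the-shelf factor analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, n_vars, n_factors = 1000, 12, 2

latent = rng.normal(size=(n, n_factors))          # the "main dimensions of variation"
loadings = rng.normal(size=(n_factors, n_vars))   # how each factor shows up in each variable
X = latent @ loadings + 0.5 * rng.normal(size=(n, n_vars))

fa = FactorAnalysis(n_components=n_factors).fit(X)
print(fa.components_.round(2))   # estimated loadings: a compact summary of 12 variables
```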
Yet a lot of people think of factor analysis as being superficial and uninformative. Often people insist that it only yields aggregates rather than causes, and while that might seem plausible at first, once you dig into it enough, you will see that usually the factors identified are actually causal, so that can't be the real problem.
A related question is why people tend to talk in funky discrete ways when careful quantitative analysis generally finds everything to be continuous. Why do people want clusters more than they want factors? Especially since cluster models tend to be more fiddly and less robust.
Why do people want "the" cause?
There's a big gap between how people intuitively view causal inference (often searching for "the" cause of something) and how statistics views causal inference. The main frameworks for causal inference in statistics are Rubin's Potential Outcomes framework and Pearl's DAG approach, and both of these view causality as a function from inputs to outputs.
In these frameworks, causality is about functional input/output relationships, and there are many different notions of causal effects, not simply one canonical "cause" of something.
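A minimal sketch of that functional view (a toy example, not from the post): in a structural causal model, an intervention overrides an input and reruns the downstream functions, and "the effect of X on Y" is a property of that function rather than a single privileged cause.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_outcome(do_x=None, n=100_000):
    u = rng.normal(size=n)                              # unobserved background factors
    x = u + rng.normal(size=n) if do_x is None else np.full(n, float(do_x))
    y = 2.0 * x + u + rng.normal(size=n)                # Y is literally a function of X and U
    return y.mean()

# Average effect of setting X to 1 rather than 0: recovers the coefficient 2,
# with no reference to "the" cause of any particular outcome.
print(mean_outcome(do_x=1) - mean_outcome(do_x=0))
```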
Why are people dissatisfied with GWAS?
In genome-wide association searches, researchers use statistics to identify alleles that are associated with specific outcomes of interest (e.g. health, psychological characteristics, SES outcomes). They've been making consistent progress over time, finding tons of different genetic associations and gradually becoming able to explain more and more variance between people.
Yet GWAS is heavily criticized as "not causal". While there are certain biases that can occur, those biases are usually found to be much smaller than these critiques would suggest. So what gives?
What value does qualitative r...

Aug 8, 2024 • 14min
LW - It's time for a self-reproducing machine by Carl Feynman
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's time for a self-reproducing machine, published by Carl Feynman on August 8, 2024 on LessWrong.
I've wanted to build a self-reproducing machine since I was 17. It's forty-five years later, and it has finally become feasible. (I've done a few other things along the way.) I'm going to describe one such device, and speculate as to its larger implications. It's a pretty detailed design, which I had to come up with to convince myself that it is feasible. No doubt there are better designs than this.
The Autofac
Here's a top-level description of the device I'm thinking of. It's called an Autofac, which is what they were called in the earliest story about them. It looks like a little metal shed, about a meter cubed. It weighs about 50 kilograms. There's a little gnome-sized door on each end. It's full of robot arms and automated machine tools. It's connected to electricity and by WiFi to a data center somewhere.
It has a front door, where it accepts material, and a back door, where it outputs useful objects, and cans of neatly packaged waste. You can communicate with it, to tell it to make parts and assemble them into useful shapes. It can do all the metalworking operations available to a machinist with a good shop at their disposal. In return, it occasionally asks for help or clarification.
One particular thing it can be told to make is another one of itself. This is of course the case we're all interested in. Here's what that looks like. You feed a 60kg package of steel castings, electronics, and other parts, into the door at one end. It starts by building another shed, next to the other end. The two sheds are butted up next to each other, so the rain can't get in. Once it's enclosed, there is no visible progress for about a month, but it makes various metalworking noises.
Then it announces that it's done. The second shed is now another Autofac, and can be carried away to start the process elsewhere. There's also a can full of metal scrap and used lubricant, which has to be disposed of responsibly. This process can be repeated a number of times, at least seven, to produce more offspring. Eventually the original Autofac wears out, but by then it has hundreds of descendants.
The software
The key part of the Autofac, the part that kept it from being built before, is the AI that runs it. Present-day VLMs (vision-language models) are capable of performing short-deadline manual tasks like folding laundry or simple tool use. But they are deficient at arithmetic, long term planning and precisely controlling operations. Fortunately we already have software for these three purposes.
First, of course, we have calculators for doing arithmetic. LLMs can be taught to use these. In the real world, machinists constantly use calculators. The Autofac will be no different.
Second, there is project planning software that lets a human break down an engineering project into tasks and subtasks, and accommodate changes of plan as things go wrong. We can provide the data structures of this software, initially constructed by humans, as a resource for the AI to use. The AI only has to choose the next task, accomplish it or fail, and either remove it from the queue or add a new task to fix the problem.
There are thousands of tasks in the life of an Autofac; fortunately the AI doesn't need to remember them all. The project planning software keeps track of what has been done and what needs to be done.
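A minimal sketch of that division of labor (an illustration with made-up task names, not the Autofac's actual software): the planner holds the queue, and the AI only has to attempt the next task and, on failure, enqueue a remedial subtask.

```python
from collections import deque

tasks = deque(["cast base plate", "machine spindle", "assemble arm"])  # hypothetical tasks
done = []
diagnosed = set()

def attempt(task: str) -> bool:
    """Stand-in for the VLM + machine-tool pipeline actually doing the work."""
    # Pretend "machine spindle" fails until a diagnosis task has been run.
    return task != "machine spindle" or "machine spindle" in diagnosed

while tasks:
    task = tasks.popleft()
    if task.startswith("diagnose:"):
        diagnosed.add(task.removeprefix("diagnose:"))
        done.append(task)
    elif attempt(task):
        done.append(task)
    else:
        # On failure, put the task back and schedule a remedial subtask first.
        tasks.appendleft(task)
        tasks.appendleft(f"diagnose:{task}")

print(done)  # the planner, not the AI, remembers what happened and in what order
```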
Third, there are programs that go from the design of a part to a sequence of machine tool movements that will make that part, and then control the machine tool motors to do the job. These are called Computer Aided Manufacturing, or CAM. Using CAM relieves the AI of the lowest-level responsibilities of controlling motor positions and monitoring position sensors. This software doesn't do everything, of...

Aug 6, 2024 • 35min
LW - WTH is Cerebrolysin, actually? by gsfitzgerald
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: WTH is Cerebrolysin, actually?, published by gsfitzgerald on August 6, 2024 on LessWrong.
[This article was originally published on Dan Elton's blog, More is Different.]
Cerebrolysin is an unregulated medical product made from enzymatically digested pig brain tissue. Hundreds of scientific papers claim that it boosts BDNF, stimulates neurogenesis, and can help treat numerous neural diseases. It is widely used by doctors around the world, especially in Russia and China.
A recent video of Bryan Johnson injecting Cerebrolysin has over a million views on X and 570,000 views on YouTube. The drug, which is advertised as a "peptide combination", can be purchased easily online and appears to be growing in popularity among biohackers, rationalists, and transhumanists. The subreddit r/Cerebrolysin has 3,100 members.
TL;DR
Unfortunately, our investigation indicates that the benefits attributed to Cerebrolysin are biologically implausible and unlikely to be real. Here's what we found:
Cerebrolysin has been used clinically since the 1950s, and has escaped regulatory oversight due to some combination of being a "natural product" and being grandfathered in.
Basic information that would be required for any FDA approved drug is missing, including information on the drug's synthesis, composition, and pharmacokinetics.
Ever Pharma's claim that it contains neurotrophic peptides in therapeutic quantities is likely false. HPLC and other evidence show Cerebrolysin is composed of amino acids, phosphates, and salt, along with some random protein fragments.
Ever Pharma's marketing materials for Cerebrolysin contain numerous scientific errors.
Many scientific papers on Cerebrolysin appear to have ties to its manufacturer, Ever Pharma, and sometimes those ties are not reported.
Ever Pharma's explanation of how the putative peptides in Cerebrolysin cross the blood-brain barrier does not make sense and flies in the face of scientific research which shows that most peptides do not cross the blood-brain barrier (including neurotrophic peptides like BDNF, CDNF, and GDNF).
Since neurotrophic factors are the proposed mechanism for Cerebrolysin's action, it is reasonable to doubt claims of Cerebrolysin's efficacy. Most scientific research is false. It may have a mild therapeutic effect in some contexts, but the research on this is shaky. It is likely safe to inject in small quantities, but is almost certainly a waste of money for anyone looking to improve their cognitive function.
Introduction
One of us (Dan) was recently exposed to Cerebrolysin at the Manifest conference in Berkeley, where a speaker spoke very highly about it and even passed around ampoules of it for the audience to inspect.
Dan then searched for Cerebrolysin on X and found a video by Bryan Johnson from May 23 that shows him injecting Cerebrolysin. Johnson describes it as a "new longevity therapy" that "fosters neuronal growth and repair which may improve memory."
Dan sent the video to Greg Fitzgerald, who is a 6th year neuroscience Ph.D. student at SUNY Albany. Greg is well-versed on the use of neurotrophic peptides for treating CNS disorders and was immediately skeptical and surprised he had not heard of it before. After Greg researched it, he felt a professional responsibility to write up his findings. He sent his writeup to Dan, who then extensively edited and expanded it.
Our critique covers three major topics: (1) sketchy marketing practices, (2) shoddy evidence base, and (3) implausible biological claims. But first, it's interesting to understand the history of this strange substance.
The long history of Cerebrolysin
To our knowledge, the "secret history" of Cerebrolysin has not been illuminated anywhere to date.
Cerebrolysin was invented by the Austrian psychiatrist and neurologist Gerhart Harrer (1917 - 2011), who started usin...

Aug 6, 2024 • 48min
LW - Startup Roundup #2 by Zvi
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Startup Roundup #2, published by Zvi on August 6, 2024 on LessWrong.
Previously: Startup Roundup #1.
This is my periodic grab bag coverage of various issues surrounding startups, especially but not exclusively tech-and-VC style startups, that apply over the longer term.
I always want to emphasize up front that startups are good and you should do one.
Equity and skin in the game are where it is at. Building something people want is where it is at. This is true both for a startup that raises venture capital, and also creating an ordinary business. The expected value is all around off the charts.
That does not mean it is the best thing to do.
One must go in with eyes open to facts such as these:
1. It is hard.
2. There are many reasons it might not be for you.
3. There are also lots of other things also worth doing.
4. If you care largely about existential risk and lowering the probability of everyone dying from AI, a startup is not the obvious natural fit for that cause.
5. The ecosystem is in large part a hive of scum and villainy and horrible epistemics.
I warn of a lot of things. The bottom line still remains that if you are debating between a conventional approach of going to school or getting a regular job, versus starting a business? If it is at all close? I would start the business every time.
An Entrepreneur Immigration Program
This seems promising.
Deedy: HUGE Immigration News for International Entrepreneurs!!
If you own 10%+ of a US startup entity founded <5yrs ago with $264k+ of [qualified investments from qualified investors], you+spouses of up to 3 co-founders can come work in the US for 2.5yrs with renewal to 5yrs.
Startups globally can now come build in SF!
A "qualified investor" has to be a US citizen or PR who has made $600-650k in prior investments with 2+ startups creating 5+ jobs or generating $530k revenue growing 20% YoY.
If you don't meet the funding requirement, don't lose hope. You CAN provide alternate evidence.
For the renewal to 5yrs, you need to maintain 5% ownership, create 5+ jobs and reach $530k+ revenue growing 20% YoY or $530k+ in investment, although alternative criteria can be used.
While on the International Entrepreneur Rule (IER) program, I believe an entrepreneur can just apply directly for an O-1 to have a more renewable work permit not tied to their startup and/or an EB-1A to directly go to a green card.
Here is the official rule for it. Notice that once someone is a 'qualified investor' in this sense, their investments become a lot more valuable to such companies. So there is a lot of incentive to get a win-win deal.
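As a toy restatement of the initial criteria summarized in the quoted thread (a sketch, not the official rule text):

```python
def initially_eligible(ownership_pct: float, startup_age_years: float,
                       qualified_investment_usd: float) -> bool:
    """Rough gloss of the initial IER bar as described above."""
    return (ownership_pct >= 10
            and startup_age_years < 5
            and qualified_investment_usd >= 264_000)

print(initially_eligible(ownership_pct=12, startup_age_years=3,
                         qualified_investment_usd=300_000))  # True
```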
Times are Tough Outside of AI
If you are in AI, it's time to build. Everyone wants to invest in you.
If you are not, times are tough for startups. Ian Rountree lays out exactly how tough.
Matt Turck: Brace for it: hearing from big companies' corp dev departments that they're flooded with requests from startups looking for a home. In some categories, pretty much all companies are/would be up for sale. This too shall pass but this long-predicted tough moment seems to be upon us.
Ian Rountree (late 2023): I've been saving my first mega-tweet for this! I'll tell you what the next 3-12 months will probably look like for startups/venture…
But 1st let's rewind to Spring 2022. As soon as rates spiked we had a period where private markets went flat for as long as companies had runway since companies don't have to price their shares unless they like the price OR need the money. (Whereas liquid assets repriced pretty much immediately.)
Very few startups outside of AI - and some in climate and defense - liked the prices they were being offered so most didn't raise capital or raised extensions from insiders incentivized to hold prices steady.
Now most startups are running out of the money they raised in 2020-2021 + those extension...

Aug 6, 2024 • 2min
LW - John Schulman leaves OpenAI for Anthropic by Sodium
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: John Schulman leaves OpenAI for Anthropic, published by Sodium on August 6, 2024 on LessWrong.
Schulman writes:
I shared the following note with my OpenAI colleagues today:
I've made the difficult decision to leave OpenAI. This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work. I've decided to pursue this goal at Anthropic, where I believe I can gain new perspectives and do research alongside people deeply engaged with the topics I'm most interested in. To be clear, I'm not leaving due to lack of support for alignment research at OpenAI.
On the contrary, company leaders have been very committed to investing in this area. My decision is a personal one, based on how I want to focus my efforts in the next phase of my career.
(statement continues on X, Altman responds here)
TechCrunch notes that only three of the eleven original founders of OpenAI remain at the company.
Additionally, The Information reports:
Greg Brockman, OpenAI's president and one of 11 cofounders of the artificial intelligence firm, is taking an extended leave of absence.
(I figured that there should be at least one post about this on LW where people can add information as more comes in, saw that no one has made one yet, and wrote this one up)
Update 1: Greg Brockman posts on X:
I'm taking a sabbatical through end of year. First time to relax since co-founding OpenAI 9 years ago. The mission is far from complete; we still have a safe AGI to build.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 6, 2024 • 8min
LW - We're not as 3-Dimensional as We Think by silentbob
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're not as 3-Dimensional as We Think, published by silentbob on August 6, 2024 on LessWrong.
While thinking about high-dimensional spaces and their less intuitive properties, I came to the realization that even three spatial dimensions possess the potential to overwhelm our basic human intuitions. This post is an exploration of the gap between actual 3D space and our human capabilities to fathom it. I come to the conclusion that this gap is actually quite large, and we, or at least most of us, are not well equipped to perceive or even imagine "true 3D".
What do I mean by "true 3D"? The most straightforward example would be some ℝ³ → ℝ function, such as the density of a cloud, or the full (physical) inner structure of a human brain (which too would be an ℝ³ → ℝ^whatever function). The closest example I've found is this visualization of an ℝ³ → ℝ function (jump to 1:14):
(It is of course a bit ironic to watch a video of that 3D display on a 2D screen, but I think it gets the point across.)
Vision
It is true that having two eyes allows us to have depth perception. It is not true that having two eyes allows us to "see in 3D". If we ignore colors for simplicity and assume we all saw only in grayscale, then seeing with one eye is something like ℝ² → ℝ as far as our internal information processing is concerned - we get one grayscale value for each point on the perspective projection from the 3D physical world onto our 2D retina.
Seeing with two eyes then is ℝ² → ℝ² (same as before, but we get one extra piece of information for each point of the projection, namely depth[1]), but it's definitely not ℝ³ → (...). So the information we receive still has only two spatial dimensions, just with a bit more information attached.
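A toy numeric illustration of that gap (a sketch, not from the post): a full ℝ³ → ℝ field carries on the order of N³ numbers, while a per-pixel brightness-plus-depth signal carries only on the order of N².

```python
import numpy as np

def density(x, y, z):
    """A full R^3 -> R field, e.g. the density of a cloud."""
    return np.exp(-(x**2 + y**2 + z**2))

grid = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
field = density(X, Y, Z)                         # shape (64, 64, 64): the "true 3D" object

# What vision delivers is more like a projection: one brightness value and one
# depth value per pixel of the 2D retina.
brightness = field.sum(axis=2)                   # crude 2D "image" of the cloud
depth = np.argmax(field > 0.5, axis=2)           # crude per-pixel depth estimate

print(field.size, brightness.size + depth.size)  # 262144 numbers vs 8192 numbers
```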
Also note that people who lost an eye, or for other reasons don't have depth perception, are not all that limited in their capabilities. In fact, other people may barely notice there's anything unusual about them. The difference between "seeing in 2D" and "seeing with depth perception" is much smaller than the difference to not seeing at all, which arguably hints at the fact that seeing with depth perception is suspiciously close to pure 2D vision.
Screens
For decades now, humans have surrounded themselves with screens, whether it's TVs, computer screens, phones or any other kind of display. The vast majority of screens are two-dimensional. You may have noticed that, for most matters and purposes, this is not much of a limitation. Video games work well on 2D screens. Movies work well on 2D screens. Math lectures work well on 2D screens. Even renderings of 3D objects, such as cubes and spheres and cylinders and such, work well in 2D.
This is because 99.9% of the things we as humans interact with don't actually require the true power of three dimensions.
There are some exceptions, such as brain scans - what is done there usually is to use time as a substitute for the third dimension, and show an animated slice through the brain. In principle it may be better to view brain scans with some ~holographic 3D display, but even then, the fact remains that our vision apparatus is not capable of perceiving 3D in its entirety, but only the projection onto our retinas, which even makes true 3D displays less useful than they theoretically could be.
Video Games
The vast majority of 3D video games are based on polygons: 2D surfaces placed in 3D space. Practically every 3D object in almost any video game is hollow. They're just an elaborate surface folded and oriented in space. You can see this when the camera clips into some rock, or car, or even player character: they're nothing but a hull. As 3D as the game looks, it's all a bit of an illusion, as the real geometry in video games is almost completely two-dimensional.
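A toy illustration of that point (a sketch, not from the post): a typical game-engine mesh is just a list of vertices and triangular faces, with no data about the interior at all.

```python
# Eight corners of a unit cube, and twelve triangles (two per side) joining them.
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom face
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top face
]
cube_faces = [
    (0, 1, 2), (0, 2, 3), (4, 5, 6), (4, 6, 7),   # bottom, top
    (0, 1, 5), (0, 5, 4), (2, 3, 7), (2, 7, 6),   # front, back
    (1, 2, 6), (1, 6, 5), (0, 3, 7), (0, 7, 4),   # right, left
]
# Nothing here says anything about the cube's inside: the geometry is a
# two-dimensional surface folded through 3D space.
```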
Here's one example of camera clipping:
The only common exception I'm aware o...

Aug 6, 2024 • 3min
LW - How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage by orthonormal
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage, published by orthonormal on August 6, 2024 on LessWrong.
This is a story about a flawed Manifold market, about how easy it is to buy significant objective-sounding publicity for your preferred politics, and about why I've downgraded my respect for all but the largest prediction markets.
I've had a Manifold account for a while, but I didn't use it much until I saw and became irked by this market on the conditional probabilities of a Harris victory, split by VP pick.
The market quickly got cited by rat-adjacent folks on Twitter like Matt Yglesias, because the question it purports to answer is enormously important.
But as you can infer from the above, it has a major issue that makes it nigh-useless: for a candidate whom you know won't be chosen, there is literally no way to come out ahead on mana (Manifold keeps its share of the fees when a market resolves N/A), so all but a very few markets are pure popularity contests, dominated by those who don't mind locking up their mana for a month for a guaranteed 1% loss.
Even for the candidates with a shot of being chosen, the incentives in a conditional market are weaker than those in a non-conditional market because the fees are lost when the market resolves N/A. (Nate Silver wrote a good analysis of why it would be implausible for e.g. Shapiro vs Walz to affect Harris' odds by 13 percentage points.) So the sharps would have no reason to get involved if even one of the contenders has numbers that are off by a couple points from a sane prior.
You'll notice that I bet in this market. Out of epistemic cooperativeness as well as annoyance, I spent small amounts of mana on the markets where it was cheap to reset implausible odds closer to Harris' overall odds of victory. (After larger amounts were poured into some of those markets, I let them ride because taking them out would double the fees I have to pay vs waiting for the N/A.)
A while ago, someone had dumped Gretchen Whitmer down to 38%, but nobody had put much mana into that market, so I spent 140 mana (which can be bought for 14-20 cents if you want to pay for extra play money) to reset her to Harris' overall odds (44%). When the market resolves N/A, I'll get all but around 3 mana (less than half a penny) back.
And that half-penny bought Whitmer four paragraphs in the Manifold Politics Substack, citing the market as evidence that she should be considered a viable candidate.
(At the time of publication, it was still my 140 mana propping her number up; if I sold them, she'd be back under 40%.)
Is this the biggest deal in the world? No. But wow, that's a cheap price for objective-sounding publicity viewed by some major columnists (including some who've heard that prediction markets are good, but aren't aware of caveats). And it underscores for me that conditional prediction markets should almost never be taken seriously, and indicates that only the most liquid markets in general should ever be cited.
The main effect on me, though, is that I've been addicted to Manifold since then, not as an oracle, but as a game. The sheer amount of silly arbitrage (aside from veepstakes, there's a liquid market on whether Trump will be president on 1/1/26 that people had forgotten about, and it was 10 points higher than current markets on whether Trump will win the election) has kept the mana flowing and has kept me unserious about the prices.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Aug 5, 2024 • 54min
LW - Value fragility and AI takeover by Joe Carlsmith
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Value fragility and AI takeover, published by Joe Carlsmith on August 5, 2024 on LessWrong.
1. Introduction
"Value fragility," as I'll construe it, is the claim that slightly-different value systems tend to lead in importantly-different directions when subject to extreme optimization. I think the idea of value fragility haunts the AI risk discourse in various ways - and in particular, that it informs a backdrop prior that adequately aligning a superintelligence requires an extremely precise and sophisticated kind of technical and ethical achievement.
That is, the thought goes: if you get a superintelligence's values even slightly wrong, you're screwed.
This post is a collection of loose and not-super-organized reflections on value fragility and its role in arguments for pessimism about AI risk. I start by trying to tease apart a number of different claims in the vicinity of value fragility. In particular:
I distinguish between questions about value fragility and questions about how different agents would converge on the same values given adequate reflection.
I examine whether "extreme" optimization is required for worries about value fragility to go through (I think it at least makes them notably stronger), and I reflect a bit on whether, even conditional on creating super-intelligence, we might be able to avoid a future driven by relevantly extreme optimization.
I highlight questions about whether multipolar scenarios alleviate concerns about value fragility, even if your exact values don't get any share of the power.
My sense is that people often have some intuition that multipolarity helps notably in this respect; but I don't yet see a very strong story about why. If readers have stories that they find persuasive in this respect, I'd be curious to hear.
I then turn to a discussion of a few different roles that value fragility, if true, could play in an argument for pessimism about AI risk. In particular, I distinguish between:
1. The value of what a superintelligence does after it takes over the world, assuming that it does so.
2. What sorts of incentives a superintelligence has to try to take over the world, in a context where it can do so extremely easily via a very wide variety of methods.
3. What sorts of incentives a superintelligence has to try to take over the world, in a context where it can't do so extremely easily via a very wide variety of methods.
Yudkowsky's original discussion of value fragility is most directly relevant to (1). And I think it's actually notably irrelevant to (2). In particular, I think the basic argument for expecting AI takeover in a (2)-like scenario doesn't require value fragility to go through - and indeed, some conceptions of "AI alignment" seem to expect a "benign" form of AI takeover even if we get a superintelligence's values exactly right.
Here, though, I'm especially interested in understanding (3)-like scenarios - that is, the sorts of incentives that apply to a superintelligence in a case where it can't just take over the world very easily via a wide variety of methods. Here, in particular, I highlight the role that value fragility can play in informing the AI's expectations with respect to the difference in value between worlds where it does not take over, and worlds where it does.
In this context, that is, value fragility can matter to how the AI feels about a world where humans do retain control - rather than solely to how humans feel about a world where the AI takes over.
I close with a brief discussion of how commitments to various forms of "niceness" and intentional power-sharing, if made sufficiently credible, could help diffuse the sorts of adversarial dynamics that value fragility can create.
2. Variants of value fragility
What is value fragility? Let's start with some high-level definitions and clarifications.
...