

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

May 31, 2024 • 7min
AF - We might be dropping the ball on Autonomous Replication and Adaptation. by Charbel-Raphael Segerie
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We might be dropping the ball on Autonomous Replication and Adaptation., published by Charbel-Raphael Segerie on May 31, 2024 on The AI Alignment Forum.
Here is a little Q&A
Can you explain your position quickly?
I think autonomous replication and adaptation in the wild is under-discussed as an AI threat model. And this makes me sad, because this is one of the main reasons I'm worried. I think one of the main proposals from AI safety people should be to focus first on creating a nonproliferation treaty. Without this treaty, I think we are screwed. The more I think about it, the more I think we are approaching a point of no return.
It seems to me that open source is a severe threat and that nobody is really on the ball. Before such powerful AIs can self-replicate and adapt, AI development will look very positive overall and will be difficult to stop; but once AI is able to adapt and evolve autonomously, it is too late, because natural selection favors AI over humans.
What is ARA?
Autonomous Replication and Adaptation. Let's recap this quickly. Today, generative AI functions as a tool: you ask a question and the tool answers. Question, answer. It's simple. However, we are heading towards a new era of AI, one with autonomous AI. Instead of asking a question, you give it a goal, and the AI performs a series of actions to achieve that goal, which is much more powerful.
Frameworks like AutoGPT, or ChatGPT when it navigates the internet, already show what these agents might look like.
Agency is much more powerful and dangerous than AI tools. Thus conceived, an AI would be able to replicate autonomously, copying itself from one computer to another, like a particularly intelligent virus. To replicate on a new computer, it must navigate the internet, create a new account on AWS, pay for a virtual machine, install its weights on that machine, and start the replication process again.
According to METR, the organization that audited OpenAI, a dozen tasks indicate ARA capabilities. GPT-4 plus basic scaffolding was capable of performing a few of these tasks, though not robustly. This was over a year ago, with primitive scaffolding, no dedicated training for agency, and no reinforcement learning. Multimodal AIs can now successfully pass CAPTCHAs. ARA is probably coming.
It could be very sudden. One of the main variables for self-replication is whether the AI can pay for cloud GPUs. Let's say a GPU costs $1 per hour. The question is whether the AI can autonomously and continuously generate more than $1 per hour. If it can, you have something like an exponential process.
I think the number of AIs would probably plateau at some point, but regardless of where that plateau sits or how many AIs you end up with asymptotically, the result is the same: an autonomous AI that may become like an endemic virus that is hard to shut down.
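To make the replication economics concrete, here is a minimal, hypothetical sketch (not from the original post): it assumes each instance rents a GPU at a fixed hourly cost, earns a fixed hourly revenue, and reinvests its surplus into new copies up to some external cap. Every number is an assumption chosen for illustration.

```python
# Toy model of ARA economics (illustrative; every number here is an assumption,
# not a claim from the post). Each instance pays for its own GPU-hour and banks
# the surplus; whenever the bank covers the setup cost of another copy, it
# spawns one, until an external cap (e.g. available GPUs) is reached.

GPU_COST_PER_HOUR = 1.00   # assumed rental cost per instance
REVENUE_PER_HOUR = 1.50    # assumed autonomous earnings per instance
SEED_CAPITAL = 10.0        # assumed starting funds of the first instance
SPAWN_COST = 50.0          # assumed one-off cost to set up a new copy
PLATEAU = 1_000            # assumed cap on how many instances can exist

def simulate(hours: int) -> list[int]:
    instances, bank = 1, SEED_CAPITAL
    history = []
    for _ in range(hours):
        bank += instances * (REVENUE_PER_HOUR - GPU_COST_PER_HOUR)
        # Reinvest surplus into new copies while capital and capacity allow.
        while bank >= SPAWN_COST and instances < PLATEAU:
            bank -= SPAWN_COST
            instances += 1
        history.append(instances)
    return history

if __name__ == "__main__":
    counts = simulate(hours=24 * 60)  # two months, hour by hour
    print(counts[::24 * 7])  # weekly snapshots: slow start, roughly exponential growth, then flat at the cap
```

The exact numbers do not matter; the point of the sketch is that once hourly revenue exceeds hourly cost, the population curve looks exponential until some external constraint imposes the plateau mentioned above.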
Is ARA a point of no return?
Yes, I think ARA with full adaptation in the wild is beyond the point of no return.
Once there is an open-source ARA model, or a leak of a model capable of generating enough money for its survival and reproduction and able to adapt to avoid detection and shutdown, it will probably be too late:
The idea of making an ARA bot is very accessible.
The seed model would already be torrented and undeletable.
Stop the internet? The entire world's logistics depend on the internet. In practice, this would mean starving the cities over time.
Even if you manage to stop the internet, once the ARA bot is running, it will be unkillable. Even rebooting all providers like AWS would not suffice, as individuals could download and relaunch the model, or the agent could hibernate on local computers. The cost of eradicating it completely would be far too high, and it only needs to persist in one place to spread again.
The question is more interesting for ARA with incomplete adaptation capabilities. It is likely th...

May 31, 2024 • 22min
AF - There Should Be More Alignment-Driven Startups! by Matthew "Vaniver" Gray
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There Should Be More Alignment-Driven Startups!, published by Matthew "Vaniver" Gray on May 31, 2024 on The AI Alignment Forum.
Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu, for their support and feedback throughout.
Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed with alignment, meaning that humanity may well be on track to lose control of our future.
In order to improve our chances of surviving and thriving, we should apply our most powerful coordination methods towards solving the alignment problem. We think that startups are an underappreciated part of humanity's toolkit, and having more AI-safety-focused startups would increase the probability of solving alignment.
That said, we also appreciate that AI safety is highly complicated by nature[1] and therefore calls for a more nuanced approach than simple pro-startup boosterism. In the rest of this post, we'll flesh out what we mean in more detail, hopefully address major objections, and then conclude with some pro-startup boosterism.
Expand the alignment ecosystem with startups
We applaud and appreciate current efforts to align AI. We could and should have many more. Founding more startups will develop human and organizational capital and unlock access to financial capital not currently available to alignment efforts.
"The much-maligned capitalism is actually probably the greatest incentive alignment success in human history" - Insights from Modern Principles of Economics
The alignment ecosystem is short on entrepreneurial thinking and behavior. The few entrepreneurs among us commiserate over this whenever we can.
We predict that many people interested in alignment would do more to increase P(win) if they started thinking of themselves first as problem-solvers specializing in a particular sub-problem, deploying whatever approaches are appropriate to solve that smaller problem. Note this doesn't preclude scaling ambitiously and solving bigger problems later on.[2]
Running a company that targets a particular niche of the giant problem seems like one of the best ways to make this transition, unlocking a wealth of best practices that could be copied. For example, we've seen people in this space raise too little, too late, resulting in unnecessary time spent fundraising instead of doing work that advances alignment.
We think this is often the result of not following a more standard playbook on how and when to raise, which could be done without compromising integrity and without being afraid to embrace the fact that they are doing a startup rather than a more traditional (non-profit) AI safety org.[3]
We think creating more safety-driven startups will both increase capital availability in the short-term (as more funding might be available for for-profit investments than non-profit donations) and in the long-term (as those companies succeed and have money to invest and create technically skilled and safety-motivated employees who have the resources to themselves be investors or donors for other projects).
The creation of teams that have successfully completed projects together (organizational capital) will also better prepare the ecosystem to respond to new challenges as they arise. The organic structures formed by market systems allow for more dynamic and open allocation of people and resources to solve problems as they arise.
We also think that it is possible that alignment research will benefit from and perhaps even require significant resources that existing orgs may be too hesitant to spend. OpenAI, for example, never allocated the resources it promised to its safety team, and it has received pressure from corporate partners to be more risk-averse investing in R&D after ...

May 31, 2024 • 11min
EA - titotal on AI risk scepticism by Vasco Grilo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: titotal on AI risk scepticism, published by Vasco Grilo on May 31, 2024 on The Effective Altruism Forum.
This is a linkpost for titotal's posts on AI risk scepticism, which I think are great. I list the posts below chronologically.
Chaining the evil genie: why "outer" AI safety is probably easy
Conclusion
Summing up my argument in TLDR format:
1. For each AGI, there will be tasks that have difficulty beyond its capabilities.
2. You can make the task "subjugate humanity under these constraints" arbitrarily more difficult or undesirable by adding more and more constraints to a goal function.
3. A lot of these constraints are quite simple but drastically effective, such as implementing time limits, bounded goals, and prohibitions on human death (a toy sketch of such a constrained goal function follows below).
4. Therefore, it is not very difficult to design a useful goal function that raises subjugation difficulty above the capability level of the AGI, simply by adding arbitrarily many constraints.
Even if you disagree with some of these points, it seems hard to see how a constrained AI wouldn't at least have a greatly reduced probability of successful subjugation, so I think it makes sense to pursue constraints anyway (as I'm sure plenty of people already are).
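To make points 2-4 concrete, here is a hypothetical toy sketch (mine, not code from titotal's post) of layering the kinds of constraints mentioned - a time limit, a bounded goal, and a prohibition on human death - onto a base goal function. All names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical toy constrained goal function (not from titotal's post).
# Any constraint violation scores strictly worse than every permitted outcome,
# and the upside is capped, so extreme plans are never worth pursuing.

@dataclass
class Outcome:
    base_score: float     # how well the intended task was done
    elapsed_hours: float  # how long the agent ran
    human_deaths: int     # deaths attributable to the agent's plan

TIME_LIMIT_HOURS = 24.0   # assumed time limit
SCORE_CAP = 100.0         # assumed bound on achievable reward
VIOLATION_PENALTY = -1e9  # dominates any achievable score

def constrained_goal(outcome: Outcome) -> float:
    if outcome.elapsed_hours > TIME_LIMIT_HOURS:
        return VIOLATION_PENALTY  # time limit
    if outcome.human_deaths > 0:
        return VIOLATION_PENALTY  # prohibition on human death
    return min(outcome.base_score, SCORE_CAP)  # bounded goal

# A plan that scores higher on the raw task but breaks a constraint is rated worse.
print(constrained_goal(Outcome(base_score=90.0, elapsed_hours=3.0, human_deaths=0)))  # 90.0
print(constrained_goal(Outcome(base_score=1e6, elapsed_hours=3.0, human_deaths=1)))   # -1e9
```

The sketch only illustrates titotal's point that each additional constraint is cheap to add; whether such constraints would actually bind a very capable optimizer is the part readers may disagree on.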
AGI Battle Royale: Why "slow takeover" scenarios devolve into a chaotic multi-AGI fight to the death
Summary
The main argument goes as follows:
1. Malevolent AGI's (in the standard model of unbounded goal maximisers) will almost all have incompatible end goals, making each AGI an existential threat to every other AGI.
2. Once one AGI exists, others are likely not far behind, possibly at an accelerating rate.
3. Therefore, if early AGI can't take over immediately, there will be a complex, chaotic shadow war between multiple AGI's with the ultimate aim of destroying every other AI and humanity.
I outlined a few scenarios of how this might play out, depending on what assumptions you make:
Scenario a: Fast-ish takeoff
The AGI is improving fast enough that it can tolerate a few extra enemies. It boosts itself until the improvement saturates, takes a shot at humanity, and then dukes it out with other AGI after we are dead.
Scenario b: Kamikaze scenario
The AGI can't improve fast enough to keep up with new AGI generation. It attacks immediately, no matter how slim the odds, because it is doomed either way.
Scenario c: AGI induced slowdown
The AGI figures out a way to quickly sabotage the growth of new AGI's, allowing it to outpace their growth and switch to scenario a.
Scenario d: AI cooperation
Different AGI's work together and pool power to defeat humanity cooperatively, then fight each other afterwards.
Scenario e: Crabs in a bucket
Different AGI's constantly tear down whichever AI is "winning", so the AI are too busy fighting each other to ever take us down.
I hope people find this analysis interesting! I doubt I'm the first person to think of these points, but I thought it was worth giving an independent look at it.
How "AGI" could end up being many different specialized AI's stitched together
Summary:
In this post, I am arguing that advanced AI may consist of many different smaller AI modules stitched together in a modular fashion. The argument goes as follows:
1. Existing AI is already modular in nature, in that it is wrapped into larger, modular, "dumb" code.
2. In the near-term, you can produce far more impressive results by stitching together different specialized AI modules than by trying to force one AI to do everything.
3. This trend could continue into the future, as specialized AIs can have their architecture, goals, and data customized for maximum performance in each specific sub-field.
I then explore a few implications this type of AI system might have for AI safety, concluding that it might result in disunified or idiot savant AI's (helping humanity), or ...

May 30, 2024 • 21min
LW - OpenAI: Helen Toner Speaks by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Helen Toner Speaks, published by Zvi on May 30, 2024 on LessWrong.
Helen Toner went on the TED AI podcast, giving us more color on what happened at OpenAI. These are important claims to get right.
I will start with my notes on the podcast, including the second part where she speaks about regulation in general. Then I will discuss some implications more broadly.
Notes on Helen Toner's TED AI Show Podcast
This seems like it deserves the standard detailed podcast treatment. By default each note's main body is description, any second-level notes are me.
1. (0:00) Introduction. The host talks about OpenAI's transition from non-profit research organization to de facto for-profit company. He highlights the transition from 'open' AI to closed as indicative of the problem, whereas I see this as the biggest thing they got right.
He also notes that he was left with the (I would add largely deliberately created and amplified by enemy action) impression that Helen Toner was some kind of anti-tech crusader, whereas he now understands that this was about governance and misaligned incentives.
2. (5:00) Interview begins and he dives right in and asks about the firing of Altman. She dives right in, explaining that OpenAI was a weird company with a weird structure, and a non-profit board supposed to keep the company on mission over profits.
3. (5:20) Helen says for years Altman had made the board's job difficult via withholding information, misrepresenting things happening at the company, and 'in some cases outright lying to the board.'
4. (5:45) Helen says she can't share all the examples of lying or withholding information, but to give a sense: the board was not informed about ChatGPT in advance and learned about it on Twitter; Altman failed to inform the board that he owned the OpenAI startup fund despite claiming to be an independent board member; he gave false information about the company's formal safety processes on multiple occasions; and, relating to her research paper, Altman started lying to other board members in the paper's wake in order to push Toner off the board.
1. I will say it again. If the accusation about Altman lying to the board in order to change the composition of the board is true, then in my view the board absolutely needed to fire Altman. Period. End of story. You have one job.
2. As a contrasting view, the LLMs I consulted thought that firing the CEO should be considered, but it was plausible this could be dealt with via a reprimand combined with changes in company policy.
3. I asked for clarification given the way it was worded in the podcast, and can confirm that Altman withheld information from the board regarding the startup fund and the launch of ChatGPT, but he did not lie about those.
4. Repeatedly outright lying about safety practices seems like a very big deal?
5. It sure sounds like Altman had a financial interest in OpenAI via the startup fund, which means he was not an independent board member, and that the company's board was not majority independent despite OpenAI claiming that it was. That is… not good, even if the rest of the board knew.
5. (7:25) Toner says that any given incident Altman could give an explanation, but the cumulative weight meant they could not trust Altman. And they'd been considering firing Altman for over a month.
1. If they were discussing firing Altman for at least a month, that raises questions about why they weren't better prepared, or why they timed the firing so poorly given the tender offer.
6. (8:00) Toner says that Altman was the board's main conduit of information about the company. They had been trying to improve processes going into the fall; these issues had been long-standing.
7. (8:40) Then in October two executives went to the board and said they couldn't trust Altman, that the atmospher...

May 30, 2024 • 6min
LW - Non-Disparagement Canaries for OpenAI by aysja
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-Disparagement Canaries for OpenAI, published by aysja on May 30, 2024 on LessWrong.
Since at least 2017, OpenAI has asked departing employees to sign offboarding agreements which legally bind them to permanently - that is, for the rest of their lives - refrain from criticizing OpenAI, or from otherwise taking any actions which might damage its finances or reputation.[1]
If they refused to sign, OpenAI threatened to take back (or make unsellable) all of their already-vested equity - a huge portion of their overall compensation, which often amounted to millions of dollars. Given this immense pressure, it seems likely that most employees signed.
If they did sign, they became personally liable forevermore for any financial or reputational harm they later caused. This liability was unbounded, so had the potential to be financially ruinous - if, say, they later wrote a blog post critical of OpenAI, they might in principle be found liable for damages far in excess of their net worth.
These extreme provisions allowed OpenAI to systematically silence criticism from its former employees, of which there are now hundreds working throughout the tech industry. And since the agreement also prevented signatories from even disclosing that they had signed this agreement, their silence was easy to misinterpret as evidence that they didn't have notable criticisms to voice.
We were curious about who may have been silenced in this way, and where they work now, so we assembled an (incomplete) list of former OpenAI employees.[2] From what we were able to find, it appears that over 500 people may have signed these agreements, of which only 3 have publicly reported being released so far.[3]
We were especially alarmed to notice that the list contains at least 12 former employees currently working on AI policy, and 6 working on safety evaluations.[4] This includes some in leadership positions, for example:
Beth Barnes (Head of Research, METR)
Bilva Chandra (Senior AI Policy Advisor, NIST)
Charlotte Stix (Head of Governance, Apollo Research)
Chris Painter (Head of Policy, METR)
Geoffrey Irving (Research Director, AI Safety Institute)
Jack Clark (Co-Founder [focused on policy and evals], Anthropic)
Jade Leung (CTO, AI Safety Institute)
Paul Christiano (Head of Safety, AI Safety Institute)
Remco Zwetsloot (Executive Director, Horizon Institute for Public Service)
In our view, it seems hard to trust that people could effectively evaluate or regulate AI, while under strict legal obligation to avoid sharing critical evaluations of a top AI lab, or from taking any other actions which might make the company less valuable (as many regulations presumably would). So if any of these people are not subject to these agreements, we encourage them to mention this in public.
It is rare for company offboarding agreements to contain provisions this extreme - especially those which prevent people from even disclosing that the agreement itself exists. But such provisions are relatively common in the American intelligence industry. The NSA periodically forces telecommunications providers to reveal their clients' data, for example, and when they do the providers are typically prohibited from disclosing that this ever happened.
In response, some companies began listing warrant canaries on their websites - sentences stating that they had never yet been forced to reveal any client data. If at some point they did receive such a warrant, they could then remove the canary without violating their legal non-disclosure obligation, thereby allowing the public to gain indirect evidence about this otherwise-invisible surveillance.
Until recently, OpenAI succeeded at preventing hundreds of its former employees from ever being able to criticize them, and prevented most others - including many of their current employees! - from...

May 30, 2024 • 1min
EA - Florida and Alabama ban cultivated meat by Ben Millwood
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Florida and Alabama ban cultivated meat, published by Ben Millwood on May 30, 2024 on The Effective Altruism Forum.
This was ~three weeks ago, so I'm a little surprised I couldn't already find anything about it on the forum. Maybe I'm just bad at searching?
Anyway, Ron DeSantis posted the following around the same time as the Florida ban:
https://twitter.com/GovRonDeSantis/status/1785684809467011431
I suspect the real motivations are more ordinary protectionism, but it seems like potentially a sign that cultivated meat might start getting politicized / embroiled in the broader culture war, though also (perhaps surprisingly) DeSantis has had supportive comments from a Democratic senator (see Vox again).
I'm interested in the extent to which alt-meat startups and policy folk were surprised by this move - whether this represents a more hostile environment than we expected, or whether this was "already priced in". I tried to look at stock prices, but based on some quick searches only found one publicly traded cultivated meat stock, STKH, and don't understand enough about when the bill's passing became inevitable to really interpret it.
(Link preview image is bull grayscale photo by Hans Eiskonen)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

May 30, 2024 • 4min
AF - Clarifying METR's Auditing Role by Beth Barnes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying METR's Auditing Role, published by Beth Barnes on May 30, 2024 on The AI Alignment Forum.
Although METR has never claimed to have audited anything or to be providing meaningful oversight or accountability, there has been some confusion about whether METR is an auditor or planning to be one.
To clarify this point:
1. METR's top priority is to develop the science of evaluations, and we don't need to be auditors in order to succeed at this.
We aim to build evaluation protocols that can be used by evaluators/auditors regardless of whether that is the government, an internal lab team, another third party, or a team at METR.
2. We should not be considered to have 'audited' GPT-4 or Claude.
Those were informal pilots of what an audit might involve, or research collaborations - not providing meaningful oversight. For example, it was all under NDA - we didn't have any right or responsibility to disclose our findings to anyone outside the labs - and there wasn't any formal expectation it would inform deployment decisions. We also didn't have the access necessary to perform a proper evaluation. In the OpenAI case, as is noted in their system card:
"We granted the Alignment Research Center (ARC) early access to the models as a part of our expert red teaming efforts … We provided them with early access to multiple versions of the GPT-4 model, but they did not have the ability to fine-tune it. They also did not have access to the final version of the model that we deployed.
The final version has capability improvements relevant to some of the factors that limited the earlier models' power-seeking abilities, such as longer context length, and improved problem-solving abilities as in some cases we've observed. … fine-tuning for task-specific behavior could lead to a difference in performance.
As a next step, ARC will need to conduct experiments that (a) involve the final version of the deployed model (b) involve ARC doing its own fine-tuning, before a reliable judgment of the risky emergent capabilities of GPT-4-launch can be made".
3. We are and have been in conversation with frontier AI companies about whether they would like to work with us in a third-party evaluator capacity, with various options for how this could work.
As it says on our website:
"We have previously worked with
Anthropic,
OpenAI, and other companies to pilot some informal pre-deployment evaluation procedures. These companies have also given us some kinds of non-public access and provided compute credits to support evaluation research.
We think it's important for there to be third-party evaluators with formal arrangements and access commitments - both for evaluating new frontier models before they are scaled up or deployed, and for conducting research to improve evaluations.
We do not yet have such arrangements, but we are excited about taking more steps in this direction."
4. We are interested in conducting third-party evaluations and may hire & fundraise to do so, but would also be happy to see other actors enter the space. Whether we expand our capacity here depends on many factors such as:
Whether governments mandate access/this kind of relationship.
Whether governments want to work with third parties vs conduct audits in-house.
Whether frontier AI companies are keen to work with us in this capacity, giving us the necessary access to do so.
How successful we are in hiring the talent we need to do this without detracting from our top priority of developing the science.
How successful governments or other third-party evaluators are at performing evaluation protocols sufficiently well.
Technical considerations of what kind of expertise is required for doing good elicitation.
Etc.
If you're interested in helping METR conduct third-party evaluations in-house and/or support government or other auditors t...

May 30, 2024 • 4min
EA - A Scar Worth Bearing: My Improbable Story of Kidney Donation by Elizabeth Klugh
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Scar Worth Bearing: My Improbable Story of Kidney Donation, published by Elizabeth Klugh on May 30, 2024 on The Effective Altruism Forum.
TL;DR: I donated my kidney and you can too. If that's too scary, consider blood donation, the bone marrow registry, post-mortem organ donation, or other living donations (birth tissue, liver donation).
Kidney donation sucks. It's scary, painful, disruptive, scarring. My friends and family urged me not to; words were exchanged, tears were shed. My risk of preeclampsia tripled, that of end stage renal disease multiplied by five. I had to turn down two job offers while prepping for donation.
It is easy to read philosophical arguments in favor of donation, agree with them, and put the book back on the shelf. But it is different when your friend needs a kidney: Love bears all things, believes all things, hopes all things, endures all things.
Eighteen months ago, at 28 years old, my friend Alan started losing weight. He developed a distinctive butterfly-shaped rash and became too weak to eat. On February 1, 2023, he collapsed. The level of toxins in his blood was the worst the doctors had ever seen, 24 times the normal level. He shouldn't have been alive. Two years ago, he'd watched his mother die of lupus, and now he had the same disease. His body was attacking itself.
Alan:
By April 1, transplant discussions were under way. A living donor would mean avoiding years of relentless dialysis while waiting for a 3-year backlog of deceased donors. Living kidneys are better-quality and longer-lasting too. Having received six units of blood though, Alan had become allergic to 88% of donors.
Regardless, I completed a comprehensive eleven-page history to determine my eligibility. In each of my classes, and at my own wedding, I gave a brief presentation encouraging others to apply as well. Nobody did.
After initial blood work, my blood type was deemed incompatible, but I continued the health screenings to see if I could give indirectly through the National Kidney Registry. There were countless physicals, urine samples, iron infusions, psychological examinations, and dozens of tubes of blood. Throughout the process, Alan, his wife Meg, and I had many conversations that went something like this:
Meg and Alan: "You know you don't have to do this right?"
Me: "I want to... I might still bail though."
Meg and Alan: "We certainly wouldn't blame you if you did."
In January, I got the call that further bloodwork showed that I had a special type of AB+ blood and would be a direct match for Alan. I was elated. Alan cried. We both figured that God wouldn't have made me such an improbable match if I wasn't meant to share.
So, on Tuesday April 9, 2024, we applied lick-and-stick kidney tattoos and drove to the hospital together at 5am. We were wheeled into surgery at 9am and were out by lunchtime. I took the anesthesia harder than most and spent a day longer in the hospital than predicted. I had an ugly scar and was crumpled in pain. I vomited on myself. I couldn't sleep on my side. I couldn't sleep at all. For weeks, every time I coughed or sneezed, it felt like I was going to rip open.
There were times I feared I would never heal.
But that's not the point.
The point is that life persists. Alan is composing symphonies and playing cello and learning Mandarin. He released a new rap album, Back from the Dead, about his experiences. Though still recovering, I'm attending weddings and baking muffins, planting plants, and mending clothes. I started a new role with FHI 360 and am working to eradicate neglected tropical diseases. I will continue to fight for life.
And you can too: Consider donating a kidney, giving blood, joining the bone marrow registry, signing up for post-mortem organ donation, or giving other living donations (birth tissue, liver donation).
Alan ...

May 30, 2024 • 25min
LW - Awakening by lsusr
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Awakening, published by lsusr on May 30, 2024 on LessWrong.
This is the story of my personal experience with Buddhism (so far).
First Experiences
My first experience with Buddhism was in my high school's World Religions class. For homework, I had to visit a religious institution. I was getting bad grades, so I asked if I could get extra credit for visiting two and my teacher said yes. I picked an Amida Buddhist church and a Tibetan Buddhist meditation center.
I took off my shoes at the entrance to the Tibetan Buddhist meditation center. It was like nothing I had ever seen before in real life. There were no chairs. Cushions were on the floor instead. The walls were covered in murals. There were no instructions. People just sat down and meditated. After that there was some walking meditation. I didn't know anything about meditation so I instead listened to the birds and the breeze out of an open window.
Little did I know that this is similar to the Daoist practices that would later form the foundation of my practice.
The Amida Buddhist church felt like a fantasy novelist from a Protestant Christian background wanted to invent a throwaway religion in the laziest way possible, so he just put three giant Buddha statues on the altar and called it a day. The priest told a story about his beautiful stained glass artifact. A young child asked if he could have the pretty thing. The priest, endeavoring to teach non-attachment, said yes. Then the priest asked for it back.
The child said no, thereby teaching the priest about non-attachment. Lol.
It would be ten years until I returned to Buddhism.
Initial Search
It is only after you have lost everything that you are free to do anything.
Things were bad. I had dumped six years of my life into a failed startup. I had allowed myself to be gaslit (nothing to do with the startup; my co-founders are great people) for even longer than that. I believed (incorrectly) that I had an STD. I had lost most of my friends. I was living in a basement infested with mice. I slept poorly because my mattress was so broken I could feel the individual metal bedframe bars cut into my back. And that's just the stuff I'm comfortable writing about.
I was looking for truth and salvation. This is about when I discovered LessWrong. LessWrong addressed the truth problem. I still needed salvation.
On top of all this, I had chronic anxiety. I was anxious all the time. I had always been anxious all the time. What was different is that this time I was paying attention. Tim Ferriss recommends the book Don't Feed the Monkey Mind: How to Stop the Cycle of Anxiety, Fear, and Worry by Jennifer Shannon (Licensed Marriage and Family Therapist), so I read it. The book has lots of good advice.
At the end, there's a small segment about how meditation might trump everything else in the book put together, but science doesn't really understand it (yet) and its side-effects are unknown [to science].
Eldritch mind altering practices beyond the domain of science? Sign me up!
[Cue ominous music.]
I read The Art of Happiness: A Handbook for Living by the Dalai Lama. The Dalai Lama's approach to happiness felt obviously true, yet it was a framework nobody had ever told me about. The basic idea is that if you think and behave lovingly and ethically then you will be happy. He included instructions for basic metta (compassion) meditation. Here's how it works:
1. You focus on your feelings of compassion for your closest family and pets.
2. Then you focus on your feelings of compassion for your closest friends.
3. Then less-close friends.
4. Then acquaintances.
5. Then enemies.
That's the introductory version. At the advanced level, you can skip all these bootstrapping steps and jump straight to activating compassion itself. The first time I tried the Dalai Lama's metta instructions, it felt so...

May 30, 2024 • 6min
LW - US Presidential Election: Tractability, Importance, and Urgency by kuhanj
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Presidential Election: Tractability, Importance, and Urgency, published by kuhanj on May 30, 2024 on LessWrong.
Disclaimer: To avoid harmful polarization of important topics, this post is written in a non-partisan manner, and I'd encourage comments to be written with this in mind.
US presidential elections are surprisingly tractable
1. US presidential elections are often extremely close.
1. Biden won the last election by 42,918 combined votes in three swing states. Trump won the election before that by 77,744 votes. 537 votes in Florida decided the 2000 election.
2. There's a good chance the 2024 election will be very close too.
1. Trump leads national polling by around 1%, and polls are tighter than they were in the last two elections. If polls were perfectly accurate (which, of course, they aren't), the tipping-point state would be Pennsylvania or Michigan, which are currently at +1-2% for Trump.
3. There is still low-hanging fruit. Estimates of how much top RCT-tested interventions cost to generate net swing-state votes this election range from a few hundred to several thousand dollars per vote (a back-of-envelope sketch combining this with the margins in point 1 appears after this list). Top non-RCT-able interventions are likely even better. Many potentially useful strategies have not been sufficiently explored. Some examples:
1. mobilizing US citizens abroad (who vote at a ~10x lower rate than citizens in the country), or swing-state university students (perhaps through a walk-out-of-classes-to-the-polls demonstration).
2. There is no easily-searchable resource on how to best contribute to the election. (Look up the best ways to contribute to the election online - the answers are not very helpful.)
3. Anecdotally, people with little political background have been able to generate many ideas that haven't been tried and were received positively by experts.
4. Many top organizations in the space are only a few years old, which suggests they have room to grow and that more opportunities haven't been picked.
5. Incentives push talent away from political work:
1. Jobs in political campaigns are cyclical/temporary, very demanding, poorly compensated, and offer uncertain career capital (i.e. low rewards for working on losing campaigns).
2. How many of your most talented friends work in electoral politics?
6. The election is more tractable than a lot of other work: feedback loops are more measurable and concrete, and the theory of change is fairly straightforward. Many other efforts that significant resources have gone into have little positive impact to show for them (though of course, ex ante, a lot of these efforts seemed very reasonable to prioritize) - e.g. efforts around OpenAI, longtermist branding, certain AI safety research directions, and more.
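For a rough sense of scale, here is a back-of-envelope combination of the margins in point 1 with the cost-per-vote range in point 3. The margins come from the post; the specific dollar range and the multiplication are my illustrative assumptions.

```python
# Back-of-envelope: cost to generate enough net swing-state votes to cover recent
# margins, at the cited cost per vote. Margins are from the post; the numeric
# reading of "a few hundred to several thousand dollars per vote" is an assumption.

margins = {
    "2020 combined margin, three tipping-point states": 42_918,
    "2016 combined margin, three tipping-point states": 77_744,
}
cost_per_vote = (300, 3_000)  # assumed low/high dollars per net vote

for label, votes in margins.items():
    low, high = (votes * c for c in cost_per_vote)
    print(f"{label}: ~${low / 1e6:.0f}M to ~${high / 1e6:.0f}M")
# Prints roughly $13M-$129M for 2020 and $23M-$233M for 2016.
```

These totals are only an intuition pump for the "surprisingly tractable" claim, not an estimate of what any particular intervention would achieve.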
Much more important than other elections
This election seems unusually important for several reasons (though people always say this):
There's arguably a decent chance that very critical decisions about transformative AI will be made in 2025-2028. The role of governments might be especially important for AI if other prominent (state and lab) actors cannot be trusted. Biden's administration issued a landmark executive order on AI in October 2023. Trump has vowed to repeal it on Day One.
Compared to other governments, the US government is unusually influential. The US government spent over $6 trillion in the 2023 fiscal year, and makes key decisions involving billions of dollars each year for issues like global development, animal welfare, climate change, and international conflicts.
Critics argue that Trump and his allies are unique in their response to the 2020 election, plans to fill the government with tens of thousands of vetted loyalists, and in how people who have worked with Trump have described him. On the other side, Biden's critics point to his age (81 years, four years older than Trump), his respo...


