

EA - Statistical foundations for worldview diversification by Karthik Tadepalli
Aug 28, 2024
20:43
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statistical foundations for worldview diversification, published by Karthik Tadepalli on August 28, 2024 on The Effective Altruism Forum.
Note: this has been in my drafts for a long time, and I just decided to let it go without getting too hung up on details, so this is much rougher than it should be.
Summary:
Worldview diversification seems hard to justify philosophically, because it yields lower expected value than going all-in on the single worldview with the highest EV.
I show that you can justify worldview diversification as the solution to a decision problem under uncertainty.
The first way is to interpret worldview diversification as a minimax strategy, in which you maximize the worst-case utility of your allocation (a toy numerical sketch follows this summary).
The second way is as an approximate solution to the problem of maximizing expected utility for a risk-averse decision maker.
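To make the first framing concrete, here is a minimal sketch in Python. The cause labels and numbers are invented purely for illustration and are not taken from the post or from any real cost-effectiveness estimates: two worldviews disagree about the marginal value of money to two causes, straight expected-value maximization picks a corner (all-in) allocation, and the maximin rule picks an interior, diversified one. The risk-averse framing is sketched after the problem setup further down.

```python
# Toy numbers, invented for illustration: two worldviews disagree about
# the marginal value per dollar of two causes. With a budget split
# x : (1 - x) between them, expected-value maximization goes all-in on
# one cause, while the maximin rule (maximize the worst-case utility
# across worldviews) picks an interior, diversified allocation.
import numpy as np

values = np.array([
    # [animal welfare, global health]: marginal value per dollar
    [10.0, 1.0],   # worldview 1: chicken suffering weighs heavily
    [0.0,  5.0],   # worldview 2: only humans count (and on a different scale)
])
p = np.array([0.5, 0.5])         # credences in the two worldviews

xs = np.linspace(0.0, 1.0, 101)  # candidate shares for animal welfare

def utility_by_worldview(x):
    """Utility of allocation (x, 1 - x) under each worldview."""
    return values @ np.array([x, 1.0 - x])

# Expected-value maximization: corner solution (all-in on one cause).
ev = np.array([p @ utility_by_worldview(x) for x in xs])
x_ev = xs[np.argmax(ev)]

# Maximin: maximize the worst-case utility across worldviews.
worst_case = np.array([utility_by_worldview(x).min() for x in xs])
x_maximin = xs[np.argmax(worst_case)]

print(f"EV-maximizing share to animal welfare: {x_ev:.2f}")      # 1.00
print(f"Maximin share to animal welfare:       {x_maximin:.2f}")  # ~0.29
```

The corner solution is the all-or-nothing outcome described in the quote below; maximin diversifies here precisely because the two worldviews disagree about which cause is best, so any all-in allocation has a bad worst case.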
Overview
Alexander Berger: ...the central idea of worldview diversification is that the internal logic of a lot of these causes might be really compelling and a little bit totalizing, and you might want to step back and say, "Okay, I'm not ready to go all in on that internal logic." So one example would be just comparing farm animal welfare to human causes within the remit of global health and wellbeing. One perspective on farm animal welfare would say, "Okay, we're going to get chickens out of cages.
I'm not a speciesist and I think that a chicken-day suffering in the cage is somehow very similar to a human-day suffering in a cage, and I should care similarly about these things." I think another perspective would say, "I would trade an infinite number of chicken-days for any human experience.
I don't care at all." If you just try to put probabilities on those views and multiply them together, you end up with this really chaotic process where you're likely to either be 100% focused on chickens or 0% focused on chickens. Our view is that that seems misguided. It does seem like animals could suffer. It seems like there's a lot at stake here morally, and that there's a lot of cost-effective opportunities that we have to improve the world this way.
But we don't think that the correct answer is to either go 100% all in where we only work on farm animal welfare, or to say, "Well, I'm not ready to go all in, so I'm going to go to zero and not do anything on farm animal welfare."
...
Rob Wiblin: Yeah. It feels so intuitively clear that when you're to some degree picking these numbers out of a hat, you should never go 100% or 0% based on stuff that's basically just guesswork. I guess, the challenge here seems to have been trying to make that philosophically rigorous, and it does seem like coming up with a truly philosophically grounded justification for that has proved quite hard.
But nonetheless, we've decided to go with something that's a bit more cluster thinking, a bit more embracing common sense and refusing to do something that obviously seems mad.
Alexander Berger: And I think part of the perspective is to say look, I just trust philosophy a little bit less. So the fact that something might not be philosophically rigorous... I'm just not ready to accept that as a devastating argument against it.
80,000 Hours
This note explains how you might arrive at worldview diversification from a formal framework. I don't claim it is the only way you might arrive at it, and I don't claim that it captures everyone's intuitions for why worldview diversification is a good idea. It only captures my intuitions, and formalizes them in a way that might be helpful for others.
Suppose a decision maker wants to allocate money across different cause areas. But the marginal social value of money to each cause area is unknown, or known only with error (e.g. moral weights, future forecasts), so they don't actually know how to maximize social value ex ante. What sh...
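Taking that setup at face value, here is an equally rough sketch of the second framing, reusing the same made-up numbers as the earlier sketch. The decision maker allocates a budget across two causes whose marginal values depend on which worldview is correct; instead of maximizing expected impact directly, they maximize the expectation of a concave utility of impact. Log utility is just one illustrative stand-in for risk aversion, not something the post commits to.

```python
# Same invented numbers as the earlier sketch: two equally likely
# worldviews disagree about the marginal value per dollar of two causes.
# A risk-neutral decision maker maximizes expected impact and goes
# all-in; a risk-averse one (here, log utility over total impact)
# prefers an interior, diversified allocation.
import numpy as np

values = np.array([
    [10.0, 1.0],   # worldview 1
    [0.0,  5.0],   # worldview 2
])
p = np.array([0.5, 0.5])           # credences in the worldviews

xs = np.linspace(0.0, 1.0, 1001)   # share of the budget to cause 1

def impact_by_worldview(x):
    """Total impact of allocation (x, 1 - x) under each worldview."""
    return values @ np.array([x, 1.0 - x])

# Risk-neutral: maximize expected impact -> corner solution.
ev = np.array([p @ impact_by_worldview(x) for x in xs])
x_neutral = xs[np.argmax(ev)]

# Risk-averse: maximize expected log-utility of impact -> interior solution.
# (The small floor avoids log(0) at the corner allocations.)
eu = np.array([p @ np.log(np.maximum(impact_by_worldview(x), 1e-12)) for x in xs])
x_averse = xs[np.argmax(eu)]

print(f"Risk-neutral share to cause 1: {x_neutral:.2f}")  # 1.00
print(f"Risk-averse share to cause 1:  {x_averse:.2f}")   # ~0.44
```

With enough risk aversion the optimum moves off the corner and into a diversified allocation; with little risk aversion it can stay at the corner, so the choice of utility function matters for how much diversification this framing delivers.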