We want to help others as much as we can. Knowing how is hard: many empirical, normative, and decision-theoretic uncertainties make it difficult to identify the best paths toward that goal. Should we focus on sparing children from vitamin deficiencies? Reducing suffering on factory farms? Mitigating the threats associated with AI? Should we split our attention among all three? Something else entirely? Two common answers to these questions are (1) that we ought to set priorities based on what would maximize expected value and (2) that expected value maximization supports prioritizing existential risk mitigation over all else. This presentation introduces a sequence from Rethink Priorities’ Worldview Investigations Team that examines these two claims. We argue that there are reasons to doubt them both—reasons stemming from significant uncertainty about the correct normative theory of ethical decision-making and about many of the parameters and assumptions that enter into expected value calculations. We also introduce a tool for comparing the cost-effectiveness of different causes and summarize its implications for decision-making under uncertainty. A follow-on workshop (Modeling your own cause prioritization) takes place straight after this talk for those who would like hands-on experience using the model.