What is longtermism? Is the long-term future of humanity (or of life more generally) the most important thing, or just one among many important things? How should we estimate the chance that some particular thing will happen, given that our brains are so computationally limited? What is "the optimizer's curse"? How top-down should EA be? How should an individual reason about expected values in cases where success would be immensely valuable but the likelihood of that particular individual succeeding is incredibly low? (For example, if I have a one-in-a-million chance of stopping World War III, should I devote my life to that plan?) If we want to know, say, whether protests are effective, we can gather and analyze existing data; but how can we estimate whether interventions implemented in the present will succeed in the very far future?
William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator–backed 80,000 Hours, which together have moved over $200 million to effective charities. He's the author of Doing Good Better, Moral Uncertainty, and What We Owe the Future.