Our existential risk – the probability that we wipe ourselves out through AI, bio-engineering, nuclear war, climate change and the like in the next 100 years – currently sits at 1 in 6. Let that sink in! Would you get on a plane if there was a 1-in-6 (roughly 17%) chance it would crash? Would you do everything you could to prevent a calamity if you were presented with those odds?
My chat today covers a wild idea that could – and should – better our chances of surviving as a species… and lead to a human flourishing I struggle to even imagine. Longtermism argues that prioritising the long-term future of humanity brings exponential ethical and existential boons. On the flipside, if we don't choose the longtermist route, the repercussions are, well, devastating.
Will MacAskill is one of the world's leading moral philosophers. I travel to Oxford, UK – where he runs the Centre for Effective Altruism, the Global Priorities Institute and the Forethought Foundation – to talk through these massive moral issues. Will also explains why right now is the most important time in humanity's history: our generation alone has the power and responsibility to set humanity on one of two diametrically different paths. This excites me; I hope it does you, too.
Learn more about Will MacAskill’s work
Purchase his new book What We Owe the Future: A Million-Year View
If you need to know a bit more about me… head to my "about" page.
Subscribe to my Substack newsletter for more conversations like this.
Get your copy of my book, This One Wild and Precious Life
Let’s connect on Instagram! It’s where I interact the most.