The AIs Are Acting as Agents in the World, Rather Than Humans
Time travellers may be unrealistic, but other elements actually do map on to the scenario. The thought is that you've got very rapid technological progress happening that's being driven by the AIs themselves. And we might find ourselves in a situation where it's really AI systems that are controlling the future, rather than human beings.
This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.
Links to learn more, summary and full transcript.
From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.
Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.
But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed.
A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it.
This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.
But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations.
The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back.
But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.
In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.
If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:
• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore