
Transhumanism and Existential Risk with Josh Schuster and Derek Woods
The Good Robot
The Existential Risk of Intelligent Life
How do transhumanists define intelligent life? And how do they think intelligent life can disappear?

Sure. So I'm just going to quote Bostrom's initial definition from that early essay, where he defines existential risk as one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. He doesn't mention human life or animal life or anything ecological, but rather Earth-originating intelligent life as the most important object for thinking about what a true existential risk is. We spend a lot of time in the book unpacking that initial definition. It suggests that there could be catastrophes that inflict violence on humans and animals, but as long as they