Looking into the future, we can see the possibility of severe occurrences that threaten human extinction. Until recently, we have not taken these threats seriously, and we are therefore putting the future of humanity at risk. When considering existential risk, there is a difference between natural disasters, such as asteroid impacts, and the human-created risks inherent in rapidly advancing fields like artificial intelligence and nanotechnology. No one wants to stop science, but if we want to create a sustainable future, we need to understand these risks as fully as we can, so that we can balance the benefits of scientific discovery and innovation against the need to protect ourselves from existential risk.

Huw Price is the Bertrand Russell Professor of Philosophy at the University of Cambridge and a co-founder of the Centre for the Study of Existential Risk at the University of Cambridge.

Jaan Tallinn is a founding engineer of Skype and Kazaa, as well as a co-founder of MetaMed, a personalized medical research company. He is a co-founder of the Centre for the Study of Existential Risk at the University of Cambridge.