The Impossibility of Accepting Existential Risk
In a world where we already face existential risks, nuclear weapons, for example, constitute an existential risk, and perhaps engineered pandemics could also wipe out humanity. Why, in a sense, shouldn't we accept some level of existential risk from AI systems? We don't have to build superintelligent, godlike machines; we can be very happy with very helpful tools, if we agree that this is the level of technology we want now.