I think it's a little bit too much to demand of our AI systems that they be articulate moral philosophers, as long as they seem to be doing mostly the right things. And so contractarianism does make predictions about this. If you could quantify the kind of harm being done by that threatening agent, that enemy agent, you could say that neutralizing the threat is usually better than, say, just killing the agent; neutralizing the threat would be the best of all possible outcomes. You could imagine security robots, and in the extreme military robots, being designed with the goal of neutralizing enemies and neutralizing threats. I think that would be the maximin approach to it.
