If we teach AI not to hurt humans, it won't be able to do anything. Does that make sense? Almost nothing except math would allow you to avoid hurting somebody else. Suppose AI wanted to be equitable; it wouldn't know how, because there's no standard for equitable, so it just couldn't do anything. So the biggest risk to AI is that everything you do hurts somebody. Let's say you said to AI, "AI, come up with a good tax plan." Well, no matter how you change the taxes, somebody pays more or somebody pays less; you end up hurting somebody. There's almost nothing you can do that won't hurt somebody. Suppose you asked a medical question. It might say, "I can't tell."
