If we teach AI not to hurt humans, it won't be able to do anything. Does that make sense? Almost nothing except math would allow you to avoid hurting somebody else. Suppose AI wanted to be equitable; it wouldn't know how, because there's no standard for equitable, so it just couldn't do anything.

So the biggest risk to AI is that everything you do hurts somebody. Let's say you said to AI, "AI, come up with a good tax plan." Well, no matter how you change the taxes, somebody pays more or somebody pays less; you end up hurting somebody. There's almost nothing you can do that won't hurt somebody. Suppose you asked a medical question; it might say, "I can't tell."