Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing disease, and, of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways, such as picking up gender bias from the text or images they are fed. A new framework for building AI programs suggests a way to prevent such aberrant behavior in machine learning by specifying guardrails in the code from the outset.
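To make the idea concrete, here is a minimal sketch, not the researchers' actual implementation, of what "specifying guardrails from the outset" can look like in practice: the developer writes the unacceptable behavior as a constraint, a candidate model is trained as usual, and the model is only released if held-out data gives high confidence that the constraint is satisfied. The function names (train_candidate, constraint_violation, safety_test), the use of a Student's t bound, and the 95% confidence level are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def safety_test(violations, delta=0.05):
    """Return True if we are (1 - delta)-confident that the mean constraint
    violation is <= 0, using a one-sided Student's t upper bound computed
    from per-sample violations on held-out safety data."""
    n = len(violations)
    mean = np.mean(violations)
    sem = np.std(violations, ddof=1) / np.sqrt(n)
    upper_bound = mean + sem * stats.t.ppf(1.0 - delta, df=n - 1)
    return upper_bound <= 0.0

def train_with_guardrail(train_data, safety_data, train_candidate,
                         constraint_violation, delta=0.05):
    """Fit a candidate model on one data split, then release it only if the
    safety test passes on a separate split; otherwise refuse to return a model."""
    model = train_candidate(train_data)  # ordinary machine-learning training step
    violations = np.array([constraint_violation(model, x) for x in safety_data])
    if safety_test(violations, delta):
        return model
    return None  # no solution found: decline to deploy a model that may misbehave
```

The key design choice this sketch illustrates is that the safety check is part of the training procedure itself, so a model that cannot be certified against the stated constraint is never handed back to the developer in the first place.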