The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence. These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology. So, for example, if an AI system diagnoses you with cancer sometime in the future, the EU’s guidelines would require several safeguards: that the software wasn’t biased by your race or gender, that it didn’t override the objections of a human doctor, and that it gave you the option to have the diagnosis explained.
The EU has published seven guidelines for developing ethical AI applications, including requirements that systems be accountable, sustainable, explainable, privacy-respecting, and unbiased.