Decades before today’s deep learning neural networks compiled imponderable layers of statistics into working machines, researchers were trying to figure out how to explain statistical findings to a human. IBM this week offered up the latest effort in that long quest to interpret, explain, and justify machine learning: a set of open-source programming resources it calls “AI Explainability 360.” It remains to be seen whether yet another tool will solve the conundrum of how people can understand what is going on when artificial intelligence makes a prediction based on data.
IBM’s latest foray into making A.I. more amenable to the world is a toolkit of algorithms that can be used to explain the decisions of machine learning programs. Read more on ZDNet.
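To give a flavor of what such explainability algorithms do, here is a minimal sketch of one common technique: perturbation-based feature attribution, where each input feature is scored by how much the model's prediction changes when that feature is blanked out. All names below are illustrative stand-ins, not part of IBM's toolkit.

```python
def predict(features):
    """A toy 'black box' model: a fixed linear score,
    standing in for any trained model."""
    weights = [0.8, -0.5, 0.1]
    return sum(w * x for w, x in zip(weights, features))

def attribute(features, baseline=0.0):
    """Score each feature by how much the prediction drops
    when that feature is replaced with a baseline value."""
    full = predict(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = baseline  # "remove" one feature at a time
        scores.append(full - predict(perturbed))
    return scores

sample = [1.0, 2.0, 3.0]
print(attribute(sample))  # one contribution score per input feature
```

Real toolkits offer far more sophisticated methods (contrastive explanations, prototypes, rule-based models), but the underlying question is the same: which parts of the input drove this particular prediction?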