At the heart of the MPPA model lies Explainable AI (XAI). ML models fall into two broad categories. The first, black-box models, produce results that humans cannot readily interpret; even their designers struggle to explain how the AI arrived at a specific decision. The second, white-box models, produce results that humans can clearly understand. MPPA employs the latter type of model.
There are many ways to implement XAI. We chose SHapley Additive exPlanations (SHAP), a game-theoretic approach to explaining the output of any ML model, because it offered the simplest, fastest, and most accurate way to explain the MPPA model's output. SHAP connects optimal credit allocation with local explanations using the classic Shapley values from cooperative game theory and their related extensions.
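To make the game-theoretic idea concrete, the sketch below computes exact Shapley values from scratch for a single prediction of a toy linear model (the model, weights, and baseline are illustrative assumptions, not the MPPA model itself). Each feature is treated as a "player"; a feature absent from a coalition is replaced by its baseline (background) value, and the Shapley formula averages each feature's marginal contribution over all coalitions. In practice the SHAP library provides optimized estimators for this computation, which is exponential in the number of features when done naively.

```python
from itertools import combinations
from math import factorial

def predict(weights, bias, x):
    # Toy linear model: f(x) = w . x + b (stand-in for any ML model).
    return sum(w * v for w, v in zip(weights, x)) + bias

def shapley_values(weights, bias, x, baseline):
    # Exact Shapley values for one prediction. Features outside the
    # coalition S are "removed" by substituting their baseline value.
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Input with only the coalition S set to its real values.
                masked = [x[j] if j in S else baseline[j] for j in range(n)]
                # Same input, but with feature i also revealed.
                with_i = list(masked)
                with_i[i] = x[i]
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += w * (predict(weights, bias, with_i)
                            - predict(weights, bias, masked))
        phis.append(phi)
    return phis

weights, bias = [2.0, -1.0, 0.5], 1.0
x, baseline = [3.0, 2.0, 4.0], [1.0, 1.0, 1.0]
phis = shapley_values(weights, bias, x, baseline)
```

A key property visible here is local additivity: the Shapley values sum exactly to the difference between the model's prediction for `x` and its prediction for the baseline, which is what lets SHAP attribute a single output to individual features.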
Figure 4. SHAP Local Interpretability graph