At the heart of the MPPA model lies Explainable AI (XAI). ML models fall into two broad categories. The first, called black box, refers to models whose results cannot be understood by humans: their designers struggle to explain how the AI arrived at a specific decision. In contrast, the second, called white box, refers to models whose results can be clearly understood by humans. MPPA employs the latter.
There are many ways to implement XAI; we chose SHapley Additive exPlanations (SHAP), a game-theoretic approach to explaining the output of any ML model, because it provided the simplest, fastest, and most accurate way to explain the MPPA model's output. SHAP connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.
SHAP offers two ways to explain a model's predictions. Global interpretability takes a holistic view, quantifying the importance of every feature across the whole dataset, whereas local interpretability focuses on a single prediction, measuring each feature's contribution to it. The latter was used to assign NBAs to each alert, binding the generated predictions to concrete actions.
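To make the local-interpretability idea concrete, the sketch below computes exact Shapley values for one prediction of a toy model, in pure Python rather than the SHAP library. The model, the instance, and the baseline are hypothetical stand-ins, not part of MPPA; each feature's value is the weighted average, over all feature coalitions, of how much adding that feature changes the model's output.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical scoring model: a weighted sum plus one interaction term.
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2] + 0.3 * x[0] * x[2]

def shapley_values(model, instance, baseline):
    """Exact Shapley values for a single prediction (local explanation).

    Features outside the coalition S are held at their baseline values,
    a common way to evaluate the model on a partial feature set.
    """
    n = len(instance)

    def f(S):
        x = [instance[i] if i in S else baseline[i] for i in range(n)]
        return model(x)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (f(set(S) | {i}) - f(set(S)))
        phis.append(phi)
    return phis

instance = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, instance, baseline)

# Local accuracy: the contributions sum exactly to the gap between
# the prediction for this instance and the baseline prediction.
assert abs(sum(phi) - (model(instance) - model(baseline))) < 1e-9
```

The per-feature contributions returned here are what a local SHAP explanation surfaces for one alert: in this toy case the interaction term is split evenly between the two features involved, and the largest contribution identifies the feature driving the prediction, which is the kind of signal an NBA can be tied to. The SHAP library provides fast approximations of these same values for real models, where the exponential enumeration above would be infeasible.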