What is Explainable AI?

Taken from an article by Dr. Yosef Yehuda (Yossi) Kuttner, Ph.D. ML/AI Researcher at RAD’s Innovation Lab

The era of artificial intelligence (AI) has finally arrived. We hear about generative AI, ChatGPT, Bard, Midjourney, and other tools everywhere. Every company has its own AI ambitions or is already using it. It’s even getting to the point where some companies are rebranding their old rule-based engines as AI-based.

Embedding AI and sophisticated analytics into business processes and automating decisions creates a need for transparency, and raises the question of how to achieve that transparency while still taking advantage of AI’s benefits. Here’s where Explainable AI (XAI) comes in.

 

The difference between predictive and explanatory models

Predictive models prioritise forecast accuracy over causal inference: they predict what will happen, but they don’t explain why it happens. Explanatory models, on the other hand, explain why an event happens – for example, why a new business strategy didn’t work out, or why it didn’t improve the business as much as expected.

In short, explanatory models prioritise causal explanations over forecast accuracy, whereas predictive models do the opposite.

Predictive models aim to minimise both bias and estimation error. Explanatory models minimise bias and explain results causally, even if that sacrifices some estimation accuracy.

When businesses confuse predictive accuracy with explanatory power, they can end up with models that are neither very explanatory nor very predictive.

 

What makes an explanatory model

An explanatory model in ML/AI is intended to provide insight into the mechanisms underlying its predictions. Its simplicity and ease of analysis may favour causal explanations over forecast accuracy.

A model’s explanatory power can be evaluated by the following factors:

Transparency and simplicity:
Explanatory models should be straightforward. Models and algorithms should be interpretable and provide clear insights into decision-making.

Feature importance:
Explanatory models should provide information on the importance of various features. To understand the model’s behaviour, it’s essential to identify the attributes that influence the final decision.

Feature relationships:
Explanatory models should reveal the relationships between features and the target: how feature values affect predictions, and the magnitude and direction of those effects.

Intuitive and meaningful explanations:
An explanatory model should provide intuitive and meaningful insights to domain experts: explanations should align with domain knowledge and be easy to understand.

Visualisations:
Explanatory models benefit from visualisations: graphs, charts, and decision-tree diagrams can make model behaviour, feature relevance, and feature relationships easier to comprehend.

Contextual interpretation:
Explanatory models must consider the context. Explanations should be relevant to the application or domain.

Model validation:
Validating an explanatory model extensively helps ensure its reliability and broad applicability. Techniques such as cross-validation, sensitivity analysis, and permutation importance can be used to examine the model’s stability and robustness (a minimal permutation-importance sketch follows this list of factors).

Human-AI interaction:
To be fully explanatory, a model should allow human-AI interaction. This involves providing interactive interfaces that let users explore and adjust inputs and parameters, and gathering feedback to refine and improve the model’s explanations.
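
To make the validation point concrete, here is a minimal sketch of permutation importance. It assumes scikit-learn is available; the dataset and model are purely illustrative.

```python
# Minimal sketch: permutation importance on a held-out split.
# Assumes scikit-learn; the dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean, result.importances_std),
                key=lambda t: -t[1])
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Running the same analysis across several cross-validation folds is one way to check that the reported importances are stable.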

 

Explaining Models: Types and Behaviour

Model explanation offers insights and interpretations about model behaviour, forecasts, and decision-making.

Here are examples of models and explanation approaches:

  1. Linear regression:
  • Linear regression coefficients represent the impact of each feature on the target variable, the quantity we want to predict: the model predicts an unknown (target) variable from known variables used as predictors, and each coefficient tells us how much the prediction changes when its feature changes by one unit. The sign of a coefficient indicates a positive or negative relationship.
  • The relative importance of each feature can be determined by examining the magnitude of the coefficients, provided the features are on comparable scales (see the coefficient sketch after this list).
  2. Decision trees:
  • Decision trees are naturally interpretable. The path leading to a prediction can be shown by traversing the tree and describing the condition at each split.
  • A feature’s importance in a decision tree can be measured by how much it reduces impurity when it is used to split the data. Impurity tells us whether a node is worth dividing further: a node is 100% impure when its data is split evenly 50/50 between classes and 100% pure when all its data belongs to one class.
  • Visualising the decision tree structure can help in understanding the model’s decision-making process (see the tree sketch after this list).
  3. Random Forests:
  • A Random Forest is an ensemble of decision trees, each trained on a different subset of the data. In addition to providing feature importance, the model provides insights into how features interact and affect predictions (see the ensemble sketch after this list).
  4. Gradient Boosting algorithms:
  • Gradient Boosting is another ensemble method that combines multiple weak learners (typically decision trees) to create a strong predictive model. It assigns importance to features based on their ability to improve the model’s loss function.
  • The feature importance is calculated by summing up the contribution of each feature across all the trees in the ensemble. It can also reveal complex relationships between features and the target variable.
  5. Support Vector Machines (SVM):
  • Hyperplane: an SVM separates classes with a hyperplane in high-dimensional space, a line or plane that acts as the decision boundary dividing the data into classes. The optimal hyperplane maximises the margin between the closest data points from both classes. The model can be explained by visualising the hyperplane and describing how it separates the classes.
  • Support vectors: these are the data points closest to the decision boundary, and because they are closest they influence the hyperplane’s position and orientation the most. Consider a dataset with two classes, red and blue, in which we want to classify new data points based on their features: the SVM finds the optimal hyperplane, the one that maximises the margin between the nearest points of the two classes, and identifying the support vectors tells us which data points matter most for the classification. An SVM model can be understood by visualising the hyperplane’s position and orientation together with the support vectors (see the SVM sketch after this list).
  6. Neural networks:
  • Relevance of features: gradient-based methods, saliency maps, and occlusion analysis can help identify the most relevant features, and are widely used in image-processing models, where the prediction (or classification score) is computed from the input image. Gradient-based methods produce saliency maps that highlight the pixels or regions with the greatest influence on the model’s output. Occlusion analysis involves systematically occluding parts of the input image and observing the results: by comparing predictions with and without occlusion, we can identify which parts of the image matter most (see the occlusion sketch after this list). These techniques provide insight into the model’s decision-making process and can guide improvements.
  • Layer activations: examining the activations of hidden layers can provide insights into the model’s learned representations and feature hierarchies.
  7. Rule-based Models:
  • Rule extraction: these models generate rules that explicitly describe associations or patterns in the data. Unlike equations, these rules can be interpreted and explained by presenting them in a human-readable format.
  • Rule support and confidence: metrics like support and confidence can quantify the strength and reliability of the rules.
  8. Bayesian networks: Bayesian networks enable probabilistic inference by representing dependencies among variables. Explaining their reasoning and predictions involves analysing the network structure and the conditional probabilities.
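
As a concrete illustration of item 1 (linear regression), here is a minimal sketch of reading coefficients as feature effects. It assumes scikit-learn; the diabetes dataset is used purely for illustration.

```python
# Minimal sketch: linear-regression coefficients as explanations.
# Assumes scikit-learn; the diabetes dataset is illustrative only.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Each coefficient is the change in the prediction for a one-unit change in its
# feature, holding the others fixed; the sign gives the direction of the effect.
# Comparing magnitudes is only meaningful when features are on comparable scales
# (the diabetes features are already mean-centred and scaled).
for name, coef in sorted(zip(X.columns, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:+.1f}")
print(f"intercept: {model.intercept_:.1f}")
```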
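
For item 2, a minimal sketch of extracting a decision tree’s split conditions and impurity-based feature importances, again assuming scikit-learn and using the iris dataset purely for illustration.

```python
# Minimal sketch: decision-tree rules and impurity-based feature importance.
# Assumes scikit-learn; the iris dataset is illustrative only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Human-readable rules: the path from the root to a leaf spells out every
# condition that led to a prediction.
print(export_text(tree, feature_names=list(data.feature_names)))

# Impurity-based importance: how much each feature reduces Gini impurity,
# summed over all the splits in which it is used.
for name, imp in zip(data.feature_names, tree.feature_importances_):
    print(f"{name}: {imp:.3f}")
```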
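
For items 3 and 4, a minimal sketch of reading feature importances from a Random Forest and a Gradient Boosting model. It assumes scikit-learn; the dataset and hyper-parameters are illustrative only.

```python
# Minimal sketch: feature importances from tree ensembles.
# Assumes scikit-learn; dataset and hyper-parameters are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
boosting = GradientBoostingClassifier(random_state=0).fit(X, y)

# Both models expose feature_importances_: each feature's total contribution to
# reducing impurity (or the loss), summed across every tree in the ensemble.
for label, model in [("Random Forest", forest), ("Gradient Boosting", boosting)]:
    top = sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1])[:5]
    print(label, [(name, round(imp, 3)) for name, imp in top])
```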
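
For item 5, a minimal sketch of inspecting a linear SVM’s hyperplane and support vectors. It assumes scikit-learn and NumPy; the synthetic two-class, two-feature dataset stands in for the red/blue example above.

```python
# Minimal sketch: hyperplane and support vectors of a linear SVM.
# Assumes scikit-learn and NumPy; the 2-D toy data is illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),   # "red" class
               rng.normal(+2.0, 1.0, size=(50, 2))])  # "blue" class
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)

# With a linear kernel the decision boundary is the hyperplane w.x + b = 0,
# so w and b can be read off directly and plotted if desired.
w, b = clf.coef_[0], clf.intercept_[0]
print("hyperplane normal w:", w, "offset b:", b)

# The support vectors are the training points closest to the boundary; they are
# the only points that determine the hyperplane's position and orientation.
print("support vectors per class:", clf.n_support_)
print("first few support vectors:\n", clf.support_vectors_[:3])
```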
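
For item 6, a minimal sketch of occlusion analysis. The network itself is not shown: `model.predict` here is a hypothetical function assumed to return per-class scores for a batch of images, and the patch size and grey value are arbitrary choices.

```python
# Minimal sketch: occlusion analysis for an image classifier.
# `model.predict` is a hypothetical function assumed to return per-class scores
# for a batch of images shaped (n, height, width, channels).
import numpy as np

def occlusion_map(model, image, target_class, patch=8, stride=8):
    """Slide a grey patch over the image and record how much the target-class
    score drops; a large drop marks a region the model relies on."""
    h, w, _ = image.shape
    baseline = model.predict(image[None])[0, target_class]
    rows = range(0, h - patch + 1, stride)
    cols = range(0, w - patch + 1, stride)
    heatmap = np.zeros((len(rows), len(cols)))
    for i, top in enumerate(rows):
        for j, left in enumerate(cols):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch, :] = 0.5  # grey patch
            score = model.predict(occluded[None])[0, target_class]
            heatmap[i, j] = baseline - score
    return heatmap
```

The resulting heatmap can then be visualised alongside the original image to show which regions drive the prediction.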
