What is Explainable AI? – Part 2

Taken from an article by Dr. Yosef Yehuda (Yossi) Kuttner, Ph.D. ML/AI Researcher at RAD’s Innovation Lab

 

In the first part of this blog series, we delved into the significance of explaining a model and identified the methods and types of models that can be explained. The concept of model explanation is crucial in our work, and in this blog, we’ll look at essential tools and techniques.

 

Model Explanation Essentials

Model explanation builds trust in models and helps validate them, ensuring findings can be communicated effectively to stakeholders. Tools for explanation differ between traditional Machine Learning (ML) and Generative Artificial Intelligence (AI) approaches: ML often relies on techniques such as feature importance analysis, while generative AI focuses on understanding how creative content is generated. These tools break down how a model works, enhancing transparency and robustness. Efficient coding is central to swift and precise analyses. Whether working with ML or generative AI, the right tools and efficient coding unlock the full potential of data science and AI projects.

 

Machine Learning Model Explanation Techniques:

 

Two prominent tools for interpreting and explaining machine learning models are LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations).

 

LIME:

  • LIME is a versatile technique applicable to various machine learning models, including deep neural networks, decision trees, and support vector machines.
  • It generates local explanations by perturbing a selected data instance and creating a dataset of perturbed instances.
  • A simple, interpretable model, such as linear regression or a decision tree, is trained on the perturbed dataset to approximate the complex model’s behaviour.
  • Feature importance scores are provided, indicating the contribution of each feature to the prediction for the selected instance.
  • The final explanation highlights the most influential features and can be presented as visualisations or in text-based formats (a minimal code sketch follows below).
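
To make this concrete, here is a minimal sketch of that workflow with the `lime` package; the dataset, model, and parameter values are illustrative assumptions rather than choices made in the article:

```python
# Minimal LIME sketch for a tabular classifier (dataset, model and
# parameters are illustrative assumptions, not from the article).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this single instance, fits a local linear surrogate on the
# perturbed samples, and returns per-feature weights for the prediction.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # [(feature description, local weight), ...]
```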

 

SHAP:

  • SHAP is based on cooperative game theory and Shapley values, offering a theoretically grounded framework for attributing feature importance.
  • It is model-agnostic and can be applied to complex models, ensuring fair and consistent feature importance attribution.
  • Shapley values are computed for each feature by considering all possible combinations of features and calculating marginal contributions.
  • The average marginal contributions across combinations provide Shapley values, indicating each feature’s average effect on the prediction.
  • Feature attribution values are communicated through various visualisation techniques, such as summary plots, force plots, and dependence plots (see the sketch below).
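
Here is a comparable sketch with the `shap` package, using its TreeExplainer on a tree-based classifier; again, the model and dataset are illustrative assumptions:

```python
# Minimal SHAP sketch for a tree-based model (illustrative assumptions).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: each feature's average effect and spread across the dataset.
shap.summary_plot(shap_values, X)
```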

 

In summary, LIME and SHAP are powerful tools for enhancing the interpretability of machine learning models, providing insights into specific predictions and overall model behaviour.

Exploring Transparent Machine Learning Tools in Python:

Here’s a list of Python packages emphasising transparency and explainability, primarily built on LIME and SHAP. While these packages may support other programming languages, this post will concentrate on their Python implementations.

 

1. Explainer Dashboard 

 

  • Interactive dashboard creation for explaining ML model behaviour.
  • Features include model performance, feature importance, SHAP values, and what-if scenarios.
  • Designed for easy GUI customisation.
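
A minimal sketch of how such a dashboard can be launched with the `explainerdashboard` package (the model and data split below are assumptions for illustration):

```python
# Minimal explainerdashboard sketch (model and data are illustrative assumptions).
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Wrap the fitted model and held-out data, then serve the interactive dashboard
# (model performance, feature importance, SHAP values, what-if analysis).
explainer = ClassifierExplainer(model, X_test, y_test)
ExplainerDashboard(explainer).run()  # opens a local web app
```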

 

2. Explainable Boosting Machines (EBM)

 

  • An accurate and interpretable machine learning model.
  • Uses an additive model structure trained with boosting for enhanced interpretability.
  • Automatically detects feature interactions, providing insights into complex relationships.
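
EBMs are available through the `interpret` package; a minimal sketch, with an illustrative dataset and default settings, might look like this:

```python
# Minimal Explainable Boosting Machine sketch via the interpret package
# (dataset and settings are illustrative assumptions).
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# EBM is a glass-box additive model trained with boosting; pairwise feature
# interactions are detected automatically during training.
ebm = ExplainableBoostingClassifier(random_state=0).fit(X, y)

show(ebm.explain_global())             # per-feature and interaction terms
show(ebm.explain_local(X[:5], y[:5]))  # explanations for individual rows
```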

 

3. Shapash

 

  • Focuses on making Data Science models interpretable.
  • Offers various visualisations with explicit labels for better understanding.
  • Enables Data Scientists to communicate findings through HTML reports.
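
A minimal sketch of the kind of workflow Shapash supports, assuming a recent (2.x) release of the package; method names and arguments may differ slightly between versions:

```python
# Minimal Shapash sketch (assumes shapash 2.x; model and data are illustrative).
from shapash import SmartExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# compile() links the model and the data and computes feature contributions.
xpl = SmartExplainer(model=model)
xpl.compile(x=X)

xpl.plot.features_importance()  # global importance with explicit labels
summary = xpl.to_pandas()       # per-row explanations as a DataFrame
print(summary.head())
```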

 

4. ELI5 (Explain Like I’m 5)

 

  • Assists in understanding and debugging machine learning classifiers.
  • Provides explanations for predictions, uncovering the rationale behind model decisions.
  • A helpful tool for making machine learning more transparent and less mysterious.
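
A minimal sketch with the `eli5` package (the model and data are illustrative); in a notebook, `eli5.show_weights` and `eli5.show_prediction` render the same explanations as HTML:

```python
# Minimal ELI5 sketch (model and data are illustrative assumptions).
import eli5
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
clf = LogisticRegression(max_iter=5000).fit(data.data, data.target)

feature_names = list(data.feature_names)

# Global view: which features carry the most weight in the classifier.
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=feature_names)))

# Local view: why the model classified one particular instance as it did.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, data.data[0], feature_names=feature_names)))
```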

 

5. Other Tools to Explore

 

  • Alibi (covering strategies like Anchors).
  • Skater (evolved from LIME, now its own framework).
  • EthicalML, AIX360 by IBM, DiCE for ML, ExplainX.ai, and more.

 

Selecting the Right Tool:

 

  • Consider the model type, data type, and interpretability needs.
  • Evaluate the trade-off between model complexity and interpretability.
  • Assess the efficiency of the tool in terms of computational requirements.
  • Consider domain expertise for a better fit with your problem.

 

Explainability in Generative AI:

 

  • Generative AI creates new content and benefits from human supervision for responsible judgments.
  • Simplifying model architecture enhances interpretability.
  • Utilise attention maps, decision trees, and feature importance plots for interpretability.
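
As one hedged illustration of the attention-map idea (the model choice and the Hugging Face `transformers` usage below are assumptions, not tools named in this article), attention weights can be pulled out of a transformer and inspected directly:

```python
# Illustrative sketch: extracting attention maps from a transformer
# (the model choice and the transformers library are assumptions).
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"  # hypothetical model choice for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

inputs = tokenizer("Explainability builds trust in AI.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped (batch, heads, tokens, tokens).
last_layer = outputs.attentions[-1][0]   # (heads, tokens, tokens)
avg_attention = last_layer.mean(dim=0)   # average over attention heads
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(tokens)
print(avg_attention)  # each row shows how strongly a token attends to the others
```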

 

Explainability in Networking AI:

 

  • AI in networking aids in performance, security, and efficiency.
  • Explainable AI empowers network managers, providing insights for strategic decision-making.
  • AI tools cover anomaly detection, predictive maintenance, resource allocation, and more.
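
As a hedged sketch of how explanation can be attached to a networking use case such as anomaly detection (the synthetic telemetry, IsolationForest model, and SHAP KernelExplainer below are illustrative assumptions):

```python
# Illustrative sketch: explaining anomaly scores on synthetic network telemetry
# (feature names, IsolationForest and SHAP KernelExplainer are assumptions).
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["latency_ms", "packet_loss", "throughput_mbps", "retransmits"]
X = rng.normal(size=(500, len(features)))  # stand-in telemetry
X[:5] += 6                                 # inject a few obvious anomalies

model = IsolationForest(random_state=0).fit(X)
scores = model.decision_function(X)        # lower score = more anomalous

# KernelExplainer is model-agnostic: it attributes the anomaly score of a
# suspicious sample to the individual telemetry features.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.decision_function, background)
shap_values = explainer.shap_values(X[:1])
print(dict(zip(features, shap_values[0])))
```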

 

Key Guidelines:

 

  •  Human supervision for responsible judgments.
  •  Simplifying models for better understanding.
  •  Utilising interpretability tools for decision insights.

 

Conclusion:

 

Explainability in AI is vital, especially in sensitive fields. AI packages, combined with domain expertise, contribute to model transparency. Consider ethical implications, stay updated on evolving techniques, and create user-friendly documentation to use explainability tools effectively.
