Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses
Aidan Cooper
Nov 1, 2021
With interpretability becoming an increasingly important requirement for machine learning projects, there’s a growing need for the complex outputs of techniques such as SHAP to be communicated to non-technical stakeholders.
“It is important not to become over-invested in conclusions that certain features cause certain outcomes… Decision-makers are often tempted to view features in SHAP analyses as dials that can be manipulated to engineer specific outcomes, so this distinction [between correlation and causation] must be communicated.”