https://www.kdnuggets.com/your-features-are-important-it-doesnt-mean-they-are-good Your Features Are Important? It Doesn’t Mean They Are Good, by Samuele Mazzanti, Lead Data Scientist at JAKALA, September 21, 2023. “‘Feature Importance’ is not enough. You also need to look…
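A minimal sketch of the distinction the title points at (not the article’s exact procedure, and using a synthetic dataset): a feature can score high on permutation importance while dropping it barely changes, or even improves, held-out error.

```python
# Illustrative sketch only: compare "importance" with "does the model actually
# need it?" on a hypothetical synthetic regression problem.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# "Importance": how much shuffling a feature degrades the fitted model.
imp = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

# "Is it good?": how much the held-out error changes if the feature is removed
# and the model is refit without it.
baseline_mae = mean_absolute_error(y_val, model.predict(X_val))
for j in range(X.shape[1]):
    cols = [c for c in range(X.shape[1]) if c != j]
    reduced = RandomForestRegressor(random_state=0).fit(X_train[:, cols], y_train)
    mae_without = mean_absolute_error(y_val, reduced.predict(X_val[:, cols]))
    print(f"feature {j}: importance={imp.importances_mean[j]:.3f}, "
          f"MAE change if dropped={mae_without - baseline_mae:+.3f}")
```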
What I Read: explainability, survival analysis
https://medium.com/responsibleml/survex-model-agnostic-explainability-for-survival-analysis-94444e6ce83d survex: model-agnostic explainability for survival analysis, by Mikołaj Spytek, Sep 19. “… survival models… tells us what is the probability of an event not happening until a given time t…. The complexity…
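survex itself is an R package, so the sketch below is not its API; it is just a small Python illustration (with made-up follow-up times) of the quantity the excerpt describes: the survival function S(t), the probability that the event has not happened by time t, here estimated with a Kaplan-Meier curve from lifelines.

```python
# Estimate S(t) = P(event has not happened by time t) on hypothetical data.
from lifelines import KaplanMeierFitter

durations = [5, 6, 6, 2, 4, 4, 7, 10, 12, 3]   # follow-up times (made up)
events    = [1, 0, 1, 1, 1, 0, 1, 0, 1, 1]     # 1 = event observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)

print(kmf.survival_function_)   # S(t) at each observed time
print(kmf.predict(6))           # probability of no event by t = 6
```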
What I Read: Visual Explanation of Classifiers
https://ai.googleblog.com/2022/01/introducing-stylex-new-approach-for.html Introducing StylEx: A New Approach for Visual Explanation of Classifiers, Tuesday, January 18, 2022, posted by Oran Lang and Inbar Mosseri, Software Engineers, Google Research. “Previous approaches for visual explanations of classifiers…
What I Read: Interpretable Time Series
https://ai.googleblog.com/2021/12/interpretable-deep-learning-for-time.html Interpretable Deep Learning for Time Series Forecasting, Monday, December 13, 2021, posted by Sercan O. Arik, Research Scientist, and Tomas Pfister, Engineering Manager, Google Cloud. “Multi-horizon forecasting, i.e. predicting variables-of-interest at…
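For context on the term in the excerpt: this is not the Temporal Fusion Transformer the post describes, just a toy sketch (synthetic sine series, ordinary linear regression) of what “multi-horizon” means, i.e. predicting several future time steps jointly from lagged inputs.

```python
# Toy multi-horizon setup: use the last LAGS values to predict the next H steps.
import numpy as np
from sklearn.linear_model import LinearRegression

series = np.sin(np.arange(300) / 10.0)   # hypothetical univariate series
LAGS, H = 12, 3                          # 12 past values -> 3 future values

X, Y = [], []
for t in range(LAGS, len(series) - H):
    X.append(series[t - LAGS:t])         # past window
    Y.append(series[t:t + H])            # multi-horizon target
X, Y = np.array(X), np.array(Y)

model = LinearRegression().fit(X, Y)     # multi-output regression
print(model.predict(series[-LAGS:].reshape(1, -1)))  # forecasts for next 3 steps
```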
What I Read: Non-Technical Guide to Interpreting SHAP
https://www.aidancooper.co.uk/a-non-technical-guide-to-interpreting-shap-analyses/ Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses, by Aidan Cooper, Nov 1, 2021. “With interpretability becoming an increasingly important requirement for machine learning projects, there’s a growing need for…
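The linked guide is about reading SHAP plots rather than producing them, but for reference, here is a minimal sketch of how such a plot is typically generated (dataset and model choices are mine, not the article’s).

```python
# Minimal SHAP usage: explain a tree model and draw the beeswarm summary plot.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Summary plot: which features drive predictions, and in which direction.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```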