
Gig: A new method for explaining complex ensemble ML models - ecurb
https://www.zest.ai/blog/introducing-generalized-integrated-gradients-gig-a-practical-method-for-explaining-diverse-ensemble-machine-learning-models
======
manthideaal
I think that assigning credit to individual features doesn't work when there
are strong interactions among features. I would like to see a method for
constructing a map that describes sets of strongly interacting features,
that is, new features that are not reducible to individual features, in order
to explain how the model works.

Edit: (1) is a good first step into Shapley value theory and its applications:

(1) [https://medium.com/datalab-log/understanding-the-impact-
of-f...](https://medium.com/datalab-log/understanding-the-impact-of-features-
and-data-through-shapley-values-f235489b0b3e)
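To make the interaction point concrete, here is a toy sketch (not from the post or the linked article) of exact Shapley value computation. The `model` is a made-up AND function where neither feature does anything alone; Shapley values split the interaction's credit evenly between the two features, which illustrates why per-feature attributions can obscure the interaction itself:

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical model with a strong interaction: output is 1 only
    # if BOTH feature 0 and feature 1 are "on". Neither feature has
    # any effect on its own.
    return 1.0 if x[0] and x[1] else 0.0

def shapley(model, instance, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    Credit for feature i is its average marginal contribution over
    every subset S of the other features, weighted by |S|!(n-|S|-1)!/n!.
    Features outside the coalition take their baseline value.
    """
    n = len(instance)
    values = [0.0] * n
    players = list(range(n))
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                on = set(S)
                with_i = [instance[j] if j in on or j == i else baseline[j]
                          for j in players]
                without_i = [instance[j] if j in on else baseline[j]
                             for j in players]
                values[i] += weight * (model(with_i) - model(without_i))
    return values

print(shapley(model, instance=[1, 1], baseline=[0, 0]))  # → [0.5, 0.5]
```

Each feature gets 0.5 even though the attribution really belongs to the pair jointly, which is exactly the information a map of interacting feature sets would preserve.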

------
cosmic_ape
Not a big fan of the idea of assigning credit to individual features, but as
far as this idea goes, this seems like a very useful improvement.

The blog mentions an evaluation on some loan-related data, and this data does
not appear in the paper. I wonder whether it is available somewhere.

------
ianandrich
Interesting. Has anyone played with this yet?

