Awesome, I love seeing more and more interpretability frameworks appearing. It's incredibly important, especially when selling your solution to higher-ups.
There's another solution called LIME which seems to take a similar but more general, model-agnostic approach; I like this more tailored idea, though, as it'll probably give a better interpretation of the NLP questions. Roughly, LIME only needs the model's probability outputs and perturbs the input text to fit a local explanation, something like the sketch below.
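Just to illustrate the difference, here's a minimal sketch of applying LIME to a generic text classifier (I'm writing this from memory using the lime package and a throwaway scikit-learn 20 newsgroups pipeline, so treat the exact details as an assumption rather than gospel):

    # Rough sketch: LIME explaining a black-box text classifier.
    # (Classifier choice and dataset are just placeholders for illustration.)
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    # Train a simple classifier to act as the "black box".
    categories = ['sci.med', 'sci.space']
    train = fetch_20newsgroups(subset='train', categories=categories)
    pipeline = make_pipeline(TfidfVectorizer(), MultinomialNB())
    pipeline.fit(train.data, train.target)

    # LIME perturbs the document, queries predict_proba on the perturbed
    # versions, and fits a local linear model to weight individual words.
    explainer = LimeTextExplainer(class_names=categories)
    explanation = explainer.explain_instance(
        train.data[0],           # the document to explain
        pipeline.predict_proba,  # any function text -> class probabilities
        num_features=6,          # top words to report
    )
    print(explanation.as_list())  # [(word, weight), ...]

The point being that LIME never looks inside the model, which is exactly why it's more general but also why a purpose-built NLP explanation can be sharper.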
Interpretability is super important... first for technical debugging, and even more so for giving domain experts a view into the model's inner workings (otherwise they just have to trust the ML model blindly).
Slightly off topic, but the analogy to Deep Thought having to design a system just to explain what the actual Question was is pretty amazing. Douglas Adams was incredibly prescient.