
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models - polm23
https://allennlp.org/interpret
======
londogard
Awesome, I love seeing more and more frameworks for interpretability appear.
It's incredibly important, especially when selling your solution to higher-ups.

There's another solution named LIME, which seems to take a similar but more
general approach. I like this more tailored idea, as it'll probably give
better interpretations for NLP tasks.
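
To make the "more general" point concrete: LIME is model-agnostic and only
needs a function mapping raw texts to class probabilities, so it works with any
classifier. A rough sketch with the lime package (the toy scikit-learn model,
class names, and example sentence below are just placeholders):

    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-in classifier: any black box that maps a list of strings
    # to an (n_samples, n_classes) probability array would do.
    train_texts = ["great movie, loved it", "terrible film, hated it",
                   "wonderful and fun", "boring and awful"]
    train_labels = [1, 0, 1, 0]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)

    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    exp = explainer.explain_instance(
        "a wonderful, fun movie",  # instance to explain (placeholder)
        clf.predict_proba,         # the only hook LIME needs into the model
        num_features=4,            # top tokens to report
        num_samples=500,           # perturbation samples; default is larger
    )
    print(exp.as_list())           # [(token, weight), ...] for the "positive" class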

~~~
polm23
LIME's authors came out with a new method called Anchors last year:

[https://homes.cs.washington.edu/~marcotcr/aaai18.pdf](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf)

[https://github.com/marcotcr/anchor](https://github.com/marcotcr/anchor)

------
ivan_ah
Wow, this is amazing. Check out their interactive demo that lets you see
visualizations for various tasks: [https://demo.allennlp.org/coreference-resolution/MTA3MDQ4Nw=...](https://demo.allennlp.org/coreference-resolution/MTA3MDQ4Nw==)

Interpretability is super important... first for technical debugging, and even
more so for giving domain experts a view into the model's inner workings
(otherwise they have to just trust the ML model blindly).

~~~
carljoseph
Very interesting demo. Thanks for sharing. Unfortunately it doesn't seem to
resolve Winograd schemas[0] correctly.

[0]
[https://en.wikipedia.org/wiki/Winograd_Schema_Challenge](https://en.wikipedia.org/wiki/Winograd_Schema_Challenge)

~~~
lifeisstillgood
>>> The city councilmen refused the demonstrators a permit because they
[feared/advocated] violence.

I fear that quite a lot of humans won't get that either. Am I being overly
pessimistic?

------
russellbeattie
Slightly off topic, but there's an amazing analogy here to Deep Thought having
to design a system to explain what the actual Question was. Douglas Adams was
incredibly prescient.

