AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models (allennlp.org)
87 points by polm23 on Sept 28, 2019 | 6 comments



Awesome, I love seeing more and more interpretability frameworks appear. Interpretability is incredibly important, especially when selling your solution to higher-ups.

There's another solution named LIME which takes a similar but more general approach. I like this more tailored idea, as it'll probably give better interpretations for NLP tasks.
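
For context, here's a minimal sketch of what LIME usage looks like for text. The toy classifier and all names are illustrative, not from the article; only the lime API calls themselves are real:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    # Toy sentiment classifier, just so the example is self-contained.
    texts = ["great movie", "terrible movie", "loved it", "hated it"]
    labels = [1, 0, 1, 0]
    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(texts, labels)

    # LIME perturbs the input (dropping words), queries the black-box
    # model on the perturbed samples, and fits a sparse linear model
    # locally; its weights are per-word contributions.
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    explanation = explainer.explain_instance(
        "I loved this great movie",
        pipeline.predict_proba,  # any f(list[str]) -> probabilities works
        num_features=4,
    )
    print(explanation.as_list())  # [(word, weight), ...]

Note that LIME never looks inside the model; it only needs the prediction function, which is why it generalizes beyond NLP.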


LIME's authors came out with a new method called Anchors last year:

https://homes.cs.washington.edu/~marcotcr/aaai18.pdf

https://github.com/marcotcr/anchor
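
The idea behind Anchors is easy to state: rather than per-word weights, find a small set of words whose presence alone keeps the prediction stable while everything else is perturbed. Here's a toy greedy version of that rule (the paper's actual search uses a multi-armed bandit and smarter perturbation sampling; everything below is an illustrative sketch, not the authors' code):

    import random

    def anchor_precision(tokens, anchor, predict, target, n_samples=200):
        # Estimate P(prediction == target) when anchor words are kept
        # fixed and every other word is masked half the time.
        hits = 0
        for _ in range(n_samples):
            sample = [t if i in anchor or random.random() < 0.5 else "UNK"
                      for i, t in enumerate(tokens)]
            hits += predict(" ".join(sample)) == target
        return hits / n_samples

    def greedy_anchor(tokens, predict, threshold=0.95):
        # Greedily add the word that most raises precision until the
        # anchor "holds" on at least `threshold` of perturbed samples.
        target = predict(" ".join(tokens))
        anchor = set()
        while anchor_precision(tokens, anchor, predict, target) < threshold:
            best = max(set(range(len(tokens))) - anchor,
                       key=lambda i: anchor_precision(
                           tokens, anchor | {i}, predict, target))
            anchor.add(best)
        return [tokens[i] for i in sorted(anchor)]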


Wow, this is amazing. Check out their interactive demo that lets you see visualizations for various tasks: https://demo.allennlp.org/coreference-resolution/MTA3MDQ4Nw=...

Interpretability is super important... first for technical debugging, and even more so for giving domain experts a view of the model's inner workings (otherwise they just have to trust the ML model blindly).
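
What I like is that on the library side this is only a couple of lines on top of an existing Predictor. A sketch, with a placeholder archive path (substitute any trained AllenNLP model):

    from allennlp.predictors import Predictor
    from allennlp.interpret.saliency_interpreters import SimpleGradient

    # Placeholder path; substitute a real trained model archive.
    predictor = Predictor.from_path("/path/to/model.tar.gz")

    # SimpleGradient scores each input token by the gradient of the
    # output w.r.t. its embedding; IntegratedGradient and SmoothGradient
    # interpreters ship alongside it.
    interpreter = SimpleGradient(predictor)
    saliency = interpreter.saliency_interpret_from_json(
        {"sentence": "The movie was surprisingly good."}
    )
    print(saliency)  # {'instance_1': {'grad_input_1': [per-token scores]}}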


Very interesting demo. Thanks for sharing. Unfortunately it doesn't seem to resolve Winograd schemas[0] correctly.

[0] https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
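
If you want to script the same check instead of using the demo UI, something like this should work against the coreference predictor (the archive path is a placeholder; use whichever coref model archive AllenNLP currently publishes):

    from allennlp.predictors import Predictor

    # Placeholder path; substitute the published coreference model archive.
    predictor = Predictor.from_path("/path/to/coref-model.tar.gz")

    for verb in ("feared", "advocated"):
        doc = ("The city councilmen refused the demonstrators a permit "
               f"because they {verb} violence.")
        result = predictor.predict(document=doc)
        tokens = result["document"]
        # 'clusters' groups coreferent spans; a correct reading links
        # "they" to the councilmen or the demonstrators depending on the verb.
        for cluster in result["clusters"]:
            print(verb, [" ".join(tokens[s:e + 1]) for s, e in cluster])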


>>> The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.

I fear that quite a lot of humans wouldn't get that either - am I being overly pessimistic?


Slightly off topic, but the parallel between this and Deep Thought having to design a system to explain what the actual Question was is pretty amazing. Douglas Adams was incredibly prescient.



