
Closing the Gap Between Performance and Interpretability in Visual Reasoning - andyjohnson0
https://arxiv.org/abs/1803.05268
======
andyjohnson0
For a fairly accessible description of this work, see [1]. As I understand it,
they're addressing the problem that the internal functioning of DNNs is hard
to interpret, by building the network out of composable NN modules whose
intermediate outputs can be inspected.

[1] http://news.mit.edu/2018/mit-lincoln-laboratory-ai-system-solves-problems-through-human-reasoning-0911
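To illustrate the general idea of composable modules (this is a toy sketch of the concept, not the paper's actual architecture: the module names `find`, `filter_shape`, `count` and the scene representation are made up for illustration), each module is a small named operation, and a question compiles into a pipeline of them, so every intermediate result can be inspected:

```python
# Toy sketch of composable reasoning modules (illustrative only; not the
# architecture from the paper). Each module is a plain function over a
# simple scene representation, so intermediate outputs are inspectable.

def find(scene, attribute):
    """Module: select objects carrying a given attribute."""
    return [obj for obj in scene if attribute in obj["attrs"]]

def filter_shape(objects, shape):
    """Module: keep only objects of a given shape."""
    return [obj for obj in objects if obj["shape"] == shape]

def count(objects):
    """Module: count the selected objects."""
    return len(objects)

scene = [
    {"shape": "cube",   "attrs": {"red"}},
    {"shape": "sphere", "attrs": {"red"}},
    {"shape": "cube",   "attrs": {"blue"}},
]

# "How many red cubes are there?" becomes a module pipeline; each step's
# output can be examined to see *why* the network gave its answer.
step1 = find(scene, "red")
step2 = filter_shape(step1, "cube")
answer = count(step2)
print(answer)  # → 1
```

The interpretability win is that each step exposes a meaningful intermediate (here, the list of selected objects) rather than an opaque hidden activation.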

