
Explain Yourself Leveraging Language Models for Commonsense Reasoning - sel1
https://arxiv.org/abs/1906.02361
======
cs702
The model is trained end to end to answer commonsense-reasoning questions
_after_ generating explanations for its answers, using human-written
explanations as part of the training data.
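As a rough illustration of the explain-then-predict setup (a toy sketch, not the paper's actual model: the real system fine-tunes a language model on human explanations and feeds its output to a neural classifier; here both stages are hard-coded stand-ins, and the question, choices, and explanation text are all invented):

```python
# Toy sketch of the two-stage pipeline: first generate an explanation
# conditioned on the question, then answer conditioned on both the
# question and the generated explanation.

def generate_explanation(question: str) -> str:
    # Stand-in for the fine-tuned language model; in the paper this
    # stage is learned from human-written explanations. Hard-coded here.
    return "Jellyfish are sea animals, so they live in the ocean."

def answer(question: str, choices: list[str], explanation: str) -> str:
    # Stand-in for the classifier: pick the choice sharing the most
    # words with the explanation (a crude proxy for conditioning on it).
    expl_words = {w.strip(".,!?").lower() for w in explanation.split()}
    def overlap(choice: str) -> int:
        return len({w.lower() for w in choice.split()} & expl_words)
    return max(choices, key=overlap)

question = "Where would you find a jellyfish?"
choices = ["ocean", "desert", "library"]
explanation = generate_explanation(question)
prediction = answer(question, choices, explanation)
print(prediction)  # "ocean"
```

The point of the structure, as the paper argues, is that forcing the answer to be conditioned on a generated explanation gives the model a useful intermediate representation.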

This results in improved performance on the question-answering task.

This is fascinating, although in hindsight not entirely surprising: inducing
a machine to learn to model human explanations helps it perform better at
test time.

A natural question follows:

Can we find ways to induce much larger models to learn to generate human
explanations about a growing number of subjects of increasing complexity?

------
MasterScrat
Reminds me of the concept of "Social Stories":
[https://en.wikipedia.org/wiki/Social_Stories](https://en.wikipedia.org/wiki/Social_Stories)

> Social Stories are a concept devised by Carol Gray in 1991 to improve the
> social skills of people with autism spectrum disorders (ASD). The objective
> is to share information, which is often through a description of the events
> occurring around the subject and also why.

I'm wondering if this kind of "common sense" interaction could be leveraged
to train models.

Here's a concrete example, "Being Angry and Safe":
[https://youtu.be/R8c_Br8I_Tc?t=28](https://youtu.be/R8c_Br8I_Tc?t=28)

