That's possible. It's using Andrej Karpathy's char-rnn. Presumably it's doing something like running the trained model with 'th sample.lua -model something.t7 -primetext "$WORD, "' and taking everything after the comma as the definition. So to reverse this, you would take the dictionary corpus, strip the leading '$WORD, ' from each line, and append ', $WORD' to the end. The model then trains to predict the final word conditioned on the definition, and you can sample from the new model the same way: feed in '-primetext "$DEFINITION, "' to get out a word.
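If it helps, here's a rough sketch of that corpus reversal in Python (the file names and the exact 'word, definition' line format are assumptions on my part):

    # Turn each "word, definition" line into "definition, word" so the model
    # learns to predict the headword from the definition instead.
    with open("dictionary.txt") as src, open("reversed.txt", "w") as dst:
        for line in src:
            word, sep, definition = line.rstrip("\n").partition(", ")
            if not sep:
                continue  # skip lines that don't match the expected shape
            dst.write(definition + ", " + word + "\n")

Train char-rnn on reversed.txt, then prime the sampler with a definition instead of a word.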
Hey, I'm the creator. This would probably work; you'd want to use a unique delimiter character that doesn't appear anywhere else in the corpus, so not a comma. (I'm using the pipe character.)
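With a pipe delimiter, the only change to the sketch above is where it splits and rejoins (assuming 'word|definition' lines in the source corpus):

    # Pipe-delimited variant: split "word|definition", emit "definition|word".
    word, sep, definition = line.rstrip("\n").partition("|")
    if sep:
        dst.write(definition + "|" + word + "\n")

and at sampling time you'd prime with something like 'th sample.lua -model reversed.t7 -primetext "$DEFINITION|"' (the checkpoint name is made up, and the exact invocation is a guess).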