

A Neural Network for Factoid Question Answering over Paragraphs - mbeissinger
http://cs.umd.edu/~miyyer/qblearn/

======
jbarrow
I believe that training methods for recursive neural networks will be one of
the most interesting areas of future research in the field. We've seen some
pretty revolutionary techniques come out for feedforward models in the last 30
years, but with recurrent/recursive networks it's often difficult to see
what's going on inside the network, let alone train it properly. We've already
seen what deep RNNs can do when trained well [1], but I think this is only the
beginning.

[1] [http://arxiv.org/pdf/1303.5778.pdf](http://arxiv.org/pdf/1303.5778.pdf)
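For anyone unfamiliar with the recursive part: the idea is to compose word
vectors bottom-up along a parse tree, so each internal node gets a vector
built from its children. Here's a minimal sketch of that composition; the
weight names, dimensions, and simple tanh combiner are illustrative toys, not
the paper's actual dependency-tree model:

```python
import numpy as np

# Toy sketch of recursive composition over a binary parse tree.
# W_left / W_right / b and the dimension d are made-up illustration values.
d = 4
rng = np.random.default_rng(0)
W_left = rng.normal(scale=0.1, size=(d, d))
W_right = rng.normal(scale=0.1, size=(d, d))
b = np.zeros(d)

def compose(left, right):
    """Combine two child vectors into a parent vector."""
    return np.tanh(W_left @ left + W_right @ right + b)

def encode(tree):
    """A tree is either a leaf vector (ndarray) or a (left, right) pair."""
    if isinstance(tree, np.ndarray):
        return tree
    left, right = tree
    return compose(encode(left), encode(right))

# Toy "word vectors" for a three-word phrase parsed as ((w1 w2) w3):
w1, w2, w3 = (rng.normal(size=d) for _ in range(3))
root = encode(((w1, w2), w3))
print(root.shape)  # (4,)
```

Training then backpropagates through this same tree structure, which is part
of why it's hard to see what's going on: the computation graph changes with
every input sentence.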

------
Terr_
"Factoid"?

Technically speaking, a factoid is something which _seems_ like a fact, but
which is actually false/unconfirmed.

------
discardorama
I just took the last sentence from the paragraph, " _Name this German author
of The Magic Mountain and Death in Venice_ ", put it in Google, and the answer
popped right up.

I guess I'll have to read the paper and play with the data set, but the
example wasn't very convincing. :)

~~~
wodenokoto
It guessed the author based on the first sentence of that paragraph, not the
last.

------
sadfaceunread
I just started downloading the code and training data; I'm interested in which
quiz bowl packets were used as the training data. Most college-level quiz bowl
packets are posted publicly on the web, but they are copyrighted by the
original authors and are unlikely to be licensed for such a purpose.

------
morenoh149
Can't find the code. Is only the paper being published?

~~~
lucidrains
[http://cs.umd.edu/~miyyer/qblearn/qanta.tar.gz](http://cs.umd.edu/~miyyer/qblearn/qanta.tar.gz)

