
No training required: Exploring random encoders for sentence classification - jimarcey
https://code.fb.com/ml-applications/random-encoders/
======
jeromebaek
Interesting paper. I'd like to know how this compares with even more naive
methods like simple summation. If this method is an application of Cover's
theorem it should handily beat summation or any other simple method that
places the sentence embedding in the same dimension as the word embeddings.
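The contrast between summation pooling and a higher-dimensional random projection (the paper's BOREP-style encoder) can be sketched in a few lines. The dimensions and the uniform initialization range are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_sent = 300, 4096

# Toy stand-ins for the word embeddings of a 5-word sentence.
words = rng.standard_normal((5, d_word))

# Naive baseline: sum pooling keeps the sentence in d_word dimensions.
boe = words.sum(axis=0)            # shape (300,)

# Random projection: a fixed, untrained matrix lifts each word into a much
# higher-dimensional space before pooling (assumed init range).
W = rng.uniform(-1 / np.sqrt(d_word), 1 / np.sqrt(d_word),
                size=(d_sent, d_word))
borep = (words @ W.T).sum(axis=0)  # shape (4096,)
```

Both encoders are training-free; the only difference is the dimensionality of the space the pooled vector lives in.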

~~~
yorwba
From the "related work" section of the paper:

 _The nowadays surprisingly poor performance of the models in Hill et al.
(2016) can at least partly be explained because 1) they use poorer (older)
word embeddings; and 2) FastSent sentence representations are of the same
dimensionality as the input word embeddings, while they are compared in the
same table to much higher-dimensional representations._

See also figure 1 for the increase in performance across tasks when the
embedding dimension is increased.

------
zuzun
How does SentEval work? As I understand it, it trains a model on top of the
sentence embeddings for almost all tasks. Could the baseline BOE be worse
because its 300-dimensional input gives the model fewer trainable
parameters than the 4096 dimensions of all the other embeddings?
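The gap in trainable parameters is easy to quantify if SentEval's downstream model is a linear (logistic-regression-style) probe, which is an assumption here; a hypothetical helper:

```python
def probe_params(embed_dim: int, n_classes: int) -> int:
    """Trainable parameters of a linear probe on frozen embeddings:
    one weight vector plus a bias per class."""
    return embed_dim * n_classes + n_classes

# Binary task: 300-dim BOE baseline vs. a 4096-dim encoder.
print(probe_params(300, 2))   # 602
print(probe_params(4096, 2))  # 8194
```

So the probe on top of the 4096-dimensional embeddings has roughly 13x the capacity of the one on the BOE baseline, which is the confound the comment is asking about.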

------
anon1253
Love it. Especially the echo state network trick. I wonder how much of
BERT/ELMo performance is simply due to them having such a high
dimensionality. Not that there is anything wrong with that, it just makes
them a tad less practical for some applications.
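The echo state network trick amounts to running words through a random, untrained recurrent reservoir and pooling the states; only a readout on top would ever be trained. A minimal sketch with toy dimensions (the sizes, scaling, and pooling choice are assumptions, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
d_word, d_res = 50, 512  # toy sizes, far smaller than the paper's

# Random, untrained input and recurrent weights (the "reservoir").
W_in = rng.uniform(-0.1, 0.1, size=(d_res, d_word))
W_res = rng.standard_normal((d_res, d_res))
# Rescale so the spectral radius is below 1 (the echo state property).
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def esn_encode(words: np.ndarray) -> np.ndarray:
    """Run word vectors through the random reservoir, mean-pool states."""
    h = np.zeros(d_res)
    states = []
    for x in words:
        h = np.tanh(W_in @ x + W_res @ h)
        states.append(h)
    return np.mean(states, axis=0)

sent = esn_encode(rng.standard_normal((7, d_word)))
print(sent.shape)  # (512,)
```

Nothing here is ever updated by gradient descent, which is why the high output dimensionality, rather than learning, carries so much of the performance.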

------
moneil971
“A strong, novel baseline for sentence embeddings that requires no training
whatsoever.”

