Hacker News

What you describe is word2vec, which is one kind of word embedding that is pre-learned. Often (with Keras, at least), the embedding is instead learned simultaneously with the rest of the deep learning network: Keras uses word indices to progressively adjust a set of randomly initialized n-dimensional vectors. I'm mentioning this because I suspect it may be the case for you too; the various kinds of embedding were very confusing to me at first.
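To make that concrete, here is a minimal numpy sketch of what an embedding layer boils down to: a lookup table of randomly initialized vectors, where a training step only adjusts the rows that were looked up. The vocabulary, gradient, and learning rate here are all made up for illustration; this is not Keras's actual implementation, just the idea behind it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary: each word is mapped to an integer index.
word_index = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
vocab_size, embed_dim = len(word_index), 4

# An embedding layer is essentially a table of randomly initialized vectors.
embeddings = rng.normal(scale=0.1, size=(vocab_size, embed_dim))

def embed(indices):
    """Look up the vector for each word index (the embedding 'forward pass')."""
    return embeddings[indices]

# During training, the gradient flows back only into the rows that were
# looked up, nudging those word vectors like any other weight.
indices = np.array([word_index["cat"], word_index["sat"]])
grad = rng.normal(size=(2, embed_dim))   # stand-in gradient from some loss
lr = 0.01
embeddings[indices] -= lr * grad         # progressive adjustment

vectors = embed(indices)
print(vectors.shape)  # (2, 4)
```

Over many such updates, words that appear in similar contexts end up with similar rows, which is all "learning the embedding with the network" means.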

That said, I just marveled at word2vec when I stumbled upon it. The encoding of meaning as vector dimensions was mind-expanding for me.
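The classic illustration of meaning-as-dimensions is the analogy king - man + woman ≈ queen. Here is a toy sketch with hand-crafted 3-d vectors (dimension 0 loosely "male", 1 "female", 2 "royal") rather than real word2vec vectors, just to show the arithmetic:

```python
import numpy as np

# Hand-crafted toy vectors, not trained ones: purely illustrative.
vecs = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land nearest to queen.
query = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w != "king"), key=lambda w: cosine(query, vecs[w]))
print(best)  # queen
```

With real trained embeddings the dimensions are not interpretable labels like this, but the same vector arithmetic still recovers analogies surprisingly often.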

These guys laid a lot of the groundwork for embeddings before word2vec while also showing practical applications in finance https://www.elastic.co/blog/generating-and-visualizing-alpha...

It's also possible to initialise a word embedding matrix with vectors trained with word2vec (or any other type of pre-trained embeddings, fastText and GloVe being common) to get a performance boost. Of course, you have to ensure the embedding matrix is pre-populated with respect to the word-index mapping. Then, as you describe, these can be adjusted during training, although sometimes they're kept fixed. The boost is most noticeable when you're training a model on small amounts of data.
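A minimal sketch of that pre-population step, assuming a made-up `pretrained` dict standing in for vectors loaded from word2vec or GloVe: each row of the matrix must line up with the model's own word-index mapping, and out-of-vocabulary words keep their random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 3

# Hypothetical pre-trained vectors (stand-in for a word2vec/GloVe lookup).
pretrained = {
    "cat": np.array([0.1, 0.2, 0.3]),
    "mat": np.array([0.4, 0.5, 0.6]),
}

# The model's own word-index mapping; matrix rows must follow it.
word_index = {"the": 0, "cat": 1, "sat": 2, "mat": 3}

matrix = rng.normal(scale=0.1, size=(len(word_index), embed_dim))
for word, idx in word_index.items():
    if word in pretrained:            # out-of-vocabulary words stay random
        matrix[idx] = pretrained[word]

print(np.allclose(matrix[1], pretrained["cat"]))  # True
```

This matrix would then be passed as the initial weights of the embedding layer, and optionally frozen if you don't want the pre-trained vectors to drift.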

For me, the marvel of word2vec is that we do not need to explicitly embed meaning into words; it emerges from usage alone. A move toward explicit semantic understanding and meaning is not essential, though this post thinks that a semantic representation is the next logical step.
