
Well, sort of. They do have a meaning; it’s just probably not a concept that humans can easily find or understand. If you hypothetically had a large labeled corpus for a bunch of different features, you could fit linear regressions over the embedding space to find vectors that represent exactly (perhaps not uniquely) the meaning you’re looking for... and from that you could imagine a function that transforms the existing embedding space into an organized one with meaning.
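A minimal sketch of that idea, assuming NumPy/scikit-learn and a hypothetical labeled corpus (the random arrays below stand in for real word vectors and real human labels):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(1000, 300))   # stand-in for pretrained word vectors
    happiness_scores = rng.normal(size=1000)    # stand-in for per-word human labels

    # The regression coefficients form a direction in embedding space
    # that best predicts the labeled feature.
    reg = LinearRegression().fit(embeddings, happiness_scores)
    happiness_direction = reg.coef_ / np.linalg.norm(reg.coef_)

    # Score any vector along the learned concept axis:
    score = embeddings[0] @ happiness_direction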



No, that is not true: everything is defined only up to an orthogonal rotation. It is not an SVD (though even for SVD, usually only the first few dimensions have a human interpretation).

Instead, you can:

- rotate it with SVD (works really well when working on a subset of words)

- project it onto given axes (e.g. "woman - man" and "king - man"); both options are sketched below
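A rough sketch of both options, with a toy random `vec` lookup standing in for real pretrained vectors (substitute e.g. GloVe in practice):

    import numpy as np

    rng = np.random.default_rng(0)
    vec = {w: rng.normal(size=50) for w in ["king", "queen", "man", "woman"]}

    # Option 1: SVD-rotate a subset of words so that the dimensions are
    # ordered by variance explained within that subset.
    subset = np.stack([vec[w] for w in ["king", "queen", "man", "woman"]])
    centered = subset - subset.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    rotated = centered @ Vt.T   # dimension 0 now carries the most variance

    # Option 2: project onto explicitly chosen difference axes.
    def coord(v, axis):
        axis = axis / np.linalg.norm(axis)
        return v @ axis

    gender_axis = vec["woman"] - vec["man"]
    royalty_axis = vec["king"] - vec["man"]
    print(coord(vec["queen"], gender_axis), coord(vec["queen"], royalty_axis))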


You could still interchange the dimensions arbitrarily. You can't say "dimension 1 = happiness": a re-training would not replicate that, and would not necessarily produce a dimension for "happiness" at all.


I’m not saying that. I’m saying you could identify a linear combination of x, y, z that approximates happiness, and by doing this for many concepts, transform the matrix into an ordered state where each dimension on its own is a labeled concept.
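A sketch of that transform, assuming one normalized direction per concept has already been learned (random stand-ins here): stacking them into a matrix W re-expresses every embedding with one coordinate per named concept.

    import numpy as np

    d, n_concepts = 300, 3
    rng = np.random.default_rng(1)

    # Stand-ins for learned directions, e.g. happiness, gender, royalty.
    concept_directions = rng.normal(size=(n_concepts, d))
    W = concept_directions / np.linalg.norm(concept_directions, axis=1, keepdims=True)

    embeddings = rng.normal(size=(1000, d))
    labeled_coords = embeddings @ W.T   # column j = score on concept j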

People are quick to claim that embedding dimensions have no meaning, but if meaningful dimensions are your goal, and your embedding space is good, you’re not terribly far from getting there.




