
> Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.

Which, incidentally, is the main reason why deep learning and LLMs are effective in the first place.

A vector of a few thousand dimensions would be woefully inadequate to represent all of human knowledge, if not for the fact that it works as the projection of a much higher-dimensional, potentially infinite-dimensional vector representing all possible knowledge. The smaller vector works in practice as a projection precisely because any two such vectors are almost always close to orthogonal.
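Not part of the original comment, but a minimal numerical sketch of the near-orthogonality claim: sample pairs of random Gaussian vectors in a dimension of the same order as typical embedding sizes (4096 here, an arbitrary choice) and look at the angle between them. NumPy is assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4096                      # dimension on the order of typical embedding sizes

    a = rng.standard_normal((2_000, d))
    b = rng.standard_normal((2_000, d))

    # Cosine similarity of each random pair; the angles concentrate tightly around 90 degrees.
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    angles = np.degrees(np.arccos(cos))
    print(f"mean angle = {angles.mean():.2f} deg, std = {angles.std():.2f} deg")

The mean comes out at essentially 90 degrees with a spread of around a degree, which is what "basically orthogonal" means in practice.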





Two random vectors are almost always neither collinear nor exactly orthogonal. So what you mean is either "not collinear", which is a trivial statement, or something like "their dot product is much smaller than abs(length(vecA) * length(vecB))", which is probably interesting but still not stated very precisely.
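Again not from the thread, just a hedged check of how "much smaller" the dot product actually is relative to the product of the lengths: for random Gaussian vectors the mean |cos| shrinks roughly like 1/sqrt(d) as the dimension grows.

    import numpy as np

    rng = np.random.default_rng(0)

    for d in (10, 100, 1_000, 10_000):
        a = rng.standard_normal((2_000, d))
        b = rng.standard_normal((2_000, d))
        cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        # For Gaussian vectors, E|cos| is about sqrt(2 / (pi * d)), i.e. it decays like 1/sqrt(d).
        print(f"d={d:6d}  mean |cos| = {np.abs(cos).mean():.4f}   1/sqrt(d) = {1/np.sqrt(d):.4f}")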

Well, the actually interesting part is that as the vector dimension grows, random vectors become almost orthogonal, and the number of almost-orthogonal vectors you can fit grows exponentially with the dimension. This is probably the most important reason why text embeddings work: you take some structure from a 10^6-dimensional space, project it down to 10^3 dimensions, and you can still approximately preserve the distances between all the vectors (essentially the Johnson-Lindenstrauss lemma).
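A minimal sketch of that projection idea, not from the comment and using toy sizes (10^4 down to 10^3 rather than 10^6, so it runs quickly) and a plain random Gaussian projection: pairwise distances come out nearly unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d_high, d_low = 50, 10_000, 1_000

    x = rng.standard_normal((n, d_high))      # n points in the high-dimensional space

    # Random Gaussian projection, scaled so squared lengths are preserved in expectation.
    proj = rng.standard_normal((d_high, d_low)) / np.sqrt(d_low)
    y = x @ proj

    def pairwise_dists(pts):
        # Euclidean distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        sq = np.sum(pts ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T
        return np.sqrt(np.maximum(d2, 0.0))

    mask = ~np.eye(n, dtype=bool)
    ratio = pairwise_dists(y)[mask] / pairwise_dists(x)[mask]
    print(f"projected/original distance ratios: min={ratio.min():.3f}, max={ratio.max():.3f}")

The ratios cluster within a few percent of 1.0, i.e. the 10x smaller space still keeps the geometry of the point set.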




