Model interpretability goes out the window because the techniques we use for vectorization kinda suck. NLP is unnecessarily obsessed with self-supervision when it should be innovating in dimensionality reduction techniques.
It boggles my mind that I haven't seen anyone implement my idea.
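For what it's worth, the closest existing technique in that direction is probably count-based vectorization plus truncated SVD (classic LSA). A minimal sketch with a toy corpus and illustrative names, not necessarily the idea the poster has in mind:

```python
# A minimal sketch of dimensionality-reduction-based vectorization:
# build a word-word co-occurrence matrix (every raw dimension is an
# actual word, so it is directly interpretable) and compress it with
# truncated SVD, LSA-style. Corpus and names are illustrative only.
import numpy as np
from itertools import combinations

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs are pets".split(),
]

# Vocabulary and symmetric co-occurrence counts within each sentence.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for a, b in combinations(sent, 2):
        counts[idx[a], idx[b]] += 1
        counts[idx[b], idx[a]] += 1

# Truncated SVD: keep the top-k factors. Each factor is an explicit
# weighted combination of real words, which is what makes this more
# interpretable than an end-to-end learned embedding.
U, S, _ = np.linalg.svd(counts, full_matrices=False)
k = 2  # target dimensionality
embeddings = U[:, :k] * S[:k]  # one dense k-dim vector per word

for w in ("cat", "dog"):
    print(w, embeddings[idx[w]].round(3))
```

The appeal is that the raw dimensions correspond to real context words, so each SVD factor can be read back as a weighted list of actual words, rather than the opaque axes a self-supervised embedding gives you.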
Word2vec's popularity is the result of people valuing performance (i.e., accuracy) over interpretability.
What's mind-boggling to me is that I haven't seen anyone else come up with the idea independently.