I suspect that, given a reasonable prompt, it would absolutely discard certain phrases or concepts for others. It may find cross-checking and synthesis difficult, but "term families" are essentially a core idea of high-dimensional embeddings: related terms sit close together in embedding space (small Euclidean or cosine distances). I'm not especially well versed in LLMs, but I do believe this is represented in the models.
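To make the "related terms are close together" point concrete, here's a toy sketch. The vectors and words are made up purely for illustration (real embedding models use hundreds or thousands of dimensions, and you'd get the vectors from a trained model, not by hand):

```python
import math

# Hypothetical toy 4-dimensional embeddings, hand-picked so that
# "dog" and "puppy" point in similar directions while "carburetor"
# points elsewhere. Real models learn these vectors from data.
embeddings = {
    "dog":        [0.90, 0.80, 0.10, 0.00],
    "puppy":      [0.85, 0.75, 0.15, 0.05],
    "carburetor": [0.05, 0.10, 0.90, 0.80],
}

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means same direction, 0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Terms in the same "family" score much higher than unrelated terms.
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))
print(cosine_similarity(embeddings["dog"], embeddings["carburetor"]))
```

The same idea works with squared Euclidean distance (related terms have small distances); cosine similarity is just the more common framing for embeddings.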