Um, there is no indication of quantum mathematics being used here. Instead, the article just says this method was used in studying quantum-related data sets, and that "The new method gauges the importance of words in a document based on where they appear, rather than simply on how often they occur."
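For what it's worth, "where they appear rather than how often" is usually operationalized as a clustering statistic on a word's occurrence positions: topical words bunch up in the passages that are about them, while function words are scattered roughly uniformly. Here's a toy sketch of that flavor of metric in Python (my own version for illustration, not the paper's actual formula):

    from statistics import mean, stdev

    def clustering_score(positions, doc_length):
        # Gaps between successive occurrences, including the implicit
        # stretches before the first hit and after the last one.
        pts = [0] + sorted(positions) + [doc_length]
        gaps = [b - a for a, b in zip(pts, pts[1:])]
        if len(gaps) < 2:
            return 0.0
        # Evenly scattered words have similar-sized gaps; topical words
        # produce a few huge gaps and many tiny ones, which inflates the
        # coefficient of variation of the gap sizes.
        return stdev(gaps) / mean(gaps)

A word at positions [10, 12, 14, 900] in a 1000-token document scores about 1.9, while one at [100, 350, 600, 850] scores about 0.35, even though both occur four times.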
I've seen that sentence almost verbatim in a dozen different search engine proposals recently.
Agreed, it seems like a non-story. Lots of people with an interest in the field have ideas like this all the time. And it has barely been tested or compared against existing methods.
I'm trying not to be too snarky, as it is interesting and I enjoy all innovation (especially cross-pollination), but the only reason this was written is that the writer was able to shoehorn 'quantum' into the story.
I almost clicked on the title, realized it was probably linkbait, noticed it was New Scientist, then clicked the comments link fully expecting someone to have debunked it. I was not disappointed.
Read the article. It claims they have found a better estimate of word importance in a document than tf-idf, which would be very significant. Also, it doesn't seem to need text segmentation, which means no need for language-specific tokenizers. The research paper is here (haven't read it yet):
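For anyone who hasn't seen it, tf-idf is the baseline they claim to beat, and the textbook version fits in a few lines (a minimal sketch; real implementations differ in smoothing and length normalization):

    import math
    from collections import Counter

    def tf_idf(term, doc, corpus):
        # doc is a list of tokens; corpus is a list of such docs.
        tf = Counter(doc)[term] / len(doc)        # term frequency in the doc
        df = sum(1 for d in corpus if term in d)  # docs containing the term
        idf = math.log(len(corpus) / (1 + df))    # rarer terms score higher
        return tf * idf

The point is that tf-idf needs both a tokenized document and corpus-wide statistics, so a method that works within a single unsegmented document really would be notable.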
Hmm, I have no background here, but how can they determine what a 'term' is for frequency purposes without some form of tokenization? Unless they are using an arbitrary maximum length on 'term' sizes and eliminating small terms.
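That's one standard trick: treat every character n-gram up to some maximum length as a candidate 'term', which matches your guess about an arbitrary maximum. A sketch, assuming that's roughly what they do (I haven't read the paper either):

    from collections import Counter

    def char_ngrams(text, min_n=2, max_n=6):
        # Every substring of length min_n..max_n is a candidate 'term';
        # no language-specific word boundaries required.
        counts = Counter()
        for n in range(min_n, max_n + 1):
            for i in range(len(text) - n + 1):
                counts[text[i:i + n]] += 1
        return counts

Most of the resulting n-grams are junk, but an importance metric like the one described would then be responsible for separating the real terms from the noise.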