Earlier this year I published a paper [1] that was partly about recognizing place references in text. Several NLP libraries and gpt-3.5-turbo were used in the comparison. The comparison was not the focus of the paper, and newer LLMs are probably better, but in this specific case GPT had a lower precision score than most of the tested NLP libraries and was also a bit harder to handle when trying to force machine-readable output.
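For reference, the classic-pipeline side of that task is only a few lines, e.g. with spaCy (a minimal sketch, not the exact setup used in the paper; the example sentence and the en_core_web_sm model are assumptions):

    import spacy

    # Small English pipeline; install with:
    #   pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("The expedition left Kathmandu and crossed into Tibet.")

    # The GPE (geopolitical entity) and LOC labels cover most place references
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
    print(places)  # e.g. ['Kathmandu', 'Tibet']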
A lot of the old tools are still very relevant: they are far more resource-efficient, need less RAM and CPU, and run much faster, easily 100x cheaper. If you only have a few things to sort out, using an LLM is fine, but if you have a lot of computation to do, it's worth going classic.
I don't think there are any more, which is a shame, because spaCy was an amazing library, probably the library I have most enjoyed working with; it truly felt like a craftsman's belt for intelligent text transformation and insight. Some things like topic clouds can still be useful for creative work, but that is not where spaCy shines.
But ChatGPT can derive better insights, doesn't need pipelines, and doesn't need hard-coded approaches with their attendant issues. The (NLTK/Stanford-parser-style) dependency views are still interesting for linguistic purposes, though; see the sketch below.
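If you haven't played with those dependency views, they are only a few lines away in spaCy (a minimal sketch, again assuming the en_core_web_sm model is installed):

    import spacy
    from spacy import displacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The quick brown fox jumps over the lazy dog.")

    # Print each token with its dependency relation and syntactic head
    for token in doc:
        print(f"{token.text:<8} {token.dep_:<10} head={token.head.text}")

    # Or render the parse tree in a browser:
    # displacy.serve(doc, style="dep")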
[1] https://www.mdpi.com/1999-5903/16/3/87