I'm wondering how their LLM parsing 250 million words in 9 hours compares with the performance of traditional sentiment analysis.
Also, many existing sentiment analysis tools have a lot of research behind them that can be referenced when interpreting the results (known confounds, etc.). I don't think there is an equivalent for the LLM approach yet.
Pretty slow. I built a sentiment analysis service (https://classysoftware.io/), and at 250M words @ ~384 words per message I'm pushing 5.6 hours to crunch all that data. Even at that, I'm pretty sure there are ways to push it lower without sacrificing accuracy.
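For scale, here's a rough back-of-the-envelope of the throughput those numbers imply (the ~384 words per message is my figure, and the 5.6-hour runtime is for my service, not theirs):

```python
# Back-of-the-envelope throughput check for the numbers above.
# Assumptions: 250M words total, ~384 words per message,
# and a 5.6-hour wall-clock run.
TOTAL_WORDS = 250_000_000
WORDS_PER_MESSAGE = 384
RUNTIME_SECONDS = 5.6 * 3600

messages = TOTAL_WORDS / WORDS_PER_MESSAGE    # ~651k messages
msgs_per_sec = messages / RUNTIME_SECONDS     # ~32 messages/sec

print(f"{messages:,.0f} messages, {msgs_per_sec:.1f} msg/s")
```

So the gap between 5.6 and 9 hours is roughly a ~27 vs ~32 messages/sec difference in sustained throughput.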
Sometimes, doing something different does result in something better.
For example, EVs. Compare EVs to ICEVs and you can point out a lot of faults, but ICEVs have had 100 years of refinement. Perhaps you're comparing battle-hardened sentiment analysis with fledgling LLM-based sentiment analysis?
Never don't do new things, not only when it's for fun, but especially when it's for fun. If you want to be closed-minded, that's your choice, but don't try to put that mentality onto others.
By "deploy", they almost certainly mean "set up to use" and they may have also included "learn how to use" and all its various forms, as well.
LLMs really are almost magic in how they can help in this space, and setting them up is often just getting an API key and throwing some money and web-service calls at them.
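A minimal sketch of what that "API key plus web-service call" setup tends to look like, assuming an OpenAI-style chat-completions endpoint (the model name and prompt wording here are placeholders, not any specific product):

```python
# Hypothetical sketch: build the request body for a single
# sentiment-classification call against a chat-completions-style API.
# The actual HTTP POST (with your API key in the Authorization header)
# is all that's left to add.
import json

def build_sentiment_request(text: str, model: str = "gpt-4o-mini") -> dict:
    """Build the JSON body for one sentiment classification call."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Classify the sentiment of the user's text as "
                        "positive, negative, or neutral. Reply with one word."},
            {"role": "user", "content": text},
        ],
        "temperature": 0,  # keep labels as deterministic as possible
    }

body = build_sentiment_request("The service was shockingly fast.")
print(json.dumps(body, indent=2))
```

That's the whole "deployment": no model weights, no GPU, just a payload and a key.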
No, there is further action necessary. If you want any kind of decent performance, you need to run it with an appropriate GPU. This is not true of spaCy, which makes spaCy easier to deploy.