I'm wondering how their LLM parsing 250 million words in 9 hours compares with the performance of traditional sentiment analysis.

Also, many existing sentiment analysis tools have a lot of research behind them that can be referenced when interpreting the results (known confounds, etc.). I don't think there is an equivalent for the LLM approach yet.




Pretty slow. I built a sentiment analysis service (https://classysoftware.io/), and at ~384 words per message, 250M words takes me about 5.6 hours to crunch, and even at that I'm pretty sure there are ways to push it lower without sacrificing accuracy.
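
Back-of-the-envelope, using my ~384 words/message average (a rough sketch; the real numbers depend on the actual message-length distribution):

    # Rough throughput comparison: the article's 9-hour LLM run vs. my
    # 5.6-hour figure. All inputs are the numbers stated in this thread.
    WORDS = 250_000_000
    WORDS_PER_MSG = 384  # my stated average, not independently measured

    llm_secs = 9.0 * 3600   # article's LLM run
    svc_secs = 5.6 * 3600   # dedicated sentiment service

    print(f"LLM:     {WORDS / llm_secs:,.0f} words/s")
    print(f"Service: {WORDS / svc_secs:,.0f} words/s "
          f"(~{WORDS / WORDS_PER_MSG / svc_secs:,.0f} messages/s)")
    # -> roughly 7,700 vs 12,400 words/s, i.e. about 1.6x faster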


And yet, it's so much easier to deploy an LLM, either through a service or on prem.


It's easier to do a lot of things. That doesn't make it better.


But it does make people feel like that action is now possible. And once someone believes something is possible, they're more likely to do it.


It often makes things built on top of them better, because of the improved speed of iteration.


Sometimes, doing something different does result in something better.

For example, EVs. Compare EVs to ICEVs and you can point out a lot of faults, but ICEVs have had 100 years of refinement. Perhaps you're comparing battle-hardened SA with fledgling LLM-based SA?

Never don't do new things, not only when it's for fun, but especially when it's for fun. If you want to be closed-minded, that's your choice, but don't try to push that mentality onto others.

Keep your non-hacker mindset to yourself.


What do you mean? Deploying something like spaCy is far easier than deploying an LLM in my experience.


By "deploy", they almost certainly mean "set up to use", and they may also have meant "learn how to use" and all its various forms.

LLMs really are almost magic in how they can help in this space, and setting them up is often just a matter of getting an API key and throwing some money and webservice calls at them.
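
Something like this is the entire "deployment", using the OpenAI Python client (the model name and prompt here are illustrative, not anything from the article):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text as "
                        "positive, negative, or neutral. Reply with one word."},
            {"role": "user", "content": "The battery life is fantastic."},
        ],
    )
    print(resp.choices[0].message.content)  # e.g. "positive"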


Set up to use, sure. Learn? Learning isn’t deployment in this context.

Setting up spaCy is just `pip install spacy`. No need to worry about GPUs or dedicated services like you do with LLMs.
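
For comparison, the whole spaCy setup looks roughly like this (a sketch; note that stock spaCy pipelines don't ship a sentiment component, so you'd bolt one on, e.g. the spacytextblob extension or a trained textcat):

    # pip install spacy
    # python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")  # small English pipeline, runs on CPU
    doc = nlp("The battery life is fantastic.")

    # Tokens, POS tags, and entities work out of the box:
    print([(t.text, t.pos_) for t in doc])

    # Sentiment itself isn't built in; you'd add an extension such as
    # spacytextblob, or train spaCy's textcat component on labeled data.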


Yeah, then you have to learn the API. Basically every option for running LLMs has converged on the OpenAI (web) API.


“Learning the API” is not deployment.


pip install vllm, and boom, you have an OpenAI-compatible webserver. No further action necessary.
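
In recent vLLM versions that looks roughly like this (a sketch; the model name is an arbitrary small example, and it assumes you have the weights and hardware to run it):

    # pip install vllm
    # vllm serve Qwen/Qwen2.5-0.5B-Instruct   # OpenAI-compatible server on :8000

    # Any OpenAI client can then talk to it:
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="Qwen/Qwen2.5-0.5B-Instruct",
        messages=[{"role": "user",
                   "content": "One-word sentiment of: 'The battery life is fantastic.'"}],
    )
    print(resp.choices[0].message.content)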


No, there is further action necessary. If you want any kind of decent performance, you need to run it with an appropriate GPU. This is not true of spaCy, which makes spaCy easier to deploy.



