Is this just using an LLM to be cool? How does a pure LLM with a basic "In the scale between 0-10 ..." prompt stack up against traditional, battle-tested sentiment analysis tools?
I'm wondering how their LLM parsing 250 million words in 9 hours compares with the performance of traditional sentiment analysis.
Also, many existing sentiment analysis tools have a lot of research behind them that can be referenced when interpreting the results (known confounds, etc.). I don't think there is yet an equivalent for the LLM approach.
Pretty slow. I built a sentiment analysis service (https://classysoftware.io/), and at 250M words @ ~384 words per message (roughly 650k messages, or about 32 messages per second) I'm pushing 5.6 hours to crunch all that data, and even at that I'm pretty sure there are ways to push it lower without sacrificing accuracy.
Sometimes, doing something different does result in something better.
For example, EVs. Compare EVs to ICEVs and you can point out a lot of faults, but ICEVs have had 100 years of refinement. Perhaps you're comparing battle-hardened SA with fledgling LLM-based SA?
Never don't do new things, not only when it's for fun, but especially when it's for fun. If you want to be closed-minded, that's your choice, but don't try to push that mentality onto others.
By "deploy", they almost certainly mean "set up to use" and they may have also included "learn how to use" and all its various forms, as well.
LLMs really are almost magic in how much they can help in this space, and setting them up is often just getting an API key and throwing some money and web service calls at them.
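To make that concrete, here is a minimal sketch of the "API key plus a web service call" approach, assuming the OpenAI Python client; the model name and the 0-10 scoring prompt are illustrative placeholders, not what the article actually used:

```python
# Minimal sketch: sentiment scoring with an LLM API.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def llm_sentiment(text: str) -> int:
    """Ask the model for a single 0-10 sentiment score."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system",
             "content": "On a scale between 0 and 10, rate the sentiment of "
                        "the user's text (0 = very negative, 10 = very "
                        "positive). Reply with the number only."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return int(resp.choices[0].message.content.strip())

print(llm_sentiment("The docs are great but the install was painful."))
```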
No, further action is necessary. If you want any kind of decent performance, you need to run it on an appropriate GPU. This is not true of spaCy, which makes spaCy easier to deploy.
Because LLMs WILL dominate all NLP use cases, whether you like it or not.
It's like the Linux of operating systems. Sure, you could hand-write a custom OS more specialized for a purpose, but it's much easier to just use Linux, which everyone understands on a basic level and which is extremely robust, and modify it slightly for the end goal.
And saying "traditional sentiment analysis" tools are "battle tested" is laughable. LLMs in the past year alone have probably had 1000x the cumulative usage of all sentiment analysis tools in history.
LLMs get $100 billion+ each year in research, improvements, engineering, and optimisation.
LLMs keep rapidly improving year to year in capabilities. Sonnet 3.5 already obliterates the original GPT-4 in every aspect.
LLMs keep getting cheaper year to year. Gemini Flash is something like 100x cheaper than the original GPT-3.5.
You can onboard anyone who can write Python to start using LLMs for language analysis in a day, versus weeks for these traditional tools.
Nearly all NLP tasks will be standardised on LLMs as the baseline default tool. Sure, there'll be some short-term degradations in specific aspects, but there's no stopping the tide.
By the way, traditional ML-based translation is also pretty much dead and replaced by LLMs. I've been seeing an explosion in fan translations done by, say, Sonnet 3.5; the improvement in fluency and accuracy is radical, and I often don't even notice the AI translation anymore.
Sorry, but not really. If you know what you're doing, you don't just pick an LLM. LLMs are trained/built for a specific task: text generation. Other models are trained on different tasks. If you know what you're doing, you compare models (and I don't mean comparing LLMs with each other!) and choose the best-performing one. Just because LLMs receive more training doesn't mean they perform better. That's a very weird and flawed way of thinking. This is just hype thinking.
I have to agree with the parent. LLMs are excellent at a large range of NLP tasks. Of course they are not going to replace all ML models, but when it comes to NLP they are clearly better than lots of trained models (e.g. https://arxiv.org/pdf/2310.18025).
LLMs are general purpose tools and absolutely are not better than trained models (using the latest techniques) for a specific task. I mean, that's obviously true if you think about it.
You can use similar datasets and the latest model architectures and if you train a model purely for sentiment analysis it will be better than frontier general purpose LLMs for sentiment analysis.
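For a concrete sense of what "a model trained purely for sentiment analysis" looks like next to a general-purpose LLM, here is a minimal sketch with the Hugging Face `transformers` sentiment pipeline, whose default checkpoint is a DistilBERT model fine-tuned on SST-2 (assumes `transformers` plus a backend such as PyTorch are installed):

```python
# Minimal sketch: a task-specific sentiment classifier via Hugging Face.
# Assumes `transformers` plus a backend (e.g. PyTorch) are installed.
from transformers import pipeline

# The default "sentiment-analysis" checkpoint is a DistilBERT model
# fine-tuned on SST-2: small, fast, and trained only for this task.
classifier = pipeline("sentiment-analysis")

print(classifier("The new release is a big step backwards."))
# -> [{'label': 'NEGATIVE', 'score': 0.99...}]
```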
It's really mind-boggling that so many people disagree, via downvotes, with the idea that you compare models and choose the best-performing one, independent of the hype ...
I can see a similarity here with comparing Java/JavaScript/any other modern, more productive language to C. Yes, both can write more or less the same program, but you'll get the same result with less effort and more quickly with the modern languages. Yes, modern languages will always be slower and heavier on resources than C.
That's inaccurate, because traditional sentiment analysis, or rather the entire NLP ecosystem, is a very niche and underoptimized space.
It's not comparing C against JavaScript; it's comparing Ada against JavaScript. Ada is not going to be any faster than JavaScript because it's too niche and therefore underoptimised.
The theoretical minimum computation required by LLMs is far higher than that of traditional simple NLP algorithms. But the practical computation cost of LLMs will soon be cheaper, because LLMs get so much investment and use that there are massive full-stack optimizations all the way from the GPU to the end libraries.
I did a similar kind of process for my own chat logs. I have about 11M tokens worth of logs, and it took 2 days to crunch all of them with ollama and LLaMA 3.1 8B on my MacBook. It's slow, but free.
I generated title, summary, keywords and hierarchical topics up to 3 levels up from the original text. My plan for now is to put them in a vector search engine, which, incidentally, was made with Sonnet 3.5 with very little iteration. I want to play around to see how I can organize my ideas with LLMs, make something useful from all that text.
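A rough sketch of what that per-snippet enrichment step could look like with the `ollama` Python package and a local `llama3.1:8b` model; the prompt and field names here are my guesses, not the author's exact setup:

```python
# Sketch: generate title, summary, keywords and hierarchical topics per snippet.
# Assumes the `ollama` package and a running Ollama server with llama3.1:8b.
import json
import ollama

PROMPT = (
    "For the text below, return JSON with the keys: title, summary, "
    "keywords (list), topic, parent_topic, gp_topic, ggp_topic.\n\n{text}"
)

def enrich(text: str) -> dict:
    resp = ollama.chat(
        model="llama3.1:8b",
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    try:
        return json.loads(resp["message"]["content"])
    except json.JSONDecodeError:
        # Keep unparseable output around for inspection instead of dropping it.
        return {"raw": resp["message"]["content"]}

print(enrich("Long chat log snippet goes here..."))
```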
I really don't know what I will discover. One small insight I already found is that summarization works really well, you can use summaries instead of full texts to prime Claude and it works better than expected. Unlimited context? Maybe.
Another direction of research is to create a nice taxonomy; there are thousands of topics, so it's a pretty difficult task, but there must be a way using clustering and LLMs. That is why I generated topic, parent-topic, gp-topic, and ggp-topic from all snippets. I would probably manually edit the top 2 levels of the taxonomy to give it the right focus.
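One way the clustering side of that could look, as a sketch: embed the generated topic labels and cluster them, then hand each cluster to the LLM to propose a parent-topic name. This assumes `sentence-transformers` and `scikit-learn`; the model name and parameters are placeholders, not the author's actual pipeline:

```python
# Sketch: rough first pass at a topic taxonomy via embeddings + clustering.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

topics = ["vector search", "prompt engineering", "meal planning",
          "embedding models", "weekend hikes"]  # thousands in practice

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(topics)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
for label, topic in zip(kmeans.labels_, topics):
    print(label, topic)

# Each cluster can then be given to the LLM to name a parent-topic,
# and the process repeated one level up to build the hierarchy.
```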
I'm also integrating with my HN and reddit feeds. X is too stingy with the API. Maybe Pocket and local downloads folder too, I save/bookmark stuff I like. I could also include all the papers I am reading into the corpus. It could synthesize a ranked feed aligned to my own interests.
I'm working on something tangentially related [1] but by sourcing my Google search history data. It's surprising how LLaMA 3.1 8B is pulling most of the weight in my case too.
I find it notable that tokens don't necessarily express people's feelings. Put another way, tokens aren't how people feel, they're how they write.
Samstave mentioned in this thread that Twitter is a 'global sentiment engine'. I'm sure that's literally true. Sentiment measurement is only accurate to the degree that people are expressing their real feelings via tokens. I can imagine various psychological and political reasons for a discrepancy.
If you did sentiment analysis of publicly known writings of North Korean administrators, would that represent their feelings?
I think the interplay with free speech is interesting here: In a setting where people feel socially and legally safe to express their true opinion, sentiment analysis will be more accurate.
I wonder if the dip is more about Llama 3 70B's training data than a change in sentiment. The data cut-off was Dec 2023 for the 70B. That looks to coincide with the reversal of the dip.
That's an interesting hypothesis but the words we use to express agreement and disagreement haven't changed much.
We don't try to retrieve articles/topics from the model, which would be affected by the cutoff; we just ask it to analyze the sentiment or summarize the content provided in the prompt.
True. It would be interesting to run these same tests on the 7B model to see whether the trend information changes. The 7B had a March cutoff, so if the Aug-Dec dip migrated to Oct-March (or just disappeared) it would be strong evidence for training/data bias. If nothing else, comparing the 7B to the 70B would likely be interesting.
Edit: I realized too late that I had the years off. It is pure coincidence of month, not a real data bias. Sorry! I still think it would be interesting to see a 7B comparison, but just to see how well a small model can spot big trends compared to a bigger one.
> Use the tool below to explore various topics and the sentiments they evoke.
This is a cool phrase.
It is personally important to me because, when I was in a panel interview at --, they asked, "What do you think Twitter is?"
My response was: "You're a global sentiment engine."
(There are a lot of conversations I'd love to have with the HN community about our shared experiences, and the weird history flipped-bits that exist in the minds of those who experienced them...
like threads of how Linux came about, or how XML was born through things I touched in a Forrest Gump way - and how there are so many stories from so many.)
Speaking of Twitter, it would be very neat to be able to see a graph of sentiment over time if you select a term.
You could watch Twitter go from being a niche little new thing, to popular, to "Twitter is trash", to too popular, to increasingly divisive, to the purchase and rename to X, to today.
I wanted to do an analysis of hacker news on another topic, but over a longer timespan.
I started to look into it, but in the little time I had to devote to the idea, I read that the Algolia API lets you look over a longer period, but that it is relatively costly.
I just want to look for all story titles from the beginning of time which match one of several simple search terms, and return submission date and title for an analysis I'd conduct in R.
Am I overthinking it? Could a simple Python script without an API key do it?
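For what it's worth, the public Algolia HN Search API (hn.algolia.com/api) is free and doesn't need a key, just rate limits. A sketch of the kind of script you describe, with the search term as a placeholder; the walk-backwards-by-timestamp pagination is one common way around the per-query result cap:

```python
# Sketch: pull matching HN story titles from the free Algolia HN Search API.
# No API key needed; query term and output file are placeholders.
import csv
import requests

API = "https://hn.algolia.com/api/v1/search_by_date"

def fetch_stories(query: str):
    """Walk backwards through time via created_at_i to collect all hits."""
    before = 2_000_000_000  # far-future unix timestamp
    while True:
        params = {
            "query": query,
            "tags": "story",
            "hitsPerPage": 1000,
            "numericFilters": f"created_at_i<{before}",
        }
        hits = requests.get(API, params=params, timeout=30).json()["hits"]
        if not hits:
            break
        yield from hits
        before = hits[-1]["created_at_i"]

with open("stories.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["created_at", "title"])
    for hit in fetch_stories("sentiment analysis"):  # placeholder search term
        writer.writerow([hit["created_at"], hit["title"]])
```

The resulting CSV of dates and titles should import straight into R for the rest of the analysis.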
I also think it *could* be less of a problem than you might think. If we treat the scale as arbitrary (which I think is a safe thing to do), then movement along the scale could be sufficient to ascertain *something*.
Great work folks, glad we can all agree on that one.
Interesting that they used an LLM for this. I mean it makes sense and the data seems to pass the pub test but I, in my ignorance, would not have assumed that a language model would be well suited for number crunching.
Why is everything only plotted between 4 and 8 if the scale for the least-liked topic should be 0 and the most-liked should be 9? Also, 4.5 is the midpoint, but 4 is displayed as bright red and 6 as a muted gray-blue. Why? This makes no sense except to be psychologically disingenuous.
LLMs are really sensitive to bad or even slightly ambiguous grammar. I wonder if the numbers would differ significantly with "Reply only with the tags, in the following format".
> It is worth clarifying though that Hacker News does not hate International Students, but the posts related to them tend to be overwhelmingly negative, reflecting the community’s sympathy for the challenges faced by those studying abroad.
I was horrified when I read that international students were one of the top items on the hate list, although I had seen a couple of comments attributing their cities' housing crises to international students and thought that this sentiment was widely supported.
I don't know about this analysis and its conclusions. I'll just use this as a jumping point to selfishly spout my own human observations.
For context, I'm someone who uses HN to search for topics I'm interested in, rather than something like Google or Reddit.
- For anything SF community-related, most hits are from 10+ years ago. Lots of "hey we have a space in soma, any local startups want to hang and drink beers?" or "we have an empty desk in a space in the mission, any hackers want to grab it for free?" - all from around 2012 or prior. Nothing like that seems to happen anymore.
- Starting from around 2016, a heavy anti-technology sentiment appears. Cloud, crypto, AI - all are nonsense propagated by VC types and overzealous engineers.
- Similarly, any thread involving money/labor invariably has an anti-capitalist and/or "unions would solve everything" tangent.
Would be interested to hear if others have observed similar.
Yeah that’s roughly been my read too. I think the audience of the site changed. The user base has grown significantly. The site has gone from being about hacking (“hey here’s an empty desk”) to the culture of hackers at large (“tech was a mistake when it got invaded by VC hucksters”.)
TFA’s sentiment decrease tracks very closely with the huge uptick in user creation that started in 2022. HN isn’t really a tech site anymore, it’s about vibes. That makes sense given that in 2024 there’s a million places online talking about tech so HN only has its culture to distinguish itself. This wasn’t the case in 2008. The vibes here, along with the older demographics of the site, are increasingly nostalgic and cynical.
It'll all probably go the same way as Slashdot, which went through the same cycle (replace "VC huckster" with "Microsoft" and "surveillance capitalism" with "three letter agencies") until it, too, got replaced by a site/community with energetic younger users creating new things.
Gemini suggests NLTK and spaCy
https://www.nltk.org/
https://spacy.io/
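For reference, a traditional-tool baseline of the sort discussed upthread is only a few lines. A minimal sketch with NLTK's VADER sentiment analyzer (spaCy needs a third-party extension for sentiment, so the sketch sticks to NLTK):

```python
# Minimal sketch: rule-based sentiment with NLTK's VADER analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

print(sia.polarity_scores("The docs are great but the install was painful."))
# -> {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```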