LLM-based sentiment analysis of Hacker News posts between Jan 2020 and June 2023 (outerbounds.com)
126 points by mochomocha 28 days ago | 72 comments



Is this just using an LLM to be cool? How does a pure LLM with a basic "On a scale of 0-10 ..." prompt stack up against traditional, battle-tested sentiment analysis tools?

Gemini suggests NLTK and spaCy

https://www.nltk.org/

https://spacy.io/
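
For reference, the traditional route can be just a few lines. A minimal sketch using NLTK's VADER analyzer (purely illustrative; nothing suggests the article's authors used it):

    # Minimal sketch of "traditional" sentiment scoring with NLTK's VADER lexicon.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time download of the lexicon

    sia = SentimentIntensityAnalyzer()
    scores = sia.polarity_scores("Seems we mostly agree on hating Atlassian, too.")
    print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}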


I'm wondering how their LLM parsing 250 million words in 9 hours compares with the performance of traditional sentiment analysis.

Also, many existing sentiment analysis tools have a lot of research behind them that can be referenced when interpreting the results (known confounds, etc.). I don't think there is an equivalent for the LLM approach yet.


Pretty slow. I built a sentiment analysis service (https://classysoftware.io/), and at 250M words @ ~384 words per message I'm pushing 5.6 hours to crunch all that data, and even at that I'm pretty sure there are ways to push it lower without sacrificing accuracy.


And yet, it's so much easier to deploy an LLM, either through a service or on prem.


It's easier to do a lot of things. That doesn't make it better.


But it does make people feel like that action is now possible. And once someone believes something is possible, they're more likely to do it.


Often makes things built on top of them better because of improved speed of iteration.


Sometimes, doing something different does result in something better.

For example, EVs. Compare EVs to ICEVs and you can point out a lot of faults, but ICEVs have had 100 years of refinement. Perhaps you're comparing battle-hardened SA with fledgling LLM-based SA?

Never don't do new things, not only when it's for fun, but especially when it's for fun. If you want to be closed-minded, that's your choice, but don't try to push that mentality onto others.

Keep your non-hacker mindset to yourself.


What do you mean? Deploying something like spaCy is far easier than deploying an LLM in my experience.


By "deploy", they almost certainly mean "set up to use" and they may have also included "learn how to use" and all its various forms, as well.

LLMs really are almost magic in how they can help in this space, and setting them up is often just a matter of getting an API key and throwing some money and webservice calls at them.


Set up to use, sure. Learn? Learning isn’t deployment in this context.

Setting up spaCy is just `pip install spacy`. No need to worry about GPUs or dedicated services like you do with LLMs.


Yeah, then you have to learn the API. Basically every option for running LLMs has converged on the OpenAI (web) API.


“Learning the API” is not deployment.


pip install vllm, boom, you have an OpenAI-compatible webserver. No further action necessary.
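
For illustration, a rough sketch of hitting such an OpenAI-compatible endpoint from Python, assuming a vLLM server is already serving a model on localhost:8000 (the model name and prompt are placeholders):

    # Sketch: asking a locally running, OpenAI-compatible vLLM server for a
    # 0-10 sentiment score. Assumes the server is already listening on
    # localhost:8000; the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    resp = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
        messages=[
            {"role": "system", "content": "Rate the sentiment of the user's text on a scale of 0-10. Reply with the number only."},
            {"role": "user", "content": "Systemd is fairly well up there too."},
        ],
        temperature=0,
    )
    print(resp.choices[0].message.content)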


No, there is further action necessary. If you want any kind of decent performance, you need to run it with an appropriate GPU. This is not true of spaCy, which makes spaCy easier to deploy.


Because LLMs WILL dominate all NLP use cases, whether you like it or not.

It's like the Linux of operating systems. Sure, you can hand-write some custom OS more specialized for a purpose. But it's much easier to just use Linux, which everyone understands on a basic level and which is extremely robust, and modify it slightly for the end goal.

And saying "traditional sentiment analysis" tools are "battle tested" is laughable. LLMs in the past year alone probably have 1000x the cumulative usage of all sentiment analysis tools in history.

LLMs get $100 billion+ each year in research, improvements, engineering, and optimisations.

LLMs keep rapidly improving year to year in capabilities. Sonnet 3.5 already obliterates the original GPT-4 in every aspect.

LLMs keep getting cheaper year to year. Gemini Flash is like 100x cheaper than the original GPT-3.5.

You can onboard any person who can write Python to start using LLMs to perform language analysis in a day, versus weeks for these traditional tools.

Nearly all NLP tasks will be standardised to use LLMs as the baseline default tool. Sure, there'll be some short-term degradations in specific aspects, but there's no stopping the tide.

By the way, traditional ML-based translation is also pretty much dead and replaced by LLMs. I've been seeing an explosion in fan translations done by, say, Sonnet 3.5; the improvement in fluency and accuracy is just radical and extreme, and I often don't even notice the AI translation anymore.


Aside from a half dozen or so zeros, you're right on.


On what, the spending? Facebook alone said that it will spend $40B this year on AI. Probably not all of it is on Llama, but a sizable portion is.


Sorry, but not really. If you know what you're doing, you don't just pick an LLM. LLMs are trained/built for a specific task: text generation. Other models are trained on different tasks. If you know what you're doing, you compare models (and I don't just mean LLMs!) and choose the best-performing one. Just because LLMs receive more training doesn't mean they perform better. That's a very weird and flawed way of thinking. This is just hype thinking.


I have to agree with the parent. LLMs are excellent at a large range of NLP tasks. Of course they are not going to replace all ML models, but when it comes to NLP they are clearly better than lots of trained models (e.g. https://arxiv.org/pdf/2310.18025).


LLMs are general purpose tools and absolutely are not better than trained models (using the latest techniques) for a specific task. I mean, that's obviously true if you think about it.

You can use similar datasets and the latest model architectures, and if you train a model purely for sentiment analysis, it will be better than frontier general-purpose LLMs at sentiment analysis.


It's really mind-boggling that so many people disagree via downvotes that you compare models and choose the best performing one, independent of the hype ...


I can see a similarity here in comparing Java/JavaScript/any other modern, more productive language to C. Yes, both can be used to write more or less the same program, but you'll get the same result with less effort and more quickly with the modern languages. And yes, modern languages will always be slower and heavier on resources than C.


That's inaccurate, because traditional sentiment analysis, or rather the entire NLP ecosystem, is a very niche and underoptimized space.

It's not comparing C against JavaScript, it's comparing Ada against JavaScript. Ada is not going to be any faster than JavaScript because it's too niche and therefore underoptimised.

The theoretical minimum computation required by LLMs is far higher than traditional simple NLP algorithms. But the practical computation cost of LLMs will soon be cheaper, because LLMs get so much investment and use, there's massive full-stack optimizations all the way from the GPU to the end libraries.


I did a similar kind of process for my own chat logs. I have about 11M tokens worth of logs, and it took 2 days to crunch all of them with ollama and LLaMA 3.1 8B on my MacBook. It's slow, but free.

I generated a title, summary, keywords, and hierarchical topics up to 3 levels above the original text. My plan for now is to put them in a vector search engine, which, incidentally, was made with Sonnet 3.5 with very little iteration. I want to play around to see how I can organize my ideas with LLMs and make something useful from all that text.
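
A rough sketch of that kind of local annotation pass (not the actual code used here), assuming the ollama Python client and a pulled llama3.1:8b model, with illustrative prompt wording:

    # Sketch of a local tagging/summarization pass over chat-log snippets with
    # ollama and Llama 3.1 8B. Assumes `ollama pull llama3.1:8b` has been run
    # and the ollama Python package is installed; the prompt is illustrative.
    import ollama

    def annotate(snippet: str) -> str:
        resp = ollama.chat(
            model="llama3.1:8b",
            messages=[{
                "role": "user",
                "content": (
                    "For the text below, reply with a title, a one-paragraph summary, "
                    "keywords, and a topic plus its parent, grandparent, and "
                    "great-grandparent topics.\n\n" + snippet
                ),
            }],
        )
        return resp["message"]["content"]

    print(annotate("...one chat-log snippet goes here..."))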

I really don't know what I will discover. One small insight I already found is that summarization works really well; you can use summaries instead of full texts to prime Claude, and it works better than expected. Unlimited context? Maybe.

Another direction of research is to create a nice taxonomy. There are thousands of topics, so it's a pretty difficult task, but there must be a way using clustering and LLMs. That is why I generated topic, parent-topic, gp-topic, and ggp-topic from all snippets. I would probably manually edit the top 2 levels of the taxonomy to give it the right focus.

I'm also integrating with my HN and Reddit feeds. X is too stingy with the API. Maybe Pocket and my local downloads folder too; I save/bookmark stuff I like. I could also include all the papers I am reading in the corpus. It could synthesize a ranked feed aligned to my own interests.


I'm working on something tangentially related [1] but by sourcing my Google search history data. It's surprising how LLaMA 3.1 8B is pulling most of the weight in my case too.

[1] https://github.com/enclaveid/enclaveid


LLMs are shit at generating content, but summarization works really well.

I’d like to use your project


> NFL (915 posts)

> Football (206 posts)

Either Hacker News really likes the National Forensic League, or these LLM categories are a bit dubious.

Also hmmm:

> American football (7 posts)

> American_football (6 posts)


It's this one ("these LLM categories are a bit dubious"): specialized models still outperform LLMs on niche tasks like classification and sentiment.


> Tokens Don't Lie

> But how do people feel about these topics

I find it notable that tokens don't necessarily express people's feelings. Put another way, tokens aren't how people feel, they're how they write.

Samstave mentioned in this thread that Twitter is a 'global sentiment engine'. I'm sure that's literally true. Sentiment measurement is only accurate to the degree that people are expressing their real feelings via tokens. I can imagine various psychological and political reasons for a discrepancy.

If you did sentiment analysis of publicly known writings of North Korean administrators, would that represent their feelings?

I think the interplay with free speech is interesting here: In a setting where people feel socially and legally safe to express their true opinion, sentiment analysis will be more accurate.


Can you run this tool on the removed posts dataset?

https://github.com/vitoplantamura/HackerNewsRemovals


I wonder if the dip is more about Llama 3 70B training and data than a change in sentiment. The data cut-off was Dec 2023 for 70B. That looks to coincide with the reversal of the dip.


That's an interesting hypothesis but the words we use to express agreement and disagreement haven't changed much.

We don't try to retrieve articles/topics from the model, which would be affected by the cutoff; we just ask it to analyze the sentiment or summarize the content provided in a prompt.


True. It would be interesting to run these same tests on the 7B model to see if the trend information changes or not. 7B had a March cutoff, so if the Aug-Dec dip migrated to Oct-March (or just disappeared) it would be strong evidence for training/data bias. If nothing else, comparing 7B to 70B would likely be interesting.

Edit: I realized too late I had the years off. It is pure coincidence of month, not a real data bias. Sorry! I still think it would be interesting to see a 7B comparison, but that is just to see how well a small model could spot big trends compared to a bigger one.


yep! And of course the new 3.1 model


>>Use the tool below to explore various topics and the sentiments they evoke.

This is a cool phrase.

It is personally important, as in a panel interview @ -- they asked, "What do you think Twitter is?"

My response was, "You're a global sentiment engine."

(There are a lot of conversations I'd love to have with the HN community with respect to our shared experiences, and the weird history flipped-bits that exist in the minds of those who experienced them...

like threads of how Linux came about, or how XML was born through things I touched in a Forrest Gump way, and how there are so many stories from so many.)


Speaking of Twitter, it would be very neat to be able to see a graph of sentiment over time if you select a term.

You could watch Twitter go from being a niche little new thing, to popular, to "twitter is trash", to too popular, to increasingly divisive, to the purchase and rename to X, to today.


> My response was "You're a global sentiment engine""

More like a sentiment engine for bot operators.


I wanted to do an analysis of hacker news on another topic, but over a longer timespan.

I started to look into it, but in the little time I had to devote to the idea, I read that the Algolia API lets you look over a longer period, but that it is relatively costly.

I just want to look for all story titles from the beginning of time which match one of several simple search terms, and return submission date and title for an analysis I'd conduct in R.

Am I overthinking it? Could a simple Python script without any API code do it?


Even simpler, you can just do it in SQL.

You can find all titles and dates since the beginning of HN in this public BigQuery dataset: https://console.cloud.google.com/marketplace/product/y-combi...
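
A rough sketch of pulling those titles and dates down for analysis, assuming the google-cloud-bigquery Python client, a configured GCP project, and placeholder search terms:

    # Sketch: all HN story titles/dates matching a few search terms, from the
    # public BigQuery dataset. Assumes google-cloud-bigquery (plus pandas and
    # db-dtypes) is installed and a GCP project with BigQuery access is set up.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT title, timestamp
        FROM `bigquery-public-data.hacker_news.full`
        WHERE type = 'story'
          AND REGEXP_CONTAINS(LOWER(title), r'sentiment|nlp|spacy')
        ORDER BY timestamp
    """
    rows = client.query(query).to_dataframe()
    rows.to_csv("hn_titles.csv", index=False)  # hand off to R from here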


whoah. thank you dude!!


It's funny filtering by crypto and seeing the (sometimes hazy) division between cryptography (we love this) and cryptocurrency (we hate it) terms.


I wonder if using prompts to get the sentiment from an LLM is enough, so we don't need to do any fine-tuning anymore?


I think you raise a reasonable question.

I also think it *could* be less of a problem than you might think. If we treat the scale as arbitrary (which I think is a safe thing to do), then movement along the scale could be sufficient to ascertain *something*.


> Hate : Torture

Great work folks, glad we can all agree on that one.

Interesting that they used an LLM for this. I mean it makes sense and the data seems to pass the pub test but I, in my ignorance, would not have assumed that a language model would be well suited for number crunching.


Seems we mostly agree on hating Atlassian, too, so it's working as intended.


Conveniently sandwiched between War on Terror and CSAM.


Why is everything only plotted between 4 and 8 if the least-liked topic should be 0 and the most-liked should be 9? Also, 4.5 is the midpoint, but 4 is displayed as bright red and 6 as a muted gray-blue; why? This makes no sense except to be psychologically disingenuous.

And no 5s? What is even going on in that LLM?


> "It's a scale of 1 to 13, but it goes up and back down. Eight is the highest score on the scale." - Jason Mendoza

It's nice to see this scale used outside of The Good Place.


The scale makes no sense.

Sentiment of forum posts is not an absolute value; you can't compare it against, for example, conversations in a pub, or talks between friends, etc.

I think they should have normalized the numbers around the average, so as to have a relative measurement of the various topics.


> Reply only the tags

LLMs are really sensitive to bad or even slightly ambiguous grammar. I wonder if the numbers would differ significantly with "Reply only with the tags, in the following format".


I had the same concern. However, the structure of the output was surprisingly stable. We rejected badly formatted responses: https://github.com/outerbounds/hacker-news-sentiment/blob/ma...

The semantics of the topics/tags could be improved for sure with a more detailed prompt.
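
For illustration only, the kind of check that rejects malformed responses might look like the sketch below; the expected format ("TOPICS: ..." plus a "SENTIMENT <0-9>" line) is an assumption based on this thread, not necessarily what the linked script enforces:

    # Sketch: rejecting badly formatted LLM responses before aggregating scores.
    # The expected format here is an assumption for illustration.
    import re

    SENTIMENT_RE = re.compile(r"^SENTIMENT\s+(\d)$", re.MULTILINE)

    def parse_response(raw: str) -> int | None:
        """Return the sentiment score, or None if the response is malformed."""
        match = SENTIMENT_RE.search(raw.strip())
        if not match:
            return None  # reject: no parsable score line
        score = int(match.group(1))
        return score if 0 <= score <= 9 else None

    print(parse_response("TOPICS: systemd, linux\nSENTIMENT 6"))  # 6
    print(parse_response("Sure! Here's my analysis..."))          # None (rejected)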


At least Republicans and Democrats share the same low sentiment score of 4.


Apparently your comment is divisive though!


what's up with the title flips from

> 350M Tokens Don't Lie: Love And Hate In Hacker News, to

> LLM-based sentiment analysis of Hacker News posts, to

> LLM-based sentiment analysis of Hacker News posts between Jan 2020 and June 2023


A/B testing? Possibly increasing accuracy from high click-bait, low signal to low click-bait, high signal?


Can we get a 2-d visualization of topics, and drill into topics?


yes please! The data is conveniently available as JSON blobs here https://github.com/outerbounds/hacker-news-sentiment/tree/ma...


> It is worth clarifying though that Hacker News does not hate International Students, but the posts related to them tend to be overwhelmingly negative, reflecting the community’s sympathy for the challenges faced by those studying abroad.

I was horrified when I read that international students were one of the top entries on the hate list, although I had seen a couple of comments attributing their cities' housing crises to international students and thought that this sentiment was widely supported.


here's how the model ranks the discussion on this page after 40 comments:

SENTIMENT 6

:D


Great analysis. How is divisiveness actually calculated?


the most divisive topic seems to be "gnome" with 0.82 on the divisiveness scale

that's really "hacker", a worthy first place


more like h4x0r


search "divisive" here: https://github.com/outerbounds/hacker-news-sentiment/blob/ma...

I actually spent 10 minutes trying to see if there are obvious tests for U-shaped distributions. I'd love to hear if anyone has ideas here.
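
One off-the-shelf heuristic (not what the article uses; the script linked above has the actual logic) is Sarle's bimodality coefficient, where values above roughly 5/9 ≈ 0.555 hint at a two-peaked distribution:

    # Sketch: Sarle's bimodality coefficient as a rough "U-shaped-ness" check
    # on a sample of per-post sentiment scores. Illustrative only.
    import numpy as np
    from scipy import stats

    def bimodality_coefficient(scores: np.ndarray) -> float:
        n = len(scores)
        skew = stats.skew(scores, bias=False)      # sample skewness
        kurt = stats.kurtosis(scores, bias=False)  # sample excess kurtosis
        return (skew ** 2 + 1) / (kurt + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

    # A clearly two-peaked (divisive) score sample vs. a unimodal one.
    divisive = np.concatenate([np.random.normal(2, 0.5, 500), np.random.normal(8, 0.5, 500)])
    mild = np.random.normal(5, 1.0, 1000)
    print(bimodality_coefficient(divisive))  # well above 0.555
    print(bimodality_coefficient(mild))      # below 0.555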


I don't know about this analysis and its conclusions. I'll just use this as a jumping point to selfishly spout my own human observations.

For context, I'm someone who uses HN to search for topics I'm interested in, rather than something like Google or Reddit.

- For anything SF community-related, most hits are from 10+ years ago. Lots of "hey we have a space in soma, any local startups want to hang and drink beers?" or "we have an empty desk in a space in the mission, any hackers want to grab it for free?" - all from around 2012 or prior. Nothing like that seems to happen anymore.

- Starting from around 2016, a heavy anti-technology sentiment appears. Cloud, crypto, AI - all are nonsense propagated by VC types and overzealous engineers.

- Similarly, any thread involving money/labor invariably has an anti-capitalist and/or "unions would solve everything" tangent.

Would be interested to hear if others have observed similar.


Yeah that’s roughly been my read too. I think the audience of the site changed. The user base has grown significantly. The site has gone from being about hacking (“hey here’s an empty desk”) to the culture of hackers at large (“tech was a mistake when it got invaded by VC hucksters”.)

TFA’s sentiment decrease tracks very closely with the huge uptick in user creation that started in 2022. HN isn’t really a tech site anymore, it’s about vibes. That makes sense given that in 2024 there’s a million places online talking about tech so HN only has its culture to distinguish itself. This wasn’t the case in 2008. The vibes here, along with the older demographics of the site, are increasingly nostalgic and cynical.

It'll probably all go the same way as Slashdot, which went through the same cycle (replace "VC huckster" with "Microsoft" and "surveillance capitalism" with "three letter agencies"), until it too gets replaced by a site/community with energetic younger users creating new things.


Systemd is now in the HN Love section; that's HN news in itself.


[flagged]


Listen you just had to be there. Who wants a file picker with no thumbnails? That's no way to live.


I got a fairly good chuckle seeing Gnome as the most divisive topic, too.


Yeah, that tracks

Systemd is fairly well up there too.


And Tesla is the most disliked with a real number of posts.

In the other direction, math is the most liked! And if you go a little further, Python is the clear winner for languages.


It's pronounced 'GNome'



