Hacker News: relevant_stats's comments

The stats are wrong - on Android, my finger has not moved a triple-digit number of times, and I haven't tapped a double-digit number of times. In 4 seconds.

My general location is also wrong.

This site's theme is barely visible.

And the entire idea for the site is at least a couple of decades old.

Unoriginal slop.


> This is a lovely bit of writing

A lovely bit of AI slop.

Edit - This is not the first time I've observed this. Could somebody explain to me why comments pointing out that the discussed texts are AI-generated are so frequently downvoted on Hacker News?

In the very same thread there is this apparently downvoted (as of now) comment: https://news.ycombinator.com/item?id=47807528

Why is that? Is this really this community's stance on LLM-generated, mostly weak and empty writing?


I don't downvote those comments, even though I have a serious problem with them.

These comments are little more than a witch hunt. You can tell from the language being used: "it's obvious", rather than evidence. When people do provide evidence, it is in terms of "tells": bits of overused phrasing that are common in LLM-generated text yet also appear in human writing. That doesn't really prove their claims. Worse yet, it is next to impossible to defend oneself against such claims.

None of this means that I want to spend my days reading LLM-generated articles. I believe they pollute the Internet with yet another source of hollow writing. (LLMs are not the only guilty party here; plenty of flesh-and-blood humans do the same.) They also further necessitate the use of LLMs for what I think is their one legitimate use, which is research. (Before you attack LLMs for hallucinating, it is worth noting that many, if not most, articles written by people demonstrate the same.) Finally, if I am interested in the output of an LLM, I would rather prompt it myself. At least then I would know that what I am getting is LLM-generated, rather than a misrepresentation. Plus it is easier to dig deeper if something seems out of kilter, either through further prompting or by requesting sources.

Yet all of my distaste for LLM generated articles does not outweigh my distaste for the witch hunt.


I think your comment was maybe downvoted for being so terse and dismissive.

But you're right that it is anything but a good piece of writing and it is genuinely strange to see people act otherwise.

> That kind of furniture organized more than just objects. It organized a relationship with technology. It suggested that the computer (and with it, the internet) was something used under particular conditions: seated, in that spot, for a certain amount of time. Something that was switched on and off, opened and closed.

It's making a nice point and one that I'm sure most of the people here do find appealing, it's an idea that I relate to myself. But the words used to make that point are bordering on nonsense.


> But you're right that it is anything but a good piece of writing and it is genuinely strange to see people act otherwise.

It is prudent to assume there is a decent chance it is not a person acting otherwise (i.e. could be bots). Funny, because this was also a recent post:

https://news.ycombinator.com/item?id=47800738


> I think your comment was maybe downvoted for being so terse and dismissive.

Yes, I know, but what motivated me to ask was that, from my observations, even less derisive comments raising the AI point are prone to being downvoted. The comment I linked to was 'just asking a question', and I've seen others that were more pleasant, with no different results.

Usually the LLM-generated texts they are reacting to aren't IMO worthwhile, like in this case. Idk, I'm very surprised by how accepting of them others here seem to be (if measured by the points system).


> But you're right that it is anything but a good piece of writing and it is genuinely strange to see people act otherwise.

The prose isn't good. It does read like AI slop.

But it invoked an insight and feeling in me that was novel and poignant and (I think) intended by the author.

That's why I called it lovely writing.


Yes, I get you; I recognised that in my final paragraph. But it would be better called writing that "makes a lovely point" rather than "lovely writing" if _the prose isn't good_ and it reads like _slop_.


I downvote them because they are tangential to the content. They are like complaints about scroll bars and back button hijacking, or annoyances about the website's color scheme. Valid complaints, but contrary to the HN guideline:

    Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

I don't like AI slop articles either, but I also don't like articles where the text is formatted in a tiny column in the middle of the browser. Neither is really useful to complain about here. By the end of 2026, 90% of the articles here are probably going to be AI slop, and it will be totally useless to complain about each and every one of them.


>Valid complaints, but contrary to the HN guideline

the site not too long ago banned AI-generated content. i don't know if they changed it, but pointing out that something was AI would have been flagging a rule violation, not breaking the guidelines


I want to respect the guidelines for the good of the community, but at this point it isn't serving the community well for there to be no backlash against the rising flood of AI-generated garbage.

It was truly bad enough when ~half the articles were about AI, directly or indirectly. Now it's that, plus half of them are written by Claude too.

What meaningful community is going to be left for these guidelines to protect?

Moderation needs to put their foot down in some cases, as a matter of necessity. Sometimes users need to put their foot down, too.


I'm all for banning AI slop articles. The HN guidelines were recently updated to address slop comments[1], but they have not put their foot down yet about slop articles.

1: https://news.ycombinator.com/newsguidelines.html: Don't post generated comments or AI-edited comments. HN is for conversation between humans.


> I downvote them because they are tangential to the content. They are like complaints about scroll bars and back button hijacking, or annoyances about the website's color scheme

I don't agree with you. They are not at all like the examples you mentioned. Calling something "AI slop" signals that the writing either fails to raise any important point or, even when it raises a decent point, is so repetitive that it wastes readers' time. This is not only a style problem.

To put it in LLMish: It's not 'tangential to the content' – it's directly addressing (the lack of) the content.

If the LLM worked perfectly, we wouldn't have noticed the text was generated. I and others did. I feel it's important to point it out, if we don't want low-quality texts to flood us completely.

> By the end of 2026, 90% of the articles here are probably going to be AI slop, and it will be totally useless to complain about each and every one of them.

With the policy you have personally adopted, it surely will be so. I don't think a news aggregator comprised of junk information is something to embrace, so maybe reconsider your position?



I think HN is just fucked. A lot of people either genuinely don't see the problem with having a bunch of AI-generated slop garbage on the frontpage, or they are themselves posting it so they have a personal stake in not seeing anything wrong with it.

Don't be too surprised: there are literally comments that are just blatantly written by Claude on HN, which seem to be coming from human accounts that predate Claude. Which means that there are people here who, in trying to respond, actually ask Claude to basically do it for them. I find this utterly stunning and honestly, truly alarming. Even if the person behind the keyboard is technically alive, what exactly are they becoming? Are they even going to think for themselves, or will they just ask Claude what they're supposed to think from now on?

And as much as HN moderation has been genuinely pretty great at keeping the community under control with a relatively light touch, it's already too late. Dang and friends needed to do something much sooner, and they didn't. It literally doesn't matter what they do now, so there's no point in bugging them, not that I expect they would be interested in listening anyway.

I'm not going to make a lot of dramatic "I'm leaving Twitter" type comments, but I'm losing respect for HN's rules and guidelines the more I see this page overrun with literal CRAP. And just so I can make my opinion clear, it's not crap because it's AI generated, it's crap because I can tell it's AI generated, full of fluff, cliches and a lack of substance.

It says a lot about the taste of the average person voting on HN that this is what we get now, and it fucking sucks because I don't really like any of the competing news aggregators either. I actually had to log in to post this comment because lately I've been staying logged out of HN and visiting less frequently now that I'm not sure what I get out of it.

At least I won't miss HN when the internet becomes an inaccessible hellscape in part due to AI crap outnumbering human posts 1000:1 and in part due to horrible legislation screaming ahead at breakneck speeds with literally no opposition from anybody.


Intelligence for HN posters is like boobs for strippers: everyone knows that bigger is better when it comes to the attention they seek, so if they are lacking, or feel inadequate in that department, they seek augmentation which anyone can tell is fake but seems to get the job done.


How would you solve this problem? With AI detecting AI at scale and killing posts? I do get what you are saying, but I am wondering what you would do if you were put in charge of tackling this problem today.


> it's not crap because it's AI generated, it's crap because I can tell it's AI generated, full of fluff, cliches and a lack of substance.

Yes, exactly this.

If I notice, it means your PhD-level 2027 ASI technology failed. Since when is HN a place for boasting about failed projects?


I'm very surprised that, according to the results, people struggled with identifying [3] and [4] as AI.

IMO both are simply bad, and both contain the usual telltales in spades (continuity problems, failed or trite metaphors/analogies, semantic failures, an overall feeling of 'wtf is even being attempted here').

I'm not so surprised that people struggled with identifying [1] as human. The confounding factor is that this flash story is unpleasantly written, and it's not easy to realize that its failure modes (e.g. trying to cram too much into too short a text) are rather human-like. And I'm sure the fact that the arguably hardest-to-digest and rather bad human story opens the poll might somewhat influence the later answers.

Like others in the poll, I failed to identify [5] as AI, even though in hindsight the telltales are there too. That's because I rather liked it, and as a result it was harder to stay vigilant. I was also very undecided on [8]. In the end I scored 6/8, but I wouldn't say it was easy.

It's a shame that comparison with the previous contest, https://mark---lawrence.blogspot.com/2023/09/so-is-ai-writin..., is not straightforward. In that one I scored 9/10 and had a very easy time (I didn't even finish reading some of the stories before making up my mind). I also felt completely excused for my only failure: incorrectly identifying as AI the story written in the style of exhaustingly banal fan fiction. But frankly, I found almost all the human stories in the previous edition better than the current ones.

In retrospect, ChatGPT-4 was a terrible writer. ChatGPT-5 seems to be an improvement, to an admittedly worrying degree. Still not impossible to detect, though.

However, these are my impressions only, and it looks like maybe I was lucky and should not generalize. According to the website, people had serious trouble discerning ChatGPT-4's writing two years ago as well. I'm rather shocked that they did, and that they scored some of those banal AI stories positively.

If it's not luck on my part, then maybe discerning AI writing is a skill very different from writing, or from being deeply interested in literature: the skills of the people who usually frequent this blog?


I really don't get why some people like to pollute conversations with LLM answers, particularly when they are as dumb as your example.

What's the point?


Same, we all have access to the LLM too, but I go to forums for human thoughts.


ok, agree with your point, i should have got the numbers from chatgpt and just put them in the comment in my own words. i was just too lazy to calculate how much profit we would get with 10-bit bytes.


umm, i guess most of the article is made by llm, so i did not see it as a sin, but for other cases i agree, copy-pasting from llm is crap


On Hacker News, the comments need to be substantive and written by a person, but the articles can be one-word-title clickbait written by LLMs.


I wouldn't pay too much attention to answers from this respectable subreddit when they express historiographic opinion more than fact, and when, at the same time, they are fighting strawmen.

The European Dark Ages narrative was indeed overblown and needed correction. But the correction went too far. It now seems to be at the stage of explicit and vigorous denial of any decline of fortune in the Western ex-Roman provinces. I'd posit that such a denial is even more overblown than the original myth it aimed to correct.

I can offer you a link to an author arguing for this position: https://slatestarcodex.com/2017/10/15/were-there-dark-ages/


I understand the point and personally regard "the dark ages" (500-1000 CE) as a period lacking in scrolls, commentary, and stonework compared to the Roman Empire .. but life went on largely as it always had for those who were never an integral part of the Roman Empire's workings .. e.g. those who delivered food to the Romans in the UK never wrote much or built much in stone themselves, and once the empire faded away to its eastern half, they carried on without record.

Regardless, your link is an interesting argument that doesn't provide anything especially compelling to the contrary .. a void, here a void in the record, is easily filled with speculation.

