
Yeah, but to reach that point you will probably need those "useless AI detectors" (as stated by the comment I was replying to). That was my point: we're not there yet, therefore those tools can be useful.



But how do you know we’re not there yet? Not across the board, but isn’t it possible there’s a small yet growing portion of written content online that’s AI generated with no obvious tells?


I think we have a misunderstanding - I don't mind reading AI-generated content as long as it doesn't look like "the typical AI content" (or SEO slop). From my point of view, companies/writers might use AI detectors to keep improving the quality of their content (even for content written by hand, those false positives might be a good thing). We're not there yet because I still see and read a lot of AI/SEO slop.

I agree with you that the "portion of written content online that’s AI generated with no obvious tells" is "small yet growing". That's exactly the thing - it's still too small to "be there yet" :)


I don't follow how you're reaching your conclusion. You only mind reading AI content when it's obviously AI/slop, yet you conclude the vast majority of decent content is not AI generated. To reach that conclusion, how were you able to identify whether good content was written by AI or not?

E.g. it's perfectly possible that in terms of prevalence "AI slop > AI acceptable > human acceptable" instead of "AI slop > human acceptable > AI acceptable", and nothing you've noted explains why it would be one instead of the other.


Semi-automated content like that is probably widespread by now.

Imagine something like the Rust Evangelism Strike Force, but for the next big thing; it will probably be shilled by bots.

Low-quality content like YouTube and Reddit comments is probably mostly written by LLM bots that comment on anything to hide the actual spam comments.
