Posting 100% AI generated content should be against the rules. (Outside of exceptions where it is relevant.)
But where should the line be drawn when a user collaborated with an AI on a comment? As an English-as-a-second-language speaker, I've been using tools like Grammarly or the Hemingway App for years to improve my writing. I will gladly use a GPT-based proofreader/editor browser plugin eventually, why not?
I agree, but the alternative is the end of HN + the end of the rest of the open internet in a year or five.
When, soon enough, you only meet bots that are trying to manipulate you or sell you something, the value for everyone goes to zero pretty quickly.
I'm not sure how this will be solved, besides most people ditching the open internet and engaging 100% in tiny groups of people whose mental capacities they already know.
Christ, this really is the end of the "social internet" where you could find inspiration and new perspectives, isn't it?
It makes me want to revisit the Web of Trust[1], and apps like Keybase where users have a cryptographically verified social graph composed entirely of people who were verified by another human who knows them. That whole idea goes directly against anonymity, though, so maybe that will become a more pronounced way to split the internet: verifiable human identities, and anonymous bots and humans.
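To make the idea concrete, here's a minimal sketch of the mechanism, assuming PyNaCl for Ed25519 signatures. An attestation is one user signing another user's public key ("I verified this human"), and an identity counts as trusted if a chain of valid attestations reaches it from your own key. The Identity class and is_trusted helper are made-up names for illustration, not Keybase's actual API:

    from collections import deque
    from nacl.exceptions import BadSignatureError
    from nacl.signing import SigningKey

    class Identity:
        def __init__(self, name):
            self.name = name
            self._key = SigningKey.generate()
            self.verify_key = self._key.verify_key

        def attest(self, other):
            # "I verified this human": sign their public key.
            return self._key.sign(bytes(other.verify_key)).signature

    def is_trusted(root, target, attestations):
        # attestations maps (signer, subject) -> signature.
        # Breadth-first walk from the root, following only edges
        # whose signature actually verifies.
        seen, queue = {root}, deque([root])
        while queue:
            current = queue.popleft()
            for (signer, subject), sig in attestations.items():
                if signer is current and subject not in seen:
                    try:
                        signer.verify_key.verify(
                            bytes(subject.verify_key), sig)
                    except BadSignatureError:
                        continue  # forged attestation: skip this edge
                    seen.add(subject)
                    queue.append(subject)
        return target in seen

    # alice vouched for bob, bob vouched for carol: carol is
    # reachable from alice's key, mallory is not.
    alice, bob, carol, mallory = (
        Identity(n) for n in ("alice", "bob", "carol", "mallory"))
    att = {(alice, bob): alice.attest(bob),
           (bob, carol): bob.attest(carol)}
    assert is_trusted(alice, carol, att)
    assert not is_trusted(alice, mallory, att)

Note that the check proves only that some chain of humans vouched for the keyholder; it says nothing about whether the keyholder's comments are their own, which is exactly the gap the reply below points out.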
This is a solution against botnets, but not against humans who use AI to enhance or write their comments for them, as the poster upthread was accused of doing.
Might well be. It's also an opportunity to study (meta-study?) the behavior of populations under these changes. It's a lot like an A-life experiment writ large, and played out in real life.