They've been trying to manipulate HN with multiple accounts and voting rings—which hasn't worked—and baity titles, which unfortunately has. Note how they're doing the exact same thing with this one.
GPT-3 is a red herring; the issue was the generic, baity title on a popular theme. Those routinely get upvotes because people see words like 'procrastination' or 'overthinking' and instantly think of their own experiences and ideas and want to talk about them. Such threads are not about the article, they're about the title, which the author admits writing ("I would write the title and introduction, add a photo"). Title plus introduction is already more than most people read, so this case is not what they say it is—which is consistent with their other misrepresentations, including the false claim "only one person noticed it was written by GPT-3".
Edit: as YeGoblynQueenne and minimaxir point out, given the other misrepresentations I'm not sure why we should believe that the article body was exactly "written by GPT-3" either, especially because the author describes it this way: "as unedited as possible"—in other words, edited. That doesn't much matter, though, because the salient inputs for the HN thread were the parts they wrote themselves.
We usually downweight such submissions, but occasionally we let one through because it seems healthy and interesting to have such discussions in the mix—just not too many. If there was a failure here it was that the moderators didn't look at the article. Or maybe we did; on my computer the title, photo, and introduction cover the entire first screen. Maybe that seemed good enough to let the discussion run.
Given this new information about the author's practices, I'm inclined to think that everything they say in the above article should be treated with suspicion, especially since they admit that a) they do not have ready access to GPT-3 and b) they edited the "GPT-3" article to some extent. For example, I would not be surprised to learn that, unable to use GPT-3 for a significant amount of time, they generated some of the text of the article and then created the rest by hand, or similar. Not because I don't believe that readers on the internet can mistake GPT-3 output for something written by a human, but because the author's ethics sound just... dodgy.
> they edited the "GPT-3" article to some extent. For example, I would not be surprised to learn that, unable to use GPT-3 for a significant amount of time, they generated some of the text of the article and then created the rest by hand, or similar.
You should assume all GPT-3 related text (including app demos and screenshots) is cherry-picked unless you can see the text generated in real time for yourself.
Cherry-picked, yes, but the author says they also edited the "GPT-3" article to some extent. The extent to which this was done is what I'm wondering about.
Given the algorithm: generate text, cherry-pick, edit. We could make a random number generator pass the Turing test. It is just a matter of processing power and time.
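A minimal sketch of that generate-and-cherry-pick loop, just to make the point concrete. Everything here is illustrative (the toy scoring rule, the alphabet, and the sample count are assumptions, not anything anyone actually ran); the final "edit" step is left to the human:

    import random
    import string

    ALPHABET = string.ascii_lowercase + "    "  # extra spaces so word-like chunks show up

    def generate(length=40):
        # Stand-in for an untrusted text generator: pure random characters.
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def plausibility(text):
        # Toy score: count word-like chunks of 3 to 7 letters.
        return sum(1 for w in text.split() if 3 <= len(w) <= 7)

    def cherry_pick(n_samples=100_000):
        # Generate many samples, publish only the best-looking one.
        return max((generate() for _ in range(n_samples)), key=plausibility)

    print(cherry_pick())

The more samples you throw at it (processing power and time), the better the published output looks relative to a typical draw from the same generator.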
That's also my take. Anyone can say anything about GPT-3, and it's impossible to refute, because (1) most people don't have access, and (2) the authors rarely provide the complete input data, so even those who do have access can't verify these claims.
Really, the name "OpenAI" is a misnomer at this point, and I hope a truly open alternative will appear.
These are perennial issues. We've worked hard on anti-voting-ring measures. The voting rings in their case didn't work. Baity titles are a harder issue and require moderation. If you want more information about this, I'd need to see specific links. Most of the things people say about these matters aren't true; they're overgeneralizations at best, and often just imagination.
As for downvoting, people have been saying all the same things for well over a decade. The discussion circulates endlessly. Does downvoting get abused? Sure, but evaluating the system by specific examples is like evaluating an immune system by looking at what a couple of white blood cells did. You don't reprogram your immune system based on that—especially because the examples that stand out are always the cases that the system didn't handle perfectly.
> You don't reprogram your immune system based on that
Yet HN clearly did reprogram their system, by requiring a minimum karma before an account is able to use the downvote button, and not allowing users to downvote replies to their comments, instead of just working like Reddit.
What was the reason for that?
Someone seems to have recognized the downsides of allowing everyone to bury comments, and made an attempt to limit that.
That limit is no longer effective, as often seen in drive-by downvoting on divisive topics and the burial of facts somebody doesn't like.
As the total number of HN users increases, it becomes easier for any account to reach that minimum karma, e.g. by posting funny comments (HN is not immune to memes) or riding bandwagons.
That karma limit has been in place for 10 years or more.
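For reference, the two rules described above (a karma minimum before downvoting is enabled, and no downvoting of replies to your own comments) amount to roughly the sketch below. This is an illustration only; the threshold value and field names are assumptions, not HN's actual implementation:

    from dataclasses import dataclass

    DOWNVOTE_KARMA_THRESHOLD = 500  # assumed value; the real threshold may differ

    @dataclass
    class User:
        id: str
        karma: int

    @dataclass
    class Comment:
        author_id: str
        parent_author_id: str  # author of the comment this one replies to

    def can_downvote(voter: User, comment: Comment) -> bool:
        # Rule 1: a minimum amount of karma is required before downvoting at all.
        if voter.karma < DOWNVOTE_KARMA_THRESHOLD:
            return False
        # Rule 2: you can't downvote a direct reply to your own comment.
        if comment.parent_author_id == voter.id:
            return False
        return True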
As I said, this discussion circulates endlessly; if you use HN Search you will find that all of these concerns/complaints go back to the beginning of the site. People have passionate, conflicting views about how downvoting should work, and commonly feel like the site is deteriorating because it doesn't work the way they would design it.
Good moderation is inherently tolerant of some friction and seeming imperfections. Trying to craft some Perfect social climate leads to horrifyingly draconian and oppressive stuff.
I'm here for the life-giving, messy breath of fresh air not found in most online spaces.
The reality is that moderation is about helping people tolerate people unlike themselves and about buffering the worst communication gaffes. But beneath it all, sometimes people are just grumpy or mean, and there is no fixing that. You can only mitigate the worst expressions of it, not stamp it out.
Right now, there's a pandemic on. The world generally is pretty cranky and wants to insist that someone fix Something so people can feel like their world isn't spinning out of control.
Members successfully bullying mods into fiddling with voting rules in a short-sighted manner to get some momentary sense of control in life isn't going to fix the pandemic. It's just going to undermine long-standing best practices for a well-run forum.
You're assuming that it's possible to satisfy everybody. It isn't. For one thing, the different requests contradict each other.
One must make design choices. I think HN's design choices in this area are good and I've not heard sufficiently good reasons to change them. Not everyone agrees, but everybody will never agree.
> I think HN's design choices in this area are good and I've not heard sufficiently good reasons to change them.
I find it hard to believe that in a discussion which supposedly “goes back to the beginning of the site” you haven’t heard sufficient reasons yet.
I’ve often thought about gathering a collection of comments that were perfectly reasonable, well-behaved and factual, but still grayed out because somebody didn’t like them. (It takes, what, just 2 or 3 downvotes to make a comment practically invisible.)
Perhaps enough such examples would be sufficient to recognize that there is a problem.
Sure, if you make such a list I'd be interested to see it.
Btw, if a comment is faded and you're having trouble reading it, you can click on its timestamp to go to its page, and in that case it should be rendered normally.