Hold on, there's some inaccuracy here. Only one of those comments got pushback, and that comment wasn't simply matter-of-fact; the problem with it (from my point of view anyhow) was that it added a gratuitous insult ("or the human equivalent"). That made the whole thing read more like snark than a straightforward raising of the question. The other comment was more matter-of-fact about calling it GPT-3 and didn't get any pushback.
The problem is that the cases legitimately overlap. That is, "sounds like GPT-3" gets used as an internet insult (example: https://news.ycombinator.com/item?id=23687199), just like "sounds like this was written by a Markov chain" used to be (example: https://news.ycombinator.com/item?id=19614166). It's not surprising that someone interpreted the first comment that way, because it contained extra markers of rudeness. That may have been a losing bet, but it wasn't a bad one. Perhaps the other comment didn't get interpreted that way because it didn't throw in any extra cues of rudeness—or perhaps it was just random. Impossible to tell from a sample size of 2.
Not to take away from the glory of lukev for calling it correctly. I just don't think the reply deserves to be jumped on so harshly.