
The Unfriendly Robot: Automatically flagging unwelcoming comments - JasonPunyon
https://stackoverflow.blog/2020/04/09/the-unfriendly-robot-automatically-flagging-unwelcoming-comments/
======
narag
When Stack Overflow started, I tried to contribute but never succeeded; there
was some kind of chicken-and-egg issue that prevented any answer from getting
through. I don't really remember what it was. I just gave up, since I wasn't
interested enough in karma to jump through those hoops.

So it became a read-only resource for me, fair enough. Now most of the useful
information I find is decorated with comments from some moderator or
contributor talking shit about the question or the answer. Somehow I feel the
unfriendliness directed towards me, as if I should feel ashamed to want the
wrong answer to the wrong question. Then I read the first sentence of this
article:

 _We all want Stack Overflow to be a welcoming and friendly place._

I do, actually. But not sure about _all_.

------
rendall
Stack Overflow had a great thing for a long, long time. Once upon a time, the
Stack Overflow karma score actually reflected one's domain knowledge and
community spirit. But, as with anything that becomes successful and _measures_
success comparatively, it attracted the attention of people who are
competitive, smart, and a little ruthless. For these people, the score itself
became the goal, not the actual content.

Unfortunately, via the powers granted through its reputation system, it also
rewarded these smart, competitive, slightly ruthless people with tools to
restrict and control the content that others produce. So smart, competitive,
slightly ruthless people control content for the rest of us, who just want to
ask and answer questions about tech.

So, a sincere good luck with techno-solutions to a deep-seated, cultural
problem over there. I sincerely hope it works. It would be nice to have the
old Stack Overflow back.

~~~
throwaway2048
Yep, plenty of ways to be sarcastic, hostile and smarmy without triggering
some neural net.

~~~
szhu
I had the same thought, but I think this excerpt from the post assuaged some of
those concerns:

"We want to make clear here that even though the current version of the robot
appears to be performing amazingly well, our human flaggers are just as
important as ever. The only reason the robot is able to do as well as it’s
doing is because of the 100,000+ flags that humans have raised and moderators
have handled. The robot is a way to find things that look like things humans
have flagged before, but we don’t think it can identify truly novel modes of
unfriendliness. Only the humans in our system can do that. So if you see
unfriendliness in the comments, please flag it. You’re helping to make Stack
Overflow better."

~~~
rendall
Unfriendly comments are the least of their problems. The fundamental problem
is their broken reputation system.

Questions are outright closed by people with high enough karma, even when the
question is outside their knowledge domain. People who leave helpful feedback after a
downvote will receive a retaliatory downvote in turn. People with massive
karma will receive upvotes even if their answer is not a good one. To boost
karma, it's better to be first than good, in general. New questions are closed
and redirected to old questions with outdated answers. Non-native English
writers get downvoted for grammatical errors, even if the question is fine.
One downvote will result in more, as the mob piles on. Bringing up any of
these issues in their "meta" results in downvotes.

So, Stack Overflow will focus on its moderation system and "unfriendly
comments" while its entire system encourages a kind of arbitrariness that is
itself unfriendly.

~~~
grey-area
Exactly. The biggest problem is the culture of existing users, which Stack
Overflow built up by encouraging a competition for internet points. That
competition leads to all kinds of antisocial behaviour, to attempts to game
the system, and to discouraging participation by rivals.

------
ilaksh
The worst part for me is when you ask a hard question and, rather than
admitting they don't know the answer, they decide to close it because they
think they've caught you on a technicality about the validity of the question.

Or a similar variation: it's a hard problem, and they don't know the answer,
so they assume you are a complete idiot and ask you to verify things that were
implicit in the question. Like, you said you did X and Y, and it's almost
impossible to do those things without doing A and B first, but they adopt a
patronizing tone telling you that you should have done A and B first.

Here's another one: you write a decent answer to a question. Someone with a
massive score comes along an hour later, takes your answer, and elaborates on
it a bit. They get the upvotes and the checkmark; you get zero. The reason
they got the points is that they spend all day gaming Stack Overflow, are
happy to exploit any way to steal points, and so have a high reputation.

I really feel like their data science team is not trying hard enough to find
the dyed-in-the-wool pathological sons of bitches that are on Stack Overflow.
There are plenty of them on there.

~~~
throwaway2048
The problem is that the pathological users ARE the active, participating
userbase; the site is built on their input to cater to them and their desires.

It's the same reason that online communities almost inevitably degrade over
time: they optimize for the most active, loudest users above everything else,
which is not necessarily the path to a better community.

------
DeathArrow
While I find Stack Overflow very useful, I find its model very restrictive.

Sometimes you can get more insight on how to solve a problem from a discussion
than from an answer. Why and when can be as important as how.

Also, if people dislike your question, it gets downvoted even if it obeys the
rules. They can dislike your question because they dislike the problem you are
trying to solve or the way you are trying to solve it.

Some comments, instead of being related to the question at hand, suggest that
you shouldn't solve your problem at all but should do things another way. That
can be fine, but it doesn't fit the site's format well, because the format
doesn't allow discussions.

However, I don't see alternatives. You can ask on reddit, but not many people
there are willing to answer.

~~~
leggomylibro
If you're lucky, your language/library/etc might have a fairly active IRC
channel or similar. Popular languages often have a myriad of channels
dedicated to different subtopics.

They're nice because like you said, discussion can be much more helpful than a
single answer.

~~~
james-skemp
Great suggestion.

However, one issue I've run into with the rise of chat and SO, and the
disappearance of forums, is that if your question isn't the right fit for SO
and is a bit tricky or time-consuming, it's easy for your query to get buried.

I reached out to the TypeScript community via Discord for assistance on how
one would add TypeScript definitions to an existing JS repo. I got a little
discussion, IIRC, and then, because of other factors, it was buried.

------
wodenokoto
I know HN likes to berate SO as if it's some has-been that nobody uses
anymore.

Just the other day I asked a fairly specific question, putting in some effort
to write code examples and add links to relevant documentation, and within a
day I had a four-paragraph, in-depth answer.

No, I did not get any upvotes or smiley-stickers, but I got for free what I
wouldn’t even know how to buy.

Moreover, I end up on SO daily when programming.

The utility of SO is second to none and it has been that way for over a
decade.

That is pretty amazing in my book.

~~~
throwaway4585
SO is still good; what they mean is that it has peaked or is in the process of
peaking. Just as previously-cool bars or festivals are still good but just
'not the same' and on the verge of becoming bad. It's a very specific mindset
where you must be growing (and not just growing: the growth rate must _also_
be growing) and avant-garde all the time, and it's not really surprising to
see it on a forum about startups.

I'll also add that although SO was of course invaluable to me as a reader, the
few times I've had to actually ask questions because I couldn't find what I
needed, other users were stumped as well. Just adding my little anecdote to
yours.

------
pdimitar
I feel they never truly recovered moderator goodwill after that fiasco months
ago, so they tried to promote more people into doing it, me included (1.3k to
2.1k reputation overnight).

Apparently it didn't work (I'll not edit people's questions!), so now they'll
try automating it. Good luck to them, but I don't see them succeeding in an
area where nobody has.

And, as others said, SO isn't the same as before. It's mostly a homework
finder and it doesn't encourage newer versions of code.

------
DeathArrow
Stack Overflow's karma system is broken. It turned into a playground for
people who like to show off the power their points grant them, while becoming
a searchable library of code snippets for everyone else. The code snippets
tend to be old, since asking new questions is discouraged and nobody likes to
update old questions with new answers.

------
drewcoo
That's a dark pattern.

They didn't like having to openly apply human judgement, so they applied AI:
AI that has some human judgement built in, and bias along with it. But now
nobody is responsible, because the machine did it.

~~~
joncampbelldev
TFA appears to address this: the automated system is used to help identify
comments for human moderators, and the human is the one responsible. According
to Stack Overflow, they plan to use this system only to assist the human
moderators.

If you don't believe them and suspect they will have a fully AI flagging
system in the future (in direct contradiction of this post), that's fine, but
it might be better to state it as such, since human judgement currently
remains paramount.

------
throwaway4585
The main issue of Stack Overflow is the YX problem. The YX problem is, of
course, the reverse (perverse?) of the XY problem, whereby, confronted by a
question that stumps them, an SO user will, instead of admitting so, do one of
the following:

-Attempt to second guess your use case for you. Are you sure you _really want_ to do X? Don't you want to do Y instead? (No thanks, I really _must do_ X and can't do Y because I have Z, A, B and C constraints.)

-Claim your use case is _wrong_, should not exist, or at the very least is exceedingly niche, and they couldn't possibly imagine _why_ someone would have your use case.

-Claim your use case is literally impossible to fulfill with the constraints you've given. While this may well be the case it is a way stronger claim than most people realize.

-If someone _does_ provide an answer to your use case, will scramble to insist that this solution is clumsy and shouldn't have been written in the first place because it could _give people wrong ideas_.

-Insist you didn't do the research and if you did you'd find the solution is Y. (Yes I did, and it really isn't Y because my problem is X, thank you very much.)

-In the worst case, wrongly mark your question as a duplicate of Y

The strength of SO is that it is purely a question-and-answer website: I give
a question, you give an answer. Forums always sucked because the question
askers were often new and inexperienced and formulated questions the wrong
way, while the entrenched users were patronizing, insisted that a certain way
of solving questions be used, or said useless stuff like "I don't owe any of
my time to you" (yeah, well, why are you bothering to write this post in the
first place then?). With SO's system of questions, answers, and comments,
everything is clear or clarified, all the chitchat is dispensed with, and no
room is given to subjective stuff like 'is your question correct?', because no
matter how good a hacker you are, your knowledge can't possibly cover 100% of
all computer programmers' use cases.

I'm not as pessimistic as other people about the future of SO, and I still
think it is and will be an invaluable resource in the coming years, but if the
YX-problem trend takes a turn for the worse, it could mean a return to the
forums that sucked.

------
uk_programmer
More robo-flagging of human interaction based on the faulty axiom that you are
somehow in control of how someone else perceives something. They even admit
that malice isn't intended.

> The problem is the tone the _reader_ experiences. Most of the time, it
> doesn’t appear that commenters are actively trying to make their comment
> condescending, dismissive, or any of the other subtle variations of
> unwelcoming we see. These are people earnestly trying to help others, even
> if their tone is off.

What tone the reader experiences and how they deal with it is entirely up to
them. I'm fine with a site flagging abusive language, but flagging something
that a robot merely perceives as unfriendly is just folly.

I certainly won't be contributing anymore. I don't want to be on a site that
has a robot flagging human interaction.

EDIT: Abusive language filters usually don't work that well. I put someone's
real name into a system and the system flagged it for "abuse": their surname
was "Cummings".

Also the term "faggots" (which in the UK is a meal you can buy in the
supermarket) has been censored on social platforms.

[https://www.rt.com/uk/468436-google-censors-faggots-peas/](https://www.rt.com/uk/468436-google-censors-faggots-peas/)

Because, guess what, the context in which you use a word is important. Most of
these places can't get abusive-language filtering right; they won't get this
right either.
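The failure mode described here (sometimes called the "Scunthorpe problem") is easy to reproduce. A minimal sketch of a naive substring-based filter, with an illustrative (hypothetical) blocklist, shows why names like "Cummings" get flagged:

```python
# Naive abuse filter: flags any text containing a blocked string,
# ignoring word boundaries and context entirely.
BLOCKED = {"cum", "ass"}  # illustrative blocklist, not from any real system

def naive_flag(text: str) -> bool:
    """Return True if any blocked string appears anywhere in the text."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED)

# Ordinary names and words trip the filter:
print(naive_flag("Reviewed by D. Cummings"))   # True - "Cummings" contains "cum"
print(naive_flag("Please pass the glass"))     # True - "glass" contains "ass"
print(naive_flag("A perfectly fine comment"))  # False
```

Word-boundary matching helps with names, but as the comment notes, even whole words ("faggots" as a UK supermarket item) are fine or abusive depending on context, which substring and word-list approaches cannot capture.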

------
mjcohen
I'm just on math.stackexchange.com, and I greatly enjoy it. Maybe it's a good
thing I am there instead of overflow.

------
papermachete
No moderators left to ban, I see.

