
Advice when mentally ill flock to your service? - noconspiracy
I run a service that uses a mix of automation and manual help to deal with customer service issues (throwaway account). 97% of the time, things work really well: we have great customers, we're small and bootstrapped but growing, and we're very happy with the organization we're building.

But for whatever reason, we've become very popular on conspiracy theory websites, and maybe 3% of our customers ultimately end up using our service in a somewhat harassing way. We've modified our terms of service to ban a variety of forms of harassment, but it's often hard to weed out the problem users from legitimate ones until things get bad, and we're trying to come up with consistent policies for weeding out bad users rather than we-know-it-when-we-see-it.

For reference, these users are likely very mentally ill. The New York Times did a piece on the segment of the Internet that has found us:

http://www.nytimes.com/2016/06/11/health/gang-stalking-targeted-individuals.html?_r=0

Anyone run into anything similar? Advice?
======
petervandijck
I ran into something like this when a discussion board I ran attracted the
wrong users and ended up having to close it down.

I would fight this hard, zero tolerance. Your policy should be "we know it
when we see it, immediate removal, and it's not open for discussion". Don't
try to come up with rules etc., because that will just open you up to infinite
discussion. One thing trolls are good at is sucking up your time.

In your shoes, I would be very tough and never discuss your decisions ("we
removed you; we're not going to discuss it"). NEVER EVER get into discussion
with the 3%.

------
helen842000
Harassment can be flagged either by the type of language used or by the
repetition of an action.

It has to start with a manual we-know-it-when-we-see-it process. You collect
data about the types of users that are using your service in a harassing way.
Where they come from, how they use the service, etc. Over time you will see
patterns and trends that will likely let you flag certain sign-ups
automatically. The more accurate it gets over time, the less manual
intervention you will need.

It may be that certain referrers or keywords drive this type of traffic.
Perhaps there are similarities in the first actions of this type of new user
or the content that they create.

The only way to recognise it is to start gathering examples and learning from
them.
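The pattern-flagging idea above could be sketched as a crude risk score over sign-up attributes. This is a hypothetical illustration: the referrer domains, keywords, field names, and threshold are all made up, and a real system would learn these from the collected examples rather than hard-code them.

```python
# Hypothetical sketch: score new sign-ups against patterns observed in
# past problem accounts. All names and values here are illustrative.

SUSPECT_REFERRERS = {"conspiracyforum.example"}
SUSPECT_KEYWORDS = {"gangstalking", "targeted individual"}

def risk_score(signup: dict) -> int:
    """Return a crude risk score; higher means more likely to need review."""
    score = 0
    if signup.get("referrer", "") in SUSPECT_REFERRERS:
        score += 2  # traffic from a known problem source
    text = signup.get("first_message", "").lower()
    # each suspect keyword appearing in the first message adds one point
    score += sum(1 for kw in SUSPECT_KEYWORDS if kw in text)
    return score

def needs_manual_review(signup: dict, threshold: int = 2) -> bool:
    """Route high-scoring sign-ups to a human instead of auto-banning."""
    return risk_score(signup) >= threshold
```

Routing flagged sign-ups to manual review, rather than rejecting them outright, keeps the final we-know-it-when-we-see-it call with a human while the heuristics are still rough.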

------
_Marak_
Regardless of the user-base, there will always be a small segment which
accounts for the majority of your business's support costs ( filtering /
banning / rate-limiting / etc ).
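The rate-limiting mentioned above is one of the cheaper mitigations to implement. A minimal sketch, assuming a per-user token bucket (a generic technique, not anything OP described; the clock injection is just to make it testable):

```python
import time

class TokenBucket:
    """Minimal per-user token bucket: allows `capacity` actions up front,
    refilled at `rate` tokens per second. A sketch, not production code."""

    def __init__(self, capacity: float, rate: float, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity      # start full
        self.now = now              # injectable clock for testing
        self.last = now()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the action."""
        t = self.now()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

One bucket per user keeps any single account from monopolizing the support queue without affecting the well-behaved 97%.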

The best advice I can give you is to treat every user as a good customer
until proven otherwise. The last thing you want is false positives (good
users flagged as bad) cutting into your revenue stream and alienating
potential customers.

It's unfortunate you are experiencing issues. Do your best to keep internal
metadata on every user you interact with. Document and tag all related support
communications. Get a good CRM. Once you've collected a bit of data, look for
common signs of a troubled user and see if you can make any changes to policy
or software to prevent future issues.

------
scholia
It would be nice if some sort of captcha could weed them out, but that sounds
unlikely.

Is there any way of creating a hierarchy of users? Newbies would have
probationary status and limited powers until promoted to standard level. You
could also promote your best users to a higher level, and they could help
police the probationary users.

If users are only dealing with your own staff, maybe an internal Reddit-style
points scoring system would be a start. It might help if it made the "bad"
users visible sooner.
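The probationary hierarchy and points system described above could be sketched as a simple trust-level function. The thresholds and permission rule here are hypothetical placeholders, not a recommendation of specific values:

```python
from enum import Enum

class Trust(Enum):
    PROBATION = 0   # newbies and flagged users: limited powers
    STANDARD = 1    # normal access
    TRUSTED = 2     # may help police probationary users

def trust_level(days_active: int, flags: int, helpful_votes: int) -> Trust:
    """Hypothetical promotion rule; all thresholds are illustrative."""
    if flags > 0:
        return Trust.PROBATION          # any abuse flag demotes the user
    if days_active >= 30 and helpful_votes >= 10:
        return Trust.TRUSTED
    if days_active >= 7:
        return Trust.STANDARD
    return Trust.PROBATION

def can_contact_support_unreviewed(level: Trust) -> bool:
    # probationary users' requests go through extra review first
    return level is not Trust.PROBATION
```

The point of the tiers is exactly what the comment suggests: "bad" users surface early, while they still have limited powers, instead of after they have done damage.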

But I expect you've already thought of these and similar ideas....

Thanks for the link: interesting story.

~~~
noconspiracy
Thanks. I think some kind of internal "meta moderation" may be the best way,
similar to how some gaming services have tribunals. Appreciate the feedback
and advice.

------
sogen
Hi, is there a way to contact you? I work on improving user flows and
solving issues like this, and would love to give it a shot.

For my contact info check my profile.

------
framebit
I have no useful advice to offer for your troubling situation, but I wanted to
comment and say that the NYTimes article you linked to is scary and
fascinating. I know you weren't sharing it out of interest but to illustrate
your predicament. Still, thank you for sharing it.

------
saluki
Can you add a request for some personal information during the sign up that a
regular user wouldn't think twice about providing but your 3% would object to
and bail out of your signup process?

~~~
noconspiracy
Unfortunately, that would probably have the reverse effect: The 3% would put
in way more work, while "good" customers would be more likely to abandon. The
3% are willing to endure endless tedium for their conspiracy theories.

~~~
codezero
The upside here is that you can blacklist based on some identifiable
information as long as it can be easily confirmed.

I dealt with quite a few mentally ill antagonists when moderating Quora.

The best thing to do, though, is to let them use your service but shadow ban
them as much as possible, ideally without them realizing they are banned.
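The shadow ban described above boils down to a visibility filter: the banned user still sees their own content, so they don't realize anything changed, but nobody else does. A minimal sketch with hypothetical names:

```python
# Sketch of a shadow ban. The banned user's messages remain visible to
# themselves but are hidden from everyone else (including staff queues,
# if desired). Field names are hypothetical.

shadow_banned: set[str] = set()

def visible_messages(messages: list[dict], viewer_id: str) -> list[dict]:
    """Hide shadow-banned authors' messages from everyone but themselves."""
    return [
        m for m in messages
        if m["author"] not in shadow_banned or m["author"] == viewer_id
    ]
```

Because the banned user's own view is unchanged, they get no signal to create a new account, which is the main advantage over an outright ban.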

Feel free to contact me if you want to chat further.

------
pmiller2
I don't understand. It's your service. Just ban them or otherwise don't allow
them to use it if they're more trouble than they're worth.

~~~
DannyB2
Saying "just ban them" is like when the MPAA / RIAA says that Google should
somehow magically remove all copyright infringing links from its search
results.

Doing it manually doesn't scale.

So how do you do it with automation?

How can automation recognize who the bad users are in order to "just ban
them"?

~~~
pmiller2
OP doesn't give enough specifics to speculate regarding automation, other than
_mumble mumble machine learning_.

How about investigating reports of abuse as they're surfaced, like most
companies do?

