
In Screening for Suicide Risk, Facebook Takes on Tricky Public Health Role - mindgam3
https://www.nytimes.com/2018/12/31/technology/facebook-suicide-screening-algorithm.html
======
DoreenMichele
I've been hospitalized twice on a suicide watch. I've also had supposedly
"well meaning" strangers call the cops on me when I was homeless to "check on
me."

The cops advised me I shouldn't be walking in the bike lane that everyone
walked in because there was no sidewalk on that side of the road. Having
advised me of this, had they stopped me again, they would have ticketed me. I
then had to routinely expend extra energy to go out of my way going "home"
every single day.

No, the cops absolutely weren't going to help me solve any of my actual
problems. No, they absolutely didn't care about me, no.

Calling the cops on a stranger because you supposedly care about their welfare
is asshole behavior. I don't care at all about your supposed justification or
algorithms etc.

If you care, you check on them yourself or maybe call an ambulance. If you
don't know them well enough to do that without it going weird and shitty
places, then mind your own damn business.

~~~
nradov
In most places there's no difference between calling the police or an ambulance.
Everything is handled out of the same dispatch center, and the dispatcher will
send whichever first responders they think are appropriate based on policy.

~~~
bjt2n3904
Former EMT here. If you called in that you had suicidal ideation and the means
to harm yourself, it would be treated like a hostage situation in which you are
both the hostage and the captor, especially if you have a weapon like a knife
or gun.

I can see few situations where they would send only an ambulance. The closest
is if you said you simply overdosed on pills. However, at this point you need
to go to the hospital. The ambulance has no ability to force you if you change
your mind. Police would likely be sent along to make sure you didn't change
your mind.

------
bjt2n3904
This. Isn't. Facebook's. Job.

No, seriously. Facebook should host content and serve ads. That's it.

They don't need to worry about being the arbiter of truthful news.

They don't need to stop "hate".

They don't need to make calls about who needs a psych eval.

They just need to host content. But if they truly want to do this, what
happens when someone sues them for not making the call?

I made the new year's resolution to delete my account yesterday. I almost
chickened out, but then this article came up.

~~~
crocodiletears
I mentioned in a previous HN thread that there likely wasn't a line Facebook
could cross that might make me delete my account. I'm not there yet, but I'm
willing to admit that I was wrong. If I still posted on Facebook, I would
leave the platform here and now.

I've had two friends who went to use university counseling services for
anxiety and depression they wanted help with, only to be involuntarily
committed to a local hospital for saying the wrong thing, until their families
intervened to have them discharged. The episode would have been merely
stressful for them, but they were each barred from returning to their classes
until the next semester. Neither of them was at any risk of suicide or
self-harm, but the universities they attended were naturally obligated to err
on the side of caution.

Such events are relatively rare, but keep many students who are suffering from
mental illness from taking the risk of using available resources.

I'd honestly hate to be committed because I was a little too sarcastic in a
Facebook comment, and much like public universities I fear Facebook is
incentivized to make the least risky call.

~~~
jackpirate
In the US, state laws strictly regulate who can and cannot be involuntarily
committed to psych wards. Staff at a university really can't be any more
"strict" or "harsh" than staff at any other counseling center.

In recent decades, psych-laws have become more strict because as a society we
have decided that it is better to involuntarily detain 20 people for a few
days rather than let one person end their life. That seems like a pretty fair
tradeoff to me, although one that reasonable people could easily disagree on.
If you disagree, you should be lobbying your state psychological association,
not complaining about random therapists at colleges.

~~~
electricEmu
States often allow psychological holds and extensions.

There’s little to no evidence required to keep someone for days or weeks under
“observation”. Some states set up entirely separate court systems without due
process, since commitment is a “civil matter”.

Some amount of blame falls on the states that allow this type of behavior.

The burden is on psychologists to demonstrate that the practices work and that
hospitals are properly equipped.

------
protomyth
_The officer who took the call quickly located the woman, but she denied
having suicidal thoughts, the police report said. Even so, the officer
believed she might harm herself and told the woman that she must go to a
hospital — either voluntarily or in police custody._

It’s an imperfect predictor, and Facebook will eventually make a mistake that
is followed by a police shooting. After all, how many stories have been
published of police being called for a mental health issue and ending up
shooting the person the caller was worried about?

I hope the programmers are ready to explain how it worked, because I get the
feeling the family’s attorney is going to have an easy time with black boxes,
machine learning, and no real way to trace why a decision was made.

~~~
wtracy
Regardless of the reasons for the call, can you really pursue damages from
someone who made a good faith call to emergency services?

The only legal issue I could see is violation of privacy, but I bet their EULA
covers them there.

Now, if they screw up bad enough, I could see it precipitating some new
legislation that they might not like. Until then, I think they're in the
clear.

It's another one of those situations where everyone is incentivized to "do
something" whether it's actually helpful or not. "Student dies after symptoms
of suicidal depression went ignored" is a splashy headline. "Student unable to
graduate after involuntary psychiatric detention" is not.

~~~
protomyth
_Regardless of the reasons for the call, can you really pursue damages from
someone who made a good faith call to emergency services?_

Yes, you can try to sue for anything. Also, yes, because the lawyer will try
to show it was not "good faith" but a failure of software that caused the
tragedy. I'm sure the discovery stage will yield enough to make it painful for
Facebook. The current feelings about Facebook are going to come up.

A EULA will not protect you from the families of dead people.

~~~
wtracy
> Yes, you can try to sue for anything.

Point taken.

> the lawyer will try to show it was not "good faith" but a failure of
> software

AFAIK, the phone calls are being placed by Facebook employees, not software.
Presumably, these employees have to look at the actual post content to even be
able to make a coherent phone call. You would have to argue either that the
employee was incompetent, or that the employee was coerced into making an
inappropriate call.

~~~
protomyth
What’s the chance that the employees are licensed social workers or
psychologists? Also, the software used in the initial data mining is going to
come up. I expect discovery is going to be really painful.

------
shiado
The political power afforded to those who get to make calls about the
psychological state of individuals is almost endless. I fully expect social
media giants to use their data to classify the mental state of their users for
political and social ends. For an idea of the potential for abuse read here.

[https://en.wikipedia.org/wiki/Political_abuse_of_psychiatry_...](https://en.wikipedia.org/wiki/Political_abuse_of_psychiatry_in_the_Soviet_Union)

------
magicalhippo
Reminds me of high-school when a buddy was goofing off in class, taking an
online quiz which accompanied a news article about suicide and depression. The
quiz asked "Are you depressed?".

A few of us were following what he was doing, and at the end of the quiz he
got the result effectively saying seek immediate professional help. Shocked,
he exclaimed "What?! I'm having the time of my life!"

Turns out there's an overlap between certain symptoms of depression and being
madly in love... and he just got a new GF a week earlier.

~~~
jazoom
> Turns out there's an overlap between certain symptoms of depression and
> being madly in love

Such as? I'm having a hard time coming up with any.

~~~
alangpierce
Two that come to mind are trouble sleeping and lack of concentration, plus
maybe social withdrawal (at least from most friendships). But still, seems
like a stretch; most depression symptoms are pretty unambiguously negative.

~~~
jazoom
Yeah. I'm a doctor and I've never heard a similar description of symptoms from
a depressed individual and an individual in love.

My guess is the wording in the quiz was poor.

~~~
magicalhippo
I can't recall the details, but it was things like lack of appetite, lack of
sleep, and unfocused or lacking attention.

I recall that the questions made sense given the goal of the quiz, but they
were probably too broad and, which was kinda my point, lacked control
questions to rule out other causes for those effects.

------
rixrax
Anyone else think this is super creepy? Not long ago I would have expected to
find this article in The Onion.

So now a platform that arguably actively participates in making people
depressed has launched a human-assisted AI to report people to the
authorities. How Orwellian is that?

~~~
Kaveren
No, I do not think that a company reporting suicide attempts or messages that
appear to indicate an immediate desire for suicide is creepy. This is not just
a thing Facebook does, either.

This is not "Orwellian". And the key word in the claim about Facebook making
people depressed is "arguably".

------
perfmode
As a black man in America, I really don’t want Facebook sending police to my
home for any reason whatsoever.

~~~
emanreus
Perhaps you feel a bit depressed sir? Don't worry, help is on its way
[http://oi64.tinypic.com/97nokm.jpg](http://oi64.tinypic.com/97nokm.jpg)

------
cemregr
Is the ask here for Facebook not to do anything to intervene?

The Times was giving Facebook crap because Facebook Live promoted/publicized
suicides, so it doesn’t quite compute that their attempt to do something about
it is also bad.

~~~
TelmoMenezes
> The Times was giving Facebook crap because Facebook Live
> promoted/publicized suicides, so it doesn’t quite compute that their attempt
> to do something about it is also bad.

If Facebook's algorithms were promoting suicide-related posts, then doing
something about it might be good, but it does not follow that they should also
take an active role in suicide prevention.

You are creating a false dichotomy, both things can be bad at the same time.
E.g.: being a thief is bad, being a vigilante is also bad.

------
alangpierce
> Mason Marks ... argues that Facebook’s suicide risk scoring software, along
> with its calls to the police that may lead to mandatory psychiatric
> evaluations, constitutes the practice of medicine. He says government
> agencies should regulate the program, requiring Facebook to produce safety
> and effectiveness evidence.

Safety/effectiveness studies certainly seem like a good idea, though I'd be
really hesitant to prohibit Facebook from doing this, at least on the grounds
of false positives (the privacy implications are a different issue).

Facebook is just the first in a line of common-sense checks: a Facebook
algorithm detects that someone seems to be suicidal; this escalates to a
Facebook employee, who reviews it and in extreme cases escalates to law
enforcement; law enforcement gets the information from Facebook and decides
how to respond, possibly pulling in psychiatric medical help, which of course
involves trained people making judgement calls.

Facebook's role is to escalate, not to intervene, and even that escalation
process is done by a human. It seems very unlikely that it would ever result
in a situation where someone gets involuntarily hospitalized because some
machine learning algorithm had a bug.

~~~
dannyr
Law enforcement would be forced to act if Facebook informed them.

They would want to cover their asses in case something bad happened.

If they didn't act, they would be susceptible to lawsuits.

~~~
alangpierce
That's fair, I guess in my ideal world, law enforcement would just act on the
factual information rather than being biased by the fact that a Facebook
person reported it as a concern. But as evidenced by things like swatting,
police are certainly not the best at responding to reports in a calm and
unbiased way. I guess my hope is that law enforcement can learn to
appropriately interpret and respond to these reports (including deeming that
it's actually not a concern), but maybe that's unrealistic.

------
hnaccy
Taking people by implicit threat of force because they've been marked by some
global surveillance system is bad.

------
sn41
Oh. my.

Please do not play with people's lives based on an algorithm whose
classification output we have no way of explaining.

------
Jedi72
Besides the obvious question of 'does it work', I see a myriad of other
problems with this. Does Facebook have the internal procedures in place to
prevent abuse? Will this data ultimately be used to advertise things like
anti-depressants to people? I already get ads for "mindfulness apps" and other
BS cures (I don't doubt the technique in principle, but these things are
pretty useless and just techies cashing in on a fad IMHO).

~~~
chaseha
Slightly off-topic, but mindfulness approaches have been shown to improve
depression and are an important part of the CBT/DBT used by therapists
everywhere.

~~~
quantum_magpie
I don't want to improve depression, I want to get rid of it! :)

------
brudgers
This story illustrates why the _New York Times_ has fixated on aggressively
negative coverage of Facebook. Suicide prevention is another area where
Facebook is wiping the floor with newspapers; as an information channel,
newspapers can't compete over most of the landscape. The article itself
includes the head of a premier suicide prevention program, John Draper,
praising Facebook's initiative.

Data collection may be bad, but suicide prevention is a beneficial byproduct.
Data collection may be bad, but the _New York Times_ won't be accused of
leading by example: the article is packaged to ping Google, Amazon, and yes,
Facebook among its 32 scripts. It's unlikely any of this is there to serve a
higher common interest than suicide prevention.

The examples in the coverage show how far Facebook outclasses the _New York
Times_.

    
    
    Facebook said its suicide risk scoring system worked
    worldwide in English, Spanish, Portuguese and Arabic
    

Facebook is working on something relevant to suicide prevention in Mozambique,
Algeria, the Dominican Republic, and Australia. Beyond its role in US public
policy, the _New York Times_ will always be approximately irrelevant to people
in these places.

Its reporting is parochial: it interviews two experts in New York and four
police forces on the East Coast and in Ohio. To me, as a non-absolutist,
judgment about the initiative's tradeoffs ought to consider stories from
Brazil, Morocco, Mexico, and Canada. Those people don't enter the parochial
conversation at the _New York Times._

Preventing suicides is the kind of difference journalism is supposed to make
in people's lives. The _New York Times_ is driven to "weapons of mass
destruction"-style reporting on Facebook's suicide prevention initiative
because it is so completely outclassed as an instrument for journalistic
impact on the lives of individuals.

------
wtmt
Facebook the company and platform need to be broken up and the pieces/parts
isolated from each other. For all the benefits that people see in using it,
Facebook has become the most dangerous online company in the world.

~~~
antpls
As far as I remember, for every feature of Facebook, like private messages,
sharing pictures and statuses, etc., there is at least one alternative, and
people are free to move to several competitors. There is no technical reason
to force the Facebook company to change its products, but it might have an
obligation to educate people about the uses of its website.

You need strong proof to claim "Facebook has become the most dangerous online
company". Recently, Facebook groups were used to organize the yellow vests
movement in France. For non-techies, Facebook does provide a new way to
communicate.

You could always argue "oh yeah, but it already existed 15 years ago with X",
but Facebook made it available to the masses.

The security/social concerns happening recently create new opportunities for
Facebook's competition, so I wouldn't be that alarmed.

Google, on the other hand...

------
mindgam3
“Facebook’s rise as a global arbiter of mental distress puts the social
network in a tricky position at a time when it is under investigation for
privacy lapses by regulators in the United States, Canada and the European
Union — as well as facing heightened scrutiny for failing to respond quickly
to election interference and ethnic hatred campaigns on its site.”

~~~
yostrovs
The New York Times must be the global arbiter of who is the global arbiter of
mental distress.

------
yadongwen
So should we applaud Facebook for doing this or not?

~~~
nradov
We'll applaud and curse Facebook simultaneously. They're damned if they do and
damned if they don't.

------
smarttack
"You haven't logged onto Facebook in over a month. We were concerned, so we
sent an officer over for a welfare check."

------
ec109685
“Earlier that day, a local woman wrote a Facebook post saying she was walking
home and intended to kill herself when she got there, according to a police
report on the case. Facebook called to warn the Police Department about the
suicide threat.”

Amazing this is considered bad by the commenters here.

~~~
DoreenMichele
So, you've never said something like "Oh, just shoot me!" in exasperation
without actually being suicidal? You don't see how something like that could
be wildly misinterpreted in written form, without context, without voice tone,
etc?

It's easy to feel like what you say online doesn't really count. People feel
that way all the time and that fact often causes problems. Usually, though,
those problems don't involve someone calling the cops on you "for your own
good." At most, you get banned from some forum.

~~~
ec109685
> without context

They do have context, given how much of a person’s internet life they can
“see”. While you can argue that is bad, if they have enough signal that a
person is likely to do harm and the person says something like that, it seems
like a net positive for Facebook to alert someone to help the person out.

~~~
DoreenMichele
I do a helluva lot of commenting online via twitter, various forums and a
bunch of blogs. Over the years, I've gotten much more careful about the things
I say online on specific topics because people routinely imagine they know me
better than they do based on having read my writing for so many years.

So my experience really doesn't fit with what you are saying at all. My
experience is that what people say online is the tip of the iceberg of their
life and the world imagines it has greater context than it really has and
leaps to weird and inaccurate conclusions on a pretty regular basis.

Trying to figure out how to express myself so I feel like I'm actually
understood takes an inordinate amount of my time. I have reason to believe
most other people don't spend anywhere near as much time and effort as I do on
trying to sort that out.

Most of the time, most people don't seem to get as wildly and horrifically
misunderstood as I seem to so often end up being. Or if they are
misunderstood, it isn't that big of a deal in many cases.

But I remain aghast at the idea of someone calling the cops on me because of
something I said online that people imagine they understood, when it was not a
"threat" against anyone but me.

On top of being skeptical that Facebook understands you as well as you imagine
they do, I am also pro right to die.

In a nutshell, if people give a damn about my sorry ass, they could choose to
do something meaningful in the here and now to help me make my life work
instead of being part of the problem, making my life harder than it already is
and then pretending to themselves they are Good People by calling the god-
damned cops on me for leaving a fucking courtesy note somewhere online if I
decide to finally check the fuck out of here.

Note to self: Don't bother to leave a courtesy note online informing anyone of
fuck all. They really don't deserve such courtesies. Geez.

------
Kaveren
It seems like there's a vendetta against Facebook here. Why is the term "mixed
results" used? Everything they're doing is completely voluntary. The NYT
somehow managed to paint Facebook alerting the police about a possible suicide
attempt as something negative, just because the police had already helped the
person. Or to paint them as being too late to report it; again, this isn't
Facebook's fault.

You can argue about the term "mixed results", but I think it's reasonable to
assume they're implying that non-success is a negative result, even though
Facebook is going entirely above and beyond here to begin with, so there
really isn't any "negative result" to be had.

Do people dislike big tech companies so much that reporting immediately
concerning messages and suicide attempts to the police must be a bad thing?

This is common practice anyway. If you write that you're about to kill
yourself on Reddit and a moderator notices it, I sure hope the police are
contacted. Nothing I see to criticize here at all.

