
Facebook Offers Tools for Those Who Fear a Friend May Be Suicidal - danso
http://www.nytimes.com/2016/06/15/technology/facebook-offers-tools-for-those-who-fear-a-friend-may-be-suicidal.html
======
lalos
This has a ton of implications. First, people 'tagging' profiles as depressive,
probably training an algorithm behind the scenes, so that if you behave a
certain way the chances you are depressed are 94%. Second, who owns this
valuable health information? Can they sell it to third parties? Insurance or
recruiting companies?

I'm not against the idea, but it's thought-provoking to consider the several
implications of what this data means.

~~~
tajen
I just watched a documentary about Foxconn, which tries to filter out suicidal
employees. Surely Facebook's profiling would be useful to many high-pressure
companies and their HR departments. Not sure humanity is progressing, though.

~~~
walrus01
> which tries to filter out suicidal employees

The cynic in me says they don't particularly care about the high suicide rate
of assembly line factory workers in China, as long as said factory workers
don't off themselves while on company property or at the company dormitory.
Because that's bad PR, guys.

------
Amorymeltzer
The Facebook tack is interesting, because it's technically not a service for
the suicidal individual; rather, it's a service for their friends who "really
want to help, but... just don’t know what to say, what to do or how to help
their friends." That's a unique angle, which sets it apart from, say, Apple,
which has taken significant steps since the first rollout of Siri to enable
her to respond appropriately to certain queries (e.g., "I was raped" or "I
want to commit suicide").[1,2] The conversation on how we want our electronics
and services to behave is really interesting, and one that is only going to
get more complex as we get better at language and mood
processing/modeling/profiling.

1: [http://abcnews.go.com/Technology/apples-siri-now-prevent-sui...](http://abcnews.go.com/Technology/apples-siri-now-prevent-suicides/story?id=19438495)

2: [http://www.telegraph.co.uk/technology/2016/04/04/siri-update...](http://www.telegraph.co.uk/technology/2016/04/04/siri-update-offers-support-for-rape-victims-and-suicide-question/)

------
kome
Perhaps it is time to start a conversation on why suicide is seen as a
problem, tout court.

I always thought of suicide as a possibility among others. It’s a normal
feeling to be tired of life.

I would have committed suicide a long time ago if suicide weren't so
complicated and scary, with a huge risk of failure. I don't want to feel pain,
I just want to go. Like in euthanasia.

I checked, and unfortunately not even Switzerland offers the possibility of
euthanasia (a good death) to healthy people. I really think that's a societal
shortcoming that needs to be addressed in the future. What's the problem with
suicide at all?

~~~
styrophone
I don't think suicide is inherently a much worse cause of death than many
others, but it is often associated with a mental illness, outside of which the
sufferer would not choose to die. Although the illness manifests as free will,
it is an illness nonetheless that might be treated. I don't mean to say that
it is impossible for a person of sound mind to choose not to live any longer.
However, given how often treatment results in improved will to live and
improved quality of life, I think it would be irresponsible to assume a
healthy mind without challenge and go to euthanasia as a first resort standard
of care. We certainly wouldn't do that for any other presentation of symptoms.

~~~
s3arch
Some people who are suicidal are not mentally ill. Sometimes they lose
perspective on their life, and this may continue for many months or years.
They feel that the pain they are in now is much worse than death. They run out
of answers for "why they must live".

~~~
ionised
Yeah, ritual suicide for example was not carried out by depressed people.

It was a cultural phenomenon, not mental illness.

------
droithomme
Many questions are never asked in these articles.

Is suicide bad? Is it immoral? Is it a crime? Should it not be permitted? If
it is wrong, immoral, and a crime, should people who attempt it be punished?

~~~
x3n0ph3n3
I'm in the camp that suicide is a human right, and I get plenty of heat for
having that opinion. People seem to feel entitled to the existence of other
people, which I can't seem to understand.

~~~
Declanomous
I'm sure the majority of the heat you receive is because your stance has
absolutely no nuance to it. For instance, the majority of people who attempt
suicide and live never attempt suicide again.

Similarly, suicide mostly seems to be impulsive -- if access to a certain
method of attempting suicide is reduced, the corresponding decrease in suicide
attempts via that method is not reflected in a corresponding rise in suicide
attempts via other methods.

Lastly, suicide rarely occurs in the depths of depression. People who attempt
to take their own life are much more likely to be heading into or out of a
depressive episode.

While these aren't the only reasons that we should attempt to prevent
suicides, I feel they provide sufficient counter-evidence to your argument. I
will say that I am in favor of physician-assisted suicide, and there is
definitely a contingent of people who are hell-bent on killing themselves, but
for the most part most people who are suicidal will later be thankful to be
alive.

If you'd like to know more, I suggest reading the article on suicide on the
Stanford Encyclopedia of Philosophy:

[http://plato.stanford.edu/entries/suicide/](http://plato.stanford.edu/entries/suicide/)

Edit: Section 3.4 (Libertarian Views and the Right to Suicide) responds to
your criticism very directly[1]

If you find that interesting (or if you still find yourself unable to
sympathize with those who believe they should attempt to prevent suicides),
I'd suggest taking a course on either biomedical ethics or ethics in general.
These topics have been debated for centuries at this point, and there are a
number of very compelling arguments on both sides.

[1] This position is open to at least two objections. First, it does not seem
to follow from having a right to life that a person has a right to death,
i.e., a right to take her own life. Because others are morally prohibited from
killing me, it does not follow that anyone else, including myself, is
permitted to kill me. This conclusion is made stronger if the right to life is
inalienable, since in order for me to kill myself, I must first renounce my
inalienable right to life, which I cannot do (Feinberg 1978). It is at least
possible that no one has the right to determine the circumstances of a
person's death! Furthermore, as with the property-based argument, the right to
self-determination is presumably circumscribed by the possibility of harm to
others.

~~~
x3n0ph3n3
Nothing in that linked article is new to me. The religious and deontological
arguments are uncompelling.

No one chose to exist, but I think everyone is entitled to not exist,
regardless of their reasons. Even if someone is thankful that they were
prevented from taking their own life, that's essentially an argument of the
form "the ends justifies the means."

As Camus said, "There is but one truly serious philosophical problem and that
is suicide." I think it's up to everyone to decide that for themselves.
Preventing people from doing such is unethical, imho.

~~~
Declanomous
>Even if someone is thankful that they were prevented from taking their own
life, that's essentially an argument of the form "the ends justifies the
means."

You have completely misrepresented my argument. Nowhere did I present anything
that could even remotely be construed as a consequentialist argument.

I provided that point as a direct rebuttal to your point that if we look at
agent-focused moral systems, and ignore agent-neutral moral systems,
preventing people from committing suicide doesn't make sense.

> I think it's up to everyone to decide that for themselves. Preventing people
> from doing such is unethical, imho.

Your opinion is indefensible and insane. What you have implied constitutes
prevention is absurdly broad. For instance, from a consequentialist
standpoint, calling a friend you fear may be suicidal and asking them to hang
out would constitute prevention at least some of the time. Ignoring that
issue, the evidence I've presented effectively argues _against_ the idea that
most people who attempt suicide wish to kill themselves. Most suicidal
individuals are impulsively responding to what the French might call _l'appel
du vide_, or the call of the void. If they were rational, they would not
attempt suicide in the first place.

I cannot imagine you actually understand the points Camus is trying to make if
this is the conclusion you have come to. Please seek out further education on
the subject. Your knowledge of ethics and philosophy is appalling, and
ultimately harmful.

~~~
x3n0ph3n3
"Prevention" implies force, and that's all I meant by it. I take no issue with
talking to those who have expressed the desire to commit suicide. My primary
issue is when it escalates to calls to police/medical professionals and force
of law is brought to bear.

I'm sorry you find my ethics so abhorrent, though I often feel that way about
the ethics of others.

------
malandrew
We often hear that social media can make people more depressed. It's worth
seeing if that effect can not only be cancelled out but reversed.

With this in mind, it would be nice if they detected those who might be
depressed and tailored the news feed to try to make people happier.

~~~
Olscore
Ah yes. Keep them in a perfectly balanced, harmonious state. /s

------
krapp
I find this incredibly disturbing.

Facebook's entire business model is built around manipulation and control.
They do psychological experiments on their users and skew their news feeds to
influence their political opinions.

And now Facebook wants me to tag my emotionally vulnerable friends, possibly
for further abuse, surveillance and manipulation? For profit? For the good of
the state? How much will insurance companies and my future employers need to
pay Facebook for the improved psychological profile they're building?

No. If I see anything on Facebook that makes it seem as if someone I know is
depressed or needs help, the last thing I'm going to do is _make Facebook
aware of it._

------
Archio
Mildly interesting observation: the suicide prevention team is 100% female.

~~~
aganders3
The photo shows four women, but the article states that "Facebook has a team
of more than a dozen engineers and researchers dedicated to the project."

------
orik
I think this sort of thing is dangerous because it's (probably) not data
backed and you're really playing with dynamite.

Do we know if this sort of message causes a person to be more or less likely
to take their lives? How can you A-B test that?

Does anyone else think your phone telling you it thinks 'you aren't being
yourself anymore' could set someone over the edge?

~~~
chillacy
> Do we know if this sort of message causes a person to be more or less likely
> to take their lives? How can you A-B test that?

Actually... you probably could train a model to detect suicidal posts. I had a
friend who committed suicide a few months ago, and I didn't see it coming. But
looking back, there were signs in his Facebook posts (in line with this
article's example of thanking everyone), though our conversations were normal.
It is, at the very least, wishful thinking on my part that ML could recognize
this type of pattern.
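To make the "train a model" idea concrete: a minimal sketch of a multinomial Naive Bayes text classifier in plain Python. The training examples, tokenizer, and threshold are all invented for illustration; a real system would need clinically validated labels, vastly more data, and careful evaluation of false positives.

```python
from collections import Counter
import math

# Toy labeled examples -- entirely made up for illustration.
# Label 1 = at-risk language, 0 = everyday language.
TRAIN = [
    ("thank you all for everything you tried to do", 1),
    ("i cant see a way forward anymore goodbye", 1),
    ("great hike this weekend with the dog", 0),
    ("excited about the new job starting monday", 0),
]

def tokenize(text):
    """Lowercase whitespace tokenization -- deliberately minimal."""
    return text.lower().split()

def train(examples):
    """Count words per class and class frequencies (Naive Bayes fit)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(tokenize(text))
    return counts, priors

def score(text, counts, priors):
    """Return the log-odds that `text` belongs to class 1 (at-risk)."""
    vocab = set(counts[0]) | set(counts[1])
    log_odds = math.log(priors[1] / priors[0])
    for tok in tokenize(text):
        # Laplace smoothing so unseen words don't zero out the product.
        p1 = (counts[1][tok] + 1) / (sum(counts[1].values()) + len(vocab))
        p0 = (counts[0][tok] + 1) / (sum(counts[0].values()) + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

counts, priors = train(TRAIN)
# Positive log-odds flags the post for review; negative passes it over.
print(score("thank you for all you did, goodbye", counts, priors) > 0)
```

Even this toy version illustrates the hard part: "thanking everyone" is risky language in one context and perfectly benign in another, which is exactly why false positives dominate at this scale.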

------
DanBC
Twitter is also looking at suicide prevention.

Machine classification and analysis of suicide-related communication on
Twitter [http://orca.cf.ac.uk/76188/](http://orca.cf.ac.uk/76188/)

Analysing the connectivity and communication of suicidal users on twitter
[http://www.sciencedirect.com/science/article/pii/S0140366415...](http://www.sciencedirect.com/science/article/pii/S014036641500256X)

I think they had a talk with Prof Louis Appleby (who pretty much leads suicide
prevention in the UK):
[https://twitter.com/ProfLAppleby/status/740957809924308992](https://twitter.com/ProfLAppleby/status/740957809924308992)

------
dlitz
Is there any way to see exactly what this system does, aside from faking a
suicidal episode on my timeline and getting friends to report it?

I've heard that early versions of this system basically inserted barriers
between suicidal people---who are already overwhelmed---and their support
networks.

Considering Facebook's poor track record when it comes to handling vulnerable
populations (nym policy, location support in Messenger, etc.), I'd strongly
discourage anyone from using this reporting system without a complete
understanding of what it actually does.

Heed should also be paid to the effect of having more than one of these
systems being triggered at the same time. The last thing I need when I'm
feeling overwhelmed is a dozen apps on my phone suddenly changing their
behavior.

------
cs2818
I have mixed feelings about this, but I think it could be positive depending
on how data related to a user's suicidal state is protected.

I recently started conducting studies dealing with technology and
psychological interventions and had to find ethical (privacy protecting)
mechanisms for responding to participants who express suicidal intentions or
show several indicators suggesting depression. We ultimately settled on an
automated mechanism that suggests resources to the participant directly in
cases indicating depression and human intervention on cases expressing
suicidal intentions. It isn't perfect but seemed to be the best balance
between privacy and helpfulness.

------
schnevets
I understand other people's skepticism, but I have a Facebook friend from high
school whose behavior has been extremely volatile. He was always an outcast,
but lately he has been reaching out to random people from our grade and
leaving incoherent comments. He would make up stories that clearly never
happened, and sometimes would become extremely sympathetic, thanking them for
"all they tried to do".

I have been watching this strange story from the sidelines, but I know other
people have shown concern as well. If this research can help people like him,
I think it would be well worthwhile.

------
Aelinsaar
I can't think of many worse outlets for this type of concern, or forums to
deal with it. Social media is very good at connecting people initially, but
it's piss poor at forging much deeper.

------
Animats
What if Facebook developed detection of drug users? Good? Bad?

~~~
artursapek
They probably have it

------
danso
Somewhat trusting person that I am, I can see this being an initiative born
out of genuine sympathy without sinister motive... but I can see a few
potential pitfalls that, depending on how they're handled, could lead to
unintended consequences.

One of the more obvious obstacles will just be noise. The ideal situation is
someone who posts a clearly suicidal note late at night and a single friend
sees it in time to act...this is the kind of signal that a huge network like
FB can uniquely catch because of its size.

But what about all the more ambiguous situations? A friend with hundreds of FB
friends has gone through a horrible breakup and other real life failures, and
he starts posting notes that are potentially signs of suicidal thought...or,
at least a dozen of his hundreds of friends think so, and they all flag it as
suicidal. What should FB do then? At least a good number of these incidents
will be false positives and people will start complaining about FB's human-
backed algorithm for discerning potentially suicidal users.

The imminent danger is that someone will post notes that are suicidal in
retrospect, people will flag it, and the user will commit suicide anyway. It
won't be FB's fault...after all, AFAIK, they're just offering to send a
friendly help screen to the user, not call the cops. But that won't be how the
media or the deceased user's friends will see it, and the revelation that FB's
powerful network has a human/algorithm hybrid element will be controversial in
the same way that the trending news kerfuffle was. Perhaps there will be
demands that FB should have done more, and that cops should be involved in the
decision making...

But the seemingly inevitable decision would seem to be: why even bother
waiting for humans to flag suicidal behavior? For a user who has a history of
regular activity on the service, I'm sure there's a general profile they fit
in terms of message sentiment and reduced engagement on the service, just
like FB is reportedly able to tell how soon a given couple might break up.
There's no reason why FB couldn't augment a profile with a hidden "depression"
index to help the suicide response team triage reports... but the existence of
such a calculation would undoubtedly be controversial.
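A hidden triage index like the one described could be as simple as a weighted combination of a few signals. A minimal sketch, where the field names, weights, and thresholds are all invented for illustration and reflect nothing about how Facebook actually ranks reports:

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    avg_sentiment: float      # mean post sentiment, -1 (negative) to 1 (positive)
    engagement_ratio: float   # recent activity vs. the user's own baseline (1.0 = unchanged)
    flag_count: int           # number of friends who reported the user

def triage_score(a: UserActivity) -> float:
    """Combine signals into a 0-1 priority for ordering human review.

    The weights are arbitrary placeholders; a real system would fit them
    against outcome data and validate the result with clinicians.
    """
    sentiment_risk = max(0.0, -a.avg_sentiment)      # only negative sentiment adds risk
    withdrawal = max(0.0, 1.0 - a.engagement_ratio)  # drop below baseline adds risk
    social_signal = min(1.0, a.flag_count / 10.0)    # cap the effect of mass flagging
    return 0.4 * sentiment_risk + 0.3 * withdrawal + 0.3 * social_signal

# A user posting negatively, withdrawing, and flagged by 12 friends
# outranks one with mildly positive posts and a single ambiguous flag.
high = triage_score(UserActivity(avg_sentiment=-0.8, engagement_ratio=0.3, flag_count=12))
low = triage_score(UserActivity(avg_sentiment=0.1, engagement_ratio=0.9, flag_count=1))
print(high > low)
```

Note that capping the flag-count term is doing real work here: it keeps the dozen-friends-flagging scenario above from drowning out the behavioral signals, which is one way to blunt the noise problem.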

And then there's the question of: if FB can calculate such a depression index,
why wait for it to reach a high level before taking action? And, why should
action be limited to sending an explicit anti-suicide message? Shouldn't FB be
obligated to populate the user's news feed with happier messages, more
reminders that they have a family, fewer reminders that their former lovers
have found new lovers, etc?

And of course, for more controversy, we could always replace "suicide" with
other kinds of unwanted behavior.

~~~
forgottenpass
_And of course, for more controversy, we could always replace "suicide" with
other kinds of unwanted behavior._

I can't shake the feeling that social networks are sleepwalking towards the
very primitive versions of whatever the human analogue of a Skinner box is.

I know gamification is a thing. Along with the other types of psychological
manipulation in games. But this feels different to me.

I get that suicide prevention is a prima facie positive, but I wonder what
Facebook's ethics board has to say about this type of behavior in general.

I guess in 20 years it will probably be just like advertising. We know it's
inherently manipulative, but we try to ignore what we know its goals to be,
and lie to ourselves and each other about the extent to which it can alter
our behavior.

------
andrewfromx
When I try to report a post, I don't see that option. Where is it? I just see
"report spam, report not appropriate, etc."

------
artur_makly
what's next? Flag as Muslim Extremist?

~~~
ionised
I think that already exists, just not publicly.

