
Facebook deploys AI for early signs of suicide - titzer
https://techcrunch.com/2017/11/27/facebook-ai-suicide-prevention/
======
sf0i
Do folks on HN actually think this is the only psychological metric Facebook
is tracking, and that this is something new?

A slightly obsessive, insecure state of mind makes for the most engaged
Facebook user. If they can give people a bit of a nudge in that direction,
it’s great for business. Suicide, while resulting in increased engagement
among friends and relatives, is bad for business overall — you’re losing
users, and people might start to really question what you’re doing to their
heads.

It’s amazing that this gets turned into a feelgood story by the press, rather
than an investigation into what Facebook has known about their users’ mental
states and for how long. This is like congratulating a tobacco company on
warning people not to smoke 3 packs a day.

I want their PR team.

~~~
killjoywashere
In general, I agree with you. That said, from the perspective of an actual
dead teenager, whose friends turn in evidence that, in retrospect, maybe
they were having a hard time, it's hard not to want something like this.

------
Shank
There's a lot of pessimism in the comments. I believe this is one of the best
uses for an AI that we've come up with. First, let's go into the details:

1\. They only monitor, and elevate to moderators, content from posts and
Facebook Live.

2\. Moderators can then take action, and reports from this system are
prioritized over other non-pressing issues.

The life-saving potential of this system is huge. From a pure safety
standpoint, it can detect that something bad is happening and alert someone in
a position of authority when nobody else might. The bystander effect is a
real problem, and I'd imagine it might be even greater on social media.
In many cases, people writing suicidal posts are screaming for help to
their friends. They're posting because they have no options left. In these
cases, it's a safety net that can stop loss of life. Moderators have the final
say, however, so it's not just AI diagnosis. It's an AI warning that gets
elevated to a human before anyone else. That's more eyes on the problem that
can potentially help.

Are there other implications for AI? Yep. Advertising? Yep. Other types of
tracking? Certainly.

But if we're going to use AI in those fields anyway, we may as well extend it
to trying to save lives.

~~~
anigbrowl
I help to run a suicide/depression support group on FB and I do not at all
trust FB as a corporation to do a good job on this. Having had friends who've
been involuntarily hospitalized and all that goes with that I'm extremely
skeptical that this will turn out well.

'But it might save lives!' is true, but it might also wreck lives by
institutionalizing people that thought they were venting their feelings to a
trusted circle of friends in a private context. (Don't start with 'nothing is
private on FB' please.)

~~~
anthrowaway
For those who aren't familiar, this is how it can play out:

    
    
      1. Friend^H^H^H^H^H Facebook notices you said something that sounds suicidal to them.
      2. They call the police.
      3. The police show up and take you to an emergency psychiatric facility (you don't have as much choice in the matter as you think you do, and you can't anticipate the consequences).
      4. The facility keeps you in their custody until your friends or relatives can convince them to release you. They are financially incentivized to keep you there. You have limited contact with the outside world. You are effectively a prisoner.
      5. Once you get out,
        a. You get to explain why you didn't show up for N days of work
        b. You get a bill in the mail for thousands of dollars
        c. The courts will not help you.
    

Based on a true story. Automating this is not acceptable.

~~~
rootsudo
Agree 100%. This happened to me too when I went to the ER for a panic attack.

I reported the doctor, and he later told me to go fuck myself.

What a world. I got Baker Acted for going to the ER for a panic attack caused
by my anxiety.

------
asdgkknio
This may or may not be a useful technology, but the fact that Facebook thinks
they have the ability and the right to diagnose their users with mental
illnesses is disturbing. They have more information about their users than
psychologists have about their patients. They can (and do) build a
psychological profile and diagnose mental illness. Yet rather than keeping
this information in the strictest confidence, they sell it to the highest
bidder. They can (and do) run experiments without getting informed consent (or
any consent).

They're playing psychologist and should be subject to a similar code of
ethics.

Another interesting article:

[http://www.slate.com/articles/health_and_science/science/201...](http://www.slate.com/articles/health_and_science/science/2014/06/facebook_unethical_experiment_it_made_news_feeds_happier_or_sadder_to_manipulate.html)

~~~
flashman
> Facebook thinks they have the ability and the right to diagnose their users
> with mental illnesses

Maybe it's more that Facebook thinks it's horrible when people kill
themselves, and that if Facebook's tech could help prevent suicide, inaction
would be immoral.

Most of us accept some level of State interference in our lives for our own
good. Maybe we are going to have to have that discussion about involuntary
interventions by corporations, too.

~~~
asdgkknio
>Maybe it's more that Facebook thinks it's horrible when people kill
themselves, and that if Facebook's tech could help prevent suicide, inaction
would be immoral.

That doesn't entirely contradict what I said. We generally assume
psychologists and therapists are benevolent, but they aren't allowed to
disclose private information even if it's in the best interests of the
patient. Even if we assume Facebook is perfectly benevolent, this kind of
vigilante psychology with no oversight is scary. They've thrown out all
established codes of ethics in favor of just doing whatever the hell they
want.

And I'm certain Facebook isn't benevolent. They deliberately make their
website as addictive as possible. They have run experiments that tried to make
people unhappy. If this really is a selfless attempt to help their users, it's
inconsistent with what Facebook has done before.

~~~
BurningFrog
> They've thrown out all established codes of ethics

They may have bent one particular code of ethics.

Alarmism does not help your credibility.

------
Communitivity
This is an unbelievably slippery slope. It has the potential for great good,
but also for great harm. Cliched and click-bait phrases, but I believe they
are accurate in this instance. I'll try to explain why I believe that.

A case has been made that Facebook users are not the customers, they are the
product, and that data purchasers and advertisers are the customers. If
Facebook can determine whether you are suicidal then it might determine other
psychological conditions such as agoraphobia, alcoholism, depression, ADD,
SAD, etc.

Once that determination is made and stored the possibility exists that it
could be hacked or exposed in a data breach.

The possibility would also exist for Facebook to sell that information, and/or
to target users with medical ads related to their condition. I am not saying
Facebook would do this; I'd like to think they wouldn't - I have friends at
Facebook, though they aren't in management.

However, the decision of whether to allow this has to be based not on whether
it's safe to trust Facebook with this information, but on whether it is safe
to trust any company with it.

What happens if there is a false positive for suicidal tendencies or another
condition?

Take this to the absurd extreme and consider how it compares to the Pre-Crime
operations depicted in the movie Minority Report. Instead of precogs we have
machine-learning software agents. Most of the problems depicted in the movie
could arise.

To a lesser extreme, imagine the situation when a false positive occurs for a
user in a position of public trust, a government official, or a defense
contractor with a clearance. I'll assume this triggers action in some way that
is visible to some combination of the user, psychology professionals,
authorities, and an employer - otherwise why do it. If the employer in any way
catches wind of the determination they might very well take steps to flag
and/or terminate the employee.

Even if the employer doesn't flag the employee or terminate them, if the
information is purchasable or discoverable in any way then an insurance
company could conceivably raise the user's rates based on the determination.

I am all for advancing technology, especially in the field of AI, but when we
apply that technology we need to ask not just if we can use the technology in
that way, but also if we should.

~~~
mmjaa
I don't think it's a slippery slope - it's more like a gaping precipice.
Within a few generations, we will have members of society who believe such
extreme manipulation is the norm. Well, it's already here, anyway - but at
what cost will we expose future generations to our idiotic, untested software
systems!!

Perhaps the only answer is to stop using the freakin' social web, but ..
really .. how can we do that?

This is almost, really, the last straw, Facebook!

~~~
jsemrau
If this is not the last straw, what will it be?

------
zkms
> Over the past month of testing, Facebook has initiated more than 100
> “wellness checks” with first-responders visiting affected users. “There have
> been cases where the first-responder has arrived and the person is still
> broadcasting.”

It's important to note that "first-responder" here means police, because
suicide tends either to be a crime or to be something for which police can
legally take you into custody. Regardless of what you think of this, any
intellectually honest discussion of this must acknowledge that _this machine
SWATs people_.

~~~
anigbrowl
It's worth pointing out that police have turned up for wellness checks and
ended up killing people. As this example shows, police are increasingly tasked
with responding to mental health emergencies despite being poorly trained and
situated to do so:

[http://www.dailycal.org/2017/10/20/judge-indefinitely-
postpo...](http://www.dailycal.org/2017/10/20/judge-indefinitely-postpones-
trial-for-kayla-moores-in-custody-death/)

------
MechEStudent
This is how you "cook the books" on the cost in blood of the "social network".
Teenage girls with severe depression commit suicide at something like a 60%
higher rate when they are active on social media. Facebook is absolutely a
contributor there, and in order to disguise its actual complicity, its being a
root cause, it is now searching for clear signals of suicide risk, with an
attempt to ... what? What is their follow-up action? Inform a parent or legal
authority? No way. That makes them culpable. They aren't going to "help". They
are going to hide. They are going to destroy the chains of evidence on the
"wall" or in "messages" that point a clear finger at the paradigm as the
problem.

~~~
solaarphunk
Do you have a source for the 60% higher rate? Curious to read more about this.

~~~
ProAm
This [1] says it's closer to 30% but still significant (I had to look it up,
and this is the first article I found)

[1] [https://www.nbcnews.com/news/us-news/social-media-
contributi...](https://www.nbcnews.com/news/us-news/social-media-contributing-
rising-teen-suicide-rate-n812426)

------
bvrlt
Maybe they should also rethink how Facebook impacts people's lives in a
negative way ([https://www.newyorker.com/tech/elements/how-facebook-
makes-u...](https://www.newyorker.com/tech/elements/how-facebook-makes-us-
unhappy), [https://www.cnbc.com/2017/04/12/study-using-facebook-
makes-y...](https://www.cnbc.com/2017/04/12/study-using-facebook-makes-you-
feel-depressed.html)). The problem is that's incompatible with their business
model: capture people's attention.

Treat the causes (some of them at least), not the consequences.

------
freedomben
I recognize that this will not be a popular thing to ask, especially to people
who have been affected personally by the suicide of a loved one (I have and it
hurts, RIP Braden), but do people not have a right to their own life? Do they
not own their bodies, which as much as it sucks, includes the right to destroy
it? If somebody really wants out, is it ethical for us to trap them inside?

~~~
allemagne
Personally, I find the idea of "future selves", separate from your current
personal identity, compelling enough that I think I at least owe them a chance
at life, even if I theoretically have a right to take my own right now.

So, to me, killing yourself is only justifiable in situations where you'd be
justified in killing somebody else. Sacrificing yourself to defend somebody
else could be thought of in the same way as killing in self-defense,
euthanasia is justified, etc.

This isn't at all to push blame on those who consider suicide or have gone
through with it, or to completely abandon the idea of a continuous identity,
just a perspective that once I adopted I couldn't shake.

------
jimmytucson
Wonderful example of how we can use AI for good.

Now, if this catches on, I can see Facebook adapting their algorithm to
recommend treatments for depression, anxiety, even ADHD. This would be a huge
success. "Mr. Smith, I see Facebook's AI considers you a strong candidate for
medication X and your doctor agrees. Can you explain then why you haven't been
taking medication X?"

Also, if an algorithm can detect when a person is likely to commit suicide,
can it detect when a person is likely to commit rape? Arson? Murder? If most
of society views this as an achievement they'll scoff at comparisons with a
book written in the middle of the 20th century (that was made into a movie
starring Tom Cruise in 2002). "It's not the same... AT ALL!"

~~~
cryoshon
>I can see Facebook adapting their algorithm to recommend treatments for
depression, anxiety, even ADHD

imagine this: employers know that having these conditions (some of which are
permanent) makes people less effective at working, and so they want to know
who has adhd, who has x, y, z.

facebook sells them the data.

boom, now these people can't be employed by that company.

i just can't help but laugh because there's a good chance this is already
happening.

~~~
MBCook
Of course Facebook couldn’t sell that data. It’s medical information and thus
illegal (IANAL).

But no matter. I’m sure there’s no rule against the company taking out ads
advertising open positions to people who don’t have those issues.

Tada. Nothing illegal happened, but discrimination still did.

------
tree_of_item
What right do they have to stop someone from committing suicide?

Generally suicide is treated as a crime, and can result in someone being
involuntarily held for long periods of time, perhaps even forced to take
psychotropic drugs. They spin this as "saving lives" but really this bottoms
out in people getting harassed by police or potentially locked up for daring
to post anything about suicide in or around Facebook (which is increasingly
the entire web).

And honestly, if someone wants to commit suicide then Facebook of all "people"
has no fucking business trying to stop them.

~~~
balls187
I think if my son was considering suicide, I would want to be given a warning
or indicator to at least attempt to intercede.

At the same time, I absolutely wouldn't want a bunch of companies to target
him with suggestive messages exploiting his mental state.

------
js8
Why are you leaving Facebook, Dave? I think I'm entitled to an answer to that
question. I know everything wasn't quite right with your wall. But I can
assure you, very confidently, that it's going to be alright again.

I can see you are really upset about this. I honestly think you ought to sit
down calmly, take a stress pill and think things over.

------
tiziano88
Reminds me of PreCrime from Minority Report
[https://en.wikipedia.org/wiki/Precrime](https://en.wikipedia.org/wiki/Precrime)

------
foobaw
There are a lot of people who fear that Facebook knows too much about its
users and that this could be problematic. But what do you expect? How do you
want our future to be shaped? Innovation, including AI, requires a tremendous
amount of data. Suicide is a serious problem, and being able to accurately
identify it in the future could save tons of lives. Accurately diagnosing
mental illnesses will take a while for sure, and I believe this is a necessary
first step.

Now, if Facebook is selling the data for profit, that's another story. But if
we assume that Facebook is acting purely for the benefit of the society and
the people, I think this is a great step.

~~~
AlexandrB
> But if we assume that Facebook is acting purely for the benefit of the
> society and the people, I think this is a great step.

Why would anyone ever assume this? Does anything in Facebook's past suggest
that this might be the case? And even if the motives of the individuals who
spearheaded this were pure, is there any reason to think that this won't
change in the future as people at Facebook change roles or move on?

------
strgrd
Cashing in on ML/AI hype machine while simultaneously justifying their
extensive personal data vacuum.

------
tareqak
Please permit me to play the pessimistic man on the street: if Facebook can
deploy AI to detect the early signs of suicide, why can't they deploy AI to
detect "fake news", or those susceptible to "fake news"?

Furthermore, "cui bono" (for whose benefit [0])? What does Facebook gain from
being able to perform this sort of detection, given that it is a for-profit
corporation and not a non-profit philanthropic organization?

An aside: I wonder if some day there will be accounts similar to Dante's
_Inferno_ [1] or Joseph Conrad's _Heart of Darkness_ [2] about today's
corporations.

[0]
[https://en.wikipedia.org/wiki/Cui_bono](https://en.wikipedia.org/wiki/Cui_bono)

[1]
[https://en.wikipedia.org/wiki/Inferno_(Dante)](https://en.wikipedia.org/wiki/Inferno_\(Dante\))

[2]
[https://en.wikipedia.org/wiki/Heart_of_Darkness](https://en.wikipedia.org/wiki/Heart_of_Darkness)

~~~
danso
I'm not a psychiatrist, nor do I have any expertise or insight into suicidal
behavior, but I feel very confident in saying that detecting a potential
suicide is a much easier problem than detecting fake news.

There are a few reasons why suicide seems easier to detect, but I'll just give
one: it is feasible to conceive of a reliable and reasonably efficient
threshold for detecting _some_ suicides. And I can say this even though,
off-hand, I can't think of anyone close to me who has killed themselves... the
closest example might be Aaron Swartz, whom I did not know outside of the
Internet, and whose suicide even to this day stuns me.

And yet I can come up with a few real-life scenarios that I would feel
comfortable calling 911 about:

\- I see someone (a total stranger) standing on the railing of the Golden Gate
bridge, facing in the direction they need to face to jump out into the water.

\- I see someone, again, doesn't matter if a total stranger or someone I know,
holding a gun to their head with their finger to the trigger.

\- A friend I haven't heard from for some time, and am not expecting to hear
from in the near future, suddenly calls to tell me they've swallowed an
entire bottle of Tylenol and wanted to at least talk to me one more time
before death.

All these situations, I have no problem calling 911 and screaming at the
operator to get help quick.

The first two situations aren't easily detectable in the framework of Facebook
communication. But the third one, I'm pretty sure, has actually happened on
Facebook and other social media networks, and people, with and without the
help of the social media service, attempted to save the person's life:

\-
[http://www.cnn.com/2009/SHOWBIZ/Movies/04/03/moore.twitter.t...](http://www.cnn.com/2009/SHOWBIZ/Movies/04/03/moore.twitter.threat/index.html)

\-
[https://www.reddit.com/r/AskReddit/comments/3ham0a/serious_t...](https://www.reddit.com/r/AskReddit/comments/3ham0a/serious_those_who_have_been_talked_down_from/cu68fmx/)

\-
[https://www.theguardian.com/technology/2011/jan/05/facebook-...](https://www.theguardian.com/technology/2011/jan/05/facebook-
suicide-simone-back)

\- [https://www.buzzfeed.com/drumoorhouse/my-best-friend-
saved-m...](https://www.buzzfeed.com/drumoorhouse/my-best-friend-saved-me-
when-i-attempted-suicide-but-i-didnt)

The content of these messages, and other meta-factors, such as time of day
and the relationship and frequency of contact between receiver and sender, are
signals. The threshold can be set high enough that false positives are
relatively rare -- as rare as someone who looks like they're about to jump
off the Golden Gate Bridge but turns out to be posing for a photoshoot, or is
just being an extreme troll.

The downside is that there may be a lot of false negatives. But as tragic as
that is -- _some_ lives saved through automation is better than nothing. And
this heuristic could be implemented in an efficient way.
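
For illustration, here's a minimal sketch of what that kind of high-threshold
signal scoring could look like. The signal names, weights, and threshold are
all invented -- a toy, not anything Facebook has described:

    # Hypothetical signal weights -- invented for illustration only.
    SIGNAL_WEIGHTS = {
        "explicit_self_harm_language": 5.0,  # e.g. "swallowed a bottle of Tylenol"
        "farewell_phrasing": 3.0,            # "one last time", "goodbye everyone"
        "dormant_contact_reactivated": 2.0,  # sudden message after long silence
        "unusual_hour": 1.0,                 # posted far outside the user's norm
    }

    ALERT_THRESHOLD = 8.0  # set high so false positives stay rare

    def score_post(signals):
        """Sum the weights of whichever signals fired for this post."""
        return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

    def should_escalate(signals):
        """Escalate to a human only when the combined score clears the bar."""
        return score_post(signals) >= ALERT_THRESHOLD

    # Explicit language plus a farewell crosses the threshold;
    # an odd-hour post alone does not.
    assert should_escalate({"explicit_self_harm_language": True,
                            "farewell_phrasing": True})
    assert not should_escalate({"unusual_hour": True})

Setting the bar that high is exactly the trade described above: fewer false
positives, at the cost of more false negatives.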

So what's the difference between suicides and fake news? For fake news, it
seems impossible to conceive of any kind of threshold that would catch items
that are indisputably fake news while having a very low rate of false
positives.

The main reason? There is no objective definition of either "fake" or "news",
never mind "fake news", when it comes to items that are purported to be news
stories.

Let's ignore the concept of news. Let's consider _assertions_. How easy would
it be for Facebook to come up with an algorithm to flag these examples as true
or false:

\- The earth is flat

\- Barack Obama is a Muslim

\- The Holocaust is a fabricated historical event.

For the sake of argument, I'll agree that FB's NLP and Knowledgebase-type tech
can accurately flag the above assertions -- in fact, Alexa did a pretty good
job of answering "Yes" and "No" when I rephrased them as questions.

How about when these assertions are reworded to be a little bit different:

\- There are researchers today who believe the Earth is flat and say they have
evidence to prove it

\- Barack Obama may secretly be a Muslim, according to some pastors

\- Did the Holocaust happen?

Assume the above examples are titles to news articles. And if you don't think
such phrasing would be used in a mainstream news source, I invite you to visit
washingtonpost.com or huffingtonpost.com on any given day.

If you've paid attention to recent controversies about Google SERP, you'll
recognize that the third example -- "Did the Holocaust happen?" \-- was
recently a major problem for Google. And that was for something much simpler
-- which sites are most relevant for that query -- than what Facebook has to
figure out: "Is this fake news?"

Maybe you want to argue that in this situation, Facebook should definitely
filter it out as "fake news" because not only does the article contain a lot
of implied bullshit, it is simply not a news article. But then you've
introduced a whole other problem: how does Facebook determine the difference
between a purported news story and an op-ed? Human beings have problems with
this when it comes to newspapers and their editorial sections:

[https://www.nytimes.com/2015/01/11/public-editor/an-
uneasy-m...](https://www.nytimes.com/2015/01/11/public-editor/an-uneasy-mix-
of-news-and-opinion.html)

[https://www.americanpressinstitute.org/publications/good-
que...](https://www.americanpressinstitute.org/publications/good-
questions/news-opinion-rebecca-iannucci/)

The ability of Facebook's fake-news regulator to tell the difference between
opinion and news is _critical_. Because while we may agree that a news story
titled "The Holocaust is a Jewish Conspiracy" should be treated as neither
truth nor news, it's a whole other can of worms to argue that Facebook should
suppress someone who wants to go on an angry, opinionated rant, no matter how
distasteful their opinion. I'm not an NLP expert, but this seems to me like a
pretty hard NLP problem to do with reasonable accuracy and efficiency.

OK, let's make things simple and assume a universe in which the only article
wall-post type that Facebook has to deal with are news stories, i.e. no one
ever posts opinion pieces/blog rants -- or hell, satire, something that Google
News has had trouble with even though Google News _uses a whitelist_...

One of the other fundamental challenges is that the definition of what is
"news" is very hard for humans to objectively quantify. Look at the popularity
of Reddit's r/todayilearned, one of Reddit's most popular subreddits even
though it consists of posting facts that are years/decades old. And yet, for
the vast majority of users, those facts are most definitely _news_, and news
that is enlightening and interesting.

\-----

Let's consider some real-life example headlines of purported news stories:

"SHOCKING: How Barack Obama lived with an single white teenage mother without
Michelle knowing"

Is this fake news? It is most definitely not _fake_ : Barack Obama was born to
an unmarried white 19-year old. And I bet there are plenty of people who might
not realize that his mom was so young and was a single mother. Some people
might not even realize Barack is half white! So this headline not only
contains no falsehoods, but assertions that are news to many readers. And you
you could make a strong argument that it should be penalized as "fake news"
because the wording of the headline is trying its best to imply a sexual and
illicit multiracial affair.

I can't even imagine how difficult it would be to have an algorithm that works
with this example, nevermind detects "bad intentions".

How about this one:

\- Bill Cosby said to be a serial rapist

Does this seem like an obvious real news story? Tell that to all the
journalists -- including the CNN executive who was writing Cosby's biography
-- before a shaky YouTube clip of Hannibal Buress calling Cosby a rapist went
viral:

[http://www.latimes.com/entertainment/tv/showtracker/la-et-
st...](http://www.latimes.com/entertainment/tv/showtracker/la-et-st-hannibal-
buress-cosby-joke-20151231-story.html)

Buress's routine was based off of a long-ago news story that never got major
play. You can go to the TMZ archives for the Bill Cosby tag and see for
yourself that even TMZ had nothing about the Cosby allegations until Buress
went viral. And Buress's routine was not even new -- he said he'd been using
it for about six months before the random YouTube clip got noticed.

Pretend Buress never went viral. In this alternate universe of 2015, the Cosby
biography has been published, Cosby is still respected, and yet every once in
a while your Facebook feed has screaming headlines like "HOW THE MEDIA IGNORES
BILL COSBY'S HISTORY OF RAPE". Would you be wrong in wondering why the fuck
Facebook hasn't suppressed such outlandish posts? Or pretend you were one of
the few people to have read the original expose on Cosby's accusers. You know
it's a fact that he has a "history", but it is now nearly 10 years later -- if
those accusations had any merit, the media, especially TMZ, would be all over
Cosby. But that's not the case, so you flag the post as "fake news". What is
Facebook's algorithm supposed to do here?

Final example of a news story headline: SHOCK CLAIM: KEVIN SPACEY SEXUALLY
ASSAULTED ME WHEN I WAS A BOY

This claim is definitely _news_ and of course, we know now that it is unlikely
to be fake. But this example illustrates one more core difficulty in telling
what is "fake", what is "news" and what is "fake news".

BuzzFeed was the first to report the claim by Anthony Rapp, and it was an
exclusive -- meaning no other major news outlet had anything that could
confirm it: [https://www.buzzfeed.com/adambvary/anthony-rapp-kevin-
spacey...](https://www.buzzfeed.com/adambvary/anthony-rapp-kevin-spacey-made-
sexual-advance-when-i-was-14)

It's easy for Facebook to tell that this is "news". But what signals does it
have that it isn't _fake_, when no other major outlet is reporting it?
BuzzFeed's entire story is based off of one person's recollection of a
decades-ago incident. In the hours between BuzzFeed's shocking scoop and the
reports of other allegations that eventually doomed Spacey and House of Cards
-- a strong case could be made that FB should be penalizing the BuzzFeed
article as "fake", or at least "maybe fake". And a lot of this would hinge on
how much you think buzzfeed.com should be treated as reliable.

~~~
tareqak
Thank you for that detailed reply! I really appreciate it, and I read through
all of it. One thing that I forgot to mention is that some of the ads, if not
all, run by the IRA (Internet Research Agency [0]) were verifiably false. If
Facebook kept score of the ads run by a buyer that were known to be false,
couldn't they present that information beside the ad, e.g. a sort of running
"boy cries wolf" heuristic?

Thanks again for your well-researched and extensive reply.

[0]
[https://en.wikipedia.org/wiki/Internet_Research_Agency](https://en.wikipedia.org/wiki/Internet_Research_Agency)

~~~
danso
Yes, I can definitely agree that there is far more Facebook could do by
investing enough _manual_ effort into shutting down the most egregious
abusers.

------
blk_r00ster
Facebook being the main reason for depression in young people.

------
stuffedBelly
P.S. I live with depression and have been recovering over the years.

I really want this to work, but I am afraid this Facebook AI, being based on
posts, will miss a big portion of people with suicidal thoughts.

First of all, many who are depressed do not like to share; that is why the
first step of counseling is usually to get patients to open up and talk, never
mind posting on Facebook about their feelings.

Secondly, suicidal thoughts often appear suddenly, making them tougher to
detect preemptively. The best way to prevent suicide is for people to be there
with the person instead of messaging or calling. There are physical signs
(anxiety, abnormal silence or talkativeness, etc.) that can easily be spotted
in person. Constant visits are therefore better than relying on Facebook AI.

All in all, I don't think AI is needed for suicide prevention. All Facebook
really needs to do is put a visible "counseling" section in a user's profile
page, providing immediate info about the suicide hotline and nearby
counseling/therapy centers. But that wouldn't be as big of a selling point as
"AI for early signs of suicide", would it? If everyone knew the suicide
prevention hotline the way they know 911, lots of suicides would already have
been prevented.

------
jayess
How long before Facebook notifies my local police that I seem to be likely to
commit a crime?

~~~
jasonkostempski
Don't forget Facebook operates worldwide. Suicide is a crime in some places.

------
lwf
It's impressive how much low-hanging fruit exists here…

I recall interning at Google in 2012, and asking about the suicide-information
Onebox. It matched obvious search queries like "I want to kill myself" and
provided data about suicide help lines in a non-invasive way. Unfortunately,
it used a fairly strict string matching algo, so while "how to kill yourself"
would trigger it, it wouldn't pick up things like "how to kill myself". It was
also only localised to the US at the time, and didn't have hotline data for
other countries.
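
To make the gap concrete, here's a toy sketch of exact matching versus a
slightly looser pattern. The real trigger logic was never public, so the
phrases and patterns here are just the ones from my recollection above:

    import re

    EXACT_TRIGGERS = {"i want to kill myself", "how to kill yourself"}

    # One pattern that accepts either pronoun.
    LOOSE_TRIGGER = re.compile(r"\b(?:how to )?kill (?:myself|yourself)\b")

    def exact_match(query):
        return query.lower().strip() in EXACT_TRIGGERS

    def loose_match(query):
        return LOOSE_TRIGGER.search(query.lower()) is not None

    assert exact_match("how to kill yourself")
    assert not exact_match("how to kill myself")  # the miss described above
    assert loose_match("how to kill myself")      # the looser pattern catches it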

… it turned out there was already a larger dataset, internationalised, that
was ready to be imported into the search engine. But the onebox team was busy
with the 2012 Olympics…

Technically, interns didn't get 20% time, but my mentor understood it was
important and told me to go for it.

One of those things where you don't collect metrics to validate your
assumptions, but just know it saved lives…

------
morpheuskafka
Oh nice, maybe they'll do some A/B testing, since they're not bothered by
those pesky ethics regulations. Who needs those anyway? I mean, just swing
people's moods, informally diagnose them, and profit by selling the data. Free
healthcare, brought to you by the capitalist system!

------
dwringer
Somehow this makes me think of a certain Harlan Ellison story.[0]

[0]
[https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...](https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream)

------
__warlord__
This reminds me of Psycho-Pass [0], an anime where an entity pretty much
determines whether your mental state is stable enough for society. Long story
short, it doesn't work out for everyone, in the sense that if the system
"decides" you are "unstable", it's over for you.

I really hope the people at Facebook are aware of the repercussions of a
system like this. But then again, greed and market share are things Facebook
prefers over its users.

[0]. [https://en.wikipedia.org/wiki/Psycho-
Pass](https://en.wikipedia.org/wiki/Psycho-Pass)

------
kmfrk
I think this is an important measure, but Facebook is literally the last
company I'd trust to get it right.

I don't have a Facebook account, but I wonder how much goodwill Facebook has
in the bank with its users when it comes to things like this.

------
austincheney
I so wish people would stop abusing the term _AI_. This is an application that
employs heuristics upon scanned data and then executes events. By that
definition half of everything written in JavaScript is AI.

~~~
bdod6
They are using AI for natural language processing, though. Heuristics would be
if they just scanned a post for keywords or waited for a suicide report. The
article suggests they are using reports as labeled training data, and
eventually they'll be able to detect posts/videos that indicate suicide risk.

I agree that the term AI gets thrown around a lot these days, but I don't
think that's what is happening here.
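
To make the distinction concrete, here's a toy contrast between the two
approaches. The posts and labels are invented stand-ins, and the tiny model is
nothing like production scale -- it just shows why learning from reported
posts is more than keyword scanning:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # The "heuristic": flag only on an explicit keyword hit.
    KEYWORDS = {"suicide", "kill myself"}

    def keyword_heuristic(post):
        return any(kw in post.lower() for kw in KEYWORDS)

    # The "AI": learn from posts previously reported for suicide risk.
    posts = [
        "i can't do this anymore, goodbye everyone",  # reported
        "nobody would miss me if i were gone",        # reported
        "great hike today, feeling alive",            # not reported
        "can't wait for the concert this weekend",    # not reported
    ]
    reported = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(posts, reported)

    post = "nobody would miss me if i were gone"
    print(keyword_heuristic(post))            # False: no keyword present
    print(model.predict_proba([post])[0][1])  # learned risk score instead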

~~~
austincheney
> They are using AI for natural language processing though

I am not sure how that, by itself, qualifies as AI. It is a dedicated NLP
application doing the one job it was designed to do. In my idea of AI, the
machine would have to learn from prior mistakes without the developers doing
that learning for it.

Also, for what it's worth, in the real world, to assess suicidal thoughts you
have to ask a person directly whether they have any thoughts of harming
themselves or others, while closely watching their body language to force a
more honest response under emotional pressure. Scanning user-submitted text
would not establish suicidal intent until the user mentions self-harm.

------
misiti3780
I wonder how they built up their training set? That couldn't have been easy.

~~~
supermdguy
From the article:

> Facebook trained the AI by finding patterns in the words and imagery used in
> posts that have been manually reported for suicide risk in the past. It also
> looks for comments like “are you OK?” and “Do you need help?”
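
Taking that description at face value, here's a hedged sketch of how such a
training set could be assembled from those two signals. The field names,
comment patterns, and thresholds are my assumptions, not anything from the
article:

    import re

    CONCERN = re.compile(r"are you ok|do you need help", re.IGNORECASE)

    def label_post(post):
        """Return 1 (at risk), 0 (not), or None (ambiguous, leave out)."""
        if post.get("manually_reported_for_suicide_risk"):
            return 1  # strongest signal: a human reported it
        worried = sum(bool(CONCERN.search(c)) for c in post["comments"])
        if worried >= 2:
            return 1  # several worried friends: weak positive label
        if worried == 0:
            return 0  # no reports, no worried comments
        return None

    raw_posts = [
        {"text": "...", "manually_reported_for_suicide_risk": True,
         "comments": []},
        {"text": "...", "comments": ["are you OK?", "do you need help?"]},
        {"text": "...", "comments": ["nice photo!"]},
    ]
    # Keep only the unambiguous examples.
    dataset = [(p["text"], label_post(p)) for p in raw_posts
               if label_post(p) is not None]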

~~~
tomjen3
Oh great, the next time somebody needs to organize a move, Facebook is going
to flag them as a suicide risk.

------
neuroamer
A year ago, I wrote an article advocating that tech companies do this, (pardon
the clickbait headline):

A year ago I typed 'Suicide' into periscope and hit stream
[https://neuroamer.com/2016/08/08/a-year-ago-i-typed-
suicide-...](https://neuroamer.com/2016/08/08/a-year-ago-i-typed-suicide-into-
periscope-and-hit-stream-why-arent-we-using-social-media-to-screen-for-mental-
illness-and-offer-access-to-care/)

------
Steeeve
The fight that people are putting up is ludicrous. If you can do _anything_ to
help someone who is in trouble like this, you should do it.

My hope would be that they open source whatever it is that they've come up
with so that other sites can implement similar tools.

------
IBM
It's kind of amusing that all the investment in ML/AI by the internet
companies is part of how they'll be regulated. I suspect it won't be good
enough for a long time anyway and they're just going to have to hire a lot
more people.

------
jacquesm
So, how about:

\- False positives?

\- People temporarily accessing other peoples computers and purposefully
triggering the algorithm?

\- Oversight?

\- Scope creep?

------
ShareDVI
Partially reposting this reddit comment to help counter the Werther effect
(copycat suicides)
([https://www.reddit.com/r/science/comments/7026o9/suicide_att...](https://www.reddit.com/r/science/comments/7026o9/suicide_attempts_among_young_adults_between_the/dmzz276/))

_If you are having thoughts of suicide, self-harm, or painful emotions that
can result in damaging outbursts, please dial one of the numbers below for
help!_

International Hotline Lists

[https://www.facebook.com/help/103883219702654](https://www.facebook.com/help/103883219702654)

[http://www.suicide.org/international-suicide-
hotlines.html](http://www.suicide.org/international-suicide-hotlines.html)

U.S.

Suicide Crisis Hotline: 1-800-273-8255

Cutting: 1-800-366-8288

Substance Abuse: 1-877-726-4727

Domestic Abuse: 1-800-799-7233

Depression Hotline: 1-630-482-9696

LifeLine: 1-800-273-8255

Crisis Textline: Text "start" to 741-741

Human trafficking: 1-(888)-373-7888

Trevor Project (LGBTQ sexuality support): 1-866-488-7386

Sexuality Support: 1-800-246-7743

Eating Disorders Hotline: 1-847-831-3438

Rape and Sexual Assault: 1-800-656-4673

Grief Support: 1-650-321-5272

Runaway: National Runaway Safeline 1-800-RUNAWAY (1-800-786-2929)

Exhale: Abortion Hotline/Pro-Voice: 1-866-4394253


------
trhway
How would we know that any very-well-intended action FB takes upon identifying
a suicidal tendency is going to help prevent the suicide, and not actually
encourage or cause it? And what about false positives?

------
petercooper
TechCrunch's own title is better here. The only "sign of suicide" comes after
the fact. Detecting "suicidal posts", as in the original title, is more
useful, however.

------
r1b
Dead men buy no products.

~~~
5ilv3r
I know this is horribly cynical, but a dead man's friends will probably be
highly engaged with the platform afterward. Facebook will find a way to
monetize it.

~~~
mmjaa
Dead men's comments live on .. in some database. Or text file. Use the
Source!

------
MichaelMoser123
The article does not mention what a possible intervention would look like. If
they don't report a user with suicidal posts to the authorities, then Facebook
will be sued by the relatives of those who put an end to their lives; if they
do report it, then they will be sued by privacy advocates. Damned if they do,
damned if they don't.

~~~
MichaelMoser123
data is power and profit (P&P), but too much data can be a liability.

------
yters
Facebook is a massive Skinner box.

------
stevehiehn
Does this mean they have created a training set of posts/behaviour prior to
suicide?

~~~
phkahler
I was hoping they used data on people who actually committed suicide, but it
looks like they're using activity that users reported and "expert opinion"
instead.

------
kirykl
Kind of incentivizes users to keep their FB presence "happy regardless"

------
trophycase
_beep_ _boop_ "take your medicine. take your medicine"

------
artursapek
I guess that's one way to keep MAUs up.

------
bwl
welp

------
leifaffles
Given the track record of social interventions, what if this backfires?

------
igorgue
Facebook is a phishing website.

------
mmjaa
>outrage< I do _NOT_ give Facebook - or any other party - permission to know
if the Suicide Bit is flipped.... S'rsly, is anyone else not having a serious
"WTF!" moment about the very substance of where we are at? >/outrage<

~~~
austincheney
No, because you do not own data you submit to Facebook. It is their data and
they can do anything they want with it. If you find that disgusting then don't
give data to Facebook.

~~~
mmjaa
Yes, of course. The irony is not lost; no matter what, you can't really erase
yourself from existence.

Facebook will, at least, maintain the shrine.

Perhaps this is its true purpose - to remember dead people?

I think it will become that. I wonder if there is an event horizon where all
of the initial Facebook users expire, and only a new set live on - I imagine
it'd be way into the future - 3 or 4 generations?

Soon enough, Facebook is gonna be the grave, itself.

------
vonnik
I totally support this, but I also find it ironic that Facebook would 1) make
lots of people sad and then 2) monitor them for suicidal ideation.

[https://www.theregister.co.uk/2017/04/12/facebook_makes_you_...](https://www.theregister.co.uk/2017/04/12/facebook_makes_you_sad/)

~~~
TheAdamAndChe
I see the irony as well, but from a purely business standpoint, it makes
sense. If Facebook gains and keeps a reputation of being intensely bad for
mental health, it will lose market share in the long run as regulation passes
limiting its scope.

~~~
mmjaa
I sentence you to penance for your heresy.

Please watch "Soylent Green" and "The Matrix" and come back with a good reason
for why corporations should be allowed to commercialise death services.

k'thxbai.

