

Facebook Adds New Feature for Suicide Prevention - jessehu
http://www.huffingtonpost.com/2015/02/25/facebook-suicide-prevention_n_6754106.html

======
michaelpinto
What worries me is whether Facebook has studied how someone might react to
what they may see as being informed upon. It might actually wind up pushing
someone over the edge. Looking at that interface I can see that this feature
could be abused: a bully could use it as a way to threaten someone.

And on a larger level there's something worrying about Facebook holding that
data over your lifetime. What if Facebook shares that information at some
future point with potential employers or law enforcement officials? Or what if
Facebook is hacked and this data is made public?

~~~
ubercore
I hear you on the data concern, but FTA:

"...partnering with Now Matters Now, the National Suicide Prevention Lifeline,
Save.org and Forefront: Innovations in Suicide Prevention, a nonprofit
operating out of the University of Washington's School of Social Work"

So I'd guess that the messaging and interaction have been thought through
pretty carefully, to be effective at best and harmless at worst. That's a
solid lineup of consulting partners for a feature like this.

~~~
michaelpinto
But are they the best partners? For example, the National Suicide Prevention
Lifeline may understand phone interaction better than anyone else, but that's
different from social media in terms of interaction from a user's point of
view.

This is really a different medium, and one that may generate false positives
just for starters. So it may not be harmless: you could alienate a false
positive, or the feature could be abused to bully someone.

And I'm basing this, as an interaction designer, on just the few screens that
I see there (so if I'm spotting that much, what else is being missed?).

