
Show HN: CheckUp – Social network suicide prevention in Go - rkirkendall
http://www.CheckUpApp.org
======
phfez
I can understand why you think this might be a good idea. But it doesn't take the
depressed person into account at all. It assumes suicide can be solved by
checking up on the individual. Do you think the individual wants to be checked
up on? Maybe the people checking up on the person are the actual problem in
that person's life, which could create an even more serious dilemma for that
person.

If there is anything that would drive me to suicide, it would be more people
thinking that they can 'solve the problem' in this manner.

~~~
smclaughlin
That's an interesting perspective. I disagree. I think if someone is tweeting
about suicidal intentions then it follows that they are not concerned about
the privacy of those intentions.

~~~
phfez
Many people don't understand the terms and conditions of social networks like
Twitter and Facebook, and privacy on those services is ambiguous and difficult
to manage.

If a person is feeling suicidal because the people in their life are not
noticing their pain, I really doubt that learning an algorithm had to do the
noticing will make them feel any better.

------
wyager
This seems a little accidentally sinister to me... Something about doing
automatic sentiment analysis on someone's data without their permission seems
morally questionable. I mean, almost none of us are happy when the government
does it, even though it's allegedly with the (very questionably) "good"
intention of "fighting (terrorism|drugs|bogeymen)".

~~~
fmdud
It's different when it's public data, though - it's data _willingly shared_.

~~~
phfez
It's not willingly shared with you.

------
jwise0
Ricky,

Thanks for this service.

Sometimes, all it takes is to remind someone that people are there for them,
and care about them. I don't terribly like Twitter, but I try to read it once
a day or so just to keep a view on how people are doing; this service would be
incredibly useful to me to make sure that I don't have things that I'd like to
pay attention to falling through the cracks.

I know a bunch of people here seem to wish that the service were less
aggressive (or that it didn't exist at all): for my application, I'd almost
prefer it be more aggressive! Three negative sentiments is a high bar to meet,
and I fear the false negative rate could be quite high.

Anyway, thanks very much for writing this. In general, I support applications
that help me mediate my interaction with social networks, and this application
in particular is incredibly valuable to me.

~~~
rkirkendall
Glad you like it! Thanks for the kind words.

------
conistonwater
Have you thought at all about the ethics of meddling in people's lives? [1]

This "service" seems contrary to basic notions of privacy and freedom.

[1]
[http://www.davidhume.org/texts/suis.html](http://www.davidhume.org/texts/suis.html)

~~~
jwise0
The only thing the service does is alert the friends of someone who appears to
be about to end their own life, but who, for some reason, is posting about it
on a public forum.

I can assure you that the service does no meddling whatsoever.

Someone who is swept by the depths of depression, and takes a last moment to
emerge long enough to ask their friends for help -- well, I think that it is
only fair to counterbalance the feed algorithms that try to keep the rest of
us blissfully ignorant, and let them have the support that they ask for.

~~~
conistonwater
> alert the friends of someone who appears

That's meddling all right.

------
cjslep
I think this is really cool. Clarifying question from these two statements
that seem to conflict:

> The goal of the CheckUp project is to detect any serious sign of depression,
> self-harm or suicide posted to a social network and provide peer support by
> notifying a concerned party.

> The app works by checking the tweets on your home timeline every few minutes
> and sending you an email notification if a tweet is flagged.

Shouldn't it instead make the person signing up the "concerned party" to be
notified via e-mail, and instead have that concerned party specify which
twitter feeds to watch? I'm probably missing something here.

~~~
rkirkendall
Hey, thanks for the feedback! I will try to update the site to clarify the
language a little more; what you described is actually how it works. The user
who signs up is the concerned party that we notify. The app watches all tweets
on that user's home timeline. By "home timeline", I mean all tweets posted by
the user and by everyone that user follows (so, your Twitter feed). That's how
it's described in Twitter's API docs, but I can see how that could be a point
of confusion.
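For anyone curious about the mechanics, the periodic timeline check described above could be sketched roughly like this in Go. The `Tweet` type and the fetch callback are hypothetical placeholders for a real Twitter API client (GET statuses/home_timeline), not the app's actual code:

```go
package main

import "fmt"

// Tweet is a minimal stand-in for a tweet returned by the Twitter API.
type Tweet struct {
	ID   int64
	Text string
}

// checkOnce fetches tweets newer than sinceID, hands each one to handle,
// and returns the updated high-water mark so tweets aren't reprocessed.
// A real service would call this in a loop every few minutes.
func checkOnce(sinceID int64, fetch func(int64) []Tweet, handle func(Tweet)) int64 {
	for _, t := range fetch(sinceID) {
		if t.ID > sinceID {
			sinceID = t.ID
		}
		handle(t)
	}
	return sinceID
}

func main() {
	// Fake fetch standing in for an authenticated API request.
	fake := func(since int64) []Tweet {
		return []Tweet{{ID: 10, Text: "first"}, {ID: 12, Text: "second"}}
	}
	next := checkOnce(5, fake, func(t Tweet) {
		fmt.Println("checking tweet:", t.Text)
	})
	fmt.Println("next since_id:", next)
}
```

Passing the high-water mark back to the API as `since_id` is the usual way to avoid re-scanning tweets that were already checked on a previous poll.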

------
DanBC
This looks like a useful tool.

Assuming for the moment that this is very accurate.

What then?

Ann discovers that Bob is suicidal. What is Ann supposed to do then? (My
suggestions are to ask Bob if he intends to die; and to help Bob access
medical help. (Emergency help if he says he intends to die soon)).

Perhaps some set of accurate, international, flowcharts showing what you're
supposed to do if a colleague / friend / relative / etc is suicidal would
help. (In UK: Contact their GP; or an ambulance if suicide is in progress;)

You also seem to be ignoring ages, which is tricky. What do your ethics team
say about reporting suicidality and people under 18?

~~~
rkirkendall
Good idea on the flowchart! Thanks.

------
bussiere
Hum, interesting.

Kind of Minority Report here.

Machine learning on depression syndromes?

I'm a hacker, and not a truly white-hat one. Aren't you afraid that people will
use this to take advantage of depressed people?

A man could use this to spot depressed women; some people are born
manipulators.

I find the project interesting, but it has a lot of ethical problems. I think
it will end up being more of a tool for the wealthy.

Maybe it would be more interesting to have the global mood of a population
than the mood of one person.

~~~
fleitz
ML would be a huge step forward, this just matches a couple keywords.

~~~
bussiere
If you need some advice on ML, I'd be glad to give you some.

I mainly use Python for this, but there are other solutions.

------
matart
How accurate do you think this can be? What happens if I write a Facebook
update that says, as an example:

I don't believe I have ever said "I feel depressed"

Would this be flagged? Does it only take one post to be flagged or is it
looking for recurring behaviour?

~~~
rkirkendall
Good question. Part of the ongoing nature of this project is expanding our
phrase detection. I pulled the original phrase list from a white paper
published by BYU last year because I figured that would be a good starting
point. There is room for vast improvement, though.

If the last few posts leading up to the flagged post are classified as
predominantly negative (we currently look at the last 3 tweets before the
flagged tweet), then we send the notification. The intuition is that if a
person is comfortable enough with social media to post seriously suicidal
content, he or she has probably already made some preceding negative remarks.
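A minimal sketch of that two-stage check in Go. The phrase list and function names here are illustrative assumptions, not the project's actual code or phrase list:

```go
package main

import (
	"fmt"
	"strings"
)

// riskPhrases is a stand-in for the phrase list drawn from the BYU
// white paper; these entries are illustrative examples only.
var riskPhrases = []string{"want to die", "kill myself", "end it all"}

// matchesRiskPhrase reports whether a tweet contains any flagged phrase.
func matchesRiskPhrase(tweet string) bool {
	t := strings.ToLower(tweet)
	for _, p := range riskPhrases {
		if strings.Contains(t, p) {
			return true
		}
	}
	return false
}

// shouldNotify implements the rule described above: notify only if the
// tweet matches a risk phrase AND the majority of the preceding tweets
// (the last 3, per the comment) were classified as negative.
func shouldNotify(tweet string, precedingNegative []bool) bool {
	if !matchesRiskPhrase(tweet) {
		return false
	}
	negatives := 0
	for _, neg := range precedingNegative {
		if neg {
			negatives++
		}
	}
	// "Predominantly negative": strictly more than half.
	return negatives*2 > len(precedingNegative)
}

func main() {
	fmt.Println(shouldNotify("I want to end it all", []bool{true, true, false}))
}
```

Requiring the preceding context to be mostly negative is what keeps a one-off quote or joke containing a risk phrase from immediately triggering a notification.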

~~~
matart
Very interesting. Thanks for the response!

------
rubiquity
This sounds like a great service but I really don't give a damn that it is
written in Go.

Also:

> _This application is temporarily over its serving quota. Please try again
> later._

~~~
rkirkendall
Hey! Thanks for the heads up. We picked up more traction than I had
anticipated, but we should be back online now! And some people may be
interested in the Go repo ;)

