Show HN: CheckUp – Social network suicide prevention in Go (checkupapp.org)
28 points by rkirkendall on Aug 11, 2014 | 46 comments



I can understand why you might think this is a good idea, but it doesn't take the depressed person into account at all. Do you really think you can solve suicide by checking up on the individual? Do you think the individual wants to be checked up on? Maybe the people checking up on the person are the actual problem in that person's life, which could create an even more serious dilemma for them.

If there is anything that would drive me to suicide, it would be more people thinking that they can 'solve the problem' in this manner.


Hey, thanks for the feedback. I understand your point, but I would like to point out that CheckUp isn't trying to 'solve' suicide in this manner. The philosophy behind the project is that you, at some level, care about the people you are socially connected to, and you may care if they are contemplating something very bad.

The goal is to create a service that will prioritize self-threatening posts from people you care about above the usual noise of your social network. Publicly posted cries for help can indicate very serious intent and we just want to make sure they don't go unnoticed.


I tried to comment the other day, but apparently I'd hit my quota for replies... anyway, I wanted to respond to this.

You are making incorrect assumptions. You are deciding right from wrong, good from bad, and everything else for someone you don't know. You don't know that person's circle of friends or family, so you can't assume they care as much as you'd like them to.

When you create a program, you should be trying to solve a problem... so what problem are you trying to solve?


As somebody who was affected by the suicide of a close friend, I think it would be wonderful if something existed that could help others with suicidal feelings.

But there is something sad about the idea that an algorithm would be better at detecting suicidal feelings than your closest friends.


You think it would be wonderful because of the way that you felt, not the way your friend felt.

And there is no algorithm that can detect suicidal feelings. I can't help but think of Alan Turing in this scenario.


Yes, if only this had existed for Turing, we could have thrown him in jail for not wanting to live in our bigoted society.

You hit the nail on the head: most people want a tool like this so that every now and then they can stop being a shitty person for a few days before they go back to the navel-gazing. "No one had talked to him in months; we had no idea he was depressed."


We should be aiming to make the world a place people want to live in, not encouraging people to want to get out of it.


> And there is no algorithm that can detect suicidal feelings.

Not so certain about that. I suffer from major depressive disorder and tweeted a while back speculating about this - I'd just come out of a depressive episode and noticed that my patterns of using social media were different than when I wasn't depressed - generally negative mood, lower interaction in general, mentions of drug abuse in chat, etc. I wondered if something monitoring my IM and Twitter could detect an incoming depressive episode before it fully hit, allowing me to 'prepare' a little more.
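
(If I ever prototyped it, I imagine something as simple as this: a rolling check that both mood and activity stay below a baseline for a few days. Everything here, including the sentiment scores, is a placeholder assumption, not anything CheckUp actually does:)

    package main

    import "fmt"

    // DayStats summarizes one day of someone's social media activity.
    // AvgSentiment is a hypothetical score in [-1, 1] from some classifier.
    type DayStats struct {
        AvgSentiment float64
        PostCount    int
    }

    // EpisodeWarning reports whether the last `window` days all show
    // both negative average sentiment and reduced activity versus a
    // normal-posting baseline.
    func EpisodeWarning(days []DayStats, window, baselinePosts int) bool {
        if len(days) < window {
            return false
        }
        for _, d := range days[len(days)-window:] {
            if d.AvgSentiment >= 0 || d.PostCount >= baselinePosts {
                return false // any normal day breaks the streak
            }
        }
        return true
    }

    func main() {
        week := []DayStats{{-0.4, 2}, {-0.6, 1}, {-0.5, 1}}
        fmt.Println(EpisodeWarning(week, 3, 5)) // true: sustained low mood, low activity
    }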


> And there is no algorithm that can detect suicidal feelings.

I get what you're saying, but given the miracles that do exist in modern medicine, I'm not prepared to say it's impossible. I suspect better availability of mental healthcare, preferring counselling over drugs, and less stigmatising would all be better 'solutions' than an algorithm, but if an algorithm did exist that could genuinely help then it would be a good thing.


You did not address the first part of his comment:

> You think it would be wonderful because of the way that you felt, not the way your friend felt.

Why not?


Not a conversation that I think is particularly relevant to HN, nor possible to have effectively online.

But, in short, I genuinely believe that, though my friend's despair at the moment she took her life was very real, she could and would have got through that despair. Her suicide had a huge impact on those around her, especially her parents, and it would have been better both for her and those around her if she had been helped through the despair instead of just ending it.


If a "depressed" person is posting things that flag a SUICIDE watch app, then I'd say they are obviously posting those things to draw attention to their issues. Therefore someone noticing would probably be just what they want.


Exactly, which would make this app unnecessary.


Twitter feeds move pretty fast. People might not notice a cry for help until it's too late.


If a person is feeling suicidal because the people in their life are not noticing their pain, I really doubt that knowing an algorithm had to do it will make them feel any better.


My friend posted a 'suicide note' on MySpace back in the day. He was a funny guy and everyone thought it was a joke about drinking too much. It wasn't.


Cynical. Depression is often the cause of suicide. It's not 'caused' by people trying to socialize. It makes perfect sense to me to monitor my mental state externally, just like I check my blood pressure and weight.


Do you think an external program can understand all of the complexities of the human drama within the context of a single person's life?


Not all of it, but that's not a reason not to try.


That's an interesting perspective. I disagree. I think if someone is tweeting about suicidal intentions then it follows that they are not concerned about the privacy of those intentions.


Many people don't understand the terms and conditions of social networks like Twitter and Facebook, and privacy with those services is ambiguous and difficult to manage.


It'd probably be more successful if it had access to your IM/chat/email logs, where you're more likely to be emotionally revealing to people you're close to. That opens up a whole other can of worms re: privacy, though.


This seems a little accidentally sinister to me... Something about doing automatic sentiment analysis on someone's data without their permission seems morally questionable. I mean, almost none of us are happy when the government does it, even though it's allegedly with the (very questionably) "good" intention of "fighting (terrorism|drugs|bogeymen)".


It's different when it's public data, though - it's data willingly shared.


It's not willingly shared with you.


Ricky,

Thanks for this service.

Sometimes, all it takes is reminding someone that people are there for them and care about them. I don't terribly like Twitter, but I try to read it once a day or so just to keep a view on how people are doing; this service would be incredibly useful for making sure that things I'd like to pay attention to don't fall through the cracks.

I know a bunch of people here seem to wish that the service were less aggressive (or that it didn't exist at all): for my application, I'd almost prefer it be more aggressive! Three negative sentiments is a high bar to meet, and I fear the false negative rate could be quite high.

Anyway, thanks very much for writing this. On the whole, I support the class of applications that help me mediate my interaction with social networks, and this one in particular is incredibly valuable to me.


Glad you like it! Thanks for the kind words.


Have you thought at all about the ethics of meddling in people's lives? [1]

This "service" seems contrary to basic notions of privacy and freedom.

[1] http://www.davidhume.org/texts/suis.html


The only thing the service does is alert the friends of someone who appears to be about to end their own life, but who, for some reason, is posting about it on a public forum.

I can assure you that the service does no meddling whatsoever.

Someone who is swept by the depths of depression, and takes a last moment to emerge long enough to ask their friends for help -- well, I think that it is only fair to counterbalance the feed algorithms that try to keep the rest of us blissfully ignorant, and let them have the support that they ask for.


> alert the friends of someone who appears

That's meddling all right.


[deleted]


Thanks for looking at the repo. A couple of things to keep in mind:

- This list is only a starting point I pulled from a white paper. I agree that it does need to expand, hence the ongoing, open-source nature of the project.

- If you have a suggestion for how this could be done with machine learning, please feel free to submit a pull request. Just keep in mind that the project only aims to catch language that can be reasonably interpreted as self-threatening.

Lastly, if you have a problem with how the project is implemented, PRs are more productive than comments. It is far from perfect, but I believe it is still better than nothing at all.
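
To give a concrete picture, the detection is currently no more sophisticated than this kind of phrase scan (a simplified sketch; the phrase list here is illustrative, not the repo's actual list):

    package checkup

    import "strings"

    // selfHarmPhrases is an illustrative list; the real starting list
    // came from a white paper and is expected to grow over time.
    var selfHarmPhrases = []string{
        "want to die",
        "kill myself",
        "end my life",
        "better off without me",
    }

    // IsSelfThreatening reports whether a post contains any phrase
    // that can reasonably be interpreted as self-threatening.
    func IsSelfThreatening(post string) bool {
        lower := strings.ToLower(post)
        for _, phrase := range selfHarmPhrases {
            if strings.Contains(lower, phrase) {
                return true
            }
        }
        return false
    }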


I think this is really cool. A clarifying question about these two statements, which seem to conflict:

> The goal of the CheckUp project is to detect any serious sign of depression, self-harm or suicide posted to a social network and provide peer support by notifying a concerned party.

> The app works by checking the tweets on your home timeline every few minutes and sending you an email notification if a tweet is flagged.

Shouldn't it instead make the person signing up the "concerned party" to be notified via e-mail, and instead have that concerned party specify which twitter feeds to watch? I'm probably missing something here.


Hey, thanks for the feedback! I will try to update the site to clarify the language a little more -- what you described is actually how it works. The user who signs up is the concerned party we notify. The app watches all tweets on that user's home timeline. By "home timeline", I mean all tweets posted by the user and by everyone followed by that user (so, your Twitter feed). That's how it's described in Twitter's API docs, but I can see how that may be a point of confusion.
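
For anyone curious, the loop is roughly this shape (a simplified sketch, not the actual repo code; fetchHomeTimeline and notifyByEmail are stand-in stubs for the Twitter v1.1 GET statuses/home_timeline call and the mailer):

    package main

    import (
        "log"
        "strings"
        "time"
    )

    type Tweet struct {
        User string
        Text string
    }

    // Stub: in the real app this wraps GET statuses/home_timeline
    // using the signed-up user's OAuth credentials.
    func fetchHomeTimeline() ([]Tweet, error) {
        return []Tweet{{User: "friend", Text: "having a rough week"}}, nil
    }

    // Stub: in the real app this emails the concerned party.
    func notifyByEmail(t Tweet) {
        log.Printf("ALERT: @%s tweeted %q", t.User, t.Text)
    }

    // Placeholder phrase check; see the phrase-scan sketch above.
    func flagged(text string) bool {
        return strings.Contains(strings.ToLower(text), "want to die")
    }

    func main() {
        ticker := time.NewTicker(2 * time.Minute) // "every few minutes"
        defer ticker.Stop()
        for range ticker.C {
            tweets, err := fetchHomeTimeline()
            if err != nil {
                log.Printf("timeline fetch failed: %v", err)
                continue
            }
            for _, t := range tweets {
                if flagged(t.Text) {
                    notifyByEmail(t)
                }
            }
        }
    }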


I don't have Twitter, but I'm pretty sure "home timeline" means everyone you follow.


This looks like a useful tool.

Assuming for the moment that this is very accurate: what then?

Ann discovers that Bob is suicidal. What is Ann supposed to do then? (My suggestions: ask Bob if he intends to die, and help him access medical help; emergency help if he says he intends to die soon.)

Perhaps some set of accurate, international flowcharts showing what you're supposed to do if a colleague, friend, relative, etc. is suicidal would help. (In the UK: contact their GP, or an ambulance if a suicide attempt is in progress.)

You also seem to be ignoring age, which is tricky. What does your ethics team say about reporting suicidality in people under 18?


Good idea on the flowchart! Thanks.


Hm, interesting.

Kind of Minority Report here.

Machine learning on depression symptoms?

I'm a hacker, and not a truly white-hat one. Aren't you afraid that people will use this to take advantage of depressed people?

A man could use this to spot depressed women; some people are born manipulators.

I find the project interesting, but it has a lot of ethical problems. I think it will mostly be a tool for the wealthy.

Maybe it would be more interesting to track the global mood of a population than the mood of one person.


As I said, the problem I have with this is that it could help a manipulative predator spot his next victim.

Allowing anyone to use it is an ethical problem for me.

Manipulative predators have a natural tendency to spot people in a weakened state; you could just be helping them do it with this...

Be careful with this. I think one way to avoid it would be to give access only to people with a good reputation.

(Reputation systems are something a lot of people are working on.)

But the project and the technology interest me.

Did you see the GoLearn project?

There are associations that run phone lines to help people and prevent suicide. Your product could be useful for them.

One can imagine them using it online to prevent suicide or to contact people privately.

Regards, and I wish you success.


ML would be a huge step forward; right now this just matches a couple of keywords.


If you need some advice on ML, I'll be glad to give you some.

I mainly use Python for this, but there are other solutions.
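
To make it concrete, here is a toy bag-of-words Naive Bayes in Go, roughly the smallest step up from keyword matching (the class names and training examples are just illustrative):

    package main

    import (
        "fmt"
        "math"
        "strings"
    )

    // NaiveBayes is a minimal bag-of-words classifier with add-one smoothing.
    type NaiveBayes struct {
        counts map[string]map[string]int // class -> word -> count
        totals map[string]int            // class -> total words seen
        docs   map[string]int            // class -> training documents
        nDocs  int
        vocab  map[string]bool
    }

    func NewNaiveBayes() *NaiveBayes {
        return &NaiveBayes{
            counts: map[string]map[string]int{},
            totals: map[string]int{},
            docs:   map[string]int{},
            vocab:  map[string]bool{},
        }
    }

    // Train adds one labeled document to the model.
    func (nb *NaiveBayes) Train(class, text string) {
        if nb.counts[class] == nil {
            nb.counts[class] = map[string]int{}
        }
        for _, w := range strings.Fields(strings.ToLower(text)) {
            nb.counts[class][w]++
            nb.totals[class]++
            nb.vocab[w] = true
        }
        nb.docs[class]++
        nb.nDocs++
    }

    // Classify returns the most likely class under add-one smoothing.
    func (nb *NaiveBayes) Classify(text string) string {
        best, bestScore := "", math.Inf(-1)
        for class := range nb.docs {
            score := math.Log(float64(nb.docs[class]) / float64(nb.nDocs))
            for _, w := range strings.Fields(strings.ToLower(text)) {
                p := float64(nb.counts[class][w]+1) /
                    float64(nb.totals[class]+len(nb.vocab))
                score += math.Log(p)
            }
            if score > bestScore {
                best, bestScore = class, score
            }
        }
        return best
    }

    func main() {
        nb := NewNaiveBayes()
        nb.Train("at-risk", "i want to end it all")
        nb.Train("ok", "great day at the beach")
        fmt.Println(nb.Classify("i want it to end")) // "at-risk"
    }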


How accurate do you think this can be? What happens if I write a Facebook update that says, as an example:

I don't believe I have ever said "I feel depressed"

Would this be flagged? Does it only take one post to be flagged or is it looking for recurring behaviour?


Good question. Part of the ongoing nature of this project is expanding our phrase detection. I pulled the original phrase list from a white paper BYU published last year because I figured that would be a good starting point. There is room for vast improvement, though.

If the last few posts leading up to the flagged post are classified as predominantly negative (we currently look at the last 3 tweets before the flagged tweet), then we send the notification. The intuition is that if a person is comfortable enough with social media to post seriously suicidal content, he or she has probably already made some preceding negative remarks.
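
In rough Go, the confirmation step looks like this (a sketch, not the repo's actual code; sentimentOf is a hypothetical stand-in for whatever classifier scores the tweets):

    package checkup

    // sentimentOf is hypothetical: assume it returns a score in
    // [-1, 1], with negative values meaning negative sentiment.
    func sentimentOf(text string) float64 { return 0 }

    // PrecedingNegative reports whether a majority of the tweets
    // leading up to a flagged tweet (we look at the last 3) were
    // classified as negative; only then do we send the notification.
    func PrecedingNegative(preceding []string) bool {
        if len(preceding) > 3 {
            preceding = preceding[len(preceding)-3:] // the 3 most recent
        }
        negative := 0
        for _, t := range preceding {
            if sentimentOf(t) < 0 {
                negative++
            }
        }
        return negative*2 > len(preceding) // predominantly negative
    }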


Very interesting. Thanks for the response!


This sounds like a great service, but I really don't give a damn that it is written in Go.

Also:

> This application is temporarily over its serving quota. Please try again later.


Hey! Thanks for the heads up. We picked up more traction than I had anticipated, but we should be back online now! And some people may be interested in the Go repo ;)



