
Well, I came in skeptical about this, but the analysis seems pretty solid.

Tl;dr: there is a fairly strong and reliable correlation between depression and posting images that are bluer, grayer, and darker. This is predictive in that users can be identified as depressed before they are diagnosed. The # of faces appearing in user pics was also indicative: fewer faces per picture correlated with depression. The # of comments correlated more weakly with depression, and the # of likes was negatively correlated. Mechanical Turk-tasked humans were also able to identify depressed users fairly accurately, but often flagged different users than the machine did.
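
For anyone curious what "bluer, grayer and darker" cashes out to: the color measures boil down to per-photo averages of hue, saturation, and brightness. A rough sketch of that kind of extraction (not the authors' code; Pillow/numpy, hypothetical file path):

    # Per-photo HSV color features: higher mean hue skews bluer, lower
    # saturation reads grayer, lower value reads darker. A sketch of the
    # kind of features involved, not the paper's actual code.
    import numpy as np
    from PIL import Image

    def hsv_features(path):
        hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
        return hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()

    hue, sat, val = hsv_features("photo.jpg")  # hypothetical path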

Statistical methods: Bayesian feature extraction with uninformative priors; a 100-tree random forest for classification.
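
For reference, the classification step is about as vanilla as it sounds. A minimal sklearn sketch, using synthetic stand-ins for the per-user feature matrix and labels (my names, not the paper's):

    # 100-tree random forest on per-user features (color stats, face counts,
    # comment/like counts). X and y are random placeholders just to make this
    # runnable; swap in real features and 0/1 depression labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(166, 10))    # 166 users, 10 features each
    y = rng.integers(0, 2, size=166)  # 0 = healthy, 1 = depressed
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())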

Some points of caution: depression is a broad, fairly fuzzy term, and the authors acknowledge that this complicates matters. Some self-selection bias is possible, as users had to grant permission to access their Instagram streams and many opted not to.


The study examined 166 individuals. Some subgroup of these was depressed, and many of the depressed could be identified based on their photos.

This is then compared with GPs' rate of success at diagnosing depression. I fear that's a slightly misleading comparison, because the study works with a different pool of patients than a typical GP sees. The Instagram methodology solves a different, maybe easier, problem.


Yeah, and the samples don't seem like they come from the same population.

You have to go to the Appendix, but it appears that 71 of the sample were depressed (which presumably means that they answered yes to the depression question). It's not clear if they used the CES-D to identify depression.

The remaining participants were classified as healthy (N=95). These do not seem like balanced samples to me (it would be nice if we knew what proportion of depressed vs. not-depressed users agreed to share IG data).
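
The imbalance also matters for reading any accuracy numbers. Back-of-envelope:

    # With 95 healthy vs. 71 depressed, a model that always guesses
    # "healthy" is already right more than half the time.
    depressed, healthy = 71, 95
    baseline = healthy / (depressed + healthy)
    print(f"majority-class baseline accuracy: {baseline:.1%}")  # ~57.2%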

Additionally, the appendices mention that gender was not available for the depressed sample, which leads me to believe that they collected one sample of depressed participants and then another from the general population. This is a little shady (at least to me).

All that being said, I really like this paper. I think that the approach is novel, they use pretty good methods and it actually represents a contribution (if small) to human knowledge.

And I definitely went in with a prior against it. The sample size is far too small to support their inferences, but that's an easier problem to fix :)


    70% of all depressed cases (n=37), with a relatively low number of 
    false alarms (n=23) and misses (n=17).
Not sure, but isn't n=40 wrong (23 false alarms + 17 misses) more than the n=37 found cases?


Not just that, but if we take 54 out of the 100 mentioned in that quote as being depressed (37 correctly classified plus 17 misses), then of the 46 non-depressed, 23/46 = 50% are false positives (quick arithmetic below). Maybe that's acceptable, but imagine being not depressed and getting incorrectly flagged with the same odds as a coin toss... that seems like an undesirable property of the model to me.
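
Here's the arithmetic spelled out, assuming a sample of 100 behind that quote:

    # Confusion-matrix arithmetic from the quoted figures (assumed n=100).
    tp, fn, fp = 37, 17, 23   # hits, misses, false alarms
    depressed = tp + fn       # 54
    healthy = 100 - depressed # 46
    print(f"sensitivity: {tp / depressed:.0%}")        # ~69%, the quoted ~70%
    print(f"false positive rate: {fp / healthy:.0%}")  # 50%: the coin toss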

Edit: they mention an alternative model too, which gives about a 30% true positive rate for depression but is more accurate for the non-depressed (probably because it mostly classifies people as non-depressed). The whole survey methodology sounds suspect, though; this study is just not something I'd put a lot of weight on.


Thought exactly the same.


>> "Some points of caution: depression is a broad, fairly fuzzy term."

I've only had a quick scan through this, but it seems like they simply asked people if they were depressed. Is that correct? If so, that seems very broad and unreliable. Why wouldn't they just use the standard PHQ-9 questionnaire to diagnose?


I think the idea is to "learn" with human assistance and then use the "intelligence" to classify other individuals without having to ask them.


I interpret the issue k-mcgrady brings up as the inconsistency inherent in self-assessing issues like these.

Standardized assessment has its issues (it doesn't replace a professional), but at least it works from a single definition that doesn't rely on calibrating each participant's understanding of depression. People struggling with depression often fail to recognize it, or are unwilling to admit it, and the opposite can be true as well.

By testing against participant answers, this study is actually determining whether your photos correlate with your saying you're depressed, which is a different and less predictable thing.


It seems they worked with two groups. One consisted of individuals clinically diagnosed with depression.


This seems pretty intuitive to me. I don't need this paper to know that depressed people post darker pictures. But it is kind of interesting to see this 'formalized'.

Also, I think detecting depression preemptively via social media is a terrifying idea.


Well, I was skeptical, wondering if maybe people posted darker/grayer images because of where they live, i.e. seasonal affective disorder (SAD) was causing the depression, rather than depression swaying people's filter choices. However, this study seems to account for that (I only skimmed it quickly):

"We also checked metadata to assess whether an Instagram­ provided filter was applied to alter the appearance of a photograph."

[...]

"A closer look at filter usage in depressed versus healthy participants provided additional texture. Instagram filters were used differently by depressed and healthy individuals. In particular, depressed participants were less likely than healthy participants to use any filters at all. When depressed participants did employ filters, they most disproportionately favored the “Inkwell” filter, which converts color photographs to black­ and­ white images. Conversely, healthy participants most disproportionately favored the Valencia filter, which lightens the tint of photos."

> Also, I think detecting depression premtively via social media is a terrifying idea.

Yeah, they also called their model "Pre-diagnosis". :)


That doesn't account for editing outside of Instagram, though, which many people do exclusively.


You certainly do need this paper, because saying "I have a hunch" without supporting it with anything does not further the discourse, and it definitely is not actionable.


It's more than just "I have a hunch": I'm sure there are studies showing that depressed people prefer darker colors. For example, I found this after a quick search for "colors and depression" [0], and there are many more like it. Why wouldn't that carry over to Instagram too?

[0] http://www.academia.edu/3880952/RELATIONSHIP_BETWEEN_COLOR_A...


You need to prove that it carries over to social media in a meaningful manner.


Whoosh.

You support "I don't need this study" with prior studies.


It's very, very important for even intuitive things to be backed by research.


Very true. Though, on the other hand, we might have finally discovered an actually useful side of this "social media".


When I said I was skeptical, it wasn't about the concept or thesis per se. It was about whether the study would be conducted well and turn out solid, versus merely attention-grabbing... which sadly describes all too much "science" lately.


Yeah, all of those buts and ifs make the analysis not statistically significant. It's not a random sample, and asking people whether they are depressed is not meaningful. Depression is a disease that is quite distinct from simply being sad or in a melancholic mood.



