Admittedly this article is a high-level overview, but I see a lot of hand-waving use of statistical terms.
For example, how is “30 percent of the participants met the criteria for depressive symptoms” in line with “10 to 40 percent of college students at some point experience such symptoms”? It isn't: 30% showing symptoms right now versus 10 to 40% experiencing them at some point is point prevalence versus lifetime prevalence. Either one of those numbers is way off, or they are comparing apples to oranges.
There is no mention of how strong the correlations are (though one would expect that in the full paper). This is a small number of people – around 200 – all of whom are of a similar age & occupation.
The idea that this evidence would lead one to consider monitoring implies a certainty and generality about the results which appears unjustified.
It's interesting seeing my alma mater show up here. I wouldn't be surprised if Rolla students exhibited a higher rate of depression than the norm. Having spent 4 years there, I knew a lot of people who were bitter about being there. It's a small state engineering school in the middle of nowhere, without any particular pedigree or notoriety, known mostly, at least among other students in the midwest, for its St. Patrick's Day festivities and drinking culture.
At any rate, I'd certainly want to see a sample from a different or at least more diverse population before I would think my browsing habits said anything about my general mental state. Beyond what was mentioned above, the sample at Rolla was presumably first- and second-year students living in the dorms, as the rest would not be sharing an internet connection in a way that makes monitoring per student feasible.
That's a major problem with many (maybe even most) psych studies. I don't think anyone who's thought about it at length really believes these populations of convenience ("recruited N students at my university") are a good way to do science, but they are... convenient, so they keep getting used. At some schools, psych students have to participate in a certain number of studies for credit, which makes it easier to get participants while likely further biasing the participant pool.
It's part of the traditional split between quantitative and qualitative psychologists (and sociologists), each of whom gets one side of the equation stronger. The qualitative side typically goes "out into the world" and tries to interview a target community, building an understanding and iterating to fill in gaps in that understanding. They often achieve quite good coverage, but don't produce statistical results. The quantitative side typically wants to measure statistically significant results on pre-specified, narrower questions, but often does so only on populations of convenience, rather than going "out into the world" and really understanding the populations that are out there.
This is very, very true. In fact, Henrich et al. wrote a wonderful paper decrying this very problem.
Unfortunately, it's not going to change, because the pressure on academics to publish, publish, publish inclines them to take full advantage of these samples of convenience.
It's actually even worse, because the majority of psych papers are based on American psych students, so there's not even much of a geographic spread in the ridiculously limited samples psychologists use.
I wonder if engineering school students are more depressed on average. It was certainly my impression while attending one that the students were more stressed than the average college student at other schools.
"For five of the past nine years, one of the three colleges—the University of Missouri-Columbia, Stephens College, and the University of Missouri-Rolla—has landed at No. 1 on the Princeton Review's list for the school with the 'least happy students.'"
(University of Missouri - Rolla being the former name of Missouri S&T.)