This is such a horrible article. Many have already mentioned the conflation of humanities and social sciences. Then, by using the example of books (because most would agree books would lose their essence if we just described them by numbers), it constructs a straw man: that we somehow lose something (the beauty?) by using quantitative measures.
Sure, many studies have flaws (not just in the social sciences), but what is the alternative: that we simply use theories because of their "beauty" (whatever that means)? Shall we start psychological therapies just because someone thought it sounded good, instead of measuring if it works?
Given the title, I expected to find an interesting article on what can go wrong if we over-rely on quantitative methods in the humanities. But this article doesn't distinguish between badly applied quantitative methods and the limits those methods have even when they are executed well.
For example:
> We look at instances where the effect exists and posit a cause—and forget all the times the exact same cause led to no visible effect, or to an effect that was altogether different
This just sounds to me like bad quantitative modelling.
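To make the failure mode in that quote concrete, here's a toy simulation (all numbers invented): if you only study cases where the effect occurred, a cause that produces the effect less than a third of the time can look near-universal.

```python
# Selecting on the effect: the cause only rarely produces the effect,
# but among effect-only cases the cause looks almost decisive.
import random

random.seed(0)
N = 100_000
cases = []
for _ in range(N):
    cause = random.random() < 0.2        # cause present in 20% of cases
    p_effect = 0.30 if cause else 0.02   # cause raises effect rate 2% -> 30%
    effect = random.random() < p_effect
    cases.append((cause, effect))

with_effect = [c for c in cases if c[1]]
cause_given_effect = sum(c[0] for c in with_effect) / len(with_effect)
effect_given_cause = sum(c[1] for c in cases if c[0]) / sum(c[0] for c in cases)

print(f"P(cause | effect) ~ {cause_given_effect:.2f}")  # ~0.79: looks decisive
print(f"P(effect | cause) ~ {effect_given_cause:.2f}")  # ~0.30: mostly no effect
```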
There is a huge argument to be made for qualitative research, and there is much-needed criticism of the idea that "hard" methods are more valuable than "soft" methods. I think this article manages neither.
When the measurement is flawed you will not know if the therapy works. Sorry.
And the measurement is flawed! In fact, there is a replication crisis running through modern research.
(probably this is why so many end up trying to build a castle out of dough?...)
Flawed is not the same as completely useless. Even a flawed, p-hacked measurement can still pick out big effects efficiently.
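To illustrate (a sketch, with optional stopping standing in for "p-hacked", and made-up effect sizes): a researcher who peeks every 10 subjects and stops at the first p < .05 gets far too many false positives, yet still flags a big effect essentially every time.

```python
# Optional stopping as a caricature of p-hacking: peek every 10
# subjects, declare victory at the first p < .05, give up at n = 100.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def phacked_study(true_effect, max_n=100, peek_every=10):
    """True if the optional-stopping researcher declares a finding."""
    data = rng.normal(true_effect, 1.0, size=max_n)
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05:
            return True
    return False

for label, d in [("no effect (d=0.0)", 0.0),
                 ("small effect (d=0.2)", 0.2),
                 ("big effect (d=0.8)", 0.8)]:
    hits = sum(phacked_study(d) for _ in range(2000))
    print(f"{label}: 'significant' in {hits/2000:.0%} of runs")
```

The null still comes out "significant" far more often than the nominal 5%, but the gap between the big-effect row and the no-effect row is what "flawed but not useless" means.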
Plus, even if all the previous measurements were totally useless, that doesn't mean we should just give up and stop trying to measure soft stuff.
No, it's quite the opposite: instead of many small, underpowered experiments and studies, we should be spending on fewer but larger, well-designed and well-run ones. (Even if that's naturally harder.)
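Back-of-envelope power arithmetic makes the point (normal approximation, invented numbers: a modest standardized effect d = 0.3 at alpha = .05):

```python
# Power of one small two-arm study (n = 20/arm) vs one large one
# (n = 200/arm) for the same modest effect, via a normal approximation.
from scipy.stats import norm

def power_two_sample(d, n_per_arm, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_arm / 2) ** 0.5  # noncentrality of the z-statistic
    return 1 - norm.cdf(z_crit - ncp)

print(f"small study (n=20/arm):  power ~ {power_two_sample(0.3, 20):.0%}")   # ~16%
print(f"large study (n=200/arm): power ~ {power_two_sample(0.3, 200):.0%}")  # ~85%
```

Ten of the small studies mostly produce noise (and publication-biased noise at that); one study of the pooled size can actually settle the question.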
So are you arguing that useful measurements are impossible for specific topics, or are you just stating that flawed research is less useful than it could be (which it obviously is)?
I can recommend having a look at the book "How to Measure Anything" to get a sense of how to apply measurement techniques to "soft" contexts.
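As I remember it, the book's core move is to replace "unmeasurable" with calibrated 90% ranges plus a Monte Carlo simulation. A made-up sketch (every number below is invented for illustration):

```python
# Turning a "soft" question ("is this training worth it?") into
# calibrated 90% ranges and a Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

def from_90ci(lo, hi, size):
    """Normal draws whose 5th/95th percentiles match a calibrated 90% CI."""
    mu, sigma = (lo + hi) / 2, (hi - lo) / (2 * 1.645)
    return rng.normal(mu, sigma, size)

staff          = from_90ci(40, 60, N)    # people affected
hours_saved    = from_90ci(0.5, 3.0, N)  # hours per week per person
value_per_hour = from_90ci(25, 70, N)    # dollars
annual_value   = staff * hours_saved * value_per_hour * 48  # ~48 work weeks
cost = 60_000

print(f"P(training pays for itself) ~ {(annual_value > cost).mean():.0%}")
print(f"90% interval for annual value: ${np.percentile(annual_value, 5):,.0f}"
      f" to ${np.percentile(annual_value, 95):,.0f}")
```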
> Shall we start psychological therapies just because someone thought it sounded good, instead of measuring if it works?
Yup. All psychotherapies were started just because someone thought they sounded good, and they are still being practiced because they sound good. There are hundreds of different schools of psychotherapy today. There is a movement toward testing the effectiveness of therapies, but those studies tend to show whichever therapy the researcher fancies coming out best, and meta-analyses reveal roughly equal effectiveness across all the major therapies.
The point is, I think, that quantitative measures do not automatically lift the scientific status of a field, and can mask the lack of a sound foundation.
That's incorrect. Meta-analyses show that all widely accepted therapies are effective precisely because public health authorities only fund and support therapies with scientific backing. There are plenty of great-sounding therapies which are not effective and thus not widely used.
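For anyone wondering what a meta-analysis mechanically does: in its simplest (fixed-effect) form it is just an inverse-variance weighted average of per-study effects. A toy sketch with invented trial numbers:

```python
# Fixed-effect meta-analysis: pool each study's effect size,
# weighting by 1/variance, so precise studies count for more.
import numpy as np

# (effect size d, standard error) from hypothetical therapy trials
studies = [(0.55, 0.20), (0.40, 0.15), (0.70, 0.25), (0.48, 0.10)]
d  = np.array([s[0] for s in studies])
se = np.array([s[1] for s in studies])

w = 1 / se**2                        # inverse-variance weights
pooled = (w * d).sum() / w.sum()
pooled_se = (1 / w.sum()) ** 0.5

print(f"pooled effect d = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```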
You've missed the entire point. A sort of conflation is the problem - not by the author, but by people in and around soft sciences and humanities throwing a bit of statistical jazz into their papers and then drawing ostensibly rigorous conclusions which influence social policy.
The reality is that by their very nature, both soft science and humanities (there is a lot of overlap) cannot be held to the same rigor as, say, mathematics, physics, chemistry. These sciences are pure theory (like gender studies), non-experimental (like psychology), and fundamentally unfalsifiable in the majority of cases... but laymen, and apparently government officials, either don't understand this or pretend they don't. Either way, shitty policy and legislation gets passed, and innocent people (society) are frequently worse off.
This is 2020, and while in the past attitude X or behaviour Y might have been acceptable, we have now come to understand (through Social Science Studies, or the NY Times bestseller a particular academic has written) that both are wrong; toxic, in fact.
Please address the issues directly and name your sources rather than just giving a general progressive word salad. I'm ready to support you, but you don't convert people by brushing them off as out of date.
That reminds me of a quote from Kurt Tucholsky: “Sociology was invented so people could write without experience”. A crisp way to summarize your point, though I do think sociology can be better than that.
I've never seen a half-decent study in social sciences that draws rigorous conclusions: what I have seen is studies noticing a correlation and the mainstream media then picking it up as a de facto conclusion. To give the simplest example we've all seen: "people who have more sex are happier" directly implies that having sex leads to happiness, whereas the study merely noticed a correlation between self-reported happiness and sex frequency.
But you are right about what happens next: you top it off with politicians who are academically uneducated (or simply unaware of scientific rigour), like Trump (he's just an obvious example, far from being the only one), making calls on all sorts of social topics.
> I've never seen a half-decent study in social sciences that draws rigorous conclusions.
Perhaps you should look harder. The dominant approach in economics for the last 20 years has been to reject correlational studies and try very hard to get at causality, by:
* Running randomised controlled trials, often at scale (see e.g. Esther Duflo);
* Laboratory experiments, which have provided a body of robust paradigms and results;
* Seeking natural experiments;
* Statistical techniques like regression discontinuity and instrumental variables (a toy example below).
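For readers unfamiliar with that last item, here's a toy two-stage least squares (instrumental variables) run on simulated data: an unobserved confounder biases naive OLS upward, while the instrument recovers the true effect of 2.0. (Everything below is invented for illustration.)

```python
# IV via two-stage least squares: z shifts x but touches y only
# through x, so regressing y on the z-predicted part of x removes
# the confounding that biases naive OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
confounder = rng.normal(size=n)        # unobserved
z = rng.normal(size=n)                 # instrument
x = 1.0 * z + 1.5 * confounder + rng.normal(size=n)
y = 2.0 * x + 3.0 * confounder + rng.normal(size=n)

def ols_slope(a, b):
    return np.polyfit(a, b, 1)[0]      # least-squares slope of b on a

x_hat = ols_slope(z, x) * z            # stage 1: x projected on z
print(f"naive OLS: {ols_slope(x, y):.2f}")      # ~3.1, biased by confounder
print(f"2SLS/IV:   {ols_slope(x_hat, y):.2f}")  # ~2.0, the true effect
```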
There's plenty of bad work in the social sciences. So there is elsewhere, the natural sciences included (cough, Lancet). There's plenty of good work too.
FWIW, I realise I didn't phrase it correctly. What I wanted to say is that any non-terrible ("half-decent") study does not attempt to draw a final, black-or-white conclusion (I wrongly used "rigorous"), but that the mainstream media will do that instead by choosing a particular interpretation of the study's results.
I did not want to imply that social science studies are non-rigorous; I was actually trying to defend their scientific nature, just with incorrect phrasing.
Economics is not really something people think of when they talk about humanities or social sciences.
Indeed, in Econ grad school I learned a lot about control theory, statistics, dynamic programming, etc. But I was never told to read Foucault, Levi-Strauss or even Marx, something that sociologists and other people in the humanities usually have at least a basic understanding of.
If we judge what is science by the level of quantitative rigour, then economics is the only social science.
There are definitely areas of overlap with the social sciences, especially these days. Behavioral economics (for which Richard Thaler won a Nobel Prize a couple of years back) grew directly out of behavioral psychology, for example.