
Spin found in clinical trial abstracts in top psychology and psychiatry journals - Indirector
https://sciencebeta.com/spin-psychiatry-journals/
======
gervase
I think this is a natural outcome of the peer review process; you're not
targeting research papers at statistical models, you're targeting them at
human reviewers. This kind of behavior is equally prevalent in many fields,
_including_ computer science.

I recently reviewed a paper from a research team in China, on a scalable and
ubiquitous ad hoc sensing platform. The presented "spin" was that it could
potentially be useful for monitoring power distribution networks, thereby
reducing blackouts, which was certainly true. However, it could also have been
used for an inescapable, totalitarian panopticon (this part was implicit and
unstated).

As a reviewer with explicit instructions to evaluate the ethical implications
of the work, is it my responsibility (A) to interpret the stated "spin", (B)
extract and evaluate strictly the technical contributions of the work, or (C)
all of the above, and the implicit "anti-spin" interpretation that was
provided courtesy of my Western worldview?

And here's a twist - six months previously, a similar situation occurred _with
my own work_, in which a technology that I envisioned (spun) being used for
disaster relief caught the eye of an alphabet agency for low-cost, secure data
collection in third world theaters. This was deeply shocking to me at the
time, because I hadn't considered this potential use case when I was
developing the system. I had spun its impact so effectively in my own mind, _I
didn't even see the negative alternative_.

It's my observation that human brains will inject their own spin if it doesn't
already exist, using the social and cultural norms to which that brain is
accustomed. I don't hold a researcher's desire to sell their work as
effectively as possible against them, especially if the person to whom they're
selling most effectively is themselves.

~~~
gentran
I'm a novice in my IT career. If you're able and willing, would you mind
explaining to me how software (spun as it may be) for disaster relief, could
be used for data collection outside the scope of the objectives you had in the
name of disaster relief? If this question misses the mark you're welcome to
ignore it. Thanks!

~~~
AstralStorm
It's probably image analysis from overflight, if I were to guess, or an ML-
augmented database.

------
Noumenon72
This article might be spin itself, given how it captures attention with the
accusation but hides what was actually tested with the circumlocution "a
previously published definition of spin". For all I know, they just counted
every abstract that contained the word "may" and called it spin.

------
pacbard
The issue is probably two-pronged:

1. There is pressure from the department (and funders, as the article
mentions) to produce publications. The need to publish leads researchers to
spin null results as a way to "cash in" the time and effort put into a
research project (I'm assuming good intentions here on the part of the
researchers).

2. Articles get published if the authors can argue that their work makes a new
and unique contribution to the field. This argument usually rests less on the
article's results and more on the discussion of those results. I am not
surprised that authors spin the results in their discussion sections.

As a side note, most readers only read the introduction and discussion section
of a paper. Those two sections usually have the most spin, as they are written
to "hook" readers into the paper (and, to an extent, to be accessible to a
non-technical reader, sometimes from outside the field). The lit review,
methods, and results sections are usually drier, as they report on the
technical aspects of a research project.

~~~
your-nanny
huh? we read the methods and results first, then the spin (exception, the
abstracts)

~~~
Fomite
Several major journals, including Nature, put the methods at the end.

~~~
your-nanny
so? do you read scientific papers in order? I don't.

------
SubiculumCode
Abstracts are typically between 150 and 250 words, yet studies, especially
clinical trials, are frequently complex affairs. Fitting nuance into an
abstract is often just not possible, and frankly, the clearest, most succinct
language usually oversells the results.

------
your-nanny
Ah, but if they had not found spin we would never have heard of their study...

------
quickthrower2
Also: "Spin" found in over 50% of top quantum mechanics journal abstracts.

------
notadoc
They're called soft sciences for a reason: they're more about interpretation
and opinion than hard, irrefutable evidence. So this is not surprising; it's
probably quite a bit higher than 50% too, and in all of the soft-science
fields where agendas or political motives may be present: sociology,
psychology, political science, some archaeology, etc.

~~~
wallace_f
If a field can't follow the scientific method most of the time, should its
work be considered science at all?

~~~
mattkrause
Are you actually arguing that randomized controlled trials aren't following
the scientific method?

~~~
wallace_f
Clinical trials are not truly repeatable scientific experiments. They're
really just controlled studies, which are the best that can be done in the
absence of the possibility of a true controlled experiment. In psychology,
however, this is just the tip of an iceberg of problems that make such
research unscientific.

~~~
mattkrause
I don't follow.

An experiment needs to be repeatable _in principle_. You don't actually have
to re-run it with the exact same specimens or subjects; they can be sampled
from some larger population that you're studying (e.g., patients with
depression). Nevertheless, you can often do something like a cross-over design
that lets you repeatedly test the same patients in different conditions.

