
Retraction of 60 articles implicated in a peer review and citation ring - phreeza
http://www.uk.sagepub.com/aboutus/press/2014/jul/7.htm
======
phreeza
I know from previous comments that someone affiliated with the SAGE group is a
contributor to HN, though I seem to be unable to find those comments now. I
wonder if they can share any details on how this was discovered: was it just
something seeming fishy to an editor, or some kind of fraud detection
algorithm?

~~~
dougmccune
I'm on the board. I actually don't have any more information about this right
now, but I'll be following up to see what I can learn. I imagine that similar
to voting ring detection on a site like HN, the specific details about what
tipped us off are probably going to remain semi-private. But the issue of
fraud detection in academia is incredibly interesting and a good example of
where publishers (both via software and human intervention) are playing an
important role. I'll see if we can share any more of the technical details
publicly.

~~~
jcr
It seems safe to assume most "known" ring detection systems in use are
private/secret, since their goal is to prevent abuse and revealing their
details might thwart their effectiveness. Nonetheless, at least some ring
detection work is being done publicly, and I've recently been trying to learn
more about the methods being used. The following should be helpful to anyone
who has to fight this sort of abuse (see the toy sketch after the links):

"Doppelganger Finder: Taking Stylometry To The Underground"
[https://www.eecs.berkeley.edu/~sa499/papers/oakland2014-unde...](https://www.eecs.berkeley.edu/~sa499/papers/oakland2014-underground.pdf)

Code: Find multiple accounts (doppelgangers) of a user
[https://github.com/sheetal57/doppelganger-finder](https://github.com/sheetal57/doppelganger-finder)

[https://psal.cs.drexel.edu/index.php/JStylo-Anonymouth](https://psal.cs.drexel.edu/index.php/JStylo-Anonymouth)
[https://github.com/psal/jstylo](https://github.com/psal/jstylo)
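
For anyone curious what the simplest version of this looks like, here is a toy
Python sketch of one core idea in stylometry: character n-gram profiles
compared by cosine similarity. To be clear, this is not the Doppelganger
Finder algorithm itself, which uses much richer feature sets and a
probabilistic model; it only illustrates how writing-style similarity can be
quantified at all, and the sample texts are made up.

    # Toy stylometry sketch: compare authors by character n-gram profiles.
    # Not the Doppelganger Finder method; just the underlying intuition.
    from collections import Counter
    from math import sqrt

    def char_ngrams(text, n=3):
        """Frequency profile of character n-grams in whitespace-normalized text."""
        text = " ".join(text.lower().split())
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(p, q):
        """Cosine similarity between two frequency profiles."""
        dot = sum(v * q[g] for g, v in p.items() if g in q)
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    # Texts by the same (hypothetical) author should score closer to each
    # other than to an unrelated author's text.
    a = char_ngrams("The methodology is sound but the evaluation section is weak.")
    b = char_ngrams("The methodology seems sound, though the evaluation is weak.")
    c = char_ngrams("Fascinating results! I strongly recommend acceptance as-is.")
    print(cosine(a, b), cosine(a, c))  # expect the first value to be larger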

~~~
dougabug
Sounds like something akin to "security through obscurity." The perceived
advantages of proprietary secret security measures are often dwarfed by the
benefits of opening up a system to careful scrutiny.

~~~
danielweber
Knowing that "security through obscurity" is a bad thing is a fine heuristic,
especially for beginners.

Thinking "security through obscurity" is automatically and always bad is
incorrect.

The people running this detection software don't do it to satisfy your
intellectual curiosity. They do it because they are attempting to prevent
attacks on their systems.

------
gwern
Papers coming out of China and Taiwan need to be regarded skeptically; this
reminds me a lot of the fairly recent "China's academic scandal: call toll-
free hotlines to get your name published"
[http://www.wired.co.uk/news/archive/2013-12/02/china-academi...](http://www.wired.co.uk/news/archive/2013-12/02/china-academic-scandal)

~~~
seanmcdirmid
Thanks a lot! Not all of us publishing out of the PRC are corrupt, and any
bias by PC members based on country of origin would be unethical and probably
racist!

~~~
jerf
Your response is the natural instinct, but it's ultimately misplaced. It is
completely rational for people to notice that corruption is highly correlated
to certain locations in the world, and you're not going to be able to just
assert your way into making that false. The correct response is to make it
irrational by making it _not true_ , and working to root out the corruption in
question.

It's _hard_. I don't even have words to describe how hard that is. But it does
at least stand a chance of working.

(This seems to be ground deeply into the human psyche. In general, a group of
any kind is best off in the long term if the group polices itself for
misconduct more aggressively than any outsider could. But the human instinct
is to band together, assist each other to various degrees in covering over
problems (often not even deliberately per se, but as a side effect of the
system's structure), and band together to attack anyone external who notices
anything amiss. This, alas, works quite well in the short term, but is death
in the long term.)

~~~
jrochkind1
> It is completely rational for people to notice that corruption is highly
> correlated to certain locations in the world

What you just 'notice' is anecdotal and biased by, um, your biases. It's not a
correlation until you actually demonstrate it in a validly constructed
analysis.

Of course, the trick is that legitimate attempts at validly constructed
analyses are also affected by the biases of the analyzers. When your
conclusion is something that 'everyone knows' (but not necessarily based on
legitimately analyzed evidence), I think it's especially at risk of
confirmation bias (when you try to muster the legitimately analyzed evidence).

It would be very interesting to find historical examples in the US of
something that, for a while, scientists simply believed in, and believed that
valid, objective scientific experiments had shown to be so -- but then later
the scientific community came to the opposite consensus. (ESP in the early
20th century might be one example, believe it or not. Look it up!) Science
isn't quite as foolproof as one might like or assume.

~~~
judk
For example, the scientific "fact" that Caucasians are a superior race to
Negroes and Mongoloids.

~~~
jrochkind1
Yeah, good point, I don't know why you got downvoted -- at one point, that was
consensus science -- and there was all sorts of scientific research, the
overwhelming majority of scientific evidence, that people believed
demonstrated it.

At the moment, consensus science is that there isn't even such a valid object
of inquiry as 'race', so you can't even make claims about it. (Although this
is changing in interesting ways too, post-genomics)

------
new_test
Just randomly looked up one of the publications. It's not my area, so I can't
talk about the meat of the paper itself, but what does the discussion section
have to do with anything?

[http://waset.org/publications/11354/applying-half-circle-fuz...](http://waset.org/publications/11354/applying-half-circle-fuzzy-numbers-to-control-system-a-preliminary-study-on-development-of-intelligent-system-on-marine-environment-and-engineering)

~~~
GrantS
It's more of a motivation or justification of the work than anything -- they
want to apply the paper's methods to control marine-related engineering
systems which are becoming more important to the future of Taiwan.

It sounds like a non-sequitur because it was probably copied and pasted from
the grant proposal to get funding for the work. The acknowledgments show that
funding came from the "National Science Council of the Republic of China,
Taiwan", so that organization would naturally want to know why they're funding
work on fuzzy numbers and how it benefits the country.

~~~
new_test
Right, it just seemed like they didn't make any connection whatsoever between
this particular research and a concrete naval/engineering problem. Kind of
like saying "look this is math, engineering is important to Taiwan, engineers
use math".

~~~
GrantS
Ha, yes, that is a pretty accurate assessment of what they did there.

------
forgottenpass
I don't know anything about SAGE outside of what I've read in a psychiatrist's
blog, but isn't the reputation of SAGE Publications' journals trashy already?

~~~
GrantS
Strangely enough, I just started getting spam from SAGE this morning about
subscribing to their Sociology journals. My area is computer science so it
seems they're just spamming university email addresses. Not too classy.

------
OedipusRex
Why would someone do this?

~~~
p4bl0
Because of bibliometrics.

Academics are judged on stupid metrics based on the quantity of their
publications and citations rather than on the quality of their research.

The idea of bibliometrics is that you can compare on a single dimension
individuals who do not work on the same subject, who use different methods,
who do it in different communities which have different publication habits,
etc. Of course this is completely stupid.

It's like saying that a cyclist who has participated in 10 Tours de France is
better than a judoka who has participated in only 3 Olympic Games. Only here
you also give the cyclist a job and the funding to train properly, and tell
the judoka to go f* himself until he participates in more Tours de France.

Of course, the thing is that you can't even have good criteria, because any
criteria you set necessarily become the goal: instead of doing good research
and publishing what is necessary, when and where it makes sense, academics
are forced to do research and adopt a publication policy that satisfies the
arbitrary criteria. And even in the unlikely case where the criteria match
what's actually better, it is impossible for the same criteria to fit so many
different fields of research.

The only good solution would be to judge academics by the two or three most
relevant publications they have on the matter of the grant or job. That would
require reading the publications, and if possible the report of the peer-
review process (which would be enabled by _open peer review_ ).

~~~
dkhar
Here are some examples of bibliometrics:

\- [http://en.wikipedia.org/wiki/H-index](http://en.wikipedia.org/wiki/H-index)
The big one. Used to judge researchers.

\- [http://en.wikipedia.org/wiki/G-index](http://en.wikipedia.org/wiki/G-index)
Another one for researchers. I haven't really come in contact with this one
very much.

\- [http://en.wikipedia.org/wiki/I10-index](http://en.wikipedia.org/wiki/I10-index)
Google's own little metric: the number of publications with at least 10
citations.

\- [http://en.wikipedia.org/wiki/Impact_factor](http://en.wikipedia.org/wiki/Impact_factor)
Older. Used to judge journals.

The first and last articles have fittingly long "criticism" sections. A key
point to make here is that these bibliometrics succeed at their basic goal
(providing one platform by which to judge all academics/publications) at the
price of misaligning incentives.
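
For concreteness, here is a small Python sketch of the standard definitions of
three of these indices, computed from a list of per-paper citation counts. The
example numbers are made up:

    # Standard definitions of three bibliometric indices, computed from a
    # list of per-paper citation counts.

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        cites = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

    def g_index(citations):
        """Largest g such that the top g papers together have >= g^2 citations."""
        cites = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, c in enumerate(cites, start=1):
            total += c
            if total >= rank * rank:
                g = rank
        return g

    def i10_index(citations):
        """Number of papers with at least 10 citations (Google Scholar's metric)."""
        return sum(1 for c in citations if c >= 10)

    papers = [48, 33, 30, 12, 10, 9, 4, 2, 1, 0]  # invented citation counts
    print(h_index(papers), g_index(papers), i10_index(papers))  # 6 10 5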

As a result, you _do_ get to judge all academics on an even platform, but that
platform is a weighted average of how well they do research and how well they
play the politics/popularity game, scaled by the popularity of their field
(good luck finding me a researcher in theoretical plasma physics with an
h-index over 30 -- I'm not sure there are even 30 theoretical plasma
physicists in the US).

On top of this, sites like ResearchGate (like LinkedIn, but for researchers)
give people their own score, which is pretty opaque, and display it in bright
green next to everyone's profile picture. It introduces a lot of competition
to a field that doesn't really need it.

~~~
gd1
Academia really is a massive wank

------
drpgq
How good a journal is the Journal of Vibration and Control?

~~~
giarc
The reputation is a bit shaky.

 _leaves room_

------
louhike
At first, I thought it was THE Peter Chen[1].

[1]
[http://en.wikipedia.org/wiki/Peter_Chen](http://en.wikipedia.org/wiki/Peter_Chen)

------
Balgair
Lest we forget that all humans are created equal:
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/)

Also, for the back and forth links on Ioannidis's 2005 paper:
[https://en.wikipedia.org/wiki/John_P._A._Ioannidis#Research_...](https://en.wikipedia.org/wiki/John_P._A._Ioannidis#Research_findings)

~~~
Chinjut
This is a very different sort of thing, isn't it?

~~~
Balgair
True, but there is a fair bit of demonizing the PRC here. Not that it's
unjustified, but we here in the west have our own issues as well.

------
dang
Url changed from [http://retractionwatch.com/2014/07/08/sage-publications-bust...](http://retractionwatch.com/2014/07/08/sage-publications-busts-peer-review-and-citation-ring-60-papers-retracted/), which points to this.

------
logfromblammo
It looks like there's an industry out there that would enrich a few coders
and crypto experts with a tidy sum of money for a more tamper-resistant
academic reputation system.

It could be worth more than Bitcoin. I only wish that I already knew more
about crypto so that I could both get it done correctly _AND_ be the first
solution on the market.

~~~
p4bl0
Having a "reputation system" at all is already problematic in the case of
research.

Research should not tend to satisfy arbitrary criteria. Nowadays, with this
h-index and impact factor crap, you see for instance journals which require
(more or less officially) that authors of an accepted paper cite at least one
or two other recent (1 or 2 years old max) papers from the same journal.

Any reputation system will work only for its own benefit and not for the
benefit of research.

I'm not at all saying that academics should not be evaluated. It's just that
you can't judge the quality and pertinence of state-of-the-art work across so
many domains of research with one (or even a few) number(s).

~~~
AnimalMuppet
There are more papers than anyone can read. So how do you tell which ones are
important?

And how do you objectively tell which professors are better than others at
doing research?

Whatever method you use, someone will try to game it.

~~~
logfromblammo
If they game you, then game them back. Make the publication (and mitigation)
of potential attack vectors against the reputation system worth more points
than could ever be earned by exploiting the flaw, and serve exploiters a
large penalty whenever they are caught.

All you need to do is make the expected value of an attack against the system
worth less than just doing some actual research.
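
To make that expected-value condition concrete, here's a toy calculation;
every number in it is invented purely for illustration:

    # Toy expected-value comparison: gaming the system vs. honest research.
    # All probabilities and payoffs here are made up for illustration.
    p_caught = 0.3        # chance the attack is detected
    reward = 10.0         # reputation gained if the attack succeeds
    penalty = 50.0        # reputation lost when caught
    honest = 5.0          # reputation from just doing the research

    ev_attack = (1 - p_caught) * reward - p_caught * penalty  # 7.0 - 15.0 = -8.0
    print(ev_attack < honest)  # True: at these settings, cheating doesn't pay

    # But if attackers believe p_caught is only 0.05, the expected value
    # becomes 0.95*10 - 0.05*50 = 7.0 > 5.0, and the attack looks worth it.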

~~~
chaqke
"All you need to do is make the expected value of an attack against the system
worth less than just doing some actual research."

You'll always have some people willing to gamble on the risk to get the
reward, especially if there's only a low chance that they face any penalty at
all. On top of the conscious choice, humans are bad at estimating
low-likelihood risks.

