
An Elaborate Academic-Journal Hoax - lukev
https://www.chronicle.com/article/Sokal-Squared-Is-Huge/244714
======
Sniffnoy
> Karl Steel, an associate professor of English at Brooklyn College and the
> Graduate Center of the City University of New York, called the trio’s work
> "simply not rigorous research" and described three objections to it. It is
> too narrow in disciplinary scope, he said. It focuses on exposing weaknesses
> in gender and ethnic studies, conspicuously ideological fields, when that
> effort would be better spent looking at more-substantive problems like the
> replication crisis in psychology, or unfounded scholarly claims in cold
> fusion or laissez-faire economics.

Yeah... that's the point! The whole _point_ here is that you have these
"conspicuously ideological" fields or subfields, which are _not_ doing real
scholarship, _not_ interested in truth, but which rather are pushing a
specific position and attempting to label as evil and shout down anyone who
disagrees; and yet they're able to get away with claiming that they're doing
real work, that this is what real scholarship _is_, and it's seriously
damaging, especially (as the hoaxers note) to anyone who does want to do real
truth-seeking work in these areas.

Sure, to those of us who realize what's going on, more hardly needs to be
said. But _lots of people don't_, and the "grievance scholars" are slowly
convincing the rest of academia that real scholarship consists not of neutral
methods and honest argument, but of agreeing with the grievance scholars in all
things, and finding ways to inject that agreement whether it's relevant or not.
The point
of this hoax is to make obvious what already _should be_.

To put it another way -- "grievance studies" is, as Dr. Steel says, an easy
_epistemological_ target. But it's a hard _social_ target. It's built up a
layer of protection around itself consisting not of truth but of social
pressure and demands to conform or be ostracized. No claim, true or false,
should be so protected, and that shield needs to be shattered. It's only an
"easy target" in the imaginary world where people are rational and such
transparent bullshit doesn't work. But in reality, it does, which is why
there's a need to puncture it.

~~~
YeGoblynQueenne
> The whole point here is that you have these "conspicuously ideological"
> fields or subfields, which are not doing real scholarship, not interested in
> truth, but which rather are pushing a specific position and attempting to
> label as evil and shout down anyone who disagrees;

Yeah, look -- it's not that simple. In academia, we have to allow discussion
and debate to be carried out freely, but obviously it doesn't make sense to
dictate the subject of a debate and call it "free"! And the only people who
are qualified to choose the subject of the debate are academics themselves --
because nobody else has the expertise to make that decision.

So if academics choose to debate identity politics and "Theory" -- well, that's
what they get to debate. If the debate is done in a rotten manner that is not
conducive to good scholarship, now that's another matter, but if the question
is whether those fields should exist, the answer is yes, and the reason is that
they do (it's a tautology, OK?).

It's a feature, not a bug. It's a bit like how in a democracy you must allow
everyone their political opinion, even those people who are against democracy
itself. In academia, we must allow all academics the ability to investigate
the subjects of their interest, else we don't have academia, we just have a
closed club of people who agree with each other -- so, just like the fields
targeted by the hoax.

~~~
Sniffnoy
Huh? I'm not claiming that these topics should be off-limits; I'm saying that
the actual existing departments presently studying them are rotten. In short
I'm not sure we're disagreeing here.

Edit: If you like, you can replace my use of "fields and subfields" above with
"departments and segments of such". That's probably fairer in other ways, too.

~~~
YeGoblynQueenne
Yeah, OK, maybe I misread your comment.

------
haberman
This isn't simply a case of poor quality control. Sure, hoax articles have
made it through some (largely pay-to-publish) journals in other fields from
time to time. And yes those incidents should inspire reflection and
correction, as well as a hit to the reputations of such journals.

But this is much more. This is about a set of ideas that pass for expert
scholarship despite having lost any tethering to the real world. Just look
at this reaction to the hoax:

> Some of the pieces were indeed outlandish (and not all were accepted). But
> it appears that some were simply based in premises (e.g. social
> constructionism) or political principles (e.g trans equality) that the hoax
> authors find problematic but we do not.

[https://twitter.com/alisonphipps/status/1047421076509261824](https://twitter.com/alisonphipps/status/1047421076509261824)

Note that word "problematic." I don't want to know whether social
constructionism is considered "problematic", I want to know whether it is
_true_.

There is an actual real world out there that scholars are purporting to study.
But when science and empiricism are considered oppressive, all we are left
with is posturing and grandstanding over ideology.

~~~
lukev
If your criterion for _publication_ is what is epistemologically _true_, you
are going to find it difficult to publish in any domain except possibly pure
mathematics or physics.

~~~
haberman
Social Constructionism and Blank Slatism are theories that can be tested.
There is plenty that can be published on these topics that is based in
evidence rather than ideology.

~~~
PeterisP
However, if you fake (or, as in this case, completely make up) your evidence,
then you'll be able to get false things published - the peer review process
has no ability to verify whether what you claim is _true_; it does not and
will not re-test the theory. That might be done by other teams after your
results are published, but not before.

------
tptacek
Discussed earlier on HN:
[https://news.ycombinator.com/item?id=18127811](https://news.ycombinator.com/item?id=18127811)

The hoaxers' writeup is, I think, pretty misleading. Their work tended to get
rejected from high-impact journals. They submitted multiple stories to some
journals (notably, to _Hypatia_, which they brag about getting accepted in)
and leave a clear impression that their most ridiculous work was what made it
in, rather than a much more banal paper. They quote almost entirely favorable
feedback --- including, in their earlier writeup, from papers that were
ultimately rejected! They brag about papers that were ultimately rejected.
They submit approximately the same paper to multiple journals and are rejected
by all but the very most marginal. In at least one case, they cite the same
reviewer multiple times, leaving the impression of a consensus.

If you don't think that's misleading, it's worth knowing that Yascha Mounk,
cited warmly in this piece, was clearly taken in, dazzled on Twitter by a
rejected paper from the hoax.

What have they really uncovered?

1. Peer review isn't replication and never could have been. Peer reviewers
get, at best, a couple hours, unpaid; submitting researchers get many months,
and are often grant-funded. The hoaxers took _ten months_ to get a few papers
accepted in marginal journals. The pop understanding of what peer review is
is, simply, broken: peer review depends on good faith, and it was never
realistic to believe otherwise.

2. There are lots of crappy journals. Didn't you already know that? The
market responds to the demand that professional academics publish, by creating
venues that ensure professional academics can reliably publish. That's why we
have measures like "impact factors". This is not _at all_ a phenomenon
isolated to the social sciences.

If the authors even _acknowledged_ these conclusions, they'd take all the
excitement out of their hoax. We'd all nod our heads and say, "of course".
Instead, they pretend to have taken down something they've dubbed "grievance
studies", while pretending that they couldn't have done the same thing to,
say, a marginal economics journal.

~~~
darawk
> The hoaxers writeup is, I think, pretty misleading. Their work tended to get
> rejected from high-impact journals.

What's your evidence for that? A number of their papers got accepted into
decently high impact journals, including Hypatia.

> They submitted multiple stories to some journals (notably, to Hypatia, which
> they brag about getting accepted in) and leave a clear impression that their
> most ridiculous work was what made it in, rather than a much more banal
> paper

The thesis of the paper that got accepted was that "criticizing feminist
theory is unethical". That doesn't seem particularly banal to me.

> They quote almost entirely favorable feedback --- including, in their
> earlier writeup, from papers that were ultimately rejected!

Who cares if the feedback was on papers that were ultimately rejected? Does
that change the fact that they received favorable feedback on an absurd
hypothesis?

> They submit approximately the same paper to multiple journals and are
> rejected by all but the very most marginal.

This is simply not true. They got accepted by high impact journals, including
Hypatia. You seem to think that the papers that got accepted were 'banal', but
that doesn't change the fact that they were accepted.

> In at least one case, they cite the same reviewer multiple times, leaving
> the impression of a consensus.

Are you saying that they do not cite a diverse sample of reviewers? Because
that is empirically false. What you seem to be doing is latching onto a single
instance where they cited the same reviewer twice, and trying to use that to
insinuate that they didn't cite feedback from lots of distinct individuals,
which is patently false.

> 1. Peer review isn't replication and never could have been. Peer reviewers
> get, at best, a couple hours, unpaid; submitting researchers get many
> months, and are often grant-funded. The hoaxers took ten months to get a few
> papers accepted in marginal journals. The pop understanding of what peer
> review is is, simply, broken: peer review depends on good faith, and it was
> never realistic to believe otherwise.

Is that so? So, you'd have no problem getting a nonsense paper accepted into a
math, computer science, physics or even economics journal? I would be
_extremely_ surprised if you could pass off work of such low quality in
journals of comparable impact factor in any of these disciplines.

~~~
gbhn
I read over a couple of the papers. The descriptions of them by the authors
did not match the content of the papers.

For instance, the one they call "Hooters" is described as "A gender scholar
goes to Hooters to try to figure out why it exists." The actual paper is
written as a record of transcripts of visits to a restaurant and an extraction
of particular conversation themes. At no point did I see the paper questioning
why Hooters exists -- the (apparently totally faked) data seems pretty
stereotypically motivated, which I think may be their point, but I'm not sure
short of a fraud investigation how reviewers are supposed to know that the
particular group this author claims to have visited Hooters with didn't say
the things supposedly directly recorded. That's a really dramatic claim for a
reviewer to make: "This conversation which the author claims is a direct
transcription from a recording couldn't possibly have happened because it
seems too stereotypical."

Yes, the methodology isn't that great, and the paper was rejected.

More generally, the "study" isn't selecting random journals to see if they
could defraud -- it is instead aimed at specific academic targets and a "test
to failure" scattershot mechanism is used. Thus we have no idea whether these
journals are any more discriminating than, say, PLOS or Nature.

Ultimately, I had some sympathy for the Sokal experiment in that it seemed to
say something about the interaction of literary theory and physics. I don't
get the same sense that there's much here other than "confirmation bias
exists, even among gender studies folks!!!" which seems like it wouldn't take
3 people 10 months to figure out, and could be done in a much more direct way,
and honestly isn't that shocking or rattling a conclusion.

~~~
darawk
> I read over a couple of the papers. The descriptions of them by the authors
> did not match the content of the papers.

So did I. The dog park paper more or less matches its description. The fat
bodybuilding paper is exactly as described. The dildos paper is as described.
Do you have any descriptions aside from the Hooters one that you take issue
with?

> Yes, the methodology isn't that great, and the paper was rejected.

Sure, but lots of them were not. I'm not sure why you would focus on a paper
that was rejected when they had 7 that were accepted.

> More generally, the "study" isn't selecting random journals to see if they
> could defraud -- it is instead aimed at specific academic targets and a
> "test to failure" scattershot mechanism is used. Thus we have no idea
> whether these journals are any more discriminating than, say, PLOS or
> Nature.

True, though I have a hard time believing you could get the same result in any
STEM discipline; I'm certainly open to data to the contrary.

> Ultimately, I had some sympathy for the Sokal experiment in that it seemed
> to say something about the interaction of literary theory and physics. I
> don't get the same sense that there's much here other than "confirmation
> bias exists, even among gender studies folks!!!" which seems like it
> wouldn't take 3 people 10 months to figure out, and could be done in a much
> more direct way, and honestly isn't that shocking or rattling a conclusion.

I think the point of all this is that the epistemology of the field is
intellectually bankrupt. It's not _just_ confirmation bias. It's that within
the epistemology of gender studies, they aren't even wrong.

~~~
tomlock
Could you reference some texts from within gender studies that you feel are
both representative of the field, and which outline the negative aspects of
the epistemology of gender studies that you are perceiving?

------
tinalumfoil
I've been more disturbed by the response to this than the papers themselves.
Saying, "well, of course we were duped -- you really expected the peer review
system not to utterly fail in the presence of a few bad actors?" does not
inspire my faith in the system.

~~~
eindiran
That's what stuck out to me as well. This quote: "I am so utterly
unimpressed," wrote Jacob T. Levy, a political theorist at McGill University,
"by the fact that an enterprise that relies on a widespread presumption of
not-fraud can be fooled some of the time by three people with Ph.D.s who spend
10 months deliberately trying to defraud it."

Isn't the ability to separate out the quality work from the garbage the point
of the peer review system? If the system was working as intended, the
reviewers _even with the assumption that they aren't being defrauded_ should
have been able to sort the papers into the garbage pile.

~~~
davemp
> an enterprise that relies on a widespread presumption of not-fraud

This section really gets me. How in the world is science supposed to rely on
“not-fraud”? Sure, if you fake your data you might dupe a diligent reviewer,
but this is why results are supposed to be reproduced.

Anything that relies on “not-fraud” seems inherently unscientific and straight
up naive.

~~~
tptacek
Sussing out invalid research is the purpose of _replication_. A replication is
itself a research project. It's not something that peer reviewers do.

~~~
davemp
Yes. My comment implies that you can get invalid research published. At least
it was intended to.

The problem lies in areas where replication and formalization are difficult.
There is certainly knowledge to be gained in these areas. Though calling the
fields sciences, when the scientific method cannot be effectively applied,
seems dubious.

The solution is not obvious, but it is clear that steps should be taken to
improve rigor in the less formal (less straightforward) bodies of scholarship.

~~~
foldr
It isn't possible to attempt replication of every scientific paper that's
published. A lot of people on this thread seem to have difficulty accepting
this fact, but the time and money to do that simply isn't available.

If a large percentage of scientists were to engage in deliberate fraud, there
wouldn't be any feasible way to regulate that problem away. All available
resources would end up being spent on scientists investigating each other for
fraud, and no actual science would get done. Bear in mind that in many cases,
establishing that an experiment is not fraudulent could require more time and
effort than the original experiment!

As with many other human institutions, we have no choice but to rely on most
people being honest most of the time.

------
currymj
Can someone comment on whether these are respectable journals or not, within
their fields?

Every few years someone will do a Sokal-hoax-esque thing in scientific fields,
but usually they're just getting a paper into very-low-quality predatory
journals that have only a nominal peer review process.

I'm wondering if that's what happened here. Do humanities fields even have the
same set of incentives leading to a proliferation of those sorts of journals?

~~~
Trombone12
Well, they claim to have targeted "top journals" in the field, but it's not
like "intersectional gender studies" is a particularly big field.

I looked up two journals they had published in and their articles got less
than one citation per year typically ([1]&[2]), not exactly a booming field.

[1]:
[https://www.scimagojr.com/journalsearch.php?q=145138&tip=sid...](https://www.scimagojr.com/journalsearch.php?q=145138&tip=sid&clean=0)
[2]:
[https://www.scimagojr.com/journalsearch.php?q=14798&tip=sid&...](https://www.scimagojr.com/journalsearch.php?q=14798&tip=sid&clean=0)

~~~
emmelaich
> _it's not like "intersectional gender studies" is a particularly big field_

And let's hope it stays that way. This will help.

------
lukev
Representative quote:

> "I am so utterly unimpressed," wrote Jacob T. Levy, a political theorist at
> McGill University, "by the fact that an enterprise that relies on a
> widespread presumption of not-fraud can be fooled some of the time by three
> people with Ph.D.s who spend 10 months deliberately trying to defraud it."

~~~
malvosenior
Alternative take: A system designed to detect bad research failed to detect
blatant falsehoods, algorithmically generated noise, rabid propaganda and
other seemingly easy things to detect. That PhDs worked on this doesn’t mean
the system didn’t fail spectacularly.

~~~
ModernMech
> doesn’t mean the system didn’t fail spectacularly.

You're mischaracterizing the system; it doesn't end with publication. After
publication, the science goes out to a wider audience of scientists who can
build on the paper or refute it if they find it's wrong. Sometimes this
process can take a long time and many cycles of publication, but that's just
the
nature of science.

We have to stop pretending like the peer review process is some kind of
magical filter of true science, or that it was ever intended as such.
That notion leads to the faulty idea that if it's published, it's true. Junk
science gets published, even if it's not disingenuous, and that's probably
true no matter how great you make the peer review system.

~~~
pochamago
What is the point of journals if quite literally anything can be published in
them? Isn't the point that they serve as a basic filter for obvious noise?
Shouldn't we be critical of a peer review process that doesn't function at
even the most basic level?

~~~
PeterisP
A filter that does filter out the vast majority of obvious noise is very
useful even if it lets something through. And the current peer review does
that: it throws out almost all of the spam; it pushes useful work of lower
quality/impact down to less selective venues (generally with more specific
subfields, so that I can read impactful work of domain X and less impactful
work of subdomain X.2 but not X.1 and X.3); and even for accepted papers the
process often results in significant improvements to them as a result of the
reviews.

Explicit fraud isn't that common, since it does have risks to the submitter
that far outweigh the (limited) benefits of getting an extra publication on
your CV.

------
int_19h
I see a lot of people complaining that it is done in bad faith, and doesn't
prove much.

But is there really much to prove? What's the baseline? I mean, we're talking
about a field where this got published - and it was not a hoax:

[http://journals.sagepub.com/doi/full/10.1177/030913251562336...](http://journals.sagepub.com/doi/full/10.1177/0309132515623368)

The people behind this hoax had an unenviable task of matching and beating a
paper like that (since, apparently, by itself it was not sufficient to prove
the point). How would you do so?

~~~
foldr
Bad papers get published in all fields. Linking to one bad journal article
shows nothing.

Here, for example, is a reference to a bad paper published in a field which is
not usually the target of this kind of criticism:

[https://academia.stackexchange.com/questions/9602/rediscover...](https://academia.stackexchange.com/questions/9602/rediscovery-of-calculus-in-1994-what-should-have-happened-to-that-paper)

[https://math.berkeley.edu/~ehallman/math1B/TaisMethod.pdf](https://math.berkeley.edu/~ehallman/math1B/TaisMethod.pdf)

~~~
int_19h
True, but when such papers are discovered, they're usually torn down by people
_in_ the field. In this case, it was the other way around - all meaningful
criticism came from the outside, and in response, researchers in the field
mostly circled the wagons.

Besides, the paper in question wasn't merely published - the author got a
prestigious NSF award that lists that paper:

[https://www.nsf.gov/awardsearch/showAward?AWD_ID=1253779](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1253779)

This, to me, implies that either such papers are considered broadly acceptable
in the field, or they're not actually read. Not sure which is worse.

~~~
foldr
Have you read the paper you linked to, or just the abstract and quotes in the
right wing press?

You're confused about how NSF grants work. The publication is listed as an
_output_. The grant was awarded before that paper was published. The listing
of the paper on that page implies nothing about its quality.

My example paper was torn down by mathematicians, not diabetes researchers.

------
panarky
It's not just the humanities, computer science is also susceptible to hoaxes
built from jargon.

_SCIgen works like an academic "Mad Libs" of sorts, arbitrarily slotting in
computer-science buzzwords like "distributed hash tables" and "Byzantine fault
tolerance."_

[http://news.mit.edu/2015/how-three-mit-students-fooled-scien...](http://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414)
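The "Mad Libs" mechanism can be sketched in a few lines. To be clear, this is
a toy illustration, not SCIgen's actual grammar -- SCIgen expands a much
larger hand-written context-free grammar, and the rules and buzzword lists
below are invented for the example:

```python
import random

# Toy context-free grammar in the spirit of SCIgen: uppercase tokens are
# nonterminals, expanded recursively until only plain words remain.
# (These productions and buzzwords are made up for illustration.)
GRAMMAR = {
    "SENTENCE": [
        "We present NOUN, a ADJ framework for NOUN.",
        "Our evaluation of NOUN proves that NOUN is ADJ.",
    ],
    "NOUN": [
        "distributed hash tables",
        "Byzantine fault tolerance",
        "the lookaside buffer",
    ],
    "ADJ": ["scalable", "metamorphic", "event-driven"],
}

def expand(token: str, rng: random.Random) -> str:
    """Recursively expand a grammar token, preserving trailing punctuation."""
    core = token.rstrip(".,")
    suffix = token[len(core):]
    if core not in GRAMMAR:
        return token  # terminal word: emit as-is
    production = rng.choice(GRAMMAR[core])
    return " ".join(expand(t, rng) for t in production.split()) + suffix

print(expand("SENTENCE", random.Random(42)))
```

Each run yields a grammatical-looking but meaningless sentence; seeding the
`random.Random` instance makes the output reproducible.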

~~~
ajshankar
That paper was accepted as a non-reviewed paper. There was no peer-review
process.

~~~
Apocryphon
Which one? Both that article and this one list multiple cases where SCIgen-
generated papers were waved through:

[https://en.wikipedia.org/wiki/SCIgen#Prominent_results](https://en.wikipedia.org/wiki/SCIgen#Prominent_results)

~~~
ajshankar
From the article:

In April of 2005 the team’s submission, “Rooter: A Methodology for the Typical
Unification of Access Points and Redundancy,” was accepted as a non-reviewed
paper to the World Multiconference on Systemics, Cybernetics and Informatics
(WMSCI), a conference that Krohn says is known for “being spammy and having
loose standards.”

------
pervycreeper
I've noticed a lot of hair-splitting in these comments over whether this was a
failure of peer review, whether to call the papers, considered as specimens of
scholarship in their appropriate fields, "fraudulent" rather than "bad", or
whether the number of hours they actually logged at the dog park is a relevant
concern. All of this is quite beside the point. The researchers passed a
version of the Turing Test among the reviewers of their papers.

All that we need from that point is an acknowledgement of the absurdity and
falseness of the papers themselves. This may not be easily forthcoming,
though. One of the reviewers of Social Text took the position, in the wake of
Sokal's hoax, that his paper constituted good scholarship, despite the stated
intentions of its author. (Compare Poe's Law.) However, when we admit this as
a possible response, we are in the territory of radical relativism, with no
way to adjudicate between the claims of these scholars and their critics, and
for that matter, those of religious fanatics, mentally ill delusional people,
confidence scammers, or indeed anybody at all. This is a position which seems,
at least, to be somewhat at odds with the fundamental scholarly enterprise.

At its heart, one has to cross the pons asinorum of admitting that, yes, the
papers are indeed absurd nonsense. At this point, it seems impossible to
convince some people to take this step.

If the critics' objections could be distilled into something with a semblance
of validity and germaneness, it would be that the researchers' methods were
too unfocused. But had they taken a narrower approach, the response could just
have been to rationalize and ignore. Fight fire with fire (generality, in this
case).

------
brianberns
> I don't want to know whether social constructionism is considered
> "problematic", I want to know whether it is true.

Not all legitimate scholarly work is based on a search for scientific truth.
E.g. Interpretations/analysis of Shakespeare's work.

------
neonate
The WSJ editorial linked to has more info:
[http://archive.is/llwSP](http://archive.is/llwSP).

------
Tade0
_The trio could have reached out to colleagues in physics and other fields,
but instead opted for "poor experimental design."_

They could have, but the Bogdanov brothers did just that years earlier.

------
entwife
Journals are for reports from scientists and academics. Would they be more
useful if there were some additional review of the author's reputation and the
reliability of the particular report? Upvoting or public signing of the
documents, or public signed comments. An expert in a field is more familiar
with the reputation of their colleagues than outsiders are.

------
edouard-harris
The authors state that this was a long term research project cut short. Which
suggests the question:

If you played this game long enough, undetected, could you get real scholars
to cite your fake research?

Future work, perhaps.

(I don't have any direct evidence of this, but I strongly suspect this is an
hypothesis that the authors would have wanted to test.)

~~~
PinkMilkshake
I hope there is someone out there right now, working towards a degree in
Critical <whatever> Studies for no other reason than to get a PhD publishing
nonsense and reveal it all afterwards.

------
smadge
If the authors’ intention was to show the dubious quality of research in these
fields, wouldn't a better approach have been to select high-impact articles
from prestigious journals in the fields which they believed exemplified the
problem, and then to write rebuttals/critiques arguing why they are flawed?

------
jancsika
> To answer these questions, this article engages feminist geography and
> broader feminist literature and draws on nearly 1000 h of public
> observations of dogs and their human companions conducted at three dog parks
> in Southeast Portland, Oregon, beginning on 10 June 2016, and ending on 10
> June 2017.

In Sokal's single hoax in 1996 he (and his buddies, I guess) made a claim,
followed it by a non-sequitur of true but uninteresting sentences about
physics, then repeated the claim. It was a sandwich with no meat in it, and
the (small, non-peer-reviewed) post-modern journal ate it up.

Sokal wanted to show that a certain style of post-modern writing substituted
scientific jargon for an argument (or at least it used references to esoteric
scientific fields as pretentious, uninteresting metaphors). The elegance of
parodying this style was that his submission contained no data at all. It
wasn't merely that the conclusion of the submission wouldn't be replicated. It
was that any conscious human should notice the complete lack of supporting
evidence for the claim!

The upshot of the journal accepting the article was that "the emperor had no
clothes," so to speak. Sokal went on to write a book showing how several
prominent post-modern thinkers used scientific non-sequiturs to make their
non-scientific ideas sound more weighty. (Just as he had in his hoax.)

These three authors, however, are going a different route from Sokal--
specifically, one that provides fake data as evidence of a claim. (Or I guess
they could have actually sat at dog parks for 1000 hours, but it doesn't
matter for the point I'm making.) By providing fake data, this hoax passes an
initial filter which the Sokal hoax did not-- namely, the filter that should
reject submissions which lack any supporting evidence whatsoever. That means
this hoax only reveals a lack of replication and/or verification, whereas the
Sokal hoax reveals a lack of both peer review _and_ basic reading
comprehension even by a non-physicist. That makes this a _weaker_ class of
hoax than the Sokal hoax.

Finally, the authors purport to be going after an entire "ideological slant"
in academia. Whatever one thinks of that aim, it is enormous and amorphous
when compared to the narrow aim of Sokal keeping scientific jargon from being
misused/misplaced.

So to recap:

* 1996: strong hoax successful against a single pomo journal, weak but persuasive claim against coopting scientific jargon for non-scientific aims

* 2018: weak hoax successful against 7 of (total number?) journals, strong and unpersuasive claim against all leftist academic fields

Now I'm curious-- did anyone in the late 90s criticize Sokal's hoax on the
grounds that his careful constraints would be ignored by future pranksters?

Edit: clarification

------
theseatoms
All of the above, because it exposes vulnerability to bad faith.

------
PinkMilkshake
I can't help but give a Gilfoyle-style heh... heh... heh....

What I hope has happened is the 20 revealed papers are not actually all of
them and that they do a Wikileaks style drip release, wait on a response, then
hit us with more much later.

~~~
amw
If they had those results, they would have put them front and center. Just
like if the TSA had ever caught a terrorist, it would be on the 24-hour
networks for days.

------
msabalau
Given that this is an experiment on human subjects, involving deception, the
IRB review should be interesting.

Or weren't they concerned about _that_ type of ethics?

~~~
gwern
What IRB approved the existence of these journals?

Or weren't they concerned about _that_ type of ethics?

------
seany
This is the best thing I've seen in quite a while.

------
throwaway5250
If the people at the top of your discipline can't tell the difference between
quality and inane garbage, what do you really have?

~~~
AnimalMuppet
Not a discipline.

------
debacle
This wasn't a hoax; it was social commentary and an experiment in prejudice.
That Nazi language can be published under a progressive guise should be
chilling to anyone who is a vehicle for these ideals.

~~~
ArchTypical
It wasn't a hoax. It's a confounding detail that a meta-commentary article is
employing.

~~~
AnimalMuppet
Sorry, I can't parse that at all. Could you be a bit clearer on what you're
trying to say here?

~~~
ArchTypical
The article is attempting to explain the events by characterizing them as a
type of deception when it shows the exact opposite (it's not a hoax that
papers are accepted on political aim rather than evidence). It's a small
detail, but telling as to the intent and veracity without knowing most of the
story.

------
ahelwer
Most published stuff is irrelevant crap that will never be cited or included
in the central canon. This is true even in computer science. Also, as we know
from computer security, system design becomes massively more difficult if it
requires hardening against malicious actors. The presumption of good faith is
a valuable component of any community that shouldn't be abused to score stupid
points.

For these reasons, Sokal Hoaxes (and their like) are damaging wastes of time
and the perpetrators should not be praised.

~~~
hnaccy
> The presumption of good faith is a valuable component of any community that
> shouldn't be abused to score stupid points.

Shouldn't we be concerned with the possibility of undetected bad actors who
are abusing it for more nefarious reasons?

------
outsideacademe
[throw away account as I am a recovering academic and wouldn't want to impact
colleagues who have not been able to escape]

Everyone is distracted by the politics here when the real story is the effort.

Three people, laymen in these fields, achieved a near tenure-track publication
list/ratio in a little over a year. I have a STEM PhD and I know that three of
my friends and I couldn't do the same if we tried in a closely related STEM
discipline, much less an entirely different field.

Hypatia is _the_ journal of record for feminist philosophy (and one of the few
respected feminist journals in general). It should not be possible to go from
0 to published in a "real" top tier journal. If the paper is "not too crazy"
that's even more damning! They produced a "real" paper as a joke? Do you
really think that these same authors could get a paper published in The
International Journal of Computer Vision in a little less than a year _even
if_ they were using fraudulent data? It would take them two years to know what
they could and couldn't fake without being caught.

If the liberal arts don't want the sneering dismissal of their fields as low
effort to become culturally entrenched they need to dump or reform the "X
studies" departments and journals.

