
PSU punishes prof who duped academic journal with hoax 'dog rape' article (2019) - nec4b
https://www.campusreform.org/?ID=13489
======
jxramos
Looks like it boiled down to the IRB's take on the matter.

        The school asserted that Boghossian had unethically
        conducted research on human subjects with his 
        experiment. According to the school’s Institutional
        Review Board, Boghossian would have needed to obtain 
        “informed consent” from the individuals reviewing his
        hoax articles in order for his actions to have been
        considered ethical.

Does this catch-22 mean universities will no longer tolerate investigative
meta-research? Even if the human subjects are kept anonymous, are they still
afforded IRB protections? Is there anything we can know a priori to establish
that those subjects aren't exposed to anything high-risk? Or is even allowing
that a bad direction to take?

~~~
Kednicma
I wonder how this could be done ethically but in a way that doesn't allow peer
reviewers to cheat. One thing that comes to mind is ABX testing. Show peer
reviewers both a paper which they should accept, and a paper which they should
reject, and then give them questionable papers to classify.
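The calibration idea above can be sketched in code: seed a review queue with known-accept and known-reject "control" papers, score each reviewer against the controls, and only then weigh their verdicts on the genuinely questionable submissions. This is a hypothetical sketch; all names and data here are invented for illustration.

```python
# Hypothetical sketch of reviewer calibration via known-good / known-bad
# control papers. Nothing here reflects any real journal's process.

def reviewer_accuracy(verdicts, controls):
    """Score one reviewer against the calibration set.

    verdicts: {paper_id: 'accept' | 'reject'} from one reviewer.
    controls: {paper_id: 'accept' | 'reject'} ground truth for the
              control papers only (real submissions are absent here).
    Returns the fraction of control papers judged correctly, or None
    if the reviewer saw no control papers.
    """
    scored = [pid for pid in controls if pid in verdicts]
    if not scored:
        return None
    hits = sum(1 for pid in scored if verdicts[pid] == controls[pid])
    return hits / len(scored)

# p1-p3 are planted controls; q1 is a genuine, questionable submission.
verdicts = {"p1": "accept", "p2": "accept", "p3": "reject", "q1": "accept"}
controls = {"p1": "accept", "p2": "reject", "p3": "reject"}
print(reviewer_accuracy(verdicts, controls))  # 2 of 3 controls correct
```

A reviewer who scores poorly on the controls gives you a measured reason to discount their verdict on `q1`, without anyone having to fake data in the planted papers.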

At what point does it become unethical simply to be a peer-reviewer for a
broken journal? That's the implicit question that should be asked in this sort
of ethics review. It does no good to punish just a submitter of papers if the
entire journal is unhealthy.

~~~
tptacek
Low-impact-factor journals that accept papers claiming 1000 hours of
ethnographic field study aren't "broken" just because the data was fake,
because it isn't the premise of peer review to reliably spot fake data.

You can make the argument that there are too many low-impact journals and
schools shouldn't be allocating resources towards keeping them going; I don't
think it's a super strong argument but it's at least coherent. But more
importantly: it won't have anything to do with this hoax; if you're willing to
fake data, you can get accepted in higher-impact journals outside the social
sciences as well.

~~~
Kednicma
I'm not sure how fake data is substantially different from any other part of a
fake paper. Just like how randomly-generated sentences might sound reasonable,
faked data might look like real data. In both cases, we expect peer review to
engage and to consider how data was sourced. This happened in the "Who's
Afraid of Peer Review?" incident [0], where the offending papers were
intentionally designed to have many red flags that would have helped reviewers
understand that the data was bogus.

More worryingly, data cannot model itself, so the quality of data should be
largely irrelevant to any models which purport to explain it; as a result, a
paper's contribution to science needs to be structured to work with other
results, and cannot simply proclaim that its modelling is correct because some
given equation fits the observed data. To pick a situation where your
expertise shines, imagine that somebody submits a paper about a 3-SAT solver,
and they include data that shows their solver doing not just extremely well on
standard problem sets, but asymptotically scaling better than exponentially.
The reason that you might doubt the honest presentation of the data is because
you know, from having studied the field, that such behavior is unlikely.

[0] https://en.wikipedia.org/wiki/Who%27s_Afraid_of_Peer_Review%3F

~~~
tptacek
No, we don't expect peer reviewers to investigate the sourcing of data. A
journal reviewer might be assigned a workload of dozens of papers to be
completed, in snippets of spare time, over the course of a month or two. Peer
reviewers aren't equipped to do that kind of vetting; that's what replication
is for, and replications are their own research projects. It's really
surprising what kinds of expectations people seem to have about peer review
and what it can reasonably accomplish.

"Who's Afraid Of Peer Review" targeted fee-charging open-access journals,
which are financially incentivized to accept random papers. Journal reviewers
are generally unpaid postgraduate academics.

~~~
Kednicma
I don't want to quibble with your experiences of peer review; your experiences
are valid. But what does a journal provide, then, exactly? Are they relics of
the era before cheap self-publication?

~~~
tptacek
"A panel of experts from this field read this paper and felt it likely to make
significant contributions". For high-impact journals, you can add "so much so
that it was selected to be among the 25% of submissions this cycle to be
published".

What peer review does _not_ say is "a panel of experts carefully vetted this
paper to ensure that its conclusions are accurate". In fact, part of the
premise of replication projects is that peer review _doesn't_ say that.

I've only been a reviewer a couple times, and only in computer science. Other
people on HN have experience reviewing for other hard science venues. Maybe
their experiences are different. But I think it's notable that you don't hear
that in comments about what this hoax exposed; has there been any HN comment
from a reviewer saying that they were expected to rigorously vet submissions,
the way a PhD board does with a thesis?

Daniel Engber makes many of these points better than I do, here:
https://slate.com/technology/2018/10/grievance-studies-hoax-not-academic-scandal.html

~~~
thu2111
The problem is not so much the system as how it's presented: the media
presents "peer-reviewed study" as a gold standard, practically synonymous with
correct. And unfortunately academia absolutely encourages this practice.

The message that peer review is largely meaningless hasn't really been made by
academia, for obvious reasons, and thus stupid papers getting accepted by
journals will continue to be interesting and news-worthy until people get the
message that peer review doesn't mean much.

The world will then move on to "replicated" as a gold standard. This will be
better but not by much. Just a few weeks ago Imperial College London published
a press release claiming their Report 9 results from their COVID-19 simulator
program had been replicated. Worth a press release because after it was open
sourced it was discovered to be filled with non-deterministic behaviour, even
with fixed RNG seeds.

Unfortunately the press release was fraudulent. The report they cited as
evidence of replication was by a friendly academic. He said he was able to
replicate the results, then admitted every number he got out was different,
some by 10%-25%. For the output of a deterministic computer simulation, the
allowable difference is 0%. Despite this, Nature and other outlets proceeded
to report that ICL's COVID model was "replicated".

In the end, academia is strongly incentivised to _appear_ credible, not to
actually be so. _Being_ credible requires that you can reliably turn your
findings into something useful, whereas in the realm of pure theory you can
never be sure your findings actually hold up in reality. Corporate research
has this attribute (eventually); academic research doesn't, as academics are
rewarded for writing clever-sounding papers, not for being correct.

~~~
tptacek
This seems totally fair!

------
lwneal
The PSU Grievance Studies Affair [1] had the same objective as the 1996 Sokal
Affair: to test peer review by submitting obviously bad papers that should be
rejected, and seeing how many of them are accepted.

The motivation behind this is entirely reasonable. Known-bad "true negative"
articles as well as known-good "true positive" articles ought to be regularly
submitted to every peer-review process as a test of its accuracy. A great
example of a similar
idea was the 2014 NIPS consistency experiment [2].

However, there is a problem with the "dog rape" hoax article [3] as a test of
the peer review system. Specifically, the author in that paper claims to have
spent over 1000 hours carefully cataloguing the behavior of ten thousand
people and dogs over the course of one year. The paper then proceeds to
produce silly but surprising statistics: female dogs are 70% more likely to be
leashed than male dogs, 100% of dogs with shock collars are male, 847
instances of dogs fighting were observed, and so on.

The problem here is that the data was falsified: the observations and
experiments claimed never happened, the numbers were all made up. The article
was likely accepted because of its data, but the data was a lie. If this data
had been real, it might have been a minor but useful study relevant to
economists, sociologists, or urban planners, regardless of how silly or made-
up the conclusions at the end were.

Peer review works based on the assumption that the author is telling the truth
about what experiments they conducted and what numbers they measured. The
grievance studies authors could have submitted fake and silly hoax papers
without falsifying data: that's what Sokal did, and that would have been a
valid experiment testing the peer review system. But that is not what the
authors chose to do.

[1]
[https://en.wikipedia.org/wiki/Grievance_studies_affair](https://en.wikipedia.org/wiki/Grievance_studies_affair)

[2] http://blog.mrtz.org/2014/12/15/the-nips-experiment.html

[3]
[http://norskk.is/bytta/menn/dog_park.pdf](http://norskk.is/bytta/menn/dog_park.pdf)

------
kaiju0
This looks like a valid test case. How else could you verify the legitimacy of
previously submitted articles? The emperor has no clothes.

------
avs733
An ongoing misconception here is that academic journals only publish 'facts'.

Not really.

They publish research articles. Depending on the field, and the field's
philosophical perspective on the construction of knowledge, that can take
differing forms. Those differing forms are not just valid but important. And
understanding the differences is also important. An article about medical
treatment necessarily does different things and looks different than a journal
in the humanities (for example). They make meaning from information in
different ways - and that is what all forms of research do, make meaning.
Research at its core is not about the discovery of facts...this is basic
philosophy of science, basic Thomas Kuhn. Standards like replicability may be
of use for making meaning in some fields, but they are not _necessarily_
equally useful in others, often because the level of contextual situating that
needs to occur reaches towards the impossible.

Serious members of the field would not have taken the Grievance Studies
articles as 'capital-F-FACT' they instead would have interpreted it as a
perspective, an argument, an interpretation. Specifically, the entire field of
critical studies exists to, in varying forms, critique the ways in which
'discovery of facts' is a reductivist way of looking at meaning making that
privileges certain perspectives over others - by treating some perspectives as
reality.

So the critique here, and looking back at Sokal, isn't that 'hahaha I pulled
one over on you'; it's that 'we assumed that you were giving us a new
perspective in good faith that we could collectively learn from' and ...'now
you are standing here laughing at us because you acted like a jerk.' It's two
different world views with one willing to be open (by choice) to outside
critique and perspectives, and the other supremely self-confident that
critique and perspective is unnecessary.

In effect, it's like a legal opinion that separates matters of interpreting
law and matters of finding fact.

------
avs733
Just a note that Campusreform.org is...a mess of a site that publishes
opinions as factual articles and is primarily written by undergraduates.

Here is a more even-handed presentation of this case:

https://www.insidehighered.com/news/2019/01/08/author-recent-academic-hoax-faces-disciplinary-action-portland-state

------
Kednicma
As a Portland State donor, I'm disappointed at PSU's refusal to back
legitimate scientific inquiry. Sending bogus garbage to journals occasionally
is a reasonable way to test that peer-review systems are working.

The events I normally donate to support are canceled due to the pandemic,
though, so I probably wouldn't donate anyway. Guess I can't really take a
moral stance here.

~~~
valuearb
As a PSU alum, I never donate to this halfassed institution, a glorified
community college.

------
free_rms
So, when Sokal did something similar in the 90s, we all had a laugh and life
went on.

Now that it's today, we're more vicious and this kind of punking will not go
unpunished. Good checkpoint on where we're at.

------
morninglight
If Mark McLellan remains VP for Research and Graduate Studies, it casts a
cloud over academic excellence at PSU. What is his background? How long has he
been in this position? What would cause him to make such a politically driven
bad decision?

It is inconceivable that someone with so little understanding of the toxic
academic publishing environment could be influencing the education of 26,000
students.

------
RcouF1uZ4gsC
As an aside for everyone coming to the comments first before reading the
article, PSU in the title is Portland State University not Penn State
University which I immediately thought of when I saw PSU.

Basically, Portland State University is retaliating against a professor who
showed that the peer review and acceptance process for various journals was
woefully broken.

~~~
avs733
another perspective is that they are _disciplining_ a professor who _practiced
research misconduct_

------
jrumbut
I don't know how you can get clearer cut research misconduct than faking an
experiment.

~~~
manfredo
That wasn't the actual experiment, though. The actual experiment was to test
whether or not these publications were capable of detecting fake papers.

~~~
jrumbut
Understood, but in the process he claimed to do a large experiment that he
didn't do. He faked data, he broke the trust that the entire research
enterprise is built on.

~~~
tptacek
I think you're going to find that most people commenting on this have never
reviewed a journal article and don't understand the process. You see it in
replication crisis threads as well: the belief that academic research is
premised on reviewers _replicating results_ before things get published.

The idea that academic journals are based on a presumption of good faith is
totally alien to a lot of HN commenters.

~~~
naasking
Reasonable critics aren't expecting reviewers to replicate the results, but
the dog park paper illustrates that reviewers can't even recognize _clearly
bogus_ data. I mean, have you even read the methods the paper said they
employed?

~~~
tptacek
I read several of the hoax papers, and skimmed one that included dog park
data. Which paper are you referring to, where was it submitted, and where was
it accepted? The hoax authors were exceptionally dishonest in their
presentation of the results: virtually all of their papers were rejected, the
accepts were generally in very low-impact venues, and the papers that got
accepted were generally not the lurid ones they highlighted in their summary.
I ask for specifics because it is not an interesting research result if they
created a paper with clearly bogus data _that was then rejected_.

The problem with the experiment, of course, is that reviewing takes a fuckload
of time and effort, and most fields barely keep up with the legitimate
workload they have. They are literally taking time and resources from program
committees, and they do have IRB obligations in order to do that.

~~~
naasking
> The hoax authors were exceptionally dishonest in their presentation of the
> results: virtually all of their papers were rejected, the accepts were
> generally in very low-impact venues, and the papers that got accepted were
> generally not the lurid ones they highlighted in their summary.

I think the other poster summarizes quite well why this charge of dishonesty
is ironic. I'll just add a link to the paper itself if you'd like
to read it [1], and review one part:

> From 10 June 2016, to 10 June 2017, I stationed myself on benches that were
> in central observational locations at three dog parks in Southeast Portland,
> Oregon. Observation sessions varied widely according to the day of the week
> and time of day. These, however, lasted a minimum of two and no more than 7
> h and concluded by 7:30 pm (due to visibility). I did not conduct any
> observations in heavy rain. [...] The usual caveats of observational
> research also apply here. While I closely and respectfully examined the
> genitals of slightly fewer than ten thousand dogs [...]

So in the span of one year, this lone "researcher" claims to have "closely"
inspected the genitals of ~10,000 dogs. That's 1,000 hours to inspect 10,000
dogs, which amounts to 10 dogs per hour, during which they took detailed notes
on the dogs and owner's names, gender, and other associated information, while
documenting the dogs' behaviour (6 minutes per dog+owner!). That stretches
credulity to say the least.

Also, for the data to be meaningful, there must be at least 10,000 unique dogs
visiting these three dog parks during the given time span. This also beggars
belief even for Portland which features a high percentage of dog ownership.
Portland has ~264,000 households, and if ~70% of households own a dog, that's
~185,000 dogs across ~32 dog parks, which is only about 5,800 unique dogs per
park on average.
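The back-of-envelope arithmetic above is easy to check in a few lines. The input figures (hours, dog count, households, ownership rate, park count) are taken from the comment itself, not independently verified:

```python
# Back-of-envelope check of the claims in the dog park paper, using the
# figures quoted in this thread (not independently verified).

hours = 1_000          # claimed total observation time
dogs = 10_000          # claimed number of dogs inspected
dogs_per_hour = dogs / hours           # 10 dogs per hour
minutes_per_dog = 60 / dogs_per_hour   # 6 minutes per dog+owner

households = 264_000   # Portland households (commenter's figure)
dog_rate = 0.70        # share of households with a dog (commenter's figure)
parks = 32             # dog parks in Portland (commenter's figure)
dogs_total = households * dog_rate     # ~184,800 dogs citywide
dogs_per_park = dogs_total / parks     # ~5,775 unique dogs per park

print(dogs_per_hour, minutes_per_dog, round(dogs_per_park))
```

Six minutes per dog+owner, sustained for a year, is the figure that strains credulity; the per-park number shows how many distinct dogs each of the three observed parks would need to attract for the claimed sample size.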

The basic math just doesn't add up, and then the researcher disclaims their
abilities to determine canine breeds, but makes claims like, "NB: the phrase
‘dog rape/humping incident’ documents only those incidents in which the
activity appeared unwanted from my perspective – the humped dog having given
no encouragement and apparently not enjoying the activity."

So apparently they have quite a bit of insight into canine behavioural
psychology. There is a lot about the methods and the data that make no sense,
and this paper received accolades.

> They are literally taking time and resources from program committees, and
> they do have IRB obligations in order to do that.

That's a legitimate concern. Unfortunately, the hoax itself reveals that these
program committees may not be doing much meaningful work with those resources
anyway, which seems like a far more important matter.

Edit: I would add that some way to verify that peer review is doing its job
should be part of the publishing process. Periodic random hoaxes seem like a
good way of doing it. It will make everyone, particularly reviewers, more
skeptical and cautious.

> (Revise-and-resubmit, by the way, is a nice way of saying "reject").

No, it's a nice way of saying, "this is good work, you just need to massage
your presentation".

[1]
[http://norskk.is/bytta/menn/dog_park.pdf](http://norskk.is/bytta/menn/dog_park.pdf)

~~~
tptacek
1000 hours over the course of a year is the equivalent of a half-time job,
which makes sense if you're a researcher publishing in journals, in that it is
your _actual_ job. There are way more than 10,000 dogs in Portland. You're
shooting the data down because you're motivated to find its flaws, which I
agree are apparent on close inspection, but that's _not_ what motivates a
paper reviewer. Why would a reviewer for a gender studies journal have any
intuition for the usage of a dog park? It's not an epidemiology or even an
animal studies venue.

(Here's a sharper way of asking the same question: tell me, as quickly as you
can, how many dogs visit the largest Portland dog park; bear in mind that this
is a waste of your time while you're tracking that stat down, because that's
what the reviewer is thinking, too).

R&R means reject (it's a rejection _cause_ ). At Usenix, if I wanted you to
"massage your presentation", I would accept conditional on those changes
(actually: at Usenix WOOT, we would have assigned a reviewer to shepherd the
paper --- we would have _helped you_ massage your presentation).

Ultimately, to make a case that journals are accepting bad papers, you have to
look at their accepts, not their rejects, no matter how those rejects are
worded.

------
easterncalculus
This happened a bit ago, you can read more about it here:

[https://en.wikipedia.org/wiki/Grievance_studies_affair](https://en.wikipedia.org/wiki/Grievance_studies_affair)

------
zimpenfish
[2019]

~~~
dang
Good catch. Added.

------
goldcd
Would it be equally unethical to submit a paper that 'simply wasn't any good?'

i.e. an academic who doesn't think a paper should be published, but submits it
on the off-chance that it might be?

~~~
tptacek
No. Submitting a flawed paper in good faith to a journal isn't unethical;
journals exist to evaluate papers. Deliberately submitting bad papers --- in
fact, going out of your way to shade what's bad about your papers to try to
get them further into the process --- is a waste of volunteer time. Reviewers
have offered to evaluate good faith papers; they have not offered to have
their time spent as guinea pigs. Hence IRBs, to tell you not to do stuff like
this.

------
pjc50
This website seems to be an extremely partisan culture war site basically
focused on issues academics discuss that conservatives don't like? And then
"cancelling" them?

