A Failure of Academic Quality Control [pdf] (journalofpositivesexuality.org)
60 points by luu 6 months ago | 19 comments



Independent professional reviewers should exist in academia. Currently, the people who do research also serve as reviewers, but they are busy with their own research and don't necessarily review others' papers carefully. It's a shame that most of the funding goes to researchers who perform novel research; there is not much incentive to reproduce results or review papers. It is as if science were built on an unstable foundation, and a lot of man-hours and effort will be wasted as a result.


In this case the thing being criticized is a book, which is a somewhat different case. Reputable academic presses do send out book manuscripts for review, but it's a very different process from paper reviews, often involving only a partial early manuscript. I don't know how every press does it, but when I've reviewed for an academic press, I was given a draft of the first three chapters, and asked to give a recommendation for whether this was a serious academic book worth publishing, along with my positive and negative comments on the partial draft.

There are obviously some things I could usefully do there: it's often possible to identify and reject total crank books, to notice some early tendency towards overstating claims, to give general feedback on the direction of the book and whether it is likely to reach its audience, etc. But it's not the same as a journal, where you have the full paper to review and are expected to read it and give detailed feedback (which is also a more reasonable thing to ask of reviewers, because the text isn't as long!).

This reviewing approach is pretty traditional (it's how most famous philosophy books were published, for example) and relies heavily on good faith. Reviewers are supposed to judge whether this is a serious historian writing a good-faith book, but if they say yes, then you let the historian publish the book they want to publish without a lot of further rounds of vetting. It isn't expected they'll get everything right, but it's expected they'll make an earnest effort to get it right, perhaps even by employing research assistants themselves to do fact-checking. Anything they get wrong can then be corrected in the literature by someone else writing their own article or book in response.

One exception is books which are adaptations of dissertations. In that case there would have been more substantial review of the claims, by the dissertation committee, assuming nothing fishy happens between dissertation acceptance and adaptation into a book. The first author of the paper linked here (Lieberman) has a book out on the history of sex toys that's an adaptation of her dissertation, and it tells a pretty different history than the one she criticizes in this paper. The book being criticized (by Maines) is not a dissertation adaptation, though.


I agree wholeheartedly. During my PhD, I did a big data compilation in my field (the fluid dynamics of certain spray nozzles), which ended up convincing me that many commonly believed things in the field are false. I'm currently planning at least 3 papers on the issues I've found. This took a lot of time, but it would ultimately save even more time and money if people were willing to fund it. I did the work while I was a teaching assistant, and I doubt many funding agencies would be willing to fund someone for a semester on this.

I'd like to be a professional reviewer. The closest existing job is a patent examiner, but the incentives there aren't great either. After my PhD I intend to at least do some consulting on the side, and perhaps "professional reviewer" could be a service I could offer. Worst case, no one wants the service.


One is reminded of Derek Freeman's "The Fateful Hoaxing of Margaret Mead", in that the theory seemed so good no one had the heart to challenge it. But then, Freeman's analysis is itself still controversial, so who knows. It will be interesting to see a response from Rachel Maines.


> But then, Freeman's analysis is itself still controversial, so who knows.

Yeah it's always interesting as an outsider to read about settled scientific wisdom being enthusiastically challenged. I liked:

* the Gerta Keller dinosaur asteroid story -- https://www.theatlantic.com/magazine/archive/2018/09/dinosau...

* the Rebecca Fried "No Irish Need Apply" back-and-forth -- https://web.archive.org/web/20150805045521/http://intl-jsh.o...


These are all interesting, and I hadn't heard about the NINA one, thanks! Of course, the HIV skeptics turned out to be...just plain incorrect. So sometimes the settled scientific wisdom is correct. But, not always...


“The success of Technology of Orgasm serves as a cautionary tale for how easily falsehoods can become embedded in the humanities”

I only skimmed around, so I am perhaps misinterpreting their tone, but the authors seem to emphasize the vulnerabilities of empirical research in the humanities specifically. I'm wondering whether any field of research is not vulnerable to widely propagated falsehoods, or at the very least to poorly verified findings.


Mathematics, for the most part. I wager this is due to the low material cost of verification.


It's fairly easy for well-intentioned mathematicians to make small mistakes in their research that drastically affect the outcome of a proof. The sorts of falsehoods discussed in this paper are a consequence of lying about the source material, or of a severe lack of due diligence. So these fields probably have a lower prior for a falsehood in a paper.


But in the vast majority of cases, these small mistakes are caught by other mathematicians. This is because verifying a proof is easier (in terms of resources) than, for example, replicating a randomized control trial (with the necessary equipment and subjects).


Not necessarily. Most papers are only ever read by a few people, and even very well cited papers can contain errors which may not be corrected in public unless they significantly change the outcome (but which can make analogous proofs more difficult, as I found with a 2,000-citation paper in my PhD field; its outcome wasn't affected).


It probably happens less often in mathematics, but it is not immune: in 1932, von Neumann "proved" that hidden-variable quantum theories are impossible, but his proof was amazingly flawed (he essentially assumed that an average of sums is equal to the sum of averages), and the flaw wasn't widely noticed until John Stewart Bell pointed it out in 1966 with an anguished quote that has become quite famous:

"Yet the von Neumann proof, when you actually come to grips with it, falls apart in your hands!... It’s not just flawed, it’s silly... You may quote me on that: The proof of von Neumann is not merely false, it is foolish!"

This is more of a mathematical physics example, but everybody in every field could stand to have a bit more humility and rigor.


Maybe I am just tired right now, but "an average of sums is equal to the sum of averages" is basically true 100% of the time; it is called linearity of expectation and is a basic fact of probability.

Could you include the papers or provide a more detailed comment?


Sorry for the confusion, I attempted to simplify the issue for the sake of clarity but apparently only made it worse. You can get a more rigorous explanation of the problem here: https://arxiv.org/abs/quant-ph/0408191
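
The gist, in my own words (this is a paraphrase of Bell's objection, not the linked paper's wording, so take the details from the paper):

    % Quantum ensemble averages are additive even for non-commuting A, B:
    \[ \langle A + B \rangle_\psi \;=\; \langle A \rangle_\psi + \langle B \rangle_\psi \]
    % Von Neumann demanded the same additivity of the hypothetical
    % dispersion-free (hidden-variable) states, whose "expectation values"
    % must be eigenvalues. But eigenvalues of non-commuting observables
    % do not add. For a spin-1/2 particle:
    \[ \sigma_x,\ \sigma_y,\ \tfrac{1}{\sqrt{2}}(\sigma_x+\sigma_y)
       \ \text{each have eigenvalues}\ \pm 1,
       \qquad \text{yet}\ \pm 1 \neq \tfrac{1}{\sqrt{2}}(\pm 1 \pm 1). \]

So linearity of expectation is indeed unobjectionable for averages over an ensemble; the questionable step is demanding it of each individual hidden-variable state.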


This paper has a rather lofty title, but it describes just one case where peer review failed. A sample of one is hardly generalizable. And while the paper is virtually academic link bait, there is no researcher who doesn't think the review process is broken.

Typically at conferences you have an area chair who is selected because of some connection with the organizer, and this area chair then chooses other people in his/her network to be reviewers for the submissions they receive. So from top to bottom, the process is driven by social connections and is very often used to pay back past favors or to create future ones. I have even coined a term for this: the favor economy.

One way to fix this would be to create a metric for reviewers, an r-index along the lines of the h-index, which rewards reviewers for choosing high-impact papers and somehow penalizes them for passing on good ones. At its simplest, the r-index could just be the h-index computed over papers reviewed instead of authored. However, I think there ought to be a better measure.
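
In its most naive form that is just the ordinary h-index computation fed with the citation counts of papers someone reviewed rather than wrote; a toy sketch in Python, with made-up numbers:

    def h_index(citation_counts):
        """Largest h such that at least h papers have >= h citations."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Citation counts of papers a hypothetical reviewer recommended for acceptance.
    reviewed = [120, 45, 30, 9, 4, 1, 0]
    print(h_index(reviewed))  # -> 4: four reviewed papers have at least 4 citations each

Of course this only captures the reward side; it says nothing about penalizing a reviewer who passed on papers that went on to do well, which is the harder half of the problem.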


The title is literally "A Failure ...". One single failure, and that's what it describes. There are only a handful of paragraphs about generalising, and these are knowingly subjective; two of them start with "We believe".


There was a guy at DEFCON showing his collection of these antique devices, and some literature from the era advertising their medical use.

I didn’t carbon date them, but, yeah, they existed, and they were used.


> [W]e could find no evidence that physicians ever used electromechanical vibrators to induce orgasms in female patients as a medical treatment [for hysteria].

I always thought that whole "Victorian physicians used vibrators to treat female hysteria" thing seemed fishy...

--

> We examined every source that Maines cites in support of her core claim. None of these sources actually do so.

I wonder if, when NLP is sufficiently robust, one could automate this fact-checking process en masse...
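
Even the crudest keyword-overlap baseline sketches the shape of it (toy Python; every string below is invented, and this is nowhere near real NLP):

    import re

    STOPWORDS = {"the", "a", "an", "of", "to", "that", "in", "and", "for", "was", "were"}

    def key_terms(claim):
        """Content words of a claim, minus stopwords and very short words."""
        words = re.findall(r"[a-z]+", claim.lower())
        return {w for w in words if w not in STOPWORDS and len(w) > 3}

    def support_score(claim, cited_passage):
        """Fraction of the claim's key terms that appear in the cited passage."""
        terms = key_terms(claim)
        passage = cited_passage.lower()
        return sum(1 for t in terms if t in passage) / len(terms) if terms else 0.0

    claim = "Physicians used electromechanical vibrators to treat hysteria."
    passage = "The catalogue advertises massage apparatus for home use by consumers."
    print(support_score(claim, passage))  # low score -> flag the citation for a human

At best, low-scoring citations get flagged for a human to re-read the source.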


Probably augment, but not automate



