I haven't written too many peer reviews, but I was somewhat harsh in one I wrote because the paper was so bad. Multiple major problems, but I think it ultimately could be redeemed after a lot of work. The editor seems to have rejected the paper, and they sent me the other reviews for a reason unknown to me. I was surprised that none of the other reviewers noticed an error I found to be particularly obvious. Though one of the reviewers did notice a few problems I missed, so perhaps what's obvious to me is not necessarily obvious to others.
One problem I see is that what counts as an "error" is ambiguous. I might call something one error while others classify it as two. A clearer definition might be needed, but to a first approximation the naive count should be fine.
This is common in my field, though not every journal does it. For the ones that do, after a decision is made, all the reviewers will get a letter from the editor thanking them for reviewing the paper, notifying them what decision was made, and enclosing a copy of all the reviews for reference (often with a meta-review or review summary written by the editor). I find it helpful to see the other reviews, since I can improve my reviewing by paying more attention next time to things the other reviewers had caught that I'd missed, and just get a better idea of how calibrated my reviewing is with that of others.
> Sasai, 52, was a corresponding author on one of the papers and a co-author of the other.
What is a 'corresponding author' and how much of a credibility contribution is this vs being a 'co-author'? I'm assuming studies include letters written by other scientists who remotely reviewed data but were not fully involved in the study? Which is a lesser tier contribution than being a full co-author but still essentially 'signing-off' on the study?
In mathematics, the expectation is that all people listed as authors of a paper contributed something significant to it and have co-written (or at least read and endorsed) the overall paper, so it really is a jointly authored work (if that's not true in a given case, there will usually be a footnote explaining). In some areas of the natural sciences, though, there can be large numbers of authors on a paper, because a study is the result of a lot of people's work. But in that case there isn't necessarily an expectation that every single author has verified and endorsed the entire study. For example some authors might be students who contributed a specific experiment, so got their name added as an author on the paper for their contribution to the study. But they may not be knowledgeable enough to have evaluated the rest of the paper, and probably also shouldn't be blamed if someone further down the analysis chain manipulated the data.
A convention in parts of the natural sciences with large author lists is that the junior scientist (postdoc, PhD student, etc.) who did the most practical work is listed as the first author (and called 'first author' or 'lead author'), while the most senior scientist, responsible for overseeing the entire project, is listed last and is the 'corresponding author'. Those two then get the most credit and responsibility, while 'everyone else' is sandwiched in the middle between them.
A 'corresponding author' is, literally speaking, just the author designated as being in charge of correspondence about the paper, with the journal editor and/or with readers who want to follow up (in some journals they're the one whose contact info is printed, although in other journals, all authors' contact info is included). In my field, computer science, 'corresponding author' tends to be mostly a bureaucratic role, the person responsible for handling emails from the journal, and is usually the same person as the first author. In other fields, it can carry more of an implication of the person in charge of the study. In some, the senior scientist's name is first, so they'll end up being both 'first' and 'corresponding' author. Really no consistency across areas, so the terms only mean much when talking to someone from the same field.
For example, this paper has 15 'authors' (same as 'co-authors'), of which the last one listed is the 'corresponding author' (indicated by the little mail icon): https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1574-6968....
I've always meant to dig more into the whole academic journal/paper authorship thing. This was quite helpful.
Some reviewers spend very little time reviewing papers, and they may not even be experts on the topic.
It's sad that science has come to this.
The primary solutions I can see (not necessarily mutually exclusive) are to a) reduce the cost of research equipment and staff substantially so that competition for funding matters less, b) provide a larger pot of state-sponsored scientific funding (industrial science tends to favor refining existing work over tentative exploratory research, so corporate funding would be an ill-advised substitute), or c) discourage young budding potential grad students from going down the grad-school track in the first place.
As an individual, option c) is something I personally feel I can make a difference with. Professors in undergrad can be really terrible about conveying the career options you can have in science without necessarily going for the PhD. Invisible support positions like HPC system administrators, research software engineers, and even to some extent hospital lab workers don't require a hefty commitment to grad school but still let one live within the world of scientific discourse and contribute to the advancement of various fields as an enabler (perhaps less so for hospital workers). Unfortunately, the invisibility of these professions is a major problem, which is why organizations like UKRSE strive to generate more recognition for at least research software engineering (curiously, Google in the States seems to have been quick to recognize scientific programmers as professionals distinct from PhDs and other programmers, while US academia has not done so nearly as much). Also, for some support positions in IT, there are non-science options that are much more lucrative.
Another point is that professors are heavily biased: they successfully completed grad school, probably a postdoc if they are relatively new, and then landed a tenure-track job. They then suggest that you follow a similar track. That's strong survivorship bias, since you see their success and not the "failure" of the 90% who couldn't or didn't find jobs in academia.
Generally speaking, a student telling me they want a Ph.D in my field gets a strong warning and a frank discussion about the nature of the job market. I'm willing to support them if they simply cannot imagine doing anything else with their lives, but the pool of students for whom getting a Ph.D is actually a good idea is a very small pool indeed.
The simple fact that each professor advises more than ten grad students over a career, while only one of them is needed to replace that professor, is generally sufficient to open an undergraduate's eyes to reality.
Do it for love, not for money, and do it only with the recognition of the fiscal/life sacrifice that a Ph.D entails.
But I'm also in a field where "alternative" careers don't carry quite the same stigma.
In anything close to an efficient market, what gives is that you end up with fewer scientists.
Some things will always be obvious, especially if it's a small field and you know who is doing what.
But if you are a reviewer and the 'very famous and respectable' colleague is co-authoring the paper you are reviewing, you are much more likely to dismiss any unclear or incorrect aspects of the paper.
It would seem like a really simple step to just redact any aspects of the paper that would give away names, locations, etc.
There is a further issue that if you're in a small-enough (sub-)field you probably know what other people are working on and can effectively unblind the authors anyway.
And the fact that the fraud was found out, even if with a delay, speaks in favour of the system.
1) That the system works so well that the only time fraud has occurred it has been found out
2) That the system works very poorly and this is the only time fraud has been found out
In experimental fields, one pulls data out of thin air, or simply discards observations that are contrary to the hypothesis being tested. This attitude is rampant in psychology, cognitive neuroscience, and other disciplines that lean heavily on statistics.
I'm quite sympathetic to some guy at some random university in Pakistan, India, or China whose plagiarism involves submitting his publication to some fourth-rate venue.
The real abusers are in the first world; their primary tools are experiments and data.
Please cite direct evidence that this is “rampant”.
Articles making claims of “replication crisis” without direct evidence of fraud do not count. There’s a huge gap between failure to replicate and fraud, and quite frankly, most of the replication crisis media tends to repeat the same stories, providing little additional proof to back up the claims of “crisis”.
The money quote is:
> “I don’t think I’ve ever done an interesting study where the data ‘came out’ the first time I looked at it,” he told her over email.
Everyone around him knew and participated to some extent.
His postdocs (except the one who said no), his PhD students, and all his other co-authors.
They all knew and never said a word.
When Cornell was told, they made some pious noises and ignored it until it became too public.
Journals ignored all criticism until it became too public.
This is a system that does not police fraud.
Now I am paid usually $45,000 - $60,000 per year to talk on a phone to people who need help filling out a web form in order to buy something. It's a bit absurd. I don't mind this work and it is good money. It keeps me and my girlfriend fed and sheltered.
Why is the university system so negligent and abusive towards those who invent new ideas and teach new inventors?
It's the former, except the government is paying for it...
I say this as someone who got paid by the government via academia to torture rodents for no reason for years.
The government of course gave me the loan for the inflated price of attending uni, but goddamnit, I paid it off myself with my own blood and tears.
I'm saying that when a grad student or adjunct teaches a class, they get paid a few thousand dollars. The students in that class are paying 100x that amount in tuition.
I'm implying that grad students should be paid more for this service, instead of the money going to administrator salaries and football stadiums.
Have you considered the possibility that academia is paying researchers etc. what they are worth on average? It's just that the majority of people being paid are generating near-zero, or negative, value to society.
So, you have consumers (students and their parents) paying real money for a real education that they are hoping will result in gainful employment. Either millions of students are overpaying every year, or the system provides some value.
Now, the ones delivering that value (grad students) are receiving maybe $8,000 per semester while teaching, let's say, 100 students, while those students are collectively paying nearly $1,000,000 for four months of tuition.
Where does the other $992,000 go, besides administrator salaries and football stadiums?
I mean, if the monkey doing the job can generate $900k of value per year, but a human doing it can generate $1MM per year, then it would be better to hire the human for anything up to $96k/year (assuming peanuts cost less than $1k/quarter).
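For what it's worth, that toy break-even can be written out explicitly (a sketch using only the hypothetical numbers in the comment; note that the exact break-even lands a bit above the quoted $96k, so the claim holds with room to spare):

```python
# Toy break-even for the monkey-vs-human example; all figures are the
# hypothetical ones from the comment above.
monkey_value = 900_000     # $/year of value generated by the monkey
human_value = 1_000_000    # $/year of value generated by the human
peanut_cost = 1_000 * 4    # monkey's "wage", at most $1k per quarter

# Hire the human whenever (human_value - wage) > (monkey_value - peanut_cost),
# i.e. whenever wage < (human_value - monkey_value) + peanut_cost.
break_even_wage = human_value - monkey_value + peanut_cost
print(break_even_wage)  # 104000, so $96k/year is safely below break-even
```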
 I say "his" because it seems like men are highly overrepresented in submitting falsified data. Obviously there are probably more men in science to start with, but consider that 30/30 of the most retracted scientists are men. https://retractionwatch.com/the-retraction-watch-leaderboard...
For these reasons, retractions are not a good measure of behavior.
Edit: this was in response to your comment before your edit
I agree retractions are an imperfect measure, but it's not like there is much more to go on.
Most importantly, the 30 people on this leaderboard who have dozens of retracted papers are not just people who got unlucky with sampling or made one mistake and had to retract a paper; they are people who repeatedly and knowingly faked data and then published it.
I don't disagree that the rate of misbehavior and gender balance of scientists differ internationally and would ideally be factored into an analysis.
But to play the game... maybe women are more likely to get caught falsifying data before publication, because the evil patriarchy thinks women cannot do science and thus verifies their results more rigorously? We could surely come up with lots of other crappy explanations and theories.
Don't take my word for it; here are a couple of lines of R code.
# vector of prior probabilities, natural choice is proportion of men
prop.men <- seq(0, 1, length.out=100)
# compute prob of observation (30/30) given prior probability
pvals <- dbinom(x=30, size=30, prob=prop.men)
# plot log10 p-value against the assumed baseline proportion of men
plot(prop.men, log10(pvals),
  type="l", col="blue", lwd=2,
  ylab="log10 pval for 30/30 offenders being men",
  xlab="Baseline male percentage in science"
)
abline(h=log10(0.05), col="red", lwd=2, lty=2) # "significance" line
text(x=0.6, y=-30, lab="Below red line\nsignificant at α=0.05")
# since prob vector is 100 long, indices of entries returned here
# correspond to % men needed for this to have α > 0.05
which(pvals > 0.05)
Of course this is a bit back-of-the-envelope done in 5 minutes, but you get the point.
If I were being pedantic [I guess I am :-) ], I'd probably say the best prior would be p = (number of male scientists) / (number of all scientists).
Overall I bet men are somewhat more than half, but even if it's an 80/20 split, the chance of all 30 being men by luck is 0.8^30 ≈ 0.0012, still well past the usual significance threshold.
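To make the 80/20 case concrete, here is the same single-point check in Python rather than the full R sweep above (a quick sketch; `p_all_male` is just a made-up name for p^n):

```python
# Chance that all 30 of the most-retracted scientists are men, assuming
# men make up a fraction p of scientists and retractions were gender-blind:
# 30 independent draws, so the probability is simply p**30.
def p_all_male(p, n=30):
    return p ** n

# Even with an 80/20 male/female baseline, 30-for-30 is very unlikely:
print(p_all_male(0.8))  # 0.8**30 ≈ 0.00124, far below alpha = 0.05
```

The baseline would have to be above roughly 90% male before 30/30 stopped being significant at α = 0.05 (0.905^30 ≈ 0.05).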
EDIT: Please don't downvote statements of fact. My karma score pays the price for your momentary burst of dopamine.
People in my immediate circle suffer from ailments mentioned in this article. Do these researchers have any conscience at all? Did they understand they're playing with people's lives?
I put them in the same category as the pharmacist who watered down the chemo drugs. There really should be criminal prosecutions and jail time for these crimes.
As such it's not really comparable to Madoff. Madoff would have been caught eventually whatever happened because what he was doing was fundamentally unsustainable. But this fraud was fairly close to being undiscovered. It was only caught because of the persistence of some researchers (for whom it wasn't even their job) and the unbelievable data.
tl;dr: two brothers faked their way to PhDs and became TV celebrities and book authors in France. A lot of people have no idea about this and still respect them.
That's sad indeed. They also write popular-science books that sell well but are very bad and confused. I feel bad for the people who buy them.
Numerous people (academics in the field) have suggested that the work was falsified.
I somehow can't quite bring myself to believe it. I can't understand what the motivation would be, or that any reasonable person would fake scientific data, and continue to do so over a period of years.
The STAP cell debacle is somewhat similar. What's the endgame? Surely it becomes clear in the end that the work cannot be replicated?
Can anyone shed some light on the motivation for these frauds? Is there something in particular about the Japanese ecosystem that makes them more common?
Money and prestige while it lasts?
> Is there something in particular about the Japanese ecosystem that makes them more common?
Well, the article states the following:
> Michiie Sakamoto, who is leading another investigation at Keio University, into Iwamoto's studies in animals, says it has to do with respect. "In Japan, we don't usually doubt a professor," he says. "We basically believe people. We think we don't need strict rules to watch them carefully." As a result, researchers faking their results may be exposed only after they have racked up many publications.
So, I still can’t quite put my finger on what is different about the Japanese ecosystem. Perhaps the checks and balances within departments are not as strong?
Outside Japan, a manager or department head seems to have a stronger supervisory role in my limited experience.
IMO that answer should basically be two (maybe even one). And if a journal is not willing to issue a retraction on those grounds (which I would understand), it should at least flag the paper with a "retracted author" notice.
Truth is a pendulum. We are living at the extreme end of one swing: a gilded age where "truth isn't truth". It's interesting to see how synchronized the pendulum is across very different aspects of society (such as medicine and politics).