I wish her luck in her endeavor and note that she proves what actual journals could do to stay relevant as a service once true distribution costs are gone: vetting the data and giving people a reason to pay for data already peer reviewed by others.
Agreed, her work is independent peer review, an important part of science and the fundamental way science should work. The entire body of science should be constantly trying to disprove facts and remove falsehoods. Journals shouldn't have a monopoly on this. A rigorous peer review is a rigorous peer review whatever the source.
Currently it's not being treated as an important part of science, and that's hurting our progress.
If you can't trust science to be accurate you can't trust engineering to work. If you can't trust engineering to work planes fall out of the sky, ships sink and buildings collapse. After that you're back to the 17th century.
We should have a lot less science, done better. The current number of people graduating as scientists is completely unsustainable; for the current funding, it should be cut by a factor of 10 to 100.
We just shouldn't expect every scientist to publish papers and use that as the basis for determining their success and qualification. If you optimize for that metric, you end up writing on the same topics as your peers to generate the citations that allegedly prove your ability.
You cannot industrialize science so that a single number at the end determines scientific advancement.
It is bad enough that research and economic interests are intertwined; it plainly shouldn't be the case. But we should look for workable compromises.
WTF, are you out of your mind?
Are you suggesting we should allow fewer people to graduate in order to fix the problem? Sorry for the ad hominem, but this is absolute folly.
If there’s too much competition for funding the solution is not to further limit access to scientific profession but to increase funding for Chrissake!
The mere fact that there are so many graduates is living proof that there is an enormous, yet wasted, potential for scientific research and discovery.
Get your logic in order please.
I was doing a PhD in high energy experimental physics until I finally noticed that the majority of my peers were not doing science; they were padding a resume.
Today we could stop doing high energy physics, both theory and experiments, for 50 years and we would not actually retard the progress of the field. We lack the engineering to build experiments better than the ones we have for any sane price and we are sitting around counting angels on pins. The brainpower that we are wasting on un-rigorous mathematics could be much better spent in any number of fields.
I dare you to read 100 papers in any field that you are familiar with and tell me that 95 of the papers aren't there just because someone needed to publish something. I've talked to people in astronomy, genetics, mechanical engineering and computer science and they all say the same thing.
We don't need more science, we need better science. The simplest way to do that is to do less of it. We won't even be hurting progress because you will be able to trust it, unlike today where if you try and synthesize an experiment from three papers you're pretty much guaranteed it won't work because the papers are either fraudulent, p-hacked, flat out wrong or not even wrong.
Exactly. We have to increase signal to noise ratio. I'm trying to stay current with progress in Computer Vision and the amount of junk being published is overwhelming. I'm pretty sure it's the same in many other disciplines.
Departments need blockbuster discoveries just to stay open; it's obvious where the incentive goes.
Also, none of the underlings will ever think of questioning, let alone reviewing, the results. Their career, their debt repayment plans, their income would be derailed the moment they stopped to think about what they're doing.
In this Mad Max scenario, who's going to do science and who's just coasting in survival mode?
I think we should set standards and not be afraid to uphold them. This will probably result in fewer scientists, but maybe it will become clear where we are failing them earlier in their education and allow us to improve that process.
Maybe culling the herd a bit would improve its quality. Maybe being forced out of the profession for misconduct would help prevent it. Maybe the bar needs to be raised a bit for entry so there's less competition and less pressure to cheat.
I don't see a proof in this statement; all I see is a belief. Many people believe that the sheer number of people engaged in professional science, and the requisite "publish or perish" job description, is the cause of the decreasing quality of scientific publications. Increasing the fraction of the population with PhDs has to run into bell-curve limitations at some point.
Pay the ones that produce good work? What is good work? Right now, that is decided by number of papers. That metric is gamed to death, that is what publish or perish culture did. Choose any other metric, it will be gamed too.
What if the PhD meant something, such that having it was enough? If that were all you needed, then you would just get paid to do science. Such gatekeeping would require graduating a lot fewer PhDs. How to do that is an open question: admit fewer? Or pass fewer? If the latter, maybe master out a lot more? Anyone who makes it through gets a guaranteed paycheck, but almost no one makes it through.
Where you are mistaken is in thinking there are a large number of graduates because there is a large demand for research. There isn't. A given professor needs N papers to get tenure. Say that N = c*k for c researchers producing k papers each. N is so large that c must be > 2, so for the professor to get tenure, they need assistants. Where do they come from? Graduate students. And what do those students do when they graduate? The professor doesn't care, because they have tenure now.
Research funding has been decreasing over time, which means N increases, which, counter-intuitively, means c increases. That starts a vicious cycle: the number of researchers competing for funding grows while the funding available shrinks, further increasing the number of graduate students needed. That explains what you see.
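The dynamic above can be sketched as a toy model. All of the numbers here are illustrative assumptions, not data from the comment; the point is just that c = ceil(N/k) grows as N grows and funding shrinks:

```python
# Toy model of the tenure-paper feedback loop described above.
# N = papers required for tenure, k = papers one researcher produces,
# c = researchers (assistants) needed. All numbers are made up.

def assistants_needed(n_papers_for_tenure, papers_per_researcher):
    """c = ceil(N / k), via negated floor division (ceiling trick)."""
    return -(-n_papers_for_tenure // papers_per_researcher)

funding = 100.0          # arbitrary units of available funding
papers_for_tenure = 6    # N
papers_each = 2          # k

for year in range(5):
    c = assistants_needed(papers_for_tenure, papers_each)
    print(f"year {year}: funding={funding:.0f}, N={papers_for_tenure}, assistants={c}")
    funding *= 0.95           # funding shrinks a bit each year...
    papers_for_tenure += 1    # ...while competition pushes N up
```

Under these assumed parameters the required headcount only ever grows, which is the vicious cycle the comment describes.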
Yes, of course. As per "The Mythical Man-Month", adding more people to a project doesn't speed it up; quite the opposite, it slows it down.
Right now we have huge overhead in training and communication, and a low signal-to-noise ratio follows. You can fix that by gatekeeping science a bit better: pick more talented, better-educated, harder-working candidates, and train, finance, and supply them better.
None of that. A lottery of everyone who can pass a minimum literacy exam in their field.
Looking at the luminaries of any field to a very good approximation none of them are "more talented, better educated, harder working candidates", in fact being lazy, combative and self taught is usually a better predictor for ground breaking discoveries. The people who you would pick are already the ones in academia and they are the ones who created the replication crisis.
It need not be the Gold Standard people, or those who believe in a pending vague apocalypse or collapse. Some insist that fuel injection systems with computer control are "less reliable" in spite of needing less maintenance and fewer replacements than older fuel dispersal systems.
It resembles a "monkeysphere" sort of concept, like a stereotypical communist refusing to see that the disappearance of every banker would cause real and major damage, or his ideological counterpart, an objectivist not seeing that a complete disappearance of bureaucrats would also cause real and major damage.
This isn't to say that current systems don't have deadweight, dysfunctions, and things which could be redone far better without the weight of inertia, legacy, and vested interests. The current cure may in fact be worse than the disease, but the cognitive "here be dragons" interferes with understanding the whole system and its alternatives.
I think calling a watch-dog or an independent peer-reviewer a vigilante is not inappropriate. It's a stylistic choice. I think it's important in literary education to be able to distinguish between what a word is and what cultural baggage it has (how it's used). Of course, where a word ends and its baggage begins is debatable, but it's a good debate to have.
BTW, If you like what she is doing, you might like RealPeerReview (https://twitter.com/realpeerreview)
Moreover, I don't think a journal with a title like "Gender, Place, and Culture" would classify itself as scientific. Its website certainly doesn't do so. You can't expose pretend science if there's no pretension of science.
I believe (No, I don't have evidence) that fields of 'X - studies' have people performing research with rigorous practices. Yet, those holding to standards are marred by nonsense publications from people who disregard standards.
This is enabled by journals printing without... you know, review.
With that said, a problem with replication is that a given lab tends to gear itself up for one or a small number of research programs that could span years or decades. Experimental apparatus are developed, knowledge and techniques are passed from one student to the next, and so forth.
My thesis project involved more than a quarter million dollars worth of commercial gear, plus a lot of stuff that I built. By the time I was finished, some of my tools were already obsolete.
If one lab publishes a result, another lab would have to gear itself up to replicate that result, which would probably include a capital investment plus a lot of time spent making beginner mistakes.
I don't believe strict replication is necessarily the best or only way to advance science. It produces reliable factoids, but they are still factoids. Physics has made its greatest strides when experimental evidence, that may be riddled with mistakes, supports the development of unifying theories of ever increasing power and accuracy.
Preferable to strict replication might be to let researchers study overlapping domains, so that several projects attack the same problem, but possibly from different angles.
My impression is that replication efforts often happen when one group tries to build on another group's work and they get frustrated enough to retry the original assumptions.
(Although, at least in my field, replication gives less bang for the buck than improved measurements. If I measure a thing, and then you measure it with 10x higher precision, you're not just replicating my result, you're moving the field forward.)
The issue is not on the science side, but how results are communicated to the general public. Administrators tend to add as much hype as possible, and reporters strip out all the important details.
As a scientist I would say quite the opposite is the case: reviews are sloppy in their citations, have to be written in a positive, optimistic tone per editorial guidelines, and often overstate the claims of the cited articles.
Personally, I often find them a useful starting point on a topic. At best they capture the field at a moment in time; at worst they're nearly useless. However, that's just me, not everyone in every field.
Only to again realize that reviews are also often vehicles of bias perpetuated by the authors where they subtly amp up papers by themselves and their "clique" of researchers.
Only solution really is to just read as much as possible, be as critical as possible and never trust authors to interpret their results with full honesty.
Not only are they wasting their own funding, they are also wasting other people's time and money who often can't afford to ignore prior work. At the very least such papers come up in peer review.
And apparently, fraudulent papers can get cited quite a lot in practice!
Also, she has a Patreon, which currently only brings in $40 a month. Just thought I'd bring that to people's attention.
The paper described experiments where the scientists created 6000 yeast clones (yeast has 6000 genes), each clone containing "a deletion of a single gene", and they showed which genes, when deleted, had a fatal effect (i.e., the yeast could not grow as a viable organism on yeast medium). The data table was a list of genes that had not previously been known to be "required", and which, when deleted, would cause the organism to not grow (i.e., it died, so you could assume the gene's protein product was required for life).
I was asked to make all sorts of tables showing that the genes that got removed belonged to certain classes, or at least had enriched population of certain classes.
Since I've always been interested in overlapping genes, I instead spent time scripting some range intersection queries on the data tables (i.e., treating all genes as intervals on a line, find the genes whose intervals intersect). What I found was that each gene they reported as a novel important functional gene intersected an already known important functional gene.
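The range-intersection query described above can be sketched roughly like this. The gene names and coordinates below are made up for illustration; the author's actual scripts and data are not shown in the comment:

```python
# Sketch of an interval-intersection query over gene coordinate tables.
# Each gene is modeled as a (name, start, end) interval on a chromosome.
# All data below is hypothetical.

def intersects(a, b):
    """Half-open intervals [start, end) overlap iff each starts before the other ends."""
    return a[1] < b[2] and b[1] < a[2]

def overlapping_pairs(novel_genes, known_genes):
    """Return (novel, known) name pairs whose intervals intersect."""
    return [(n[0], k[0])
            for n in novel_genes
            for k in known_genes
            if intersects(n, k)]

# Hypothetical tables: (name, start, end)
novel = [("YFG1", 100, 400), ("YFG2", 900, 1200)]
known = [("ESS1", 350, 700), ("ESS2", 2000, 2300)]

print(overlapping_pairs(novel, known))  # [('YFG1', 'ESS1')]
```

A brute-force pairwise scan like this is fine at yeast scale (~6000 genes); for larger genomes an interval tree or a sort-and-sweep would be the idiomatic choice.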
So, the most likely conclusion is that their table was a bunch of false positives: they deleted gene X but also truncated gene Y, where it was already known that truncating Y would lead to loss of organismal viability.
I wrote a nice letter to the authors explaining what I had found and never heard back (I also dropped that project, explaining to my PI that it was unlikely the data was of high quality). A year or two later, the authors published a new paper explaining how gene overlaps had important functional significance in yeast...
If I were rich, I'd fund an institute, but instead of trying to discover new things, I'd find a bunch of data scientists who went around looking for low-quality papers and properly letting the community know (while publishing all work so that people can verify for themselves the quality of the paper).
There has got to be a market for this kind of honest work, to make the existing world of published data a better place.
Shocking. And even if she reports the lies, they are not likely to be actioned:
> Despite Bik’s work finding these manipulations, she estimates that only 30% of those papers have been corrected or retracted
And she does it off her own back in more ways than one:
> Living off savings, Bik reckons she has about a year’s leeway to work on her image manipulation sleuthing and hopes to find a way of monetizing her expertise as a science misconduct consultant to journals.
> she writes and reports under her own name. In some ways, then, Bik is a surrogate for so many. She posts and reports publicly, often after being tipped off to papers from those who aren’t able to do it themselves
There are moments when you feel like giving up. For example, you receive a DNA construct that has supposedly been proven to work in some Science paper. You set up your experiment and for 1.5 years you wonder why it does not work for you. When you can only conclude that the Science paper didn't in fact produce a working construct, you start to see in the images of said paper that, indeed, they may be coincidental and not the result of a working protein. And it sets you back 1.5 years. Or you write around it, creatively, as you explore the borders of your own ethics.
Get the fuck out of this field. Those stupid things keep on being only because some people are willing to be abused. When a lot less people enroll for those programs maybe some change will happen. Or maybe those students could start rioting.
> I am excited by the business model of the journal, which is that its very small running costs (like Discrete Analysis, it uses the Scholastica platform, which charges $10 per submission, as well as a fixed annual charge of $250, and there are a few other costs such as archiving articles with CLOCKSS and having DOIs)...
In any case, you can still estimate the quality of a paper with a minute of research into the authors. This is a good idea even if it's in a journal.
If only she would also focus on the much shadier and shittier social "science" papers that media outlets and ordinary people seem to take at face value.
If you aren't in the field, then it's difficult, although perhaps inconsequential.
This whole situation is a farce and an embarrassment to science though. The motto should really be “adhere to the rigorous standards of science, or perish”.
There's a great list of examples halfway down the page. For example:
"A growing body of evidence..." (Where is the raw data for your review?)
"People are saying..." (Which people? How do they know?)
"It has been claimed that..." (By whom, where, when?)
"Critics claim..." (Which critics?)