
Science Vigilante Calls Out Bogus Results in Prestigious Journals - lelf
https://onezero.medium.com/this-science-vigilante-calls-out-bogus-results-in-prestigious-journals-eb5a414c7f76
======
Nasrudith
Vigilante is the wrong term for her work, on so many levels. For starters, it
implies illegality, due process that is slipshod at best, and the existence of
a proper authority being bypassed. Elisabeth Bik is essentially being a good
citizen - doing exactly what she should as a reader: calling out improprieties.

I wish her luck in her endeavor, and note that she demonstrates what journals
could actually do to stay relevant as a service now that true distribution
costs are negligible: vetting the data, and giving people a reason to pay for
data that has already been peer reviewed by others.

~~~
buzzkillington
She is doing peer review. There is nothing new about this. The
publish-or-perish culture is new, and it's eating away at the foundation that
our civilization is built on.

If you can't trust science to be accurate you can't trust engineering to work.
If you can't trust engineering to work planes fall out of the sky, ships sink
and buildings collapse. After that you're back to the 17th century.

We should have a lot less science, done better. The current number of people
graduating as scientists is completely unsustainable. The number should be cut
by a factor of 10 to 100 for the current funding.

~~~
eecc
I was agreeing 100% with this post until I hit the last two sentences.

WTF, are you out of your mind?

Are you suggesting we should allow _fewer_ people to graduate in order to fix
the problem? Sorry for the ad-hominem, but this is absolute folly.

If there’s too much competition for funding the solution is not to further
limit access to scientific profession but to increase funding for Chrissake!

The mere fact that there are so many graduates is living proof that there is
an enormous, yet wasted, potential for scientific research and discovery.

Get your logic in order please.

~~~
buzzkillington
>WTF, are you out of your mind?

I was doing a PhD in high energy experimental physics until I finally noticed
that the majority of my peers were not doing science; they were padding a
resume.

Today we could stop doing high energy physics, both theory and experiments,
for 50 years and we would not actually retard the progress of the field. We
lack the engineering to build experiments better than the ones we have for any
sane price and we are sitting around counting angels on pins. The brainpower
that we are wasting on un-rigorous mathematics could be much better spent in
any number of fields.

I dare you to read 100 papers in any field that you are familiar with and tell
me that 95 of the papers aren't there just because someone needed to publish
something. I've talked to people in astronomy, genetics, mechanical
engineering and computer science and they all say the same thing.

We don't need more science, we need better science. The simplest way to do
that is to do less of it. We won't even be hurting progress because you will
be able to trust it, unlike today where if you try and synthesize an
experiment from three papers you're pretty much guaranteed it won't work
because the papers are either fraudulent, p-hacked, flat out wrong or not even
wrong.
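The p-hacking failure mode mentioned here has a simple mechanical core: run
enough comparisons on pure noise and some will clear p < 0.05 by chance alone.
A minimal stdlib-only sketch (the z-test, group sizes, and trial count are
illustrative assumptions, not anyone's actual protocol):

```python
import math
import random

random.seed(42)  # deterministic, for illustration only

def norm_cdf(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z-test (Welch-style variance)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    z = (mx - my) / math.sqrt(vx / len(x) + vy / len(y))
    return 2.0 * (1.0 - norm_cdf(abs(z)))

# 1000 "experiments", each comparing two groups drawn from the SAME
# distribution, so every "significant" result is a false positive.
trials = 1000
false_positives = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(trials)
)
print(false_positives / trials)  # roughly 0.05: ~5% look "significant" by chance
```

Test enough hypotheses per paper and report only the winners, and the
published record fills with exactly these false positives.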

~~~
pauljurczak
> We don't need more science, we need better science.

Exactly. We have to increase signal to noise ratio. I'm trying to stay current
with progress in Computer Vision and the amount of junk being published is
overwhelming. I'm pretty sure it's the same in many other disciplines.

~~~
pacman128
Ex-prof here. The whole _publish or perish_ mentality is a big problem. How
many _good_ papers can a researcher produce in a year? Far fewer than they are
expected to in the academic world.

------
Pinegulf
She is brave. Exposing 'pretend science' gets you backlash. Just see
consequences from 'Sokal Squared':
[https://www.chronicle.com/article/Proceedings-Start-Against/...](https://www.chronicle.com/article/Proceedings-Start-Against/245431)

BTW, If you like what she is doing, you might like RealPeerReview
([https://twitter.com/realpeerreview](https://twitter.com/realpeerreview))

~~~
n4r9
Do you think Sokal-type pranks legitimately expose pretend science? It seems
to me the strongest conclusion you can draw is that some journals are
over-eager to publish studies with edgy titles or themes. This might bring
into question the average level of scholarship in a field but it doesn't
directly challenge whether a field is legitimate.

Moreover I don't think a journal with a title like "Gender, Place, and
Culture" would classify itself as scientific. Its website certainly doesn't do
so [0]. You can't expose pretend science if there's no pretension of science.

[0]
[https://www.tandfonline.com/action/journalInformation?show=a...](https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=cgpc20#.VeTSWpjDWRY)

~~~
Pinegulf
No. It's not to expose pretend science, but pretend science
publications/magazines.

I believe (No, I don't have evidence) that fields of 'X - studies' have people
performing research with rigorous practices. Yet, those holding to standards
are marred by nonsense publications from people who disregard standards.

Edit: This is enabled by magazines printing without... you know, Review.

------
nyxxie
Academia is almost entirely based on trust and reputation, which, we are
discovering, is not a useful heuristic if your end goal is a net gain in
uncovering the truth of the phenomena around us. If you ask me, credibility
should be based on reproduction of results rather than reputation of the
author, name of the sponsoring institution, journal title, or a vague "peer
reviewed" badge. New papers should be by-default untrusted until several
reproduction attempts have been successfully executed. This would incentivize
authors and scientific institutions to produce science of quantifiable
quality.

~~~
Retric
Individually papers are generally not trusted which is why literature reviews
are a thing.
[https://en.m.wikipedia.org/wiki/Literature_review](https://en.m.wikipedia.org/wiki/Literature_review)

The issue is not on the science side, but how results are communicated to the
general public. Administrators tend to add as much hype as possible, and
reporters strip out all the important details.

~~~
i000
Is this assessment (reviews over papers) based on your own experience?

As a scientist I would say quite the opposite is true: reviews are sloppy with
citations, have to be written in a positive, optimistic tone per editorial
guidelines, and often overstate the claims of the cited articles.

~~~
Retric
I am not suggesting they are more, or even necessarily as, accurate as
individual papers. Rather, they demonstrate the untrusted nature of individual
papers.

Personally, I often find them a useful starting point on a topic. At best they
capture the field at a moment in time; at worst they're nearly useless.
However, that's just me, not everyone in every field.

------
rozab
Her powers of pattern recognition must be extraordinary. I know that enhanced
pattern recognition has been associated with savant syndrome [0]; I wonder if
her abilities are worthy of scientific study? Or maybe it just seems
incredible to me because I'm particularly bad at this task.

Also, she has a Patreon [1] which currently only brings in $40 a month. Just
thought I'd bring that to people's attention.

[0]
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2677591/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2677591/)

[1]
[https://www.patreon.com/elisabethbik](https://www.patreon.com/elisabethbik)

------
dekhn
One of the reasons I'm no longer a scientist: as a postdoc I was called on to
read a paper about the functional impact of deleting genes in yeast, take
their published data tables, and make sense of their results. The paper was
fairly straightforward to read (a rarity) and the data was easy to access. I
wrote a bunch of scripts to parse the data and ask questions (this was ~20
years ago).

The paper described experiments where the scientists created 6000 yeast clones
(yeast has 6000 genes), where each clone contained "a deletion of a single
gene" and they showed which genes, when deleted, had fatal effect (IE, the
yeast could not grow as a viable organism on yeast medium). The data table was
a list of genes that had not been previously known to be "required", and when
deleted, would cause the organism to not grow (IE, it died, so you could
assume the gene's protein product was required for life).

I was asked to make all sorts of tables showing that the genes that got
removed belonged to certain classes, or at least had enriched population of
certain classes.

Since I've always been interested in overlapping genes, I instead spent time
scripting some range intersection queries on the data tables (IE, assuming all
genes are intervals on a line, find the genes whose intervals intersect). What
I found was, for each gene they reported as a novel important functional gene,
it intersected an already known important functional gene.
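The range-intersection query described above fits in a few lines. The gene
names and coordinates below are invented for illustration; the real analysis
ran over the paper's published data tables:

```python
def intervals_intersect(a, b):
    """True if two half-open intervals (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def find_overlapping_genes(novel, known):
    """For each 'novel essential' gene, list the known essential genes it overlaps."""
    hits = {}
    for name, span in novel.items():
        overlaps = [k for k, kspan in known.items()
                    if intervals_intersect(span, kspan)]
        if overlaps:
            hits[name] = overlaps
    return hits

# Hypothetical coordinates (start, end) on one chromosome.
novel_hits = {"YFG1": (1000, 2200), "YFG2": (5000, 6100)}
known_essential = {"ESS1": (2100, 3500), "ESS2": (8000, 9000)}

print(find_overlapping_genes(novel_hits, known_essential))
# {'YFG1': ['ESS1']} - deleting YFG1 also truncates ESS1, so YFG1's
# "essential" phenotype may be a false positive.
```

Any "novel essential" gene that overlaps an already known essential gene is
suspect for exactly the reason in the next paragraph.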

So, the most likely conclusion is that their table was a bunch of false
positives: they deleted gene X but also truncated gene Y, where Y was already
known to cause loss of organismal viability when truncated.

I wrote a nice letter to the authors explaining what I had found and never
heard back (I also dropped that project, explaining to my PI that it was
unlikely the data was of high quality). A year or two later, the authors
published a new paper explaining how gene overlaps had important functional
significance in yeast...

If I were rich, I'd fund an institute, but instead of trying to discover new
things, I'd find a bunch of data scientists who went around looking for low-
quality papers and properly letting the community know (while publishing all
work so that people can verify for themselves the quality of the paper).

~~~
mar77i
This is a great idea, if only to temper how much faith we place in published
science and research, given all the reproducibility crises around.

There has got to be a market for this kind of honest work, to make the
existing world of published data a better place.

------
KuhlMensch
> Within her manual search of more than 20,000 pieces of biomedical research,
> 4% contained manipulated images.

Shocking. And even if she reports the lies, they are not likely to be
actioned:

> Despite Bik’s work finding these manipulations, she estimates that only 30%
> of those papers have been corrected or retracted

And she does it off her own back in more ways than one:

> Living off savings, Bik reckons she has about a year’s leeway to work on her
> image manipulation sleuthing and hopes to find a way of monetizing her
> expertise as a science misconduct consultant to journals.

and:

> she writes and reports under her own name. In some ways, then, Bik is a
> surrogate for so many. She posts and reports publicly, often after being
> tipped off to papers from those who aren’t able to do it themselves

~~~
teekert
It's shocking, but imagine you are a PhD student: you need 4 papers out in 4
years, but you are operating in a field where your results are only accepted
when they are positive and new (sometimes even only when they confirm a
certain theory!). It feels really unfair, as whether you will get a PhD or not
is essentially a probabilistic process; it depends only to a certain degree on
how hard you work and how smart you are. This needs to change.

There are moments when you feel like giving up, for example when you receive a
DNA construct that has supposedly been proven to work in some Science paper.
You set up your experiment, and for 1.5 years you wonder why it does not work
for you. You can only conclude that the Science paper didn't in fact produce a
working construct, and you start to see that the images in said paper may
indeed be coincidental rather than the result of a working protein. And it
sets you back 1.5 years. Or you write around it, creatively, as you explore
the borders of your own ethics.

~~~
arkh
> It's shocking but imagine you are a PhD student, you need 4 papers out in 4
> years but you are operating in a field where your results are only accepted
> when they are positive and new.

Get the fuck out of this field. Those stupid things keep on being only because
some people are willing to be abused. When a lot less people enroll for those
programs maybe some change will happen. Or maybe those students could start
rioting.

~~~
teekert
So you would be the brave one, the one who will tell his professor: F U, I
won't stay 3 more years on this slave wage; you're going to have to grant me a
PhD based on my hard work, which amounted to a set of results that you deem
unpublishable.

Good luck.

------
RcouF1uZ4gsC
This is one of the strongest arguments why scientific publications should be
open. The more people who have access to the science papers and have the
opportunity to find issues, the stronger our scientific foundation will be.

~~~
Enginerrrd
If you look at arXiv, though, the result has been the proliferation of a ton
of really low-quality, garbage papers. I wonder if you could crowdsource the
review with voting and discussion, Stack Exchange style. That might actually
be a good solution, particularly if you have to get vetted as knowledgeable
and constructive in order to vote.

~~~
ac29
Open access doesn't necessarily imply free and not peer reviewed like arXiv.
There are plenty of good, peer-reviewed journals that are open access - they
are certainly not free for authors, though ($1-2k in fees when I was
publishing several years ago).

~~~
oefrha
There are also peer-reviewed “arXiv overlay journals” that are very cheap to
run and consequently cheap for authors. For instance, Timothy Gowers (Fields
medalist, prominent figure in the movement against big publishers like
Elsevier) announced[1] another one of these last week:

> I am excited by the business model of the journal, which is that its very
> small running costs (like Discrete Analysis, it uses the Scholastica
> platform, which charges $10 per submission, as well as a fixed annual charge
> of $250, and there are a few other costs such as archiving articles with
> CLOCKSS and having DOIs)...

[1] [https://gowers.wordpress.com/2019/10/30/advances-in-combinat...](https://gowers.wordpress.com/2019/10/30/advances-in-combinatorics-fully-launched/)

------
bayesian_horse
She should get funding from someone like the Gates Foundation. After all, she
could save them money and time.

------
mindfulplay
It's appalling that even proper science research can be fraudulent.

If only she would also focus on the much shadier and shittier social
"sciences" papers that media outlets and ordinary people seem to take at face
value.

------
Gatsky
This kind of thing gets too much hype. I mean, these people are just sloppy,
and catching them out is just a matter of how much effort you want to put in.
Generally in any field you get to know quite quickly who isn't trustworthy,
and largely ignore their papers. If only journal editors had the same
information. A bigger problem is papers which seem important but actually
reveal nothing due to statistical errors, mistakes you can't ever find without
getting the data and redoing the analysis, irrelevant or contrived model
systems, inadequate controls, inscrutable analytical methods, etc.

~~~
jononor
What about people who are not 'in the field' (presumably several years of work
there), how do they know who to trust and not?

~~~
Gatsky
Don't get me wrong, ideally every paper would be trustworthy, but working
scientists are well aware of the issues, and aren't waiting for a posse of
armchair crusaders to save them.

If you aren't in the field, then it's difficult, although perhaps
inconsequential.

------
bjornsing
Looking at images of crystallized materials and interpreting all similarity
between small regions as proof of scientific fraud is a bit dangerous
though... If I had to guess I’d say it’s more likely that that electron
microscope image in Bik’s embedded tweet is genuine than not.

This whole situation is a farce and an embarrassment to science though. The
motto should really be “adhere to the rigorous standards of science, or
perish”.

~~~
9nGQluzmnq3M
Take a look at the full-size image: there's plenty of clear clone tool action
going on in there. The red blob is particularly damning.

[https://twitter.com/MicrobiomDigest/status/11744616805808824...](https://twitter.com/MicrobiomDigest/status/1174461680580882432/photo/1)

------
ipunchghosts
This can't be a serious article. This woman worked for uBiome, and they
recently had to shut down because they were being investigated for fraud by
the FBI!

------
Trias11
AI will make her work so much harder soon!

~~~
jonsen
Or it will do the work for her.

~~~
comex
That's what I was thinking. Manually reviewing _20,000_ papers is simply an
astonishing feat. Heck, even searching a single paper for _anything_ that
looks like _anything_ else in the same paper (possibly rotated, scaled, or
alpha blended) might be beyond me. But it's the kind of task a computer vision
algorithm should be able to do very well at. In fact, I Googled it and found
(what else) some papers about it, e.g.:

[https://arxiv.org/abs/1407.6879](https://arxiv.org/abs/1407.6879)
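The naive core of such a detector - fingerprint every small patch and flag any
that occur in more than one place - fits in a few lines. The grid, patch size,
and pixel values below are toy assumptions; real tools like the one in the
linked paper also have to handle rotation, scaling, and noise:

```python
def duplicated_patches(img, k=2):
    """Return pairs of positions whose k x k patches are pixel-identical.

    img is a 2D grid (list of lists) standing in for a grayscale image.
    """
    seen = {}
    dupes = []
    h, w = len(img), len(img[0])
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            # Hashable fingerprint of the k x k patch at (r, c).
            patch = tuple(tuple(img[r + i][c:c + k]) for i in range(k))
            if patch in seen:
                dupes.append((seen[patch], (r, c)))
            else:
                seen[patch] = (r, c)
    return dupes

# A toy 4x6 "image" where the 2x2 block at (0, 0) is cloned at (2, 4):
img = [
    [ 9,  9,  1,  2,  3,  4],
    [ 9,  9,  5,  6,  7,  8],
    [10, 11, 12, 13,  9,  9],
    [14, 15, 16, 17,  9,  9],
]
print(duplicated_patches(img))  # [((0, 0), (2, 4))] - the cloned region
```

Exact-match hashing like this is brittle against even one changed pixel, which
is why the published approaches use more tolerant descriptors.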

------
Railsify
I think of scenarios like this as her discovering that every time someone uses
a line like "studies show", the only thing the study shows is that the author
wants a bump in clout.

~~~
symplee
[https://en.wikipedia.org/wiki/Weasel_word](https://en.wikipedia.org/wiki/Weasel_word)

There's a great list of examples halfway down the page. For example:

      "A growing body of evidence..." (Where is the raw data for your review?)
      "People are saying..." (Which people? How do they know?)
      "It has been claimed that..." (By whom, where, when?)
      "Critics claim..." (Which critics?)
      etc.

