I did some checking of a lecture by an expert in virology, and I found an error. There were also a few facts I couldn't confirm were supported by other sources.

It wasn't anything particularly intensive, just making sure the other sources said the same thing. But it was still a lot of work.

For most people, learning is a lot of work in and of itself.




The whole point of peer-reviewed journals is to be a noise filter.

The scientific community purportedly polices itself.


I think this is kind of the tip of the iceberg, and a general reflection on biomedical research and academia. Not that this is how most researchers act, but in the sense that it's not unusual for the field as a whole to encounter this. I mean, if this happens at The Lancet and with the WHO, on something with significant worldwide consequences, what do people think is happening with work that gets less scrutiny?

It also puts a twist on all the calls for rigorous peer-reviewed studies, as if the preprint flood were just an example of why we need traditional academic structures. Here's a case where we relied on the traditional academic structure and it failed miserably, even with prestigious institutions involved.

The irony is that this was caught, but outside the confines of traditional peer review. That is, there was peer review, just by the general community of scientists. So which is better? It seems like open publication and transparency are key, more so than the traditional structures per se.


Peer review is like a spam filter. But just because something gets past your spam filter does not mean you should assume it's accurate, particularly when malicious and fraudulent behavior is involved.

What appears to have happened here is that a rapid peer review gave the benefit of the doubt to a purported data source, and the community then quickly discovered that the data source was not behaving honestly. All of this occurred within a few weeks, during a time when peer review is almost certainly moving even faster than normal due to the urgency of the pandemic.


The errors are indeed found eventually, as they were this time. The expectation that anything can be completely "error free" from the start is simply false. Compare with

https://news.ycombinator.com/item?id=23412482

Look at all the infrastructure involved there, all the automated tests that exist in the originating project, and still it took 1) an independent fuzzing test by the company with the most computers in the world, and 2) the work of the leader of another project, to bring media attention to the existence of the problem.

Nothing is ever perfect on the first try.

In science specifically, intentional fraud isn't the starting assumption of those who do the reviewing; that's why frauds manage to slip past checks that otherwise work most of the time.

"Falsehood flies, and truth comes limping after it." Johnatan Swift (1667-1745)



