Yeah, it opens with snark. But then the article goes through the arguments and figures in the paper and refutes the hypothesis, so it's not empty snark.

Publishing a paper in a fake journal does damage the credibility of the claims. Papers with sufficient evidence for their claims can get published in journals with strict peer review policies. By choosing such a crap journal, the authors are essentially admitting that Nature or Science would never accept their arguments.




The only objective criticisms I saw in the article were about the SEMs not being to scale and the preservation state of the "organisms". Other than that, it seems like it was snark at the beginning, snark in the middle, and snark at the end.


He directly addresses their key evidence: the pictures of the supposed organisms. He interprets the images as inorganic and calls the paper's claims pareidolia. If outside observers look at the same data and don't see what you're arguing for, you're not making your case successfully.


"Papers with sufficient evidence for their claims can get published in journals with strict peer review policies."

Do you have any evidence that this is actually true?


The scientific method is based on the principle that peer review winnows out false hypotheses. If n observers each detect a falsehood with probability p then the probability of missing the falsehood is (1-p)^n. More reviewers and more accurate individual reviewers decrease the rate of false hypotheses being published. There are many flaws in the peer review system as it exists today. That said, journals with more peer review will publish fewer falsehoods than journals with less peer review.
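
To put rough numbers on it: with, say, p = 0.5 and n = 3 independent reviewers, a falsehood slips past all of them with probability (1 - 0.5)^3 = 0.125, about one time in eight; at n = 10 that drops to roughly 0.001.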


Again, do you have any actual evidence that what you are saying is true?


Are you asking for a study? I'm not exactly sure what you're trying to say.


"Are you asking for a study? I'm not exactly sure what you're trying to say."

Yes. Every few months I see a new study about how academic research grants are handed out more or less at random, how journals are heavily biased toward positive results, how journals are biased toward authors with previous citations, etc.

All this seriously challenges the claim that if you have sufficient evidence you will get published by a top journal.

Similarly, is there any evidence for the claim that "If n observers each detect a falsehood with probability p then the probability of missing the falsehood is (1-p)^n"?

Go ask Kitty Genovese about the validity of this formula.


The question is not "are journals biased?" The question is, are more respected journals less likely to publish a falsehood? If the "better" journals have more accurate reviewers or more reviewers then yeah, they will publish fewer falsehoods.

Re: the formula (1-p)^n. It's http://en.wikipedia.org/wiki/Geometric_distribution . If the observers are independent then the formula holds exactly. If they're not, it holds only approximately, with the error growing as the observers become more dependent.
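
A quick Monte Carlo sketch (hypothetical Python, just to illustrate the two regimes): independent reviewers miss a falsehood at roughly (1-p)^n, while fully correlated reviewers miss it at (1-p) no matter how many of them there are.

    import random

    def miss_rate(n, p, trials=100_000, correlated=False):
        # Estimate how often a falsehood slips past every reviewer.
        misses = 0
        for _ in range(trials):
            if correlated:
                # Fully correlated reviewers share a single coin flip.
                caught = random.random() < p
            else:
                # Independent reviewers each flip their own coin.
                caught = any(random.random() < p for _ in range(n))
            if not caught:
                misses += 1
        return misses / trials

    print(miss_rate(3, 0.5))                   # ~0.125 = (1 - 0.5)**3
    print(miss_rate(3, 0.5, correlated=True))  # ~0.5   = 1 - 0.5

Real review panels sit somewhere between those two extremes, which is where the "approximately" comes in.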


"The question is not "are journals biased?" The question is, are more respected journals less likely to publish a falsehood?"

What you said originally was that if the author's evidence was sufficient then he would have been able to get it published in a better journal. But if you're admitting that journals are biased, then surely you must see that your original assertion doesn't always (or maybe even often) hold. Maybe it's true, but I'm certainly not willing to accept it on faith.

"If the "better" journals have more accurate reviewers or more reviewers then yeah, they will publish fewer falsehoods."

Your definition of 'better' journals sounds like a tautology. How do you know that 'better' journals have better reviewers? Can you actually prove that having better reviewers is what makes 'better' journals better?

I'd certainly be willing to be persuaded by evidence, but right now I believe that the best journals are the ones with the best vetting process about as much as I believe the best vodka is the one that costs the most, the world's best author is the one with the most sales, etc.


The Kitty Genovese killing isn't a good example of the bystander effect, as the police were contacted at least once during the attack and most people who heard it could not see it happening.

http://www.psych.lancs.ac.uk/people/uploads/MarkLevine200706...

http://dx.doi.org/10.1037%2F0003-066X.62.6.555

From the abstract:

> Using archive material we show that there is no evidence for the presence of 38 witnesses, or that witnesses observed the murder, or that witnesses remained inactive.

(Further, why mention the bystander effect at all? It's a little confusing.)



