A bot crawled thousands of studies looking for simple math errors (vox.com)
421 points by MollyR on Oct 5, 2016 | 207 comments



> and whether online critiques of past work constitute “bullying” or shaming. The PubPeer comments are just a tiny part of that debate. But it can hit a nerve.

> Susan Fiske, a former president of the Association for Psychological Science, alluded to Statcheck in an interview with Business Insider, calling it a “gotcha algorithm.”

> The draft of the article said these online critics were engaging in “methodological terrorism.”

If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science. Treating the verification step as offensive or taboo reflects a fundamental misunderstanding of how knowledge is created through the scientific process. That anyone in the field of psychology would even be comfortable publicly espousing a non-scientific worldview like that means that psychologists are not being properly educated in the scientific method and should not be in the business of producing research, since they do not have a mature understanding of what "scientific" implies.


I took a class with Susan Fiske in grad school, and the following is a quote from one of her comments on a homework assignment where I had called into question a paper that was not just statistically incorrect but was based on urban-legend-level assumptions:

"These results are from respected investigators, so it's inappropriate to question them."


Haha this reminds me of the feedback I got on a report in Comparative Religion, when I questioned some French dude's really elaborate and fanciful interpretation of the paintings at Lascaux. "Ummm... what supporting texts do we have from 17,000 years ago?" "No, you can't ask that sort of question and it's really disrespectful."

FWIW, I wasn't really upset by that feedback. Senior spring; I don't care!


It immediately reminded me of "Emperor Kennedy Legend: A New Anthropological Debate" by Leszek Kołakowski : ) Highly recommended and very funny, since there's much truth to it, I think. I found it online in English: http://jennis22.weebly.com/uploads/3/0/5/3/30531242/emperor_...


I've seen churches with more skepticism.


While I trust your story, and I believe it's somewhat taboo on HN to ask for this, I feel that it would be much more valuable if evidence were provided. That is an absolutely egregious act for a professor to commit, and I feel like it should be reported to someone if substantiated.


Oh, I made a huge stink about it (and this was just one example of many comments of that type). This was at Penn in 2007.


Not to question you, but I can't find any evidence that Fiske was ever teaching at Penn. Her CV [1] is quite exhaustive, but only lists giving colloquia at Penn.

[1] http://static1.1.sqspcdn.com/static/f/1605966/27061956/14652...


She was one of the people giving the Social Psychology prosem in Fall 2005, because John Sabini (who normally gave it) died:

> September 21: The Self (Prentice)
> September 28: Social Cognition (Fiske)
> October 5: Attitudes (Cooper)
> October 19: Social Inference (Pronin)
> November 2: Prejudice & Stigma (Shelton)
> November 16: Dissonance (Cooper)
> November 30: Close Relationships (Fiske)
> December 7: Social Influence (Prentice)


Thanks.


> Not to question you

It's quite okay. That's exactly the kind of attitude that's being scrutinized here ;)


An upvote for outing her for clearly bad science and bad teaching, if she actually did this. There are few things that offend me; a teacher damaging impressionable students like this is one of them.


I think she mostly damaged herself there, the student seems to have come out all right (but there was a risk they wouldn't have).


Is that verbatim?


Yes.


Please photograph and publish it. (Pics or it didn't happen?) This debate is important and shouldn't be muddied with hearsay on all sides.

Thanks.


dmd is a respected HNer, so it's inappropriate to question him.


It is inappropriate to use that fact to dismiss a valid request for evidence. Being famous on some (possibly popular) website does not mean that one's experience has not been biased.


Go five levels up from your comment and you might find a joke you're missing—unless you're in on it too and it's sailing right over my head


Oh-kay, now I understand what happened (the fifth level is invisible from the parent comment, ugh). Thank you for pointing out where I screwed up :S


Out of curiosity, do you remember what paper it was?


I'm not entirely certain but I think it was

The modern synthesis of contrasting styles of thought in “east” and “west”: Nisbett, R. E., & Masuda, T. (2003). Culture and point of view. Proceedings of the National Academy of Sciences, 100, 11163-11170.


Interesting. As a physicist, I tend to put extra trust in PNAS credibility due to their history of critical editors and reviewers, and therefore am less likely to question results published there. Maybe it's the same fallacy, but at the same time we can't all be experts at everything and have to defer to trust at some point. Anyway, thanks for the contextual tidbit.


PNAS is notorious for inconsistent quality, and has recently had to modify their direct submission process, partly as a result. I don't know about physics, but I've seen them publish a lot of very poor work.


Yes! Similarly, I remember a few years back being shocked at the attitude toward the replication crisis, where authors of irreproducible work were miffed that no one talked to them first.

I was like, "Um, the whole point of science is that it's knowledge in the public record. If you need to 'chill' with the original researcher to get critical details, that's not science anymore."

Remember, the studies might form the basis of an FDA approval. If only the original lab can produce the result, then is the FDA approval of the treatment going to be limited to only that lab as well? Kinda limits scalability...

I did a search and found the blog post that quoted the researcher [1] with this scary attitude:

>When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

[1] http://andrewgelman.com/2013/12/17/replication-backlash/ (thanks Google)


> If you need to 'chill' with the original researcher to get critical details, that's not science anymore.

Certainly there is always the possibility that some details were misunderstood, that something needs to be clarified, that there was a printing error, etc. Your "that's not science anymore" statement seems highly exaggerated. People are not supposed to communicate only via papers.


Sure, it's inevitable that, in some cases, eventually someone will have to go back to check notes because some factor mattered when no one thought to write it down.

But there's a difference between that and "we expect that someone will have to make personal contact with the original researchers in order to replicate it".

If you're explaining away replication failures by such non-contact (as the quote is), that's confirmation of a problem (in keeping with the standards of science), not a vindication of the results.

There's an additional danger of making it so that "you can't replicate until you have social contact with the original researchers". That way lies favoritism: it's harder to criticize someone as you get closer to them socially, and they can withhold the capability of criticism by not engaging the critics.


Here's an example from my field, which I think is informative because it gives a concrete idea of how this plays out in a public arena.

tl;dr for this wall of text: 1) Authors A describe an algorithm; 2) Author B publishes counter-examples showing where #1 fails; 3) Authors A say it wasn't wrong, but that the author of #2 'misunderstood' and should have contacted them first, and in any case here are the missing details; 4) Author B points out that paper #1 should have said those details were missing; 5) Authors C point out that Authors A misunderstood many things in their own publications, and that Authors A can't complain about others not contacting them first when they don't do it themselves.

"Canonical Numbering and Constitutional Symmetry" (1977), DOI: 10.1021/ci60010a014 describes an algorithm.

"Erroneous Claims Concerning the Perception of Topological Symmetry" (1978), DOI: 10.1021/ci60014a015 points out examples where the algorithm from the first paper, and from another paper, don't work.

The authors of the first paper followup with "On the Misinterpretation of Our Algorithm for the Perception of Constitutional Symmetry" (1979), DOI: 10.1021/ci60017a012 .

> A recent paper in this journal contained critical comments on two methods for the perception of topological symmetry. Carhart’s claim that our algorithm does not correctly perceive topological symmetry and fails with certain structures is the result of a misinterpretation of our algorithm.

> Unfortunately, the author did not contact us directly to help him clarify his misunderstanding. This failure is unusual and difficult to understand. Thus, it was not until we received the recent issue of this journal that we learned of this misinterpretation.

> In our paper we were particularly aiming at catching the interest of the organic chemist for the problems of uniquely numbering the atoms of a molecule. Therefore, we put particular emphasis on the criteria for determining priorities among atoms to enable the chemist to manually number the atoms of molecules according to our procedure. We restrained from giving all small details of the algorithm to keep the paper concise, working under the assumption that persons interested in the details would contact us directly. It is astonishing that Carhart at the point where we did not fully elaborate on the details works with the premise that we misconceived the problem. Initially one should rather assume that other people, too, understand a problem. Only if explicit errors are found should one digress from this conviction.

Carhart followed up with a letter to the editor, "Perception of Topological Symmetry" (1979) DOI: 10.1021/ci60017a600 :

> I am delighted to see that my critique appearing in this Journal has encouraged C. Jochum and J. Gasteiger to present previously unreported steps in their algorithm for the canonical numbering of chemical graphs. They refer to these steps as “small details”, but in fact they are the very essence of any routine which reliably finds unique numberings for, ...

> However, I did not misunderstand their previous article (unless lack of clairvoyance can be classed as misunderstanding); I simply took it at face value. My critical comments, and the counterexamples I presented, were completely appropriate in the context of that article. In contrast with their latest offering, Jochum and Gasteiger’s previous paper did not present a sound and accurate definition of constitutional symmetry, nor did it indicate in any way that crucial steps had been omitted. I am sympathetic with the problems of describing a complex algorithm in the limited space of a journal article, but if space limits the development of a fundamental concept, it is the responsibility of the author to say so, and to indicate that a reader must obtain additional information before he tries to implement the described procedure.

It ended with a letter from still other people writing another letter to the editor, "Canonical Numbering" (1979), DOI: 10.1021/ci60019a600 :

> We have been following with some interest the controversy appearing in this Journal regarding canonical numbering and various types of ... The first article by Jochum and Gasteiger contains a number of incorrect and misleading statements about both their work and the work of those who preceded them. ...

> Jochum and Gasteiger also strongly implied that they had a “simple” algorithm which gave complete partitioning, eliminating the need for a comparison step. Carhart correctly pointed out that this was not the case. Subsequent publication of the details of Jochum and Gasteiger’s [algorithm] indicated that it does contain a comparison step ...

> On a more general level Jochum and Gasteiger complain that Carhart did not contact them “directly to help him clarify his misunderstanding”. Yet it is obvious from the large number of misinterpretations and/or misrepresentations which appear in their work that they made no attempt to clarify their misunderstandings by discussing such matters with the original authors. Publishing last on a particular subject accords one considerable power, power that carries with it the responsibility to treat the preceding work with fairness and objectivity.


>People are not supposed to communicate only via papers.

A paper and its supplementary materials are supposed to be enough to reproduce the experiment. In practice, this often fails, but that is a fault in the scientific process. Science isn't just about empirical knowledge, it's about public and redundant empirical knowledge, as opposed to losing important knowledge of the natural world when the original investigator gets hit by a bus.


Wouldn't those problems in the scientific process get corrected more easily if you contacted the original author to see if there are any details that were missed, and then published those details with your results, instead of just publishing a paper that says "Nope, couldn't reproduce"?


No, you publish the results that you cannot reproduce.

Then maybe the next generation of researchers documents their work better.

Or maybe the original researcher publishes a v2 edition of their paper.


> People are not supposed to communicate only via papers.

That we use written communication that persists through generations is the basis of science and society in general. If we cannot communicate sufficiently via papers, we're in a world of trouble.


I used to hold this opinion, but my experience with academic research changed my mind. Much of the scientific knowledge we have is passed from generation to generation by mentoring. The amount of knowledge is so vast, and our means of searching the written literature for relevant facts so poor, that when I want to learn something or solve a specific problem there is no substitute for a discussion with an expert in the field.

The core problem is that human communication is very difficult. It becomes even more difficult when we try to communicate ideas without interaction, as we do when writing a book and expect someone to read and understand it. If I read a paper and I can't understand a sentence, it might take me days to figure out what's going on by myself, whereas asking an expert might yield an answer in less than an hour (sometimes minutes). The difference is really orders of magnitude.

There are whole fields that have effectively died because no one works on them any more. That knowledge doesn't live in anyone's mind. All the literature is there, but actually acquiring that knowledge by reading the literature is incredibly challenging and time consuming.

I have come to believe that the main purpose of hiring scientists in academia is to keep knowledge alive and have it passed on to future generations. Advancing research is of secondary importance. In fact I would say that most new research I see probably has no intrinsic value. I include my own research in this category. We have researchers solving esoteric problems of no value to anyone besides their own personal entertainment. Except, working on such research keeps our neurons firing and keeps knowledge alive. It is a well known phenomenon that taking a break from research very quickly leads to a sort of decay of memory. Our learned ideas and the connections between them wither away without constant reinforcement. In order to keep knowledge alive we have to engage in research, even if it seems pointless.


>I have come to believe that the main purpose of hiring scientists in academia is to keep knowledge alive and have it passed on to future generations.

Then these scientists should be devoted to producing textbooks and courses which can then be taught to non-research students. Yes, all knowledge beyond the scale of what a single individual knows (and keeps on their shelves, hard drives, etc.) is embodied in communities and traditions, but we still get far greater redundancy of that knowledge from teaching it as undergraduate or master's-level coursework than from passing it down only via research mentoring.

If 25% of the population gets an undergraduate degree, 11% or so gets a postgraduate degree, and only about 1.7% get a PhD, then we need to be embodying society's knowledge among the larger cohorts for that knowledge to survive. We can't afford to live in a world where only 1.7% know how things work.


> Then these scientists should be devoted to producing textbooks and courses ...

Textbooks and courses exist for everything but the most cutting-edge stuff (which is still in flux anyway), but they are a very inefficient way of transferring knowledge. I would say they are practically useless without expert guidance. At the most basic level, there are so many of them that an expert has to tell you which ones are both good and relevant to what you want to learn. I once saw a student waste months of his life studying a book he thought was relevant, only to discover that the book wasn't building towards the sort of knowledge he needed in that subject. The book was about the correct subject, but was focused on somewhat different aspects than the ones he was interested in. There was no way for him to know this in advance without guidance.

So we don't know how to organize existing books. Also, even the books that exist are usually pretty bad at conveying knowledge. Or perhaps humans are just pretty bad at learning things from books. Either way, no one knows how to write textbooks and courses that are much better than what we have today. I really don't know of a better way to preserve knowledge than the current one. Perhaps technology can improve the situation by making access to knowledge more interactive. But I suspect this would require a real breakthrough.

> We can't afford to live in a world where only 1.7% know how things work.

Why not?


I have a concrete counterexample. Let's say I write a paper presenting a model, plus some numerical results of large simulations. The code is based on gluing together various pieces of open source code. All these codes are typical scientist codes that are held together with duct tape. My paper is short, but I spent a lot of effort munging things together, and I'm fairly certain nobody can reproduce my results without my source code (preferably the whole environment) unless they spend a lot of time on trial and error like I did.

The tweaks I did to glue things together have no theoretical value and don't belong in the paper. As a practical matter, I can't fit a lot of source code into a short paper format.

What do?


Open source the code and supporting data.


It's not that simple. What if some of it is proprietary? What if I'm not allowed to submit code because I need to be anonymous so reviewers can maintain impartiality? What happens when one of the upstreams update and breaks my code? Do I need to keep it updated? Forever?


At my institute at least, scientists are required to maintain everything that is necessary to reproduce a result for at least ten years. That includes all the data and the software used to produce the results. It's not an easy job, but it's important.


If your institute also mandates that they make the data/software publicly available, then that's definitely the exception rather than the rule. It also must be hideously expensive.

It almost never happens that a paper I read actually comes with usable source code.


Then your results are not reproducible and your conclusion is suspect.


A lot of thought has gone into such questions. For example, see the guidelines at https://www.epsrc.ac.uk/about/standards/researchdata/


> Let's say I write a paper presenting a model, plus some numerical results of large simulations. The code is based on gluing together various pieces of open source code. All these codes are typical scientist codes that are held together with duct tape. My paper is short, but I spent a lot of effort munging things together, and I'm fairly certain nobody can reproduce my results without my source code (preferably the whole environment) unless they spend a lot of time on trial and error like I did.

Then leave out the results since they are just an anecdote. If you want to include experimental results then it has to be done in a scientific fashion.


The usefulness of a paper that doesn't stand on its own is rather limited, though.


Should all papers start with a dictionary? No, clearly not, so there's always going to be some assumed knowledge - words change their meaning and mean different things to different people, so we're already on to a loser just with the medium we're using.

So, the possibility of things like, say, a researcher not mentioning something that is standard practice in their lab that later is found to be a crucial part of the setup for an experiment seems high. But just like you don't want to provide a dictionary of standard terms with a paper you don't want to provide a list of the chemicals used to mop the floor, or a list of the lumen and colour temperature ratings of lights in the fume cupboards, or ...

IMO if a paper is not reproducible then yes, the failure to reproduce should be published, but the original team should also be challenged to reproduce the results. It's not a fight, we're all on the same team - work with them and try to find the reason for the lack of reproducibility.


> So, the possibility of things like, say, a researcher not mentioning something that is standard practice in their lab

I'd suggest a different formulation: "standard practice in their field"

Standard practice in general cooking? That's ok. Standard practice in my kitchen? That's a problem.

The research is, IMO, like a recipe that a knowledgeable chef should be able to reproduce.

Though it is understandable why one would forget to mention something. Especially if they thought it was general practice to do something their way.


Maybe a paper does stand on its own, with the large list of citations at the end. But, maybe some of those citations are journals that your institution doesn't subscribe to, or are historical and in another language, or et cetera.

There is a page limit to publications in high impact journals, and generally it's not great practice to utilize the limited space on the details of hurdles overcome.

I would argue that some of the most important papers in science don't really stand on their own... they need context and expertise that the paper can't and shouldn't cover.


You left out this part:

> The gist of her article was that she feared too much of the criticism of past work had been ceded to social media, and that the criticism there is taking on an uncivil tone.

So it seems to me her complaint was not that research shouldn't be criticized. It was more that research should be criticized well, and that social media is a poor place for it, because it favors personal attacks over thoughtful criticism. (I'm not saying I agree with her. Just trying to understand her argument in the best light.)


Personal attacks in academia aren't really unheard of.


neither are bullshit papers/editorials/letters-to-the-editor

http://frog.gatech.edu/Pubs/How-to-Publish-a-Scientific-Comm...

"Eminent" specialists often argue in favor of eminence (journals, tenured investigators, blah blah) over evidence (tossing an analysis up on arXiv and posting a link to it). Young turks or iconoclasts often do the reverse, since there is a power structure in place via editors, reviewers, study sections, and the like. Orthodoxy often sticks even when the evidence supporting it is quite skimpy (or absent).

Funny thing is, when you pull the underlying data (in fields where this is possible), you routinely find that the reported conclusions are overblown. Not necessarily wrong, but routinely sold as more conclusive than they are (and sometimes they are in fact wrong, whether due to small sample sizes, "outlier removal", or outright fraud).

In no way, shape, or form are these bad habits limited to psychology. They happen in basic biology, they happen in medicine, they happen in clinical trial reporting. It's faster and easier to oversell shitty science than it is to do good, thorough science, so with the tight competition for grant funding, you can imagine what happens next.

Trust whoever you like, but verify results from everyone. There are good people out there who are fastidious; there are good people out there who are sloppy; and there are people who don't care one bit whether they're publishing absolute horse shit. The onus is on you, the reader or researcher, to do the requisite critical thinking (and, perhaps, a few analyses before you waste time running down a dead end).

Keep an open mind, but don't let your brain fall out. Also, many good scientists aren't very nice people, and some very pleasant people in science are shitty scientists. It's very hard to tell a priori which is which, so look at the evidence and decide for yourself.


You are entirely correct (p < 0.05). To be fair, psychology is a broad church. Within it, we have people like Jacob Cohen, Daniel Kahneman, and Amos Tversky (mathematical psychologists; Kahneman was awarded a Nobel in Economics).

But apparently we also have Susan Fiske. I think that the real difficulty with psychology is that people care too much. It's a little easier to be objective about the decay of atomic particles than about the intelligence of different genders and races.

Additionally, many psychologists are not comfortable with statistics. A large proportion went into psychology to avoid maths, and are somewhat disappointed to discover that this does not work.

So they follow guidelines, and paste tables from SPSS into MS Word (no automation for some reason), and stuff like this happens.

They don't use cross-validation and they are too much in love with their models, and the theory that said models facilitate.

I suppose this was a pretty long-winded way to say "Not all psychologists!"

Cohen, J: http://www.ics.uci.edu/~sternh/courses/210/cohen94_pval.pdf (The Earth Is Round (p < .05))


An interesting recent complaint about Fiske doesn't entirely spare Kahneman:

http://andrewgelman.com/2016/09/21/what-has-happened-down-he...


> I think that the real difficulty with psychology is that people care too much.

Yup. There are powerful social forces which act to dissuade psychologists from publishing research that runs counter to the feelings of their peer group. One can't speak an inconvenient truth.


> If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science.

It's quite unscientific to draw a blanket conclusion like that about a field based on the anecdotal evidence of one named person and some unnamed "critics".

Let's get some statistics before casting doubt on the entire field of psychology.

I think most people are trying to get by and to be good. My personal experience in academics is that the vast, vast majority of students and teachers just want to study their art or science and contribute to their community without having to spin or sell to get funding. Some of the people that you hear about in articles get attention because they like attention, and they will happily say opinionated, contentious and unscientific things because it gets talked about.

There are bad actors with high levels of self-interest in all fields of study and work, so really the only scientific conclusion you can come to is that not all humans are perfect.


Sure, "but not ALL psychologists!" is valid here (as it is in literally any discussion about anything, online or elsewhere), but she is not just one person. She is the former head of a professional organization for psychologists; if we look for someone to represent them as a whole, she is the first place we should look. Would the professional organization for physics have as its head someone who believes we had it right when we said the earth was flat, and that the expertise of those scientists shouldn't be questioned?

How many psychologists have publicly denounced her, or ever will? HN is ready to attack every police officer who hides and defends the actions of a few bad apples as a bad apple themselves (rightly so); where's the equal treatment for pseudoscientists?


> She is the former head of a professional organization for psychologists; if we look for someone to represent them as a whole, she is the first place we should look.

Well, no; even if we accept the arguments that (1) professional organizations of psychologists are a good proxy for generalizations about psychologists, (2) the head of an organization is a good proxy for the organization, and (3) each of those relations is strong enough to be treated as transitive, so that the head of a professional organization of psychologists is a good proxy for psychologists in general, she still wouldn't be the first place we would look to represent psychologists as a whole.

Because she's a former head of such an organization, not the current head. Just as we wouldn't look to Jimmy Carter as our first choice to represent what Americans as a whole believe (or John McAfee as our first choice to understand what the business organization now known as "Intel Security Group" believes), we wouldn't look to the former head of an organization as the first choice for generalization about what the group the organization is supposed to represent believes.


Generalizations are necessary for communication, cognition, general human understanding. If you try to reject them you will be rendered incapable of discussion, communication, thought about nearly any relevant or interesting issue.

Positions (1) (2) (3) are nearly axiomatically true -- if we are to generalize at all, and we must, we need to look for positions and people to generalize from. The very purpose of a professional organization is to aid us in doing this. Members of these organizations use it as a mental model to generalize "my professional community" and "the leaders of my professional community".

If you look at her resume, she is the former head of this and about 4 other high-prestige psychologists' professional organizations, and a high-ranking member of nearly every organization she is eligible to be a part of as a psychologist. She is the very definition of a "well-respected representative of her field". She is a fine target for generalization. What you have not brought to the table are conflicting reports from other psychologists contradicting her or suggesting she is not an appropriate representative. Jimmy Carter and John McAfee are readily and directly contradicted -- if you ask me to provide sources from numerous professionals in the field suggesting they are not the standard, I can do so easily, and so can you (though Jimmy Carter is a weird selection -- he is not a professional, and both choices are telling, since neither of them works in their field anymore).


I pretty much agree with you, but since the comment I replied to wasn't addressing Susan Fiske directly at all, and was generalizing wildly and negatively, I personally think this is a little misdirected.

I don't know Susan Fiske, and while she might deserve it, I won't be joining the crowd of pitchforks to judge and attack her based on a few forum comments. But, if you really want to excoriate someone who represents a lot of people poorly, it might be energy better spent and more timely to worry about Mike Pence who, I've heard directly from a handful of academics in Indiana, is doing very bad things for science and science funding.


> If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science.

is a proposition. He/she's not concluding psychology is not a proper social science; he/she's stating

> If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science.


> If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science.

In my experience, such attitudes are typical. Here's a 2014 paper by a Harvard psychology professor who tries to argue that null replication results shouldn't be taken seriously because the failure to replicate ipso facto indicts the study's methods:

http://jasonmitchell.fas.harvard.edu/Papers/Mitchell_failed_...

Quote: "Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way ... The field of social psychology can be improved, but not by the publication of negative findings."


Wow, so many responses that come to mind and they're all of the form 'un' + some swearing + 'scientific'.


> Someone who publishes a replication is, in effect, saying something like,“You found an effect. I did not. One of us is the inferior scientist.”

Wow... I really don't see it like that.


I agree with you. When two scientists find different results, only an inferior scientist would take it as a personal attack.

There was a story about two chemists. I might have some minor details wrong, but that shouldn't hurt the anecdote, because something like this happened back when chemistry was still raw and new. The going wisdom was based on the assumption that the atoms in a molecule were the only thing that determined its properties. These two scientists were having trouble replicating each other's results when working with a molecule of 2 carbon, 2 nitrogen, 2 oxygen, and 1 mercury atom.

One described it as inert and the other as explosive. Neither bothered to include synthesis steps in their papers, and each thought the other was wrong. Rather than trade insults, they swapped more and more information until eventually each confirmed that they were dealing with molecules with identical parts but different properties.

They had discovered that molecules have structure. One arrangement was mercury fulminate, an unstable explosive used in primers and detonators today, and the other was really inert and not of interest, to me at least.

Any scientist who argues with the concept of "disagreement based on evidence" is arguing against a foundation of science. They should be looking for evidence of reasons for that disagreement instead of quibbling over personal attacks and defense.


The last quote, which is the most extreme response, seems like it's not entirely within the same context as the article. I got that it was talking about _human_ online critics, and especially (only?) those who have an "uncivil tone", which is something I agree with.


Yeah, that's the key detail here.

The article kinda went off-topic on itself, throwing in quotes about a completely unrelated incident which, on a skim, may be read as referring to Statcheck despite having nothing whatsoever to do with it.


> That anyone in the field of psychology would even be comfortable publicly espousing a non-scientific worldview like that means that psychologists are not being properly educated in the scientific method.

This is a generalization about psychologists from a sample size of one psychologist, to suggest that they don't understand science?


Hey, I did say "if" at the start of my little rant. You're right that I don't know how embedded this attitude is in psychology, but I am often dispirited by the attitude towards verification these days based on a number of different things I've read.


It can be dispiriting, but I think there is also a lot of good science being done that is too boring to hit the radar.


One ex-president of the psychologists professional association, a person who was literally deliberately selected by the community of psychologists to represent them.

It would similarly be completely inappropriate to generalize the policy positions of Hillary Clinton to the Democratic party, yes?


Saying that psychologists don't understand the scientific method is similar to saying that Democrats don't understand something as basic as the separation of powers in government.


Calling Statcheck a "gotcha algorithm" descends into self-parody. It reads like outrage at the idea that anyone might expect you to be accurate about facts.


Anybody who objects to such an anodyne project that strenuously with such disingenuous arguments immediately makes me think they're hiding something.


There go Americans throwing around the terrorism word again. https://news.ycombinator.com/item?id=12620132


Don't judge a science by looking at just a few of the people in it...

Also, to be clear, I think it's a graded thing. Often such a critique might be serious and well developed, often it might be serious but based on a misunderstanding, and sometimes it might be an attempt at trolling, annoyance, or payback. I would guess your average peer reviewer engages more deeply with the paper than most readers and takes time to check their critique before sending it, and since there's an editor involved there might also be a better level of politeness. If suddenly every reader is free to criticise (assuming they fall in categories 1 & 2), that can end up as a barrage of critiques that an author might feel obliged to respond to in order to protect his/her reputation. And those critiques can then be taken out of context, e.g. when someone in the media or from the competition tries to put you down a notch for holding the wrong views...

Think of anti-vaxxers going around criticising every paper on vaccines. You're always going to find something to criticise, and once it's criticised you can point to a seemingly authoritative 'the paper is criticised by other scientists for...'

Or another angle: do you want to feel obligated to defend every one of your papers for the rest of your life?

This can really feel like bullying even if it's not intended as such. That's in particular if you write about contentious issues or such with strong interest groups - think vaccines, abortion, gender, GMO, lobbying, financial regulation, inequality, race, ...

I think open 'peer review/critique' is a good idea, but it can take dark turns if there's no mechanism to prevent this.


As someone in this field somewhat close to the article in a certain sense, I wouldn't say Susan Fiske's attitudes are typical of psychology. Maybe not rare, but not typical either. Her comments have been very controversial to say the least, and many are disturbed by them. Moreover, if you level that statement against psychology, I'm afraid you have to level it against medicine as well, given that similar sentiments have recently been expressed in major medical journals too.

Also, without meaning to defend Fiske, her comments here are taken out of context somewhat. Her reference to "methodological terrorism" (from what I have heard through the grapevine) is more, or at least in part, about the trend toward having scientific debates outside of the peer review process, in social media. So my guess is that she might say that part of what she objects to about Statcheck is that it crawls through the papers, labels an error, and then we end up discussing it on HN rather than through peer review. What if Statcheck made an error, which it does sometimes? I don't agree with it, but I think the position I'm describing (which I think is her point in part) isn't unreasonable either. That is, it's not the checking of stats, it's the chaos and disintegration of the peer review system, and "extrascientific" discourse that's happening in science today, if you define "extrascientific" as "outside of peer reviewed journals," where your critics are attacking you on twitter, forums, and facebook, more so than in professional published outlets in a sort of mob.

Again, I do not share her perspective at all (I'm in favor of a shift away from journals) but I do think here her original point was twisted a bit.


If psychologists were educated properly to be scientists, they'd abandon psychology, which is at best an art.


I agree with Dr. Srivastava that psychology is not only a science but the hardest science:

https://hardsci.wordpress.com/2009/03/14/making-progress-in-...

> What are the “hard” — as in difficult — problems in science? Hard problems in science are those that are embedded in complex systems; they are hard because to study something well you often need to isolate it from outside influences. Hard problems are those that vary by local conditions — science seeks to identify general laws, and when something is locally dependent, you need to sniff out the complex interactions that make it so. And hard problems are those that are difficult to quantify — science rests upon formalization and quantification, and you need to get traction at that initial step of quantification (i.e., measurement) before you can test theories. So… by these measures, if we are going to differentiate areas of science, the continuum of scientific problems should go from “hard” to “easy,” and psychology is clearly a science that deals with hard problems. Perhaps the hardest.

Reminds me of the counterintuitive point about the two hard problems in computer science. Naming things, and deciding when data is obsolete, are easy mechanically. But because they have neither a deterministic nor a universally correct solution, they end up being the hard problems you face as you build bigger projects. Or, to emphasize a line from Dr. Srivastava above:

> And hard problems are those that are difficult to quantify — science rests upon formalization and quantification, and you need to get traction at that initial step of quantification (i.e., measurement) before you can test theories.

It's possible for a science to be "hard" while having a lot of mediocre practitioners.


That quote still fails to show why psychology should be considered a science. It only speaks about the difficulty of problems in science.


In what way does any of that support your/Srivastava's assertion?


"a proper social science"

Is 'social science' even a 'proper science' ?


In general, if you have to put science in the name, it isn't.


Computer Science?


That's long for math


Which definitely is science; QED.


Proofs are math, science is uncertain (otherwise why bother running experiments)


No, science is - literally - knowledge.


> No, science is - literally - knowledge.

No, science is a means of generating (and justifying) beliefs, it is not knowledge itself.


What a clever and, dare I say it, fantastically useful experiment!

So much less harm than even "door knob twisting" type explorations - no, this was using published works and pretty much running them through a process to verify or not verify accuracy.

Unsolicited? So what! As a practiced writer I make unsolicited judgments on language usage all the time. Do these people write entirely from their own minds and use no spell-check or grammar-check program of any sort before sending their material for editorial review? I strongly doubt it, because such a tool makes communication more accurate. Having a similar procedural check for math and formulas sounds quite constructive to me.

It's not bullying to point out errors; it's bullying to use the existence of errors to belittle or insult a person. I don't see that happening here. Sure, it's a little sterile or "cold" in this fashion, but I think that's for the best if such a process / tool can gain acceptance. It just spits out results and I think that's all it should do. Neat to read about.


Certainly not bullying. It's actually shocking to me that these studies could have been published without running a similar check on all of the arithmetic. The technology to automate this has been possible for decades.


Right. To my mind, this level of scrutiny is little different than a linter or static analyzer for a programming language.
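For the simplest case, the kind of check being discussed is only a few lines of code. Here's a rough sketch (this is not the actual statcheck implementation; the function name, rounding tolerance, and example numbers are just illustrative) of recomputing the p-value implied by a reported t statistic and degrees of freedom, and comparing it to the p-value printed in the paper:

    # Illustrative sketch of a statcheck-style consistency check (not the real code):
    # recompute the two-tailed p-value implied by a reported t statistic and its
    # degrees of freedom, then compare it to the p-value reported in the paper,
    # allowing for rounding of the reported value.
    from scipy import stats

    def check_t_result(t_value, df, reported_p, reported_decimals=2):
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)
        tolerance = 0.5 * 10 ** (-reported_decimals)  # e.g. +/- 0.005 for "p = .04"
        consistent = abs(recomputed_p - reported_p) <= tolerance
        return recomputed_p, consistent

    # A paper reports "t(28) = 2.20, p = .04"
    p, ok = check_t_result(2.20, 28, 0.04)
    print("recomputed p = %.4f, consistent with report: %s" % (p, ok))

As I understand it, the real statcheck works by extracting APA-formatted results (like "t(28) = 2.20, p = .04") from the paper's text and doing essentially this recomputation for each one.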


Putting grammar on the same level as this is wrong. Poor grammar and spelling don't frequently change the entire results of a paper. Math errors, though, can easily be the difference between significance and insignificance, changing the entire set of conclusions that can be drawn from the paper.


Uh, no, poor grammar or improperly used words and phrases can genuinely have cascade/domino effects. Brush up on "Eats, Shoots & Leaves" if you wish to counterbalance your math-centric logic notions with how similar principles apply in written language. English is, without question, one of the more tricky (re: confusing) mutts of linguistics.


> English is, without question, one of the more tricky (re: confusing) mutts of linguistics.

I am aware of two theories in linguistics about the complexity of languages:

1. All human languages are equally complex, presumably for some fundamental reason.

2. Some languages are more complex than others. Specifically, languages with a long history of being spoken only within isolated communities are more complex, and languages that have had a wave of non-native adults learn to speak them get simplified as that happens.

English is one of the go-to examples for simplicity in languages that have been learned by many adults; the others I'm aware of are Swahili (the trade language for north/east Africa), Mandarin Chinese (the cross-regional language for China), and (in reverse) Latin (the cross-regional language for western Europe -- a lot of grammatical features of Latin get lost as the Romance languages develop from it; you might wonder why Greek still has noun cases when the romance languages lost them).

I'm not aware of any theory that would suggest English is more tricky than other languages.


Can you point me to a published paper where the entire meaning changed because of poor grammar?


I don't have an example from a published academic paper, but from the linked article:

> the odds are this will stoke real hostility for those who are already dubious about what has been termed 'bullying' and so on by people interested in reproducibility.

I'm pretty sure this was intended to be read as

    [has been termed] ['bullying' and so on by people interested in reproducibility].
But it could also be read as

    [has been termed 'bullying' and so on] [by people interested in reproducibility].
Ambiguous wording like this could easily be overlooked in review, especially if the misinterpretation agrees with what the reader was already expecting.


I think "bullying" is a reductive term for what's actually happening. I think the point trying to be conveyed is that there is a certain decline in the author's personal status when they get holes punched in their publications.


Sounds like a nice incentive to double-check before publishing.


I mean, it's open source software, right? There's nothing stopping you from downloading it and running it before publishing.

I view that as a pretty solid way to make the science better all by itself.


As it should be, when you publish a stupid error that can substantially alter your conclusions.


I'm not sure that's the healthiest attitude. The pursuit here is knowledge, not prestige. Getting personal doesn't do anything for the higher goal.

I read once that the attitude in NASA over Space Shuttle code was very focused not on who wrote the bug, but how to fix the bug. The thought was that who caused the failure was not nearly as important as preventing the failure, as "failure" in that context generally meant loss of human life.


Of course, it turned out that NASA's two biggest failures were human/cultural, and they led to the loss of two shuttles. Besides, as you say, it's not about prestige, it's about knowledge, and if you're publishing something with basic math errors, it leads to questions about whether you'd know "knowledge" if it bit you.


Right, but there's something to be said for fostering a culture where fixing errors - in schematics, in research, in source code, whatever - is considered more valuable than assigning blame.


Agreed, but when you're dealing with a culture that abhors critical thinking and sees something harmless like this bot as "bullying", we're a long way from that culture of fixing errors.


I find it very disconcerting that people are trying to fend off criticism of previously published studies by calling it "bullying" or sometimes worse. What do feelings have to do with science?


Well, scientists are people too, so they have feelings and are not perfect. I think there are polite ways to provide criticism and corrections, hopefully without humiliating people with good intentions. If it's a simple mistake, they can email the author and suggest an erratum. If it's more serious, then it's common to write a response in the journal. I think more public forms of criticism are perceived as bullying because a general audience can't typically judge the magnitude of the error, since they're not scientists working in the field, so it can be unjustly damaging to someone's reputation.


If someone objectively screwed up a published study, there's no room for feelings or social consideration. People might be wasting their precious time and money pursuing impossible goals based on the incorrect study.

Let's say someone finds a mistake in a math paper. Do they politely ask the author to fix it? What happens if the author ignores them? The scientific community doesn't have time to coddle and pursue every mistaken author. Everyone makes mistakes, and the scientific community knows that, but those mistakes need to be brought into the open, not (maybe) resolved behind closed doors.


I think there is some nuance here; not all screw ups are equal, nor are all ways of correcting mistakes equal.


It's not coddling as much as professional courtesy. Mistakes are often brought up at conferences and in peer review, but most scientists within a specific research area try to be on good terms with one another and don't see value in publicly shaming colleagues for their mistakes.


There are two options here:

Use a bot to find tons of mistakes automatically and risk coming across as rude.

Or

Let social trivialities dominate scientific discourse and let most of those mistakes go unchecked forever because there's no feasible way to "politely" address hundreds of authors who made mistakes and keep checking back to make sure they actually fixed the mistake.

The former is clearly the preferable choice. Some individual scientists will suffer for it, but the scientific community as a whole will benefit greatly.


There is a third option that's better than either one you proposed.

Get people writing new papers to use statcheck before they publish.

The only rudeness here was an avoidable choice - publishing statcheck's results on a huge set of already-published papers. The statcheck authors chose to do that for exposure; they even said so. It was not for posterity or the scientific well-being of the community.

I don't personally think what they did was wrong, I don't particularly care that some people felt it was rude. But the fact of the matter is that the rude part was completely avoidable.


And what happens when a new fact-checking algorithm comes out? What I said in my comment.


Only if they're also rude about it and run it on a huge set of old papers, and do a big PR campaign to get attention.

Otherwise, the only thing that happens is papers quietly get better and everyone on all sides is happy.


So they shouldn't run it on the old papers, so as to not offend anyone's sensibilities?

Do we just declare said papers completely useless then? Or do they keep getting cited by new papers, and used to guide policy? If the latter, then not vetting them using the newer and better methodology would be unethical.


That would be jumping to a conclusion I didn't state. I think I already address this comment above, I don't think what they did was wrong, and I don't care if people were offended.

But, since you brought it up - old papers are already dead, they cannot be fixed. They can only be referenced as prior work, or retracted in extreme cases. We're not talking about extreme cases here.

Statcheck can only help new papers, it cannot help old papers. Running it on old papers was done as a publicity stunt for statcheck, and nothing more. The authors said so.


> If the latter, then not vetting them using the newer and better methodology would be unethical.

On the contrary, it would be unethical to hold people to new standards that didn't exist when the work was done. If you have a beer and then next week the laws change the drinking age to 65 years old, should you go to jail?

This is simply not how things are done, in science or in society. Laws and standards change all the time, you are only subject to the laws or standards that exist when the work or action you do is performed & evaluated. For the purposes of scientific publication, we do not and will never revisit all prior work and formally re-judge whenever standards or policies change.

You may be conflating publication policy with general scientific understanding. Old papers will always be informally evaluated under the thinking of the day. But that doesn't help the old papers, nothing can be done about the old papers, they are part of a fixed record that can't change, it only allows us to publish new papers. What will and does happen now is new papers will be published that refute old papers. The new papers are subject to the new methodologies.


I don't get this argument.

There is no new standard here. This is just a tool that says, "is there anything that looks like a mathematical/statistical mistake".

The expectation that a paper's calculations are correct and error free is one that has always existed.

There is no "thinking of the day"; there is just mathematical correctness or not. At most you could argue that we might learn that something we thought mathematically true is no longer so, which would warrant reviewing papers where it had previously been used, but that is not the case here.

It is not unreasonable to hold published research to the standard of correctness, and if a paper contains errors within its calculations these should be fixed - regardless of when or who published it.


You're not holding people to new standards. You're holding their research to new standards. Which is the only sensible course of action.

And no-one suggested that those papers should be rewritten. But insofar as there's some known problem with them, why is it a bad thing to have a public record stating that much?


We may be agreeing and mis-communicating, or agreeing violently, as I've heard it called, so let's get specific. What are you suggesting should happen to a paper when it's shown to have errors?

Nothing at all is wrong with adding new information to the public record stating the issues, that's what I mentioned is already happening -- new papers reference and demonstrate the weaknesses of old papers. In my field, as I suspect most, it's a time honored tradition to do a small literature review in the introduction section of a paper that mainly dismisses all related previous work as not good enough to solve the problem you're about to blow everyone's mind's with.

In my mind, nothing is wrong with what the statcheck authors did either. My one and only point at the top was that it's not surprising it ruffled some feathers, and that it didn't have to ruffle any feathers. That only happened because the results were made public without solicitation. @wyager was trying to paint the situation as a dichotomy between rude and unscientific, as if rude were the only option. Rude is not the only option.

If statcheck hadn't published the review of old papers and contacted all the old authors, then I'm pretty sure two things would have happened: 1- this wouldn't have ruffled any feathers, and 2- it wouldn't have gotten much attention, and we wouldn't be talking about it.


> new standards that didn't exist when the work was done

Being correct is not a new standard.

> This is simply not how things are done, in science or in society.

Oh, I say! Positively indecent! A moral outrage! We can't have our morals compromised by scientific objectivity!

> we do not and will never revisit all prior work and formally re-judge whenever standards or policies change.

Have you ever heard of Principia Mathematica?

> nothing can be done about the old papers

Except to try and figure out when they're wrong, as this bot is doing.


Please try to judge comments in their context and not look for excuses to try and humiliate someone with sarcasm. I was discussing standards with @int_19h, who brought up the issue of new methodologies. Of course being correct isn't a new standard, I agree with you, but that's not what was being discussed.

Of course morals shouldn't be compromised by scientific objectivity. Again, you're arguing a straw man - that's not the issue I was talking about.

I have stated multiple times, including my first reply to you, that I think the bot is fine. My argument in context is that a paper cannot change because it has been published. Do you disagree with that? That doesn't have any bearing on whether bots or people find & publish errors later. It does have a bearing on how people will respond to PR campaigns to publish errors when nothing can be done about it on the part of the author. Statcheck will do good things for authors who get to use it before they publish rather than after.

Maybe you're not reading all of what I wrote? Maybe I hurt your feelings?


> My argument in context is that a paper cannot change because it has been published.

I think this statement is, if not completely untrue, grossly misrepresenting how existing papers are interacted with.

First of all, papers, as with all publications, have errata published all the time. These errata may be included in future prints, or published in a separate location that can be looked up by people using the paper. Publishing errata is not a new occurrence, and although perhaps technically the original paper remains published unchanged, it is disingenuous to claim that this means the paper cannot change.

Modern publishing methods, such as the arXiv, allow for new versions of the paper to be uploaded, literally changing the published version of the paper.

As you point out yourself, literature reviews should point out issues with existing papers. Do you think that the original authors throw their hands in the air, thinking to themselves "oh well, it's published, nothing can be done"?? Of course not! If they are still engaged with the subject they either defend the paper, correct obvious mistakes, or continue experimentation or investigation in response.

To claim that errors should not be pointed out simply because the original authors can do nothing about the errors is diversionary at best. Of course errors in published results should be made public. How else can we trust any of the works?

If errors in existing research are always hidden, squelched, swept under the rug, we have no reason to trust it. It is the openness of research - publishing in the open, criticising in the open, discussing in the open - that allows us to trust research in the first place. Indeed, that trust is already eroded by revelations of systemic issues like p-hacking within published research.

You may be suggesting that posting these analyses to the individual papers was the wrong way to do it, that it would be better done in a literature review or paper.

I completely disagree.

It is essential that anyone looking to reference a paper with a glaring mistake in it (which many of those affected are) is able to see that mistake and correct for it. Leaving the old research be is just ensuring that incorrect ideas are allowed to propagate, and have more of an impact than they ever should.


Science being a social activity, I think you have to justify the presumption that we are talking about social trivialities here.


>Science being a social activity

Hoo boy. Research might be a social activity, but science is the application of probabilistic reasoning to evidence collection. Science isn't a "social activity" any more than topology is. Any social complications are entirely incidental.


Yes, but that probabilistic reasoning is applied by humans with feelings which sometimes get in the way of their reason. Wanting to have those feelings simply go away is unrealistic.


What does that have to do with calling out papers for being incorrect?


Sorry for replying so late.

Your findings have to be accepted by the others. If they're not accepted it's like they don't exist. Think of all the great theories that didn't take off at first, because nobody accepted them (for various reasons).

When you attack someone, they are less likely to listen to you, even if you are offering valuable feedback. They care about saving face, so they will focus on defending themselves.

I realize this can be frustrating, because it means truth doesn't always prevail. However, it's what we have to work with, our emotional brains.


I find it hard to understand how something that is clearly an automated bot, posting comments that are 100% factual ("I checked this paper; N things looked wrong"), could be perceived as humiliating.

I guess it would be the case if you assumed that all people are perfect and never make mistakes. I would hope that psychologists, of all people, would know better than that.

So if it's not that, where's the humiliation part in pointing out math mistakes?


I don't necessarily think this is bullying or humiliating, but it's silly to think that being done by a "bot" and being "factual" has anything to do with it. If a malware "bot" secretly posted people's porn viewing history to their facebook page, would that not be humiliating for those people? Or would it not because it was factual and done by a "bot"? Clearly it would be.

Saying it was done by a bot is no excuse for anything. The bot didn't spontaneously pop into existence - somebody created it and decided what behavior it would have.

In this particular case, whoever created the bot could easily have made it email the authors of the mistaken papers and given them a chance to correct the mistakes before outing everybody in public.


Being done by a bot is not an excuse in general. However, when pointing out objective factual mistakes, I think there is a difference between your colleague pointing it out and an automated tool pointing it out. Even if the result is the same, the former can be embarrassing, which is why we learn to phrase negative responses in a roundabout way. But politeness cannot be expected of a brainless machine, and so it can deliver simple facts.

So it seems that it boils down to public disclosure before private?

Out of curiosity, how would you imagine the "correct the mistakes" procedure after private disclosure? The author cannot just edit the paper, it's already published. They would have to publish errata, which draws just as much attention. And, from an ethical perspective, if an author is notified of a mistake found by an autonomous tool, wouldn't they be required to disclose the methodology when publishing errata? So I'm not sure how that whole situation is fundamentally different from just dumping it in the public.


I don't care about their reputations. It's not a school mascot contest. These are scientific papers.


Yes, and scientists don't care. Science, as it has been explained to me by scientists, is a penis length contest, where impact factor and prestige serve as ersatz penises.


The best part is that any fool can see that impact factors are a terrible proxy for real impact, and in fact the editors of several glam journals have pointed this out (disclosure: I have plenty of glam papers on my CV*).

But since it props up a myth that senior faculty like to believe (i.e. their shitty old Cell paper is great because Cell is/was great) that's the yardstick. As the director of a CCC once told me, it takes too long to see the impact of papers (citations piling up) so the JIF is used as a proxy.

Sort of like how it takes too long to do good science, so some people just publish whatever garbage they can sneak past the editors (ha fucking ha only serious). There is a LOT of crap in the literature as a result.


When I read that sentence, I immediately thought it sounded like some people were being childish and defensive.

OTOH, a bunch of people got unsolicited error reports for already published papers, and I can understand being initially irritated. This was, by the admission of the statcheck authors, a way to get attention quickly.

My guess is it'll all settle down and people won't complain about bullying once everyone just uses statcheck like they do spell-check.

To answer your question though, I think feelings have everything to do with science. There are many reasons different people do science, and they all stem from emotions. Some people do science out of curiosity, to solve mysteries. Some do science as a means to an end, to gain knowledge required to further some other goal. Some do science to gain social standing & intellectual superiority. Some do science to help others understand how the world works. In all cases, people do science because of a want, some kind of desire to achieve a goal. Nature and physics will continue to exist whether we explore them or not; we do so because we care, and caring is a feeling.


> unsolicited error reports for already published papers, and I can understand being initially irritated

Please don't see this as me attacking you, i am genuinely baffled by what you said and would like to understand your thinking.

How exactly could you possibly have any sort of understanding for that? How would you feel about a software author being irritated by unsolicited error reports? If that irritation is less acceptable than the one in your quote, where would the difference lie?


> How exactly could you possibly have any sort of understanding for that?

I'm also confused by your question, so I can relate to you. :) Despite your warning, the way you phrased that question does sound like an attack, it implies that you know my experience does not allow me to speak on this subject, which you do not know. Why is it hard to think that I can empathize? What do you think you know about me that makes it implausible for me to understand this situation?

I can and do relate to it because I'm a published paper author. I imagine that it would be irritating to me as easily as I imagine it would be irritating to others. For me personally, I don't think it would make me angry, but it would give me anxiety to have doubt cast on a paper I'd already published, even if the reported inaccuracies are true. It would mean that I didn't do as good a job as I thought, which of course I want to know, but a published paper is part of a record that cannot change. I'm sure many paper authors, for better or worse, have the same reaction, that something that casts doubt on their publications after the fact would cause some degree of mental anguish. That doesn't mean we shouldn't search for the truth, it means only what I said, that I can understand the reaction.

Your analogy to software errors fails. (And I also have first-hand experience with this as the owner of a software business.) Software is an on-going work that can and should be improved to remove errors at all times, and errors that get fixed do not affect my personal reputation or career. Reports of software errors can also be irritating in their own way - I don't want to know my software is buggy - but they are always welcome. Published papers cannot be improved, they are fixed in the permanent record. There are some ways to recover from severe errors, but there are no ways to recover from minor errors, and public (academic) perception of the quality and level of the errors in a paper can change the way an author is viewed.


> it implies that you know my experience does not allow me to speak on this subject

Not in the least my intention. I could not understand how anyone, regardless of knowledge level, could empathize with the quote as stated.

From your response, it seems then that your quote wasn't meant in the way it seemed to me. It's not even the unsolicited nature of the report, but any error that causes irritation, and while possibly felt in the direction of the messenger, ultimately caused by the system being ... Well. Broken.

Thanks for the in-depth explanation. :)


Yes, having any error pointed out can be frustrating.

However, the unsolicited nature of it may play an important role in this case. In order to empathize with the feelings of people who might have been irritated it is helpful to understand the academic paper publishing process.

The authors all went through a stressful process of submitting their hard work to a journal and then being evaluated by a panel of "experts". Many of them had to make changes to their papers and resubmit them in order to get published.

There's a level of understanding and expectation about how this process works. The papers aren't normally open for public comment before publication, and they don't normally get public comment after publication. They're evaluated by people in the field, and presented at conferences, and then referenced in other papers if they're influential.

Having an unknown third party with brand new, possibly buggy software cast public aspersions on a paper after the fact, at a time when nothing can be done about it, is simply not helpful to the authors and is not how reviews normally happen. It's very easy to see why authors wouldn't particularly like this, even if they would use statcheck in the future.

The only real problem here was statcheck's authors publishing all the results and making a great deal of noise about it. They didn't have to do that; it was an aggressive move that was not designed to help authors, it was designed to get statcheck attention. We have no idea how big of a problem it is, this article might have been mostly muckraking, and statcheck might be great and well liked.

Anyway, I don't think the system is broken. It is currently working better than it has worked at any time in the past, and it is continuing to improve. Statcheck might improve it more, but that remains to be seen. Other software tools already have improved it.


In Mathematics, if you get an unsolicited error report and the report is accurate, your first reaction is "oh my god" as you assess the damage and how much revision needs to be done to correct the paper. Then the answer is "thank you! I didn't realize so many people even read my work so closely! I'll add a correction and thank you for your contribution in an update of the paper".


Yes! And I think this is true of most paper authors in most fields. It has been rare in my experience to see anyone end up angry or upset about getting a correction in the mail. The majority of people who publish know that errors occur all the time, and are glad to hear about it, more so if it happens early enough to do something about it.

I'd bet the majority of recipients of statcheck's automated correction, whatever their initial reaction, appreciate and end up wanting to use this kind of a tool before publishing their next paper.

It is worth mentioning there's a large stylistic difference between receiving an unsolicited error report directly from a reader & having a nice conversation about it, and being notified that an unsolicited error report has been published and attached to your paper automatically, without review, for all to see.


Right. If you can't handle critique (or embarrassment from invalidated conclusions), don't publish. Write in your diary instead.


Exactly. This isn't even subjective critique, it's math. Your math is right or it's wrong. Lastly, from what I can tell, there isn't even any value judgement attached, simply a notification of incorrect math.


This was my thought exactly.

The most disturbing aspect of this whole deal is the response of the administrators, who I assume represent a good deal of the community.

The researchers who contributed to this project ought to be thanked and rewarded for their participation... not called terrorists.


Objectivity is listed as a problem with the science curriculum. Welcome to the future.

http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=2467&co...


While I am not a big fan of contemporary feminism, I am pretty sure taking issue with masculinity or "gendered" text is not the same thing as taking issue with objectivity.

Edit: Found this quote in your source "Poststructuralism “rejects objectivity and the notions of an absolute truth and single reality,”"

Hmm, this seems an awful lot like a religion.


I'm not in any sense an expert. A lot of it seems to boil down to what assumptions are implicit.

When you consider something like math, you can pin down what axioms have what consequences. This set of rules produces a group, adding some more axioms produces a field, things like that.

When you get to more ordinary stuff like living life, it's not clear what axioms you picked, vs what axioms i picked. Most people don't think about it at all.

Imagine a coworker that just started showing up topless. That would be freakishly weird. But really, in a professional setting, why would it matter? There are some handwavy arguments about the nature of professionalism, but all of that relies on what axioms you pick for your culture.

Anyway. We all have these ideas about how we're supposed to interact, but we're all playing by different rules. So you get into these Wittgenstein kinds of discussions.


It's pretty trivial to take the Ibn Warraq approach, though: just reject poststructuralism itself.


> I am pretty sure taking issue with masculinity or "gendered" text is not the same thing as taking issue with objectivity.

Well, since English is a gendered language, taking issue with 'gendered' text is taking issue with the objective reality that … English is a gendered language.


Aside from things like pronouns, English hasn't been gendered since the 14th century:

https://en.m.wikipedia.org/wiki/Gender_in_English


'Aside from that Mrs. Lincoln, how was the play?'

Pronouns are precisely what folks who complain about gender in English are complaining about (well, that and words like 'mankind').


Glibness aside, the usage of gendered pronouns does not make a language gendered. That's only true if there has to be agreement between the gender of a noun and words relating to the noun.

And it's also not what people are arguing. Instead, they argue that gendered terms are the product of cultural norms and prescriptive grammar that reinforce gender roles that are oppressive to both men and women. That some view these assignments as objective when that's untrue linguistically and historically, I think only lends credence to their point.


Relevant part of the report:

"As these examples show, the STEM syllabi explored in this study demonstrated a view of knowledge that was to be acquired by the student, which promotes a view of knowledge as unchanging. This is further reinforced by the use of adverbs to imply certainty such as “actually” and “in fact” which are used in syllabi to identify information as factual and beyond dispute (Biber, 2006a; 2006b). For example, “draw accurate conclusions from scientific data presented in different formats” (Lower level math). Instead of promoting the idea that knowledge is constructed by the student and dynamic, subject to change as it would in a more feminist view of knowledge, the syllabi reinforce the larger male-dominant view of knowledge as one that students acquire and use make the correct decision."


The equating of objective facts with "male-dominant views" is just bizarre.

I mean, seriously, they are saying that in the feminist view is that there is no objective reality?

I suppose they are saying that feminists... don't live in... reality.

You can't make this stuff up. At least they're up front about it, I suppose.


A fact is a human construct. It is a symbol. You just conflated facts with reality. A cursory examination of reality will reveal that observations of reality are observer dependent. So what is an objective fact?

An objective fact must be one that is covariant in change of observer interpreting the fact. Physical laws like General Relativity are covariant in this way: the facts of relativity differ symbolically depending on the reference frame of the observer but different observers' facts are related in a coherent way.

In GR an observer is a reference frame. In order to define objective fact you need to presuppose what an observer is.

It is a masculine behavior to consider this question unimportant. A feminine science is much more interested in the study of the subjective because you can't be objective without understanding subjectivity.

Ultimately an observer in science is a human. That means objective fact can only be understood by understanding the subjectivity of the scientist. An objective fact is in fact an intersubjective fact, and intersubjectivity is an essential contribution to science by feminism. It's a good keyword to google.


And yet, would the feminist science claim that intersubjectivity is the correct view to take? Can it? Or must it be that intersubjectivity is only the view of certain feminists and itself depends on the observer -- to some observer, independent observable facts exists, and to some others, they do not as they are intersubjective?

I think your position is interesting if odd and completely alienated from the way science is done or thought about in all of human history. I think the downvotes are coming from referring to standard "incorrect" science in your view (as it does not embrace intersubjectivity) as "masculine" and the "correct" view as "feminine" -- to put gender labels on abstract philosophical positions seems contrived and silly to some (me, at least). What does masculinity have to do with belief in objective facts? Why do only women understand that facts depend on the observer? Is it just because the person who introduced reference frames and general relativity to modern science, Albert Einstein, was female? What does adding these incendiary labels to the positions you outline and contrast add to the discussion, other than encouraging your mostly male and mostly skeptical-of-gender-politics audience to not listen?


Intersubjectivity is a major part of modern psychology. It is a relatively new concept. Its influence grows slowly as more and more people find it usefully applicable to their work. Like most science, actually. Intersubjectivity is not a normative idea or a theory, but more of a point of view. It is a heuristic that says, in order to study a thing, study the subjective experiences of that thing and their relations. Observer-invariance is too strong of a constraint on what is a fact: physics already embraces this. What you want instead is covariance, which is how observations transform when you change the observer.

Studying the covariance of observations is intrinsically a study of intersubjectivity, although I need to point out here that I am merely making an analogy between human observers and the abstract observers of physics, which are themselves simplifications of human observers.

> independent observable facts

Independent of what? Observation? A fact is necessarily tied to how it is observed. That's what I'm talking about here. It is a fundamental aspect of the scientific method that scientific observations can be made by different people and then their results can be compared. The dependence of facts on observation is absolutely crucial, so I'm not sure what sort of independence you're talking about.

W.r.t. masculine and feminine, I wrote my first post on a phone while in a taxi and I think one of my paragraphs clarifying this point got eaten by a pothole. I clarified in a sibling post but I'll expound here: I don't mean anything intrinsic to men or women. I am referring to archetypal masculinity and femininity, which are social constructs. Every society has had these archetypes, for example the ideas of Yin and Yang, the feminine and masculine principles of ancient Chinese thought. There is little physical basis for these archetypes in human sexual dimorphism, and they vary from society to society.

There is well established scientific literature observing these aspects of society. For example, women are expected to understand the emotions of others far more than men are. I'm time constrained so I won't cite this; it's not hard to peruse the literature. Does this mean that men can't empathize or that it makes them girly if they do? Of course not!

> What does adding these incendiary labels to the positions you outline and contrast add to the discussion, other than encouraging your mostly male and mostly skeptical-of-gender-politics audience to not listen?

I don't feel like I'm being incendiary. Why are you receiving my words that way? It's an interesting phenomenon.

And this might sound pedantic but, I don't care if you're skeptical of gender politics because gender politics are real, and they have been real for thousands of years. Masculinity and femininity as social concepts are real in that sense whether you acknowledge their existence or not. Feminists did not invent gender politics, they merely scrutinize them. There was a time when gender was considered an inviolable concept writ large across the cosmos by a male deity. Was a time? Still a time for hundreds of millions of people. The fact that people, mostly women because they were the most incentivized to do so, started critically engaging these ideas philosophically (thus laying the groundwork for scientific investigation) is a boon to me and you. I would suggest y'all stop taking gendered analysis of social phenomenon personally because it can only enrich your conceptual toolbox.

> I think your position is interesting if odd and completely alienated from the way science is done or thought about in all of human history.

I did math research a long time ago. I'm still pretty connected to the mathematical research community. These ideas are corroborated by my own experiences. We need to make a distinction between what people say is the way science is done and the way science is actually done. We need to make a distinction between the way science is actually thought about and the way people think they think about science. This isn't pedantry or navel gazing or postmodern bullshit, this is hard-nosed critical thinking.


Thanks for the explanation.

>This isn't pedantry or navel gazing or postmodern bullshit, this is hard-nosed critical thinking.

I don't think it is any of that -- to me it smells more like a new word for classical philosophical skepticism/relativism. Which is very hard-nosed critical thinking and perniciously difficult to dismiss (see G.E. Moore's 'rejection' of skepticism -- "here is one hand") In fact while you say it is very new, I'd posit that it is very, very old. Thousands of years old. The essence of it from what I understand as you've explained it, minus the unnecessary gendering of the concepts, is taught in undergrad philosophy courses, has been for centuries.

My first point was trying to get at this question: we have two ways to approach science, the traditional approach where we choose to believe facts are objective and try to behave that way, and the "intersubjective" one. But which one should we take? What is the right framework to even make such a decision? Even if you simply assume the goal is predictive power, do we assume there is an objective assessment of predictive power that we try to get at, or that these predictions are inherently intersubjective and can't be compared so that no determination can be made?

And sure gender politics exist. But not everything is always about gender. I think what people reject is not considering gender politics at all, but for example trying to make science about gender. The relativist/skeptical position does not need to be feminine more than the realist position must be masculine. There are social gender structures that lay the burden of expectation on members in society, yes, but they do not completely pervade and define every moment of every thought of every person -- there are some instances in which we are human first and gendered second. And most I would suggest, believe (specifically hard) science is one of those instances. Or we should treat it that way. The entire purpose of mathematics is that there is no "female mathematics" and "male mathematics".

To that point, I would really be interested in hearing a little more about how intersubjectivity plays out in the mathematics research community.

This is what skeptical-about-gender-politics is about. Not rejecting gender politics themselves, but rejecting turning every discussion about anything into a discussion about gender. Which often quickly evolves into an attack on men (abusive, 'privilege', implying the work men do means less because of their advantage, etc). Where everything bad is the fault of "the patriarchy" (read: men). Which is tiresome. And not at all what you have done.

And as to your request,

>And this might sound pedantic but, I don't care if you're skeptical of gender politics because gender politics are real, and they have been real for thousands of years.

>I would suggest y'all stop taking gendered analysis of social phenomenon personally because it can only enrich your conceptual toolbox.

I would say that you should embrace intersubjectivity in your own position and understand that your posts cannot be objective, they are observed and interpreted by many others, so what matters is not what you say, rather, how it is observed by your audience. If you want it to be a strong argument /to HN/ you should take some steps to avoid throwing up red flags that HN readers are used to using to dismiss arguments out of hand as a tribal attack (this is what I meant by "incendiary" -- I was not incensed by it but the 'men are dumb but women are smart' trope is familiar because it is as ubiquitous as it is uninteresting. Your position is not this at all, but it superficially contains some of the same characteristics).

Again thanks for your thoughtful response, I enjoyed it a great deal.


I only brought up gender because I was clarifying what the quoted author meant because the person I originally replied to found it confusing and spurious. I think the key is that the intersubjective nature of science means that it is open to social analysis, a subset of which is feminist analysis, and basically I've just been giving my interpretation of what the quoted author was saying.

You brought up a lot of interesting points but I only have the energy to respond to one of them given my reception lately :)

> To that point, I would really be interested in hearing a little more about how intersubjectivity plays out in the mathematics research community.

1. What constitutes a correct proof? A proof has to convince other humans. Two mathematicians who work together a lot can sketch out an informal proof that they both agree on, but it's harder to write a proof that is widely considered rigorous enough. A fully formal proof that a computer can verify isn't anywhere near feasible. What constitutes rigor today is different from what constituted rigor for Euler is different from what constituted rigor for Euclid.

2. Most mathematics isn't done as symbol manipulation. Mathematicians rely on their human intuitions. We share a lot of the same cognitive structures but we each have our own preferences.

"It must be admitted that the use of geometric intuition has no logical necessity in mathematics, and is often left out of the formal presentation of results. If one had to construct a mathematical brain, one would probably use resources more efficiently than creating a visual system. But the system is there already, it is used to great advantage by human mathematicians, and it gives a special flavor to human mathematics." - Ruelle (1999)

Interesting idea: if there are differences between how men and women think, there could be a male and female mathematics. I doubt there is any significant difference though. Likewise, autonomous AI will almost definitely do mathematics with a distinctly different flavor from human mathematics even though they should be mutually intelligible.

3. What we think is important to study is what we think our peers and superiors value. A grad student does mathematics their adviser thinks is interesting. A grad student chooses their adviser based on their interests. Which fields get grant money? Which are "hot"? Which fields are all but abandoned even if they have legitimate open questions?

4. Mochizuki has published a proposed proof of the ABC conjecture. That's a huge result. It has not yet been widely accepted because he worked alone for several years and the concepts he has come up with are very foreign to every other mathematician. So, a lot of the work of "proving" the ABC conjecture is teaching his ideas to other people even though he has produced a detailed proof. You can't just read the proof and understand it.

5. This isn't math, but physics. It's in the news recently that a time crystal may have been constructed. It's not entirely clear if a time crystal can even exist. How can two physicists look at the same experimental data and hold two different positions on this question?

6. Likewise, https://www.quantamagazine.org/20140827-quark-quartet-fuels-...

Physicists, hardest scientists of them all, getting really emotional about QCD? But I thought science was about objective facts! "Objectivity" is what remains when the science has settled, it is not the science itself, and it certainly is not a permanent state.

P.S. http://plato.stanford.edu/entries/process-philosophy/


Ok. So this is semantics and in this science there is a different meaning for the word "fact."

What is the utility of redefining a fact as a human construct of accepted information, rather than as data points corresponding to components of objective reality?


I think it's important to think about this because we as humans have major limitations in our understanding. For example, we are limited and biased due to the spatial and temporal scales of our senses, and we may be similarly affected by social things such as language, personal beliefs, interpersonal interactions.


Because you're putting the cart before the horse. Science is building a picture of """objective reality""" from human subjectivity. Data points aren't reality, they have to be interpreted. No matter how you try to do it, at some point you have to accept that a human being is doing science and the way science is done is fundamentally an intersubjective phenomenon. Science isn't reality. Scientific knowledge isn't reality. Science is a human activity and scientific knowledge is a relationship between humans and reality. I'm not redefining what a fact is, I'm unpacking the definition of a scientific fact and scrutinizing its dynamics.

Why is this useful? Because if you don't look at science from this point of view, you can't understand or even recognize good or bad science. The scientific method can only fail in its execution. Its execution is fundamentally an intersubjective phenomenon.

There is a deeper philosophical advantage to this point of view: This is the correct formulation of science if you want to do science about science. That is, if you want to study scientifically the method by which discoveries are made, become verified, and evolve into commonly held "objective facts", you have to formulate science in this way. If you want to study scientifically how people become scientists, you have to formulate science this way. If you want to study why scientific illiteracy is still a major sociological problem, you have to formulate science this way. Et cetera.

In light of all this, I have to ask: What is the utility of the idea of objective facts, objective reality, a science that is not intersubjective, one that is merely acquired and applied in the masculine mode? My answer to that question is engineering: Engineering is not interested in doing science, but applying it. In which case, objective reality is merely a shortcut to using scientific knowledge. I think that's an admissible use, but it is inadequate for science itself and understanding the relationship between science and society.

P.S. When I say masculine and feminine, I certainly do not mean qualities intrinsic to men or women. I am referring to archetypal abstract masculinity and femininity which are themselves diffuse aspects of human intersubjectivity the study of which reveals long-scale correlations in human politics and social structure.


There's this funny thing I've noticed with your posts where the more words you use, the less utility I get from understanding the post. It's almost like you use overintellectualization to bury that you aren't saying much at all.


I'm not going to lose any sleep over it. The people I was talking to found my posts intelligible and interesting. I doubt you want to ask me to clarify my points or point out in particular where I'm difficult to understand. I think you've just assumed I am communicating in bad faith instead of trying to communicate a complex idea but I shouldn't jump to conclusions like that :^)


Clarity and brevity are the enemies of obfuscation.


This is pretty interesting. Society and science don't always get along so it's certainly valuable to study the subjective relationship.

I think most of us who would disagree with you do so from the assumption that there exists an objective reality, regardless of whether an observer is aware of it at all.

In other words, the general consensus asserts that the universe doesn't care whether you or I know, understand or agree how it works, it works that way regardless.

You, and your science/philosophy, assert that since we cannot determine any reality outside of our observation then said reality does not exist (or is, at least, irrelevant).

I still think this is just semantics, even though both philosophies will continue to assert that their point of view is more correct.

From a practical perspective a person coming from the philosophy of a true objective reality existing and science being the process of finding those facts should be no different from a person advocating your point of view. Both will make errors and will need to correct them. Both will have to overcome dogmatic beliefs that are later proven incorrect.

The objective reality philosopher would claim that reality is correcting her errors whereas the inter-subjective philosopher would claim that new observations from other subjects (observers) are correcting his errors.

Regardless of where they believe the corrections are coming from, they seem to lead to the same science, and the same convergence toward what the data tells us.

(Regarding the P.S. That's a very helpful clarification. The terms seem to confuse the issue rather than illuminate it, in my opinion)


I believe in an objective reality too, I just don't think it's a scientific concept. Perhaps we can say that the existence of an objective reality is what makes science worth doing even if science cannot directly access it. I think I can agree with that.

P.S. you might find our different points of view well described as the difference between substance metaphysics and process metaphysics (http://plato.stanford.edu/entries/process-philosophy/).


> Data points aren't reality, they have to be interpreted

Data points perhaps need to be interpreted for utility, but they represent facts (but those are facts of sense experience, which is ultimately subjective, though often expressed in terms of a fairly restrained conclusion of that subjective experience.)

Which is not to say I disagree with your broader point.


Based on my experience with feminist thought in academia, objectivity as a mask for bias is the issue. The preferred method to address it is to surface as much of the implicit bias as possible so the whole can be considered. I don't find this problematic.


Bias is a big problem, but why not call out the problem directly instead of confusing the issue with a tangential, philosophical discussion about whether or not objectivity exists?


The problem is when "as much as possible" becomes subjective personal experiences. Basically treating un-aggregated anecdotes as data, while also drawing a line at questioning the validity or applicability of it (aka "just shut up and listen").


This appears to be about STEM education, not STEM fields directly.


So STEM curriculum should teach there are no objective truths? 1 + 1 is not always 2? Failure to reproduce a study could just be the result of different cultural, gender, or racial differences in the researchers and does not necessarily invalidate anything?


I won't go so far as to agree there are no objective truths but your second point certainly sounds plausible. Researcher bias is a well known issue.


Even if feelings have everything to do with science, isn't getting errors in your paper pointed out a good thing that you should be happy about?


I guess I can accuse my compiler of bullying me.


Bully it back with:

  $ cc test.c || sudo rm -f $(which cc)
"Listen, cc, we can do this the easy way, or the hard way..."


Just make sure you don't make a mistake on that command. You don't want to get into MAD with the shell.


I've been saying that for years!!!


They weren't looking for errors in papers and trying to push for corrections or improve the science. They were using these errors as fodder for promoting their software.

It might not be bullying, but it's not coming from a place of objectivity or honest debate.


Yeah, this is an abusive marketing technique. If you don't want future embarrassment you'd better buy our service.


Or just be more vigilant and less defensive, both of which improve the practice of science


That's just not how polite human interaction works. If you become aware that someone has made a mistake, you inform them privately to give them an opportunity to rectify that mistake. In the case of a scientific journal entry you might instead publish a response article. You don't put everyone on blast and practically blackmail them into purchasing your for-profit service. It was never about the science at all; it was about the profits for this product/service.


> What do feelings have to do with science?

When the science in question is psychology, feelings have everything to do with it.


Studying feelings does not make feelings a part of studying.


Here's the GitHub page:

https://github.com/MicheleNuijten/statcheck

And if you're curious how it works, as I was:

Statcheck uses regular expressions to find statistical results in APA format. When a statistical result deviates from APA format, statcheck will not find it. The APA formats that statcheck uses are: t(df) = value, p = value; F(df1,df2) = value, p = value; r(df) = value, p = value; [chi]2 (df, N = value) = value, p = value (N is optional, delta G is also included); Z = value, p = value. All regular expressions take into account that test statistics and p values may be exactly (=) or inexactly (< or >) reported. Different spacing has also been taken into account.
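To make that concrete, here's a rough sketch of the idea in Python (statcheck itself is an R package; the regex, the rounding tolerance, and the function names below are my own guesses for illustration, not statcheck's actual code, which also handles one-tailed tests and rounding of the reported statistic):

  import re
  from scipy import stats

  # Hypothetical pattern for "t(df) = value, p =/</> value" in APA format.
  T_RESULT = re.compile(
      r"t\s*\(\s*(\d+\.?\d*)\s*\)\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*([<>=])\s*(\.\d+|\d+\.?\d*)"
  )

  def check_t(text):
      for df, t, rel, p in T_RESULT.findall(text):
          recomputed = 2 * stats.t.sf(abs(float(t)), float(df))  # two-tailed p
          reported = float(p)
          consistent = {"=": abs(recomputed - reported) < 0.0005,
                        "<": recomputed < reported,
                        ">": recomputed > reported}[rel]
          print(f"t({df}) = {t}, p {rel} {p} -> recomputed p = {recomputed:.5f}"
                f" ({'ok' if consistent else 'possible error'})")

  check_t("t(37) = -4.93, p < .001")  # recomputed p is well under .001 -> ok

The point is that everything needed to redo the check is right there in the reported string, so no access to the underlying data is required.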


Very helpful -- this points out that this is a very focused tool on simple calculations of test statistics. They appear in papers formatted very strictly, like this:

  t(37) = −4.93, p <.001
or:

  χ2(1, N = 226) = 6.90, p <.01.
Because of the strict format and the limited scope of the tool, one would suspect the false positive rate is very low. And because it's just a simple calculation (not a matter of interpretation), authors should not (in theory) be offended by getting a notice.

The link to their paper is: http://link.springer.com/article/10.3758%2Fs13428-015-0664-2
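For the second example above, a hypothetical recomputation (again Python/scipy as a sketch, not statcheck's code) is essentially one line of math:

  from scipy import stats

  # "chi2(1, N = 226) = 6.90, p < .01": recompute the upper-tail p value
  chi2_value, df, reported_bound = 6.90, 1, 0.01
  recomputed = stats.chi2.sf(chi2_value, df)      # ~0.0086
  print(recomputed < reported_bound)              # True -> consistent with "p < .01"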


> When a statistical result deviates from APA format, statcheck will not find it.

Incentive for authors to obfuscate their math?


Pretty sure APA format is required for APA journals, but either way, fortunately, the desire on the part of authors to get papers published and on the part of reviewers to have standards and clear presentation outweighs anyone's desire to game statcheck.

But you do make a great point in that statcheck could and should red-line papers that present no discernible stats, and provide links to the APA style guide!


It'd be easier for them to not obfuscate and use the checker themselves prior to publishing.


True, assuming no malfeasance on the part of the authors.


If you are trying to publish a fraudulent paper, you aren't going to do it by including subtle math errors to fudge your data in the direction you want. You would just falsify the data itself; way harder to find, and much more effective.


> There’s a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute “bullying” or shaming.

Science is fundamentally reputation-driven. One of, if not the primary incentive that encourages scientists to do science work is the chance of raising their prestige. Citations are one very quantifiable yardstick for this.

If positive social sanctions are a driving force for science, then it's entirely reasonable that negative sanctions should come into play too. If a well-cited paper attracts fame, then a poor paper should likewise attract shame.

Otherwise you have a positive feedback loop where once a scientist has attracted enough prestige, they are untouchable. We need negative feedback to balance that out.


Shaming becomes another power structure that allows the influential to bully their peers. It's not so simple as you suggest.


This coupled with the Automatic Statistician (https://www.automaticstatistician.com/index/) will help fix a lot of biases and human errors that creep into scientific research.


The article sadly doesn't report on the false positive rate of statcheck. I assume the paper does?

I mean, it just uses a basic regular expression, I can see it easily performing bad checks. I assume the authors take this into account.


A good distinction between "peer reviewed" vs "computer verified"

>“The literature is growing faster and faster, peer review is overstrained, and we need technology to help us out,”

This is a problem in every field, not just Psychology.

I want someone to tell me the distribution (or average ratio) of papers read to papers written.

Every thesis written is supposed to add some delta to the state of the art. But there is no method for doing a diff between previous and current versions of human knowledge. How to make science less redundant and more efficient?

I dream of aggregators for everything.


I can certainly understand people being nervous about academic debate moving to social media. It would be a hassle for climate scientists if every paper got a brigade of climate change deniers criticising it and you had to respond to those criticisms.

But this example - someone notifying you there's a mistake in your paper, when there really is a mistake? That seems like a strong argument /for/ academic debate via social media, not /against/ it.


> some found the emails annoying from PubPeer [since PubPeer notifies authors of comments] if Statcheck found no potential errors

I would.

> There’s a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute “bullying” or shaming.

It's facts about your work. Learn to handle it or quit pretending to be a scientist.

> The gist of her article was that she feared too much of the criticism of past work had been ceded to social media, and that the criticism there is taking on an uncivil tone

Valid enough point. Criticism and correction can be done in a civil manner, and in an accepted forum.



Remember when writers could do spell-checking and grammar-checking by "running a program" on their text files?

Here we have numbers-checking working the same way.

I bet you this sort of feature gets built in to word processors eventually, and puts wavy red lines under the results it flags.

We've had this sort of real-time "syntax" checking in software engineering for half a generation. It seems wise for other disciplines to consider adopting it too.

It's obviously got to be discretionary, just like spell-check is discretionary in browsers.

We will get a new genre of humor, though: "statcheck fail."


Haha! What if this is actually a marketing ploy for their web-app? Stir up some shit so everyone gets talking, and provide a service.


And what if? Does that negate the results?


Oh no not at all! The results are still solid. So, "ploy" isn't the right word, not really. "Stunt"? "Technique"?

I was just thinking - They could have found a way to email or otherwise contact the authors, instead of just posting comments, but posting comments (and getting some news on it) drives more traffic due to scandal effects.

This rabbit hole can go deeper - what if private disclosure was expected to backfire? By surprising people publicly, they're more forced to admit there's an issue; and then by providing both the issue and a solution, there's an easy way at hand to fix it....


If I use software and there's a bug, I don't want to find out years later when the author decides to admit it; I want to find out right away so I can get it patched. Papers are equivalent to code: if I'm developing something based on one, I want to know if the foundation is reliable or not.

> what if private disclosure was expected to backfire?

Well of course it could be expected to go badly. Look at all the "concern" spent over their methods already.

But yes, it would have been interesting if they had contacted authors quietly and without telling them their methods, simply to see who updated their paper and who tried to blame the messenger... Then publish that list and let the field really correct itself.


While it is definitely to the benefit of all that the bot emails authors when it finds mistakes, emailing when it doesn't find anything is a dark pattern. Reminds me of those bots that spam me after scraping my linkedin.


Why does the article focus even for a paragraph on whether egos would be bruised? If the result is a general improvement to the readers' understanding that is, as far as I'm concerned, case closed. Good on them!


Will I never guess what number 7 is?

(DNR)


I wanted to make a program like that, but considered the ethics of it.



