Scientists publish in slow peer-reviewed journals because scientific publication has to be peer reviewed. The reviewers are a very small selection of other scientists who are experts in the field of the publication. Until the reviewers send up white smoke, a publication is, by definition, not science. This way of working is important for keeping quality high. We certainly don't need more quickly published junk that stirs up the world and is later found to be inaccurate.
Can we accelerate the process of peer review? I doubt it. As I have written, the reviewers are other scientists who have to find the time to read and comment on new articles. The more time they spend on this, the less time they spend doing research. So the small pool of experts is usually the reason the process is slow.
You may be quicker with internet publishing, if (IF!) you work in a highly dynamic field where other scientists are hungry for your results. As the author writes, this is the exception. Most scientific results will be useful only long after publication, if they are useful at all. In that sense, science is already much faster than application.
2. The author writes that we could make progress faster. Please: why? It seems to be a sign of the times that we want to accelerate everything. It is most likely that this will only produce more noise. We need to slow down our lives and our thinking to stay accurate and produce real value. I am glad that scientific results, at least in my field, are still reliably scrutinized by many. That way I know that spending the time to read and understand the material is worth it.
I notice this particularly in computer science, where it seems nobody is capable of locating or reading journal articles published before around 1995, a handful of canonical "classic" papers excepted.
I think that more openness in scientific publication will in fact mitigate the problem of people being quick-on-the-draw, as contradictory findings and previous publications of similar results will be easier to find, even by the non-scientific audience.
How much of that is because research from the 1970s is effectively locked away behind ACM and IEEE paywalls?
If you're serious about keeping up with research, a couple hundred dollars every year to get through a paywall doesn't sound so bad. The real problem with the stuff from the 1970s and earlier is that it's not readily available even to paying members. When I went looking for an article from 1970, the ACM Digital Library had an entry for it but not the document itself.
I am a professor at a reasonably good state university, having just come from Stanford. At Stanford they subscribed to everything, and here our department picks and chooses so that I constantly run into paywalls despite a university subscription.
I don't know how much it would be to upgrade to Stanford-level access, but if it were $200 a professor I assume they'd do it. (Certainly I'd pay $200 from my salary for that.) I'm guessing high four or low five figures per prof in the department.
I'm guessing ACM/IEEE are nonprofits? Kudos to them for making their prices reasonable. There are some professional organizations in math (e.g. the AMS) that do something similar. But unfortunately a lot of our journals are published by for-profit companies.
Wikipedia says the IEEE is non-profit, not sure about the ACM.
IEEE skews towards the EE and high-math end of things; ACM skews towards the CS end of things.
The Journal of the ACM is about $300USD/yr for print/online access (nonmember).
I have a wide variety of interests and would prefer to have full access to about 5-7 journals (some Wiley, some Elsevier). Assuming $300/year prices, that's a pretty sizable sum just to keep up with research interests. :-(
(Of course, I'm confident part of the problem is that expensive paywalls discourage exhaustive trawling by non-academics).
Of course, it is arguable that peer review weeds out noise too. But you have to get your work in front of a peer, and they have to understand your contribution. It's also arguable that peer review leads to stagnation and missed opportunities/discoveries. In fact, surely it does.
A very small (but growing) minority of journals, like PLoS One, make no effort to understand or estimate the significance of the contribution, only whether or not the work is scientifically sound.
The article seems to be a badly researched rehash of other Open Science articles. Open Science is not about faster delivery; it's about access to information. The problem is not that publication is slow, it's that without paying for the journal, other scientists, the media, and non-scientists can't access those papers to learn from them, spread them, criticize them, or contribute.
Peer review is slow because publishing and retracting used to take a long time; now it is instant. And as for the quality of the reviewers, perhaps it would make sense for the journal to employ professional scientists who only review -- they'll be up to date on all the current research (that's their job), can highlight gaps, and can help cross-pollinate disciplines by reviewing 2 or 3 different but related topics.
The idea that a scientist should do everything seems to be more and more inefficient. We need to break down tasks like we do in the commercial world so that experts can move faster.
Have you ever done a peer review? It took me hours to do each one. To do it well, you have to stare at each procedure and each conclusion and try to imagine what could be going wrong.
You may also have to read a lot of literature, both in general (so that you can recognize the difference between something genuinely new and something that was already published in 1975) and in particular (so that you can make sure the manuscript is not misrepresenting its own references).
And, given what's at stake in a review – you're basically holding a year or more of some grad student's life in your hands, at a minimum – it's only respectful to take it seriously.
> perhaps it would make sense for the journal to employ professional scientists who only review...
This is either obvious, or silly. Obvious, to the extent that journals always have employed scientifically trained editors – they're called "editors" – to do everything from tweaking bad phrasing to offering criticism to ultimately making the final decision about what gets published. Silly, because journal editors are rarely experts in your specific field. How can they be? There are a lot of scientific fields. It's impossible to be "up to date on all the current research" in every single one of them, to the requisite level of detail. And most fields aren't awash in money to the extent that they can support a full-time editor with complete expertise. In the general case, you have to rely on peer review because only your peers have the incentive to be experts in your field.
Could I be wrong? Sure -- but if the field is so big that a dedicated person can't keep on top of it, then there is no chance for a working scientist to do so either. The biggest issue is whether there is an incentive for someone with the skills to do it: why would I want to just review the work of others when I have the skills to do my own?
You're missing the point of scientific publication. The entire point of publication is so others can reproduce your work. Speed doesn't help if you need a billion data points taken over 20 years to prove a long-term effect.
> The idea that a scientist should do everything seems to be more and more inefficient. We need to break down tasks like we do in the commercial world so that experts can move faster.
The problem here is that for a lot of cutting edge theoretical research, the experts are the only ones qualified to really vet the paper. And finding experts that: 1. Are capable of understanding everything in the paper and 2. Are willing to put aside their personal research to review the paper is a VERY small set of people.
For example, the claimed P=NP proofs produced over the last few years were presented to only a small group of about 20 highly qualified mathematicians to vet. I, with an undergrad CS degree, could barely understand the summaries of the proofs. As far as I know, those proofs still have not been completely refuted or accepted. I don't think this is inefficiency; it's simply the fact that there aren't enough people with enough time to really work with extraordinarily complex concepts and ideas.
Agreed. In my academic field (debuggers), understanding and knowing enough to give a real review of the subject requires reading what I would estimate to be somewhere around a thousand papers. It also requires keeping up with the major industrial producers of such products. I think the field has something like two to three thousand papers right now; I haven't checked in a while.
For my work, I am just hitting the seminal papers and trying to avoid spending time reading work that led nowhere. I've read maybe two to three hundred articles (I haven't tracked it), and my thesis cites over one hundred.
So no, ordinary people won't do that. Ordinary computer programmers won't do that. Most people are not capable (or prepared) to understand everything in a given paper and comment on what has gone before and what has been tried before.
Academic knowledge maintenance is the total philosophical opposite of tl;dr and Twitter.
I'm genuinely curious how much of the difficulty here is actual breadth/depth of the subject matter, and how much of the difficulty is due to some systemic inefficiency in the way research is published and consumed.
So: to know what's seminal so you can skip reading a bunch of papers, you need to read a bunch of papers. Right.
In theory, that's the point. In practice, no journals ever publish replications, so nobody wastes their time reproducing others' work when they could be working on something publishable or their next grant proposal.
Uh, this is exactly how Science works. It's not worth publishing the exact same results of the exact same experiment by multiple people. That only adds noise to the discussion.
If I arrive at the same conclusion after running the experiment again, there is little benefit to anyone if I do a full writeup and publish it. On the other hand, if I am unable to arrive at the same results, there is tremendous value in publishing my findings. Was the original study flawed? Were my own methods? That's what peer review and publishing results help determine.
Getting more data often improves the accuracy of your results more than using more sophisticated algorithms: http://www.catonmat.net/blog/theorizing-from-data-by-peter-n...
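To make that concrete, here is a minimal sketch of the usual demonstration (my own toy example, not from the linked article; the dataset, models, and sample sizes are all arbitrary choices for illustration): on the same synthetic task, a simple model trained on plenty of data can beat a fancier model trained on a small sample. Exact numbers will vary by task.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification task standing in for "real data".
X, y = make_classification(n_samples=20000, n_features=40,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=5000, random_state=0)

# "Sophisticated" model on a small sample vs. simple model on the full sample.
small = slice(0, 500)
fancy = RandomForestClassifier(random_state=0).fit(X_train[small], y_train[small])
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("fancy model,   500 examples:", fancy.score(X_test, y_test))
print("simple model, 15000 examples:", simple.score(X_test, y_test))
```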
Effectively requiring a positive result for publication means two phenomena can be the subject of multiple studies, with the one fluke that finds a correlation being the one that gets published. At that point, we only get corrected if someone actually attempts to replicate the result, and that replication effort may well be seen as a waste of resources.
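For anyone who wants to see the mechanics, here's a minimal simulation of that filter (my own sketch; the study counts and the p < 0.05 threshold are assumptions for illustration). Roughly one in twenty studies of a nonexistent effect clears the significance bar by chance, and those are the ones that get published:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_subjects = 100, 30

published = []
for study in range(n_studies):
    # Two genuinely unrelated phenomena: any correlation is a fluke.
    x = rng.normal(size=n_subjects)
    y = rng.normal(size=n_subjects)
    r, p = stats.pearsonr(x, y)
    if p < 0.05:  # the positive-result filter for publication
        published.append((study, r, p))

print(f"{len(published)} of {n_studies} null studies 'found' a correlation")
for study, r, p in published:
    print(f"  study {study}: r = {r:+.2f}, p = {p:.3f}")
```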
If the paper is groundbreaking, everyone will want to read it anyway.
If it is so-so, an expert will be able to determine if it is correct by skimming it pretty quickly.
Exact reproduction can be problematic in science because if there was a flaw in the original experimental design or method, an exact reproduction could "confirm" that same flawed result. It's better if other scientists can learn enough to understand the result, and design their own experiments to confirm or disprove it.
A super simple example is if I drop something and then report a value for gravitational acceleration. But what if I dropped a feather? If you simply reproduce the experiment, you'll get the same (wrong) result. Whereas if you select your object to drop, there's a better chance you'll pick something denser and get a different result.
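As a rough illustration of that example (my own numbers and a made-up drag model, purely hypothetical), here is what inferring g = 2h/t^2 from a timed drop looks like once air drag is in play. Reproducing the feather drop exactly would keep "confirming" a badly wrong g; picking a denser object exposes the flaw:

```python
def time_to_fall(height, mass, drag_coeff, dt=1e-4):
    """Integrate a fall with quadratic air drag; return time to hit ground."""
    g = 9.81
    v, y, t = 0.0, height, 0.0
    while y > 0:
        a = g - (drag_coeff / mass) * v * v  # gravity minus drag deceleration
        v += a * dt
        y -= v * dt
        t += dt
    return t

height = 2.0  # metres
for name, mass, drag in [("feather", 0.005, 0.01), ("steel ball", 0.5, 0.0005)]:
    t = time_to_fall(height, mass, drag)
    g_measured = 2 * height / t**2  # naive estimate assuming free fall
    print(f"{name:10s}: t = {t:.2f} s, inferred g = {g_measured:.2f} m/s^2")
```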
The methods of science will be disrupted and improved. Bright lay people will absolutely be able to poke holes in research once the methods and data are completely open, as has been indisputably proven with the advent and prominence of open source software. Research has shown systemic flaws in numerous disciplines. These sorts of flaws fester because of the closed nature of the system. Open research will have its day. It's only a matter of time.
By the time they can astutely manage the actual field with understanding, they will no longer be lay people.
Peer review is not robust against even low levels of collusion (http://arxiv.org/abs/1008.4324v1). Scientists who win the Nobel Prize find their other work suddenly being heavily cited (http://www.nature.com/news/2011/110506/full/news.2011.270.ht...), suggesting either that the community badly failed to recognize the work's true value or that people are now sucking up and attempting to look better via the halo effect. (A mathematician once told me that, to boost a paper's acceptance chances, they would often add citations to papers by the journal's editors - a practice that will surprise no one familiar with Goodhart's law and the use of citations in tenure & grants.)
Physicist Michael Nielsen points out (http://michaelnielsen.org/blog/three-myths-about-scientific-...) that peer review is historically rare (just one of Einstein's 300 papers was peer reviewed! the famous _Nature_ did not institute peer review until 1967), has been poorly studied (http://jama.ama-assn.org/cgi/content/abstract/287/21/2784) & not shown to be effective, is nationally biased (http://jama.ama-assn.org/cgi/content/full/295/14/1675), erroneously rejects many historic discoveries (one study lists "34 Nobel Laureates whose awarded work was rejected by peer review" (http://www.canonicalscience.org/publications/canonicalscienc...); Horrobin 1990 (http://jama.ama-assn.org/content/263/10/1438.abstract) lists others, like the discovery of quarks), and catches only a small fraction (http://jama.ama-assn.org/cgi/content/abstract/280/3/237) of errors. And fraud, like the case we just saw in psychology? Forget about it (http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna...):
> "A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices....When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others."
No, peer review is not the secret sauce of science. Replication is more like it.
But an alternative perspective might be: think of how much worse things would be _without_ peer review.
Hey, I have this elephant repellent for sale. I know it works awesome because I haven't ever seen any elephants around here. (I also cited historical data about the absence of peer review working, you know, pretty well.)
Sure, peer review is not flawless. But flawed procedures can either be replaced or improved. Improvement keeps what is good and changes the rest. Complete replacement may come with new problems.
Don't think that you can escape vanity where people are involved. Not even in science. There are also always people who will exploit flaws in the system. As long as that is only a few percent, we are doing pretty well.
And yes, I think snowwrestler also has a very good point.
Division of labor does not imply peer review. Peer review is merely an ad hoc, unproven, flawed way of implementing division of labor.
Industry and start-ups may not make full use of publications that attack the same problem over and over again, but they can surely improve with new data or newly implemented algorithms.
[EDIT] Publicly available data and software make replicability more realistic. Currently, the lack of detail in publications makes it almost impossible.
If only there was some way to influence NIH and NSF grant requirements ...
I disagree. There are three incentives: first, getting a permanent job; second, getting grant money; and third, doing good work. The first two are unfortunately sometimes in opposition to the third.
> Scientists publish in slow peer-reviewed journals because scientific publication has to be peer reviewed. The reviewers are a very small selection of other scientists who are experts in the field of the publication.
So, when you throw work out into the open, anyone with an interest in your topic (your functional peers) will be able to do their own analysis of your work (a review)--some sort of peer-review, if you will. All without needing to go through a stodgy paper review process.
> This way of working is important for keeping quality high. We certainly don't need more quickly published junk that stirs up the world and is later found to be inaccurate.
So, if there is one thing we've seen in the programming world, it's that secluded cabals of experts don't always produce excellent quality (I submit to you OpenGL, C++, and CORBA, for a few examples).
New patches are submitted to open-source projects all the time; some are accepted, some are publicly rejected, and some merely catalyze interesting discussion. "Stirring the world" is hardly a bad thing--in the case of Fleischmann-Pons, the debunking happened quickly and the world was free to move on. Similarly, one of the reasons the eCat story is frustrating is the lack of transparency (both from the inventors and from media outlets refusing to even air solid rebuttals)--were the information more open, we could sort it into bunk/science and move on.
So, you make the (valid) claim that the process can't go faster, predicated on the scarcity of experts. You seem to have the answer at hand: get more experts. This can be accomplished either by getting more researchers reviewing or by loosening the restrictions on who is considered a valid source. Indeed, places like StackExchange have shown us how to pick out decently good judges of these things--we don't have as great a need for the older method of establishing experts anymore.
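For a sense of what a StackExchange-style mechanism might look like in reviewing terms, here is a toy sketch (entirely my own construction; the update factors and verdict labels are made up): reviewers whose judgments keep matching the eventual community consensus accumulate weight, so good judges surface without formal credentialing.

```python
from collections import defaultdict

reputation = defaultdict(lambda: 1.0)  # every reviewer starts equal

def record_review(reviewer, verdict, community_consensus):
    """Nudge reputation up when a reviewer's verdict matches later consensus."""
    if verdict == community_consensus:
        reputation[reviewer] *= 1.1
    else:
        reputation[reviewer] *= 0.9

def weighted_verdict(votes):
    """votes: list of (reviewer, verdict) pairs; verdicts are 'accept'/'reject'."""
    score = defaultdict(float)
    for reviewer, verdict in votes:
        score[verdict] += reputation[reviewer]
    return max(score, key=score.get)

# Example: one reviewer with a strong track record outweighs two newcomers.
for _ in range(10):
    record_review("veteran", "accept", "accept")  # builds up reputation
print(weighted_verdict([("veteran", "reject"),
                        ("newbie1", "accept"),
                        ("newbie2", "accept")]))  # -> "reject"
```

Whether that transfers from Q&A sites to primary research is, of course, the open question.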
> 2. The author writes that we could make progress faster. Please: why? It seems to be a sign of the times that we want to accelerate everything. It is most likely that this will only produce more noise. We need to slow down our lives and our thinking to stay accurate and produce real value.
In order to help motivate the need for faster progress, please note the following:
Earth has over 7 billion people now. (http://www.unfpa.org/swp/)
The bottom 50% of people in the world (rightly or wrongly) own around 1% of the wealth. (http://escholarship.org/uc/item/3jv048hx#page-4)
We (at least those of us in the first world) don't have a lot of time left, unless we can find ways of solving the Big Problems. Progress in tech is likely the only palatable solution to this predicament. We can't really afford to slow down now.
> I am glad that scientific results, at least in my field, are still reliably scrutinized by many. That way I know that spending the time to read and understand the material is worth it.
I'm greatly concerned about this sort of attitude. Might I suggest that an occasional dalliance in "time-wasting" articles might prove refreshing, and further that it could help ferret out ideas that would otherwise be overlooked?
edit: Modified for civility. Sorry folks.
The scheme you are advocating might have succeeded 200 years ago, when science was happening at a much more fundamental level than it is now. These days, things are so specialized that what the OP says is spot-on: only a handful of people are qualified to knowledgeably comment on your work. What's more, these people are all /incredibly/ busy; I would know, since I spend my time trying to get their attention (grad student) :-) In short, the way we conduct research is like what Churchill said about democracy: it's the worst system except for all the others.
As I've mentioned in another reply, I'd like to see the system cleaned up and made more accessible. I'm a mechanical engineer by schooling and a graphics programmer by trade, so I don't frighten easily at math or algorithms. That said, this paper gave me quite a start. How deep down the rabbit-hole would you say it is in the field of computer vision? Is it fairly advanced/niche in application, or is it about some sort of foundational knowledge that is supposed to be known backwards and forwards by practitioners of the art?
I feel as though a better way of presenting the papers might aid in understanding (clear list of dependencies for knowledge, clear list of applications, clear note of "hey, this is a minor optimization, so don't sweat the details if you don't get it", etc.), but my personal metrics for judging this may not be correct. Any feedback would be appreciated.
Realistically, even in a narrow field just reading (forget about working through the mathematics, checking the results, etc.) all the papers that are published through the slow existing process would take up more than 100% of working time. Clearly there has to be some kind of filtering process.
As it happens, I think that this process should be both faster and more open, but it is foolish to dismiss realistic concerns about how to decide which research it is most worthwhile to read.
There's a big difference between that and being afraid to be exposed to something outside their comfort zone (I am not too sure what you mean by that). You may hope to be so lucky as to have competitors with people like that on your team, but would you be happy to have on your team someone who spent literally their entire workday every day reading reviews of other people's work without contributing anything new of their own?