Academics want the credit for their work, which I think they should retain. However, dissemination of the work is lacking: there are tons of supersmart ideas locked in academic papers, and the papers themselves act as a barrier for many.
Instead of a dense academic paper in LaTeX or whichever, what if the standard were to provide the simplest possible explanation, paired with a graphic that demonstrates the idea?
It's sort of a tragedy, though, when revolutionary papers present ideas simply and are overlooked, often because of the misguided notion that 'simple to understand' equates to a 'trivial insight'. In other words, if you read it and it makes sense immediately, sometimes you think that it mustn't be revolutionary. This doesn't always happen, but it definitely does to an extent.
All of the 'correctness' and 'proofs' are useful for a separate crowd and should also be included, but in a separate section, because they are for a different user: namely, other academics who are well versed in the domain. There should also be a place for code/data/materials, as well as a checklist or script to strictly reproduce the results.
I don't understand your complaint. Papers are written by researchers for each other. The 'correctness' and 'proofs' are the point of the paper. The standard is the simplest possible explanation that includes the justification for all claims made. You are asking researchers to write another, different article for you. Maybe they would be better off spending their time on more research and leaving the popular writing to others who might be better at it.
If you are asking for some explanation that lies in between the extremes of original paper and a popular article, you are in luck: this is what textbooks do.
The supersmart ideas usually spread out before the textbook gets written - grad students bring them along as they go into industry, etc. Most - almost all in fact - papers are pretty boring by themselves. The ordinary case is that papers gradually build and refine ideas for a few years until we look back and say 'wow, we made some improvements on a decade ago. Cool. Now, pushing on....'
"revolutionary papers present ideas simply and are overlooked, often because of the misguided notion that 'simple to understand' equates to a 'trivial insight'"
is popular in the imagination, but I don't think it happens very often. More common is that a great new idea is expressed wonderfully simply, and people skilled in the art read it and say 'holy shit, that's neat'. For example, Einstein in 1905 was a near-nobody who presented powerful ideas very simply, and his papers were not passed over.
2. A ton of research papers are extremely dense, while the actual 'newness' could be explained with a simpler definition and a really good diagram or code.
3. Without making too blanket of a statement, often, formal proofs are much more useful after one understands the intuition.
In other words, I'd like to see a format that stresses expressing the 'new idea' simply. Proving that the new idea is supported by mathematics is definitely important (as you point out, it's the 'meat'), but I believe understanding follows intuition in many cases.
I agree with your other points, and many authors do work hard to give an intuitive description. Intuitive, that is, to a reader skilled in the art.
But don't think that just because you don't understand a paper that means it is overly complicated or obtuse. It might be perfectly compact and straightforward if you are familiar with the field and its conventions and notation. And this is the most efficient way to communicate.
Of course, some researchers just suck at writing. A few might even be trying to sound fancy and make things sound complicated or high-falutin'. Grad students start out with this tendency, but we try to beat it out of them ASAP. Some are never redeemed. But I like to think my group produces readable papers.
No, it is all that matters. Having a neat and intuitive understanding of a new idea that is wrong has at best no value and at worst is actively harmful.
Isn't the audience of an academic paper other academics? Placing proofs and details in a separate section is convenient for the layman but bothersome for the intended audience. Often, it is precisely these details that are important. In mathematics for example, using new methods to give simpler proofs of old theorems is very useful; here, other researchers would care about how particular details are resolved.
I could draw you a picture of my latest papers if you wanted, and it may look pretty, but I think you'd be cheating yourself if you thought you were getting anything out of it.
If I can't get the key value of the paper from that first readthrough, the paper is usually not very good.
Writing a good paper is actually hard, and the typical formats exist for a reason. If you review, say, 20 papers/day during your initial research phase, it's more valuable to have clear structure and an abstract than to have "easy to understand" language with lofty examples.
So basically...good papers do provide the simplest explanation possible. In fact it's something you very actively try to do when writing a paper. Or in other words: I think you just want more papers to be good (there's a lot of unreadable crap that seems pseudosmart but ask most academics and they'll tell you they strive for easy to understand).
I do think there's a problem with how research gets shared more generally though. I wish there were more scientifically literate writers at that boundary, because unfortunately, there aren't enough hours in the day for researchers themselves to fill that role.
My summary of the idea:
There is an awful lot of redundancy and wasted effort that goes into most papers, starting with introductions that need to be rewritten every time (when linking to a solid existing introduction would be both better and less time-consuming). Each piece of a full paper (intro, data, analysis, ...) could be peer-reviewed and published individually. A full paper could then be built from these paper-bricks. Anyway, I recommend reading the paper, as it's well written and clear.
There's also a YouTube video by the author explaining it: http://www.youtube.com/watch?v=4sorEcLjN04.
"Formalize the structure of papers, such that each paper is composed of one or many (clearly marked) of the following "sections" ("bricks"):
symbol   description                 my description
"I"      Introduction                "domain intro"
"PS"     Problem Statement           "specific problem"
"HLSI"   High-Level Solution Idea    "solution vision"
"D"      Details                     "solution implementation"
"PE"     Performance Evaluation      "benchmarks"
-------- >8 ---------- >8 ----------
Some of the advantages:
* no need to rephrase the same "intro" in every domain paper, just reference an existing "I" brick;
* a benchmark (PE) can just reference many "D"s;
* one can easily work "backwards" -- e.g. start with a benchmark (PE) of existing implementations and already publish it, then propose a new implementation (D);
* if someone publishes a similar paper before you, with similar "vision/idea" (HLSI), this doesn't totally destroy your publication, as you can still publish the part with an alternative implementation (D);
* "I+PS+HLSI or I+HLSI: This is what some communities call a "vision paper" [...]"
* & many more listed in the linked arxiv paper http://arxiv.org/abs/1102.3523. Very nice, short and readable one, this.
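To make the composition idea concrete, here is a minimal sketch of how the brick model could be represented in code. This is purely illustrative: the class names, identifiers, and validation are my assumptions, not anything specified in the arxiv paper.

```python
from dataclasses import dataclass

# Brick kinds from the proposal: I, PS, HLSI, D, PE
BRICK_KINDS = {"I", "PS", "HLSI", "D", "PE"}

@dataclass(frozen=True)
class Brick:
    kind: str      # one of BRICK_KINDS
    brick_id: str  # stable identifier, so other papers can reference this brick
    title: str

    def __post_init__(self):
        if self.kind not in BRICK_KINDS:
            raise ValueError(f"unknown brick kind: {self.kind}")

@dataclass
class Paper:
    # A paper is just an ordered mix of newly written bricks and
    # references to already-published ones (e.g. someone else's "I").
    bricks: list

    def signature(self) -> str:
        # e.g. "I+PS+HLSI" identifies a 'vision paper'
        return "+".join(b.kind for b in self.bricks)

# A 'vision paper' built by referencing an existing intro brick:
intro = Brick("I", "smith2010-intro", "Intro to domain X")
vision = Paper([intro,
                Brick("PS", "new-ps", "Specific problem"),
                Brick("HLSI", "new-idea", "Solution vision")])
print(vision.signature())  # I+PS+HLSI
```

The key design point the proposal implies is that bricks need stable identifiers: a benchmark paper (PE) referencing many "D" bricks, or a new paper reusing an existing "I", only works if each brick can be cited independently of the paper it first appeared in.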
‘We are nonchalantly throwing all of our data into what could become an information black hole.’ - Google's Vint Cerf
Open access to the data would allow others to corroborate it, determine the correctness of the analysis in the paper, perform meta-analysis, and aggregate it with other similar data, though there are bound to be privacy concerns, especially with medical research.
Science itself has still not undergone the digital revolution and it desperately needs it.
One thing the article does not mention is the need for better ways of documenting provenance for data.
I specifically don't like the idea of annotations rather than editing, because annotations expand and diffuse the literature rather than distilling it to excellent concise articles. In particular, annotations only address two of my six motivations (#4 and #6).
Within the hard sciences, what do you think annotation accomplishes that can't be accomplished through good editing?
> Nothing is going to replace the paper,
Ahh, a defeatist. You may say I'm a dreamer...
But seriously, I think this is a much more realistic goal than Wikipedia and ArXiv looked like when they were launched.
Based on your comment, I'm now convinced that a wiki-only model is unsuitable.
I am interested in solutions in this space, but for a completely different practical effect: democratic discourse.
http://e-drexler.com/d/06/00/Hypertext/HPEK3.html#anchor3165... (to link to a relatively concrete scenario; the paper as a whole is interesting but some of it had to be just to address the very desirability of something like the web.)
Much more recent and also good: Reinventing Discovery by Michael Nielsen.
If journals didn't care about having the exclusive publication rights then I suspect a lot more academics would select more flexible licenses.
I've also spent a lot of time thinking about this problem and would like to eventually put some work towards it. A couple of additional ideas that I've had:
* A paper can build on a critical reference, and that referenced paper may later be disproven, but this is not immediately obvious from the paper that cited it.
* Currently, it doesn't seem like any merit is given to researchers who are very good at reviewing papers. Compare this to software, where a good code review is celebrated. Editing and cleaning up the state of science should be valued when scientists are looking for work, so I think that something along the lines of a GitHub CV for scientists would be valuable.
This actually sounds similar to Authorea (https://www.authorea.com). It's an online collaborative academic word processor and publishing platform. It uses LaTeX, but is more powerful and efficient.
I added a comment on the post with some more details. (Disclaimer: I work at Authorea, so I'm biased.)
I think you'd need a very radical overhaul of how science works to replace the scorekeeping aspect. I don't think a Github CV is a good analogy to how this could work, because academic science is interested in hiring leaders (i.e. people who can get funded), not contributors. I think realistically you would need to change how science rewards/emphasizes certain activities first, and then publishing would follow. That would be a good outcome for science, but I think it'll be awfully hard to get out of this equilibrium.
edit: I'm curious what you think of the PubPeer model. That's obviously different from what you're envisioning, but thematically I think there are some similarities.
I have a lot of criticisms of academic incentives, and I agree there is something of a chicken-or-the-egg problem, but at least in my field there are plenty of people who don't command big grants but have large citations. The problem comes more because people are hired based on metrics that are unusually bad at tracking what we want.
> I'm curious what you think of the PubPeer model.
It could be useful. Only seems to differ strongly from blogs in its centrality, but (surprisingly) I'm not sure this is actually a big issue. My immediate concern with PubPeer is that it expands and diffuses the literature rather than distilling it to excellent concise articles.
I think this might take off in physics if the ArXiv interfaced with, or copied, PubPeer. Doesn't address most of my concerns, though.
Wouldn't be helpful for physics and math, though.
I'd like to add the one problem that has usually motivated me to think about this problem the way you have: Academic works can contain ideas or data that becomes outdated or is found to be incorrect.
If you're not an expert in a niche, it's hard to sift this out when you come across it. It seems intellectually wasteful to have works that are, for example, 90% accurate and relevant, but have an idea that needs to be updated or tossed. An example of a field where I believe this happens too often is in economics.
Alone, I don't have the time to keep up and fact check everything I read, but collaborative editing could help a lot in this area.
>First, I’ve designed an attempted successor to Wikipedia and/or Tumblr and/or peer review, and a friend of mine is working full-time on implementing it.