
Shameless plug: I'm working at the MIT Media Lab on the PubPub project (http://www.pubpub.org), a free platform for totally open publishing designed to solve a lot of these problems:

One is peer review, which, as some have already mentioned, needs to be done in an open, ongoing, and interactive forum. Making peer review transparent to both parties (and the public) makes everyone more honest.

Another is the treatment of publication itself as the ultimate goal. Instead, we need to think of documents as evolving, growing bodies of knowledge and compilations of ongoing research. Every step of the scientific process is important, yet most of it is flattened, compressed, and lost, like the negative results that are ditched in the search for sexy click-bait headline results.

Another is the role of publishers as gatekeepers and arbiters of truth. We need a medium in which anyone can curate a journal, and in which submission, review, and acceptance procedures are consistent and transparent.

Another is the nature of the medium itself. It's 2016, and these dead, flat, static PDFs are functionally identical to the paper they replaced! Insert your favorite Bret Victor/Ted Nelson rant here: we need modern, digitally-native documents that are as rich as the information they contain.

Another is reproducibility. We should be able to see the code that transformed the raw dataset, tweak it, and publish our own fork, while automatically keeping the thread of attribution.

The list goes on and on...




How are you planning on addressing the fact that none of this matters in the slightest if academic career growth/incentives/reputation specifically revolves around publishing boring PDFs in established journals? What do I gain by sending my work to your platform, where the first question anyone in my "target audience" will ask is "why didn't this get published in a real journal, what's wrong with it?", and I will get essentially zero resume kudos points for it? Keep in mind that it took me many months to do the work, I don't exactly have an abundance of papers to throw around, and my officemate will almost certainly submit work of similar quality to a traditional journal and reap the benefits of that.

Just saying, my decision to publish in traditional journals isn't so much a decision as it is a requirement of the very career I am attempting to pursue.


Great question, and one that certainly doesn't have a straightforward or trivial answer. It's definitely more of a social challenge than a technical one - making publishing free/open won't do anything to fix incentives on its own.

My hunch is that change to this system will come from the outside. It's too risky a career decision for a tenure-track professor to start publishing on PubPub (or any open/new system). But there are lots of people who aren't playing that game: people doing science outside of academia, in corporate R&D positions, or for the sake of education, etc.

The most important step is to show that open publishing works. If we can work with these early adopters and show that conversations are richer, or results more reproducible, we can start to go to universities and grant agencies and advocate for them to require open publishing. The first day that a university hires a professor or an agency awards a grant based on a history of openly published work will be a turning point. I hope it will be similar to the first time a software dev was hired for their GitHub profile, rather than their CS degree.

Today, software companies hire on experience. A university degree can show that, but so can major contributions to an open-source project. I hope science can become the same. Whether you're a PhD out of a great program, or a high-school dropout who has committed her life to rigorous experimentation, your demonstrated experience should be what you're hired on, not the list of journals that have found it in their interest (many of them are for-profit) to include your work.


Perhaps offering some sort of crowdsourced funding mechanism and a reputation system would go a long way toward correcting some of these incentives?

For example, giving authors / organizations a Bitcoin address where they can receive funds from individuals / organizations who want to support their research.

Also, awarding reputation to authors based on the level of peer review their research has successfully undergone (number of peers, level of rigor, etc.), and conversely awarding reputation and funding to those who perform peer reviews. Allowing users to contribute to a peer review fund for individual articles or in general.
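
To make this concrete, here's a toy sketch of the kind of scoring I'm imagining, written out in TypeScript; every field and weight below is invented purely for illustration, not a spec:

    // Toy reputation model for the proposal above. The fields and
    // weights are made up for illustration; nothing here reflects a
    // real system.
    interface PeerReview {
      reviewerId: string;
      rigor: number; // 0..1, e.g. depth of methods/stats scrutiny
    }

    interface Article {
      authorIds: string[];
      reviews: PeerReview[];
    }

    // Authors earn reputation from the number and rigor of the reviews
    // their work has undergone; reviewers earn a smaller cut for the
    // work of reviewing.
    function scoreArticle(article: Article): Map<string, number> {
      const rep = new Map<string, number>();
      const add = (id: string, pts: number) =>
        rep.set(id, (rep.get(id) ?? 0) + pts);

      // More reviewers and more rigorous reviews => more credit.
      const reviewScore = article.reviews.reduce((sum, r) => sum + r.rigor, 0);

      for (const author of article.authorIds) add(author, reviewScore);
      for (const r of article.reviews) add(r.reviewerId, 0.25 * r.rigor);
      return rep;
    }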

All that to say this is very exciting and opens up a lot of new possibilities.


> For example, giving authors / organizations a Bitcoin address where they can receive funds from individuals / organizations who want to support their research.

That's a fantastic idea. Maybe we could call this "depository" of money to conduct research something like, hmmm, what's a good word… a grant?

> Also, awarding reputation to authors based on the level of peer review their research has successfully undergone (number of peers, level of rigor, etc.), and conversely awarding reputation and funding to those who perform peer reviews.

Sounds fantastic as well! Maybe these authors could create, like, a website or curriculum vitae where they could list their accomplishments to establish their reputation. You know, they could have a section in their medium of choice titled something like "selected peer-reviewed articles", where they'd list their publications along with their coauthors and the journals they appeared in. Maybe these journals could devise some kind of ranking to measure reputation. Maybe they could call it something like… amount of impact, or maybe just impact factor for short. I think this could work really well.

> Allowing users to contribute to a peer review fund for individual articles or in general.

Maybe a general fund should be created to support science! Maybe a national science fund or something, governed by a so-called national science foundation that can vote scientists, engineers, and the like onto its board to steer the allocation of funding.

I really think you're onto something very good here!


> Maybe we could call this "depository" of money to conduct research something like, hmmm, what's a good word… a grant?

Nah, that word is already in use for stagnant allocations of academic welfare to work on bullshit instead of transformative techniques (e.g. CAR T-cells, which NIH refused to fund for years). Need a new word to signify "money that is actually intended to produce results" instead of "a pension for irrelevant tenured senior PIs to pay non-English-speakers below-minimum-wage to work on topics that became irrelevant a decade ago".

> Maybe they could call it something like… amount of impact or maybe just impact factor for short. I think this could work really well.

Ah yes, impact factor is such an amazing tool. It allows "executive" "leadership" types to predict (very poorly, but who cares?) how many citations a paper might receive if it survives the months or years between submission and publication in a major journal. Trouble is, JIF is massively massaged, and the COI that Thomson Reuters has in equitably enforcing it is ridiculous.

WARNING: Non-peer-reviewed work ahead! If you're not careful, you might have to apply critical thinking to it!

http://biorxiv.org/content/early/2016/07/05/062109

> Maybe a general fund should be created to support science!

That's a great theory. Perhaps it can be as well executed as the CIHR fund (where study section has given way to "ignore everyone who doesn't suck my dick directly") or NSF (whose yearly funding is dwarfed by the R&D funding at a single company). This approach is working out very well!

You know, if I didn't know better, I might think you were the sort of researcher that fails to look at the details and just submits your most fashionable bullshit to whatever journal at which your pals happen to be editors. I might get the impression that you're the cancer which is killing grant-funded science, which prizes large labs over large numbers of R01 projects, which believes that O&A is an entitlement to take out mortgages on new buildings instead of to pay for the costs of disbursing and administering a grant. But, since the evidence isn't before me, I won't.

It would be nice if you thought a little more carefully about what you wrote. The devil is in the details.


> or NSF (whose yearly funding is dwarfed by the R&D funding at a single company)

If the worst thing you can say about the NSF is that they need more money, that makes it sound like GP has come up with a nice way to allocate the available funding towards particular research projects.

> It would be nice if you thought a little more carefully about what you wrote. The devil is in the details.

Details like how to get "crowdfunding" to put up enough money that "independent scientist" can be a full-time job and not just a hobby for the odd few who somehow already have most of the needed lab facilities/equipment?


Also: I still haven't heard (from either you or the previous parent poster) how journal impact factor can possibly be justifiable as a metric for relevance.

Anyone surveying the actual citation distributions at major journals will immediately note that a metric assuming near-normality cannot possibly summarize non-normal distributions of citations. The latter describes nearly all journals, thus even if JIF were not manipulable by stacking, self-citation, and negotiated exclusion of items to decrease the denominator, it would still suck.
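
If you want to see this for yourself, here's a toy simulation (synthetic numbers, not real journal data) of why a mean-based metric misleads on heavy-tailed counts:

    // Synthetic demo: draw "citation counts" from a heavy-tailed
    // (log-normal) distribution, roughly the shape of real citation
    // data, then compare the mean (what a JIF-style metric reports)
    // with the median (what a typical paper actually receives).
    function lognormalSample(mu: number, sigma: number): number {
      // Box-Muller transform for a standard normal, then exponentiate.
      const u1 = 1 - Math.random(); // (0, 1], avoids log(0)
      const u2 = Math.random();
      const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
      return Math.exp(mu + sigma * z);
    }

    const citations = Array.from({ length: 10000 }, () =>
      Math.floor(lognormalSample(1.0, 1.5))
    );

    const mean = citations.reduce((a, b) => a + b, 0) / citations.length;
    const median = [...citations].sort((a, b) => a - b)[citations.length / 2];

    // Typical run: mean ~8, median ~2. A handful of blockbuster papers
    // drag the mean far above what the typical paper gets.
    console.log({ mean, median });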

https://quantixed.wordpress.com/2016/01/05/the-great-curve-i...

Look carefully at the details! This metric is among the most frequently emphasized by researchers who comprise study sections, and it is objectively terrible.

I'm not whining "just because" -- many of the lines in my CV end with NEJM, Nature, or Cell (no Science paper yet). I'm saying that at least one of the commonly accepted metrics for individual investigators is broken. That sort of detail corrupts the entire rest of the system.

I'm also not saying that a direct public-facing system wouldn't have huge potential problems (although it is nice to see attempts like experiment.com seemingly doing OK, and the funders realizing, hey, there are a lot of shades of gray between "utter bullshit" and "exactly the right experimental design for the question being asked").

One of the nice things about talking directly with folks at NIH, for example, is that they recognize there are serious issues with the incentives in place. If they are willing to collect the data and evaluate (publicly, e.g. in Chalk Talk postings) the findings, doesn't that suggest room for the current system to improve?


I take it you're not familiar with "crowdfunding" sources like the AACR, LLS, ASCO, or other professional societies?

As someone who is funded by several of the above, and who noted that their review processes were substantially less bullshit-intensive yet no less rigorous than NIH review (which has many benefits, efficiency not among them), I'm going to go out on a limb and suggest that it's possible.

As far as the NSF, they do a good job with what they have, but what they have is not commensurate with what we as a society could stand to spend on science. Even NCI is a far cry from that: https://pbs.twimg.com/media/CmLJzKQWkAAl372.jpg:small

Distributions are similar for various other avenues of funding, and it is quite clear that the overhead & administrative costs requested by many recipient institutions are far out of proportion to actual needs, so the impact of the funding allocations is further reduced.

Thus it appears that a direct conduit from potential patrons to researchers is, in fact, desirable. Otherwise, services like experiment.com would not exist. They're not at the level of an NIH study section (duh?) but they have consistently produced a small stream of usable results that belie their supposed irrelevance. Once upon a time, the Royal Society existed for just such matchmaking: find a rich patron and a promising young scientist and line them up. You've likely noticed that many if not most major universities and "centers of excellence" rely upon exactly this model, supplemented with NIH or NSF grants, to exist. Further modularizing the model, so that an administrative hand isn't mandated to yank out bloated "indirects" at every turn, or (alternatively) being more transparent with said O&A requests, might at least bring some of the bullshit under control.

The public clearly wants accountability. The masses may be asses, but if we want their money, we really ought to be transparent about what we're doing with it.


The difference between professional societies and crowdfunding is that professionals, not the crowd who donate directly, decide which projects to fund. In this sense, I do not see a great qualitative difference from government funding agencies --- if you do, please elaborate.

EDIT: And to clarify, in the societies I know, general members do not directly take part in grant decision processes. Rather, the decisions are made by a small panel, possibly together with external reviewers. This is fairly different from crowdsourcing.


It's different from crowdsourcing, but the source and sink for the funds also tend to be more closely related. Ultimately I don't really believe that major initiatives (e.g. P01-level grants) can be adequately reviewed by anything other than genuine peers.

But by the same token, an exploratory study requesting $30k for field work or sample processing could very well be evaluated by less skilled peers. Actually, I think I'm going to try and shop this to a friend at NIH. I'll fail, most likely, but at least I won't just be whining.

For example, pharma and big donors use the LLS review system as a "study section lite" to hand out grants larger than a typical R01. The paperwork and BS isn't really necessary at that level and just gets in the way. If something like this existed for "lark" projects, inside or outside of NIH/NSF, perhaps more diverse and potentially diversifying proposals would be worth submitting.


To some (fairly large, in the case of ASCO or ASH or AACR, perhaps smaller for LLS or AHA) degree, the dues-paying professionals in these societies are the crowd. I would say they are a middle ground between something like an experiment.com or similar at one extreme, and NIH (which has inordinate purely political input -- ask your program officer!) at the other.

We shan't discuss scams like Komen here, but genuine research foundations can exist along a continuum.


The paperwork burden for an NIH grant (relative to a society grant) is often a large scalar multiple. The accountability is often on a par with, or less than, the typical society grant. It mystifies me why this should be so.


I feel like those fields with the highest facility needs/costs would come last, if at all. There are many fields that require pretty small amounts of resources, for example: Computer Science (I did most of my research on a personal laptop, with other equipment costs under $5,000), Mathematics, Philosophy, Economics, Psychology.

All of these seem very possible to crowdfund with the ultimate goal of unhooking them from perverse incentive systems of typical universities.


At least three of the above are in fact supported by experiment.com backers, although largely as a "bridge" to more traditional scholarly outlets. That said, if you go out and get extramural funding for your work, that is generally the defining characteristic of a successful PI, so...


I wasn't saying the current system is great; there's a lot wrong with it, yes, I agree. The impact factor is also a pretty silly metric to me, I agree with you there. The point I was trying to make, albeit sarcastically, was that the system the guy proposed is what we have today, just without the extra hoops to jump through. Like, I have absolutely no interest in maintaining a Bitcoin wallet or whatever, nor do I want anything to do with them. I'll take my funding in dollars or euros or something real and tangible, please.


Mostly agreed, although I did consider using an HPC allocation to mine bitcoins & hire work-study students. But then it turned out that if you study interesting stuff and write the ad correctly, you'll have to beat them away with a stick. For good measure, I convinced one of our corporate patrons that they ought to pay for one of the students.

As far as extra hoops, it's not clear to me whether endless NIH paperwork bloat and ICMJE declarations are more or less onerous than crowdsourcing type stuff. I tend to think there must be a happy medium, but I could just be naive.


Oooo sarcasm. You're probably right though, the old system seems to be working out pretty well. Besides, science is all about never questioning existing institutions right?


It was sarcasm, yes, but my point was that what you proposed already exists, just without the cryptocurrency bullshit and extra hoops to jump through.

I didn't say the current system was flawless. I'm just saying your proposal is the current reality already.


I think it's a political problem. Most researchers and professors tend to be employed by the state. Lobby the state to give very high preference to open access publication. In an ideal world I'd read a sentence like this in a hiring guideline for state universities: "only research that is available to all citizens of this state shall be considered as a contribution by the candidate"

Additionally, for research project funding, I want to read a similar sentence in the proposal outline that you can typically download for a grant application: "Only list your research projects and open access publications in the Previous Related Work section" (or "high preference will be given to...").

Basically, state-funded research that is not available to citizens should be heavily penalized. I think that is logically very consistent. In fact I'd love to GPL the process and make it a requirement to OA everything as soon as a single cent of public money goes towards the research.


Christ, this would be brilliant. The trouble is that you have otherwise-decent people like Kirsten Gillibrand completely in the pocket of RELX. You have massive COI and crap like the PACE trial, whose authors WILL NOT, under any circumstances, release their data, and who consider academic inquiries regarding said data (the foundation of any scientific publication) as "vexatious".

In short, you're better off trying to get policy changed at NIH and other, major funders of research to push openness (Google and major tech companies already are quite good about this), and only resorting to legal challenges when the satanically evil Elsevier and friends attempt to buy off legislators.

If you were wondering whether academic institutions typically stand up for scholars (as opposed to faucets of grant money and indirects, who can behave as they please), kindly refer to the following. http://boingboing.net/2015/12/11/what-will-it-take-to-get-mi...

Academia is not scholarship. The institutions of academia are in fact opposed to scholarship if it might make their senior "leaders" (often far removed from any scholarly pursuits they might once have had) look bad. Academia responds wonderfully to incentives, though, and that's where I do believe you have nailed it.

Decision: Accept with Minor Revisions

;-)


I think the answer is closely tied to funding. The whole prestige of impact factor, national academy membership, etc. are basic heuristics that help the current funding process. Grant review committees composed of already established scientists and administrators are more likely to fund what will show up in boring paywalled PDFs.

Given another way scientists could make a living while pursuing research, I think you'd see some stop feeling pressured into publishing in the traditional way and start experimenting with altmetrics, open access, and open post-publication peer review.


> where the first question anyone in my "target audience" will ask is "why didn't this get published in a real journal, what's wrong with it?"

One would think that this wouldn't happen, simply because it is (I think) a combination of two of the most well-known logical fallacies, "argumentum ad populum" and "appeal to authority": https://yourlogicalfallacyis.com/appeal-to-authority

I am extremely dismayed by the fact that, in a career where (of ALL careers!) the evidence should ostensibly speak for itself, papers (and their information) are essentially pre-rejected or pre-accepted based on reputation.


Conferences and journals are a filter. If your paper can't pass a few rounds of anonymized review (for a conference), then it probably isn't worth the reader's time.

As an example, arXiv has a lot of great papers, but it also has a lot of bad ones. I'm unlikely to go there to find a paper unless someone referred me to a specific one.


I've never had a paper rejected from a conference, and I've seen some pretty shitty ones at conferences. I think this statement is pretty field-specific.

That said, in the CS and EE fields at least, the use of 10-12 referees for a short, dense conference paper is a really good idea. It would appear that this works primarily because of... drum roll ...the social norms and incentives in place within these specific fields.


As an academic in biomedical sciences and as someone who tried to create a journal that did all the things the OP said, thank you.

The old crappy system is mandatory. You can't decide to do otherwise and still have a job.

The impetus for change needs to start at the funding source level: NIH. They need to make a journal that grantees are required to publish in to keep their funding.


They have one (several, actually). The trouble is that they're called Cell, Nature, and Science; they're all for-profit; and they answer to no one.

The Wellcome Trust is walking the walk that NIH has merely talked for the past decade. We will see how this goes. Fingers crossed.


With eLife?


No, much bigger. They're going to start requiring OA.


Peer review also needs a reboot. It's epically unhelpful and slows down publishing dramatically. I think they should mandate post-publication review.


The review model in the WTC's new world order is open, as it currently is in the F1000 journals, and the Wellcome Trust pays the APC so there is no excuse for grantees not to publish. This slowly kicks away all the bullshit excuses that are presented for lack of scientific productivity so that the funder can objectively assess "what are these people doing with our money, and does that advance real, non-press-release science?" It's brilliant imho.

https://wellcome.ac.uk/news/why-were-launching-new-publishin...

Momentum is important. There's no reverse on a submarine. One of the biggest funders in the world has taken away a standard excuse for failing to openly report results, negative or otherwise. It is free to readers and free to authors funded by the Trust. And all of the reviewing process is out in the open.

This is a huge step. The only other funding body I can think of with similar weight is NIH, and (at least internally) they're starting to move in the same direction.


I'm a new professor in the US, so I'm happy for you, but it looks like we'll have to wait for the NIH to do its job.

F1000 is entirely post-publication review, where reviewers' identities are known? If so, I love it!


The NIH does a pretty decent job all things considered. With an organization that size, there is considerable inertia, but newer blood is starting to fix some of the older problems.

Best of luck -- work on stuff that matters, if you can :-)


IMO, the change has to come from the grant financiers, and a case can be made that they get more bang for their buck by adding requirements such as pre-registration, open data publishing (so long as they don't want to keep the data proprietary, of course), and guaranteed publication regardless of outcome. The fruits of what they purchased can be of more use to the field in this manner, and they may even find they spend less money if the 'failures' are documented, so they don't inadvertently fund the same dead ends repeatedly because the initial results were hidden away in a filing cabinet.

I think the same case could be made for placing more emphasis on funding replication -- instead of thinking 'we learned it, why revisit it?', you're checking if it's actually valuable information worth spending more on, or just a fluke.


It could work more like the open source software project model: you would get your respect from having several "accepted pull requests" to the literature. The difference is that your PRs would be better framed in terms of identifying your specific contribution.

As it stands now, the publishing model is the equivalent of writing your own version of the project -- with no easy metadata about your specific innovation -- and asking it to be accepted or rejected as a whole.

With that said, it's understandable why people keep doing the latter until academia has standardized on the former.


In mathematics, Perelman posted his beautiful three-part proof of the Poincaré conjecture to arXiv, and within a day there was allegedly hype (or so reputed professors report; though I did go into advanced algebraic topology later, I was 14 at the time). Professors (the same professors who'd referee) were independently reviewing Part 1 within 24 hours, and this was in 2002.

Then you have Fields Medal winner Voevodsky, who also bitches about having a paper stuck in peer-review limbo for three years.[0]

In cosmology (I am not a physicist in the slightest, but I have enough mathematical background s.t. I can watch lower graduate level classes without too much difficulty in a recreational capacity), pre-prints get e-mailed around daily, such that if Lenny Susskind posts something on evening-{n}, morning-{n+1} will have most of the field commenting on it.

In the biological sciences (again, just from hearsay, mind you) I understand it's different (e.g. in medicine specifically, my uncles/father/their colleagues and students are all concerned with the impact factor of their papers and/or their doctoral students' papers).

I also realize some journals have a "no preprint" policy. But a lot of heavy hitters (Nature, Elsevier, Springer[1]) allow the circulation of pre-prints now. So it's not a binary decision between one or the other in some cases. (And in the instances where you are able to submit pre-prints online, if you've made a mistake, a colleague could inform you and you can issue a retraction, hopefully before the $month+1's Nature Methods hits physical press.)

---

Side-bar, skip over if you don't care about proof theory, type theory, constructive logic, univalent foundations, or category theory

[0] Vlad on Univalent Foundations http://www.math.ias.edu/vladimir/files/univalent_foundations...

Interestingly enough, for you category theory fans, Vlad has recently become a huge proponent of homotopy type theory[0.b]. Everyone knows Wildberger has kooky ideas w/r/t infinitesimals, but Vlad too shares the same 'feeling'[0.a] that analysis (and to a lesser extent axiomatic set theory) just doesn't 'feel' foundationally correct. He's a huge proponent of Coq, Agda, and dependent typing instead. Princeton's IAS put out the seminal text, which is worth a read. No advanced mathematics required (or even advanced CS knowledge; it's all self-contained).

[0.a] I can't really put into words what the 'feeling' is, any more than I can describe why some people find algebraic proofs more elegant than analytic or geometric ones. But in the same way that there's a well-agreed-upon notion of mathematical beauty[0.a.i], this 'gut feeling' too is shared amongst those who fall into a certain category (pun not intended).

[0.a.i] https://en.wikipedia.org/wiki/Mathematical_beauty

[0.b] https://homotopytypetheory.org/book/ -- think of this as the best minds in aggregate sitting down at Princeton to write one of the best texts on proof theory, type theory, and logic. Practical Foundations for Programming Languages (Robert Harper) is a good companion text.

[1] https://en.wikipedia.org/wiki/List_of_academic_journals_by_p...


My god, I love coming here and reading the travails of real scientists. The journals that have a "no preprint" policy are primarily Cell Press rags (link below) and Annals of Oncology.

http://crosstalk.cell.com/blog/lets-talk-about-preprint-serv...

Note the insane amount of FUD. "We're fine with preprints, as long as we can control them!" But that's kind of the point, isn't it -- wrestling control back from journals with a massive inherent COI! Allowing the public (lay or scientific) to objectively determine what's real and what isn't, with visible salvos from both camps as needed, instead of the invisible and unaccountable hand of an editor or anonymous referees? (I sign my reviews.)

Physics doesn't seem to have suffered from preprints -- if anything, it has thrived more than expected thanks to them. I don't hear about a replicability crisis in physics, for some reason. I hear about it all the time in the biomedical sciences, perhaps because the review process is so incredibly shady at the "top" journals (in terms of impact factor, a statistically indefensible and criminally manipulated metric). Papers that are outright rejected or in need of major revisions are simply yanked and resubmitted elsewhere (I am guilty of this, by the way; rejected from Nature, so we sent it to NEJM with almost no edits and they accepted it a week later). Senior authors act rationally, so if perfecting a paper will cause a delay and potentially deny them priority on a discovery, who can argue for making the edits instead of resubmitting elsewhere?

All of this goes away when preprints are the default. Priority is established by deposition, rather than publication; the discovery often (though perhaps not always) is itself enough to prompt review from rivals, even if only in self-interest; and the preprint status makes it clear that the item has not been formally reviewed. Many or most resulting referee comments are public, often submitted as related deposits. From a scholarship perspective, I cannot for the life of me see the downside, other than reluctance to boldly state the truth and face retaliation from shysters. It's a shame that the latter is an issue, but given the even larger amounts of $$$ at stake in physics, it seems that biomedical researchers are mostly just cowards using this as an excuse. (JMHO)


What's wrong with PDFs? PDFs are great because they're a downloadable format, and when I create one I know that what I see is what other people will see. The same cannot be said for a website format, where people use different browsers, different devices, and different versions of different software.

I just don't see what's wrong with using PDFs. If I want to read one of the papers on your website on my phone when I don't have Internet, how do I do it? If I just keep the page open in my phone's browser, it's going to go away at some point as my phone's OS clears the browser's cache to make room for other apps that I'm actively using. With PDFs I can just open the PDF I've downloaded.


Any standardized downloadable format (including VMs or AMIs) also fits these criteria. PDFs aren't inherently bad, but on occasion they are not the best tool to communicate results.

For example, Sean Morrison's recent paper on live-imaging stem cells in vivo benefited greatly from inclusion of movies showing the live-processed data. Their point was that live adult stem cells can be imaged, and this revealed niche biology. The only way to support this was to provide the proof, and MPEG did this nicely.

Agreed regarding permanent vs ephemeral, though.


I agree with most of your points, but disagree strenuously with this one:

> It's 2016, and these dead, flat, static PDFs are functionally identical to the paper they replaced!

There's a lot of value in static PDFs and how we format documents for publication. $CURRENT_YEAR doesn't invalidate good ideas from ($CURRENT_YEAR-n).

Just in terms of technology, static PDFs are:

- Easily distributed.

- Easily archived.

- Readable forever, even as platforms change incompatibly.

- Readable on any device.

20 years from now, do you really want to have to run an old browser, or debug shoddy JS/HTML/whatever, just to read a paper from $CURRENT_YEAR-20?


> [W]e need modern, digitally-native documents that are as rich as the information they contain.

It's best to separate the interactive parts from the non-interactive. When I read papers I always print them, because I can take notes, underline, draw shapes, comment, etc., all in-line, with a pen that I'll buy from the closest little shop if I don't have one with me. It is more convenient than any software tool out there, because there is basically no limitation on how I'll use the pen on the (actual) paper.

But how will I be able to print a video, or a graphic that has some controls to be fiddled with?

But those can be provided separately, and if I have the media, I'll use them as well.


If I might give my opinion on which features PubPub should implement next:

- adding "Math in-line $$" in Formatting menu

- enabling upload of LaTeX files

- enabling upload of PDFs, to store on PubPub existing research (combined by via.hypothes.is, one can annotate PDFs, which is in the spirit of PubPub)

- include some interesting features of SJS like grading, download as PDF, add to personal library...

I am impressed at the job you've done for a graduate project! Best wishes :-)


Shameless promotion: SageMathCloud (https://cloud.sagemath.com) does many of those things, is also open source under the same license as PubPub, and is built on similar technologies to PubPub (React, CodeMirror).


Why reinvent this wheel? Why not instead (also) partner with people like Overleaf who already have many of these features?

https://www.overleaf.com


that appears to be totally proprietary


not really

https://github.com/overleaf

Some parts of it are (duh?) but overall the system is astoundingly transparent. I think they ended up charging for the sync-to-git option, but if you look at the above repos, it appears that this feature can actually be pulled on the user end rather than just pushed from Overleaf.

Overleaf integrates nicely with existing journals and, to some extent, with preprint servers; I've bugged them for some time to support Markdown (I don't care to use LaTeX for manuscripts that don't have much math) but as yet they have not implemented it. It may be an issue with the backend, although even that should be surmountable.


not sure how to parse all that. At the least, it's not straightforward, and my original comment is 100% true: "that appears to be totally proprietary" (emphasis added)

If a project is actually FLO, you'd think they'd make that clear to someone visiting their website.


They're a corporation. Some of their decisions are made accordingly. The most interesting bits, though (version controlled dumps of the LaTeX you're working on) do have open implementations.

Ultimately, it (like Paperpile) is a service that is built from mostly open components, integrates nicely with OA workflows, and has no real lock-in. I'm not sure what you're afraid of -- a paper will unsubmit itself? LaTeX will go closed source? Since your entire version history is there for the taking at any time, there's not much downside, and for our manuscripts & tech reports we've seen huge upside.

Ymmv!


Haha - not my project, I just work on it for now. But thanks for the great ideas!


I still don't... get what PubPub is. I definitely think that peer review should be updated and science should be using the internet better, I just have no idea what PubPub has to do with that mission. Is it just... a place where people can post things, and other people can comment, and other people can curate posts? So like any blogging platform?

Even if used as intended, it seems like the only comments would go on things that people are already seeking out; there hasn't been a "random pub" button since it got overtaken with spam. So the use of comments to "review" is already throwing up a huge barrier to discovering new science and leveling the playing field for unrecognized researchers, exacerbating the problems of the current system. The inability to anonymize comments or posts gets rid of the most promising way the internet could affect peer review. It's hard to see what about the platform promotes evolving documents as opposed to static publications. And complex, digitally native papers are already being promoted by larger journals, with better resources to implement them, and more staff to help potentially unfamiliar scientists with the process, another aspect that could make the platform a source of exclusion and bias.

I don't mean to sound too critical, I just follow both science publishing and projects coming out of the Media Lab pretty closely, and have been baffled by this one for a while. https://www.sciencematters.io/ is doing cool things with switching from static publications to a set of growing datapoints, updated in real time, http://meta.com/ (formerly sciencescape) is dipping their toes into curating papers in a really interesting way, and even http://the-artifice.com/ has a unique peer review system that I think could be implemented well in the hard sciences as well. PubPub is just like... Medium? With a little bit of GitHub, and the ethos of arXiv?


So I'm very interested in going beyond PDFs, but I can't for the life of me figure out what PubPub intends to do. I get that there's an online collaboration engine, as is offered by many places like Authorea, ShareLaTeX, Overleaf, and Fidus Writer. What does PubPub do differently here?


Plugins and views!

We're currently working on a significant architecture refactor that will enable 3rd-party plugins to render arbitrary data assets inline, like a reactive spreadsheet or a D3 visualization.

We're also working on deeper, richer hyperlinks between documents that would let you reference or quote other publications in the system with arbitrary resolution (down to the section/paragraph/sentence/number).
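
To give a flavor of the plugin side, here's a purely hypothetical sketch; none of these names or signatures are from our actual codebase, just a TypeScript illustration of the kind of contract we have in mind:

    // Hypothetical sketch of a 3rd-party view plugin. The interface,
    // names, and example are illustrative only, not PubPub's real API.
    interface AssetViewPlugin {
      // MIME types this plugin knows how to render inline.
      assetTypes: string[];
      // Given a container element and the asset's URL, render a view.
      render(container: HTMLElement, assetUrl: string): Promise<void>;
    }

    // Example: render a CSV asset as a bare HTML table.
    const csvTablePlugin: AssetViewPlugin = {
      assetTypes: ["text/csv"],
      async render(container, assetUrl) {
        const text = await (await fetch(assetUrl)).text();
        const table = document.createElement("table");
        for (const line of text.trim().split("\n")) {
          const row = table.insertRow();
          for (const cell of line.split(",")) {
            row.insertCell().textContent = cell;
          }
        }
        container.appendChild(table);
      },
    };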


OK. Color me skeptical...


That’s fair. We think it’s important to be able to reference anything within the system with arbitrary resolution, so that’s our goal. Whether that implements well or not is certainly a technical challenge, but hard problems are the fun ones, right?


I think the point being made was that already existing sites didn't change the status quo, and so you'll need something much better to achieve what they failed to.


"anyone can curate a journal"

What does the word "anyone" mean? If you literally mean anyone, then you are getting rid of the idea of "peer", in which case you seem to be excessively idealistic. Even Wikipedia eventually had to impose limits.


> Another is the nature of the medium itself. It's 2016, and these dead, flat, static PDFs are functionally identical to the paper they replaced!

Well, many of the existing publishers (at least ACS, Nature group, Wiley, Science group) are already pushing hard towards something "more" than PDF, mainly for DRM reasons. And scientists in general are resisting greatly, because PDF is such an established format, supported on all devices and platforms, can be created by the tools we use, and can be shared freely amongst peers. The barrier to adoption for a new format is extremely huge.


I'm also concerned about long-term readability. Some of these documents might have at least a few hundred years of people wanting to read them ahead of them. Can any of these innovative new data formats stay around long enough, especially if they're proprietary? We might change computation platforms and hardware paradigms a few more times as well; just think back to 20 years ago. A lot of measured raw data has already been lost this way; it's nearly impossible to parse anymore. Pure text can survive far more easily.


Hi! I really think what you are doing here is great and I hope you will succeed!

Is there a way to test the platform without creating too much garbage? Like a sandbox version, maybe?


Everything is open-sourced, and we've tried to document as much as possible to make it easy for people to host their own instances! Alternatively, you can just create arbitrarily many drafts without publishing any of them.

https://github.com/pubpub/pubpub


Running my own instance is a bit heavy just to try the platform :). But thanks for the tip; I can create drafts and delete them without trashing your instance.

The fact that it is open source is a major positive point. I suppose you have heard of SJS [1], which in a way is very close to what PubPub is, except I do not believe in SJS because it is closed source: it is way too unsafe for a scientific community to take the risk of adopting such a medium for a new journal or a conference when the platform is not open and there is a risk of Elsevier buying it to shut it down, or closing it even more, for instance.

[1] http://sjscience.org/


> and there is a risk of Elsevier buying it to shut it down

Yup. I'm still stung by Mendeley's sale to Elsevier.


If you don't mind I have a few questions :).

– PubPub doesn't seem to have math support, is that right? I think that should be a very high priority.

– LaTeX import is also really important (but this supposes that math is supported), is that on the roadmap?

– Related question: is there a way to work offline in my usual text editor and collaborate with my usual tools (git, svn, whatever) and then import my paper in PubPub?


Math support already there: http://i.imgur.com/tHKpNrB.png
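
For example, wrapping TeX in double dollar signs (the "Math in-line $$" item in the Formatting menu) renders inline:

    $$ e^{i\pi} + 1 = 0 $$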

We're keen on getting good offline support added, but (as you point out) there's still a lot to be done on the web side of things. In a couple of weeks we're pushing out some big updates that will hopefully clarify the project and how to use it.

We welcome ideas on what good offline support would look like, or PRs if anyone wants to take a stab at it.

Our goal is for PubPub to be a public utility for scientific communication. It'll be non-profit and open source for as long as it lives (still a grad student project at the moment...) and free for anyone to publish whatever they like. I don't think we're nearly smart enough to know exactly what that looks like, so please do feel free to contribute ideas, code, or inspiration into what a public tool for science communication should be (this comments page is already wonderful in that regard).


Ah, thanks. I tried math with single $ signs.


Of course! Importing from Markdown/LaTeX/git, exporting to PDF/XML/etc., and lots more are coming (not in that order). We're still in very active development, so there's a lot left ahead on our roadmap.

If anyone is serious and passionate about helping out, we're hiring!


Had I not landed a position in academia, this is really the type of project I would have loved to work on if I had to do engineering rather than research ;).

But I really hope I can contribute as a user in the future :).


JoVE, the Journal of Visualized Experiments, is also a very interesting model.


I skimmed through the site and am left with the question: where do I submit my paper?


This is a great job! Congrats!!!!


No need for shame with such a worthy plug!!!!



