MIT Ends Elsevier Negotiations (news.mit.edu)
1104 points by davrosthedalek 3 months ago | 262 comments



Here is the MIT Framework for Publisher Contracts:

https://libraries.mit.edu/scholarly/publishing/framework/

It requires that:

1. No author will be required to waive any institutional or funder open access policy to publish in any of the publisher’s journals.

2. No author will be required to relinquish copyright, but instead will be provided with options that enable publication while also providing authors with generous reuse rights.

3. Publishers will directly deposit scholarly articles in institutional repositories immediately upon publication or will provide tools/mechanisms that facilitate immediate deposit.

4. Publishers will provide computational access to subscribed content as a standard part of all contracts, with no restrictions on non-consumptive, computational analysis of the corpus of subscribed content.

5. Publishers will ensure the long-term digital preservation and accessibility of their content through participation in trusted digital archives.

6. Institutions will pay a fair and sustainable price to publishers for value-added services, based on transparent and cost-based pricing models.

Not surprising that Elsevier couldn't meet these requirements. #2 in particular seems antithetical to Elsevier's philosophy.

What was surprising was the number of institutions which had signed onto the framework - my alma mater included. I'm curious how many of these still have an Elsevier contract.


> What was surprising was the number of institutions which had signed onto the framework - my alma mater included. I'm curious how many of these still have an Elsevier contract.

If you have ever dug deep into the internals of a large organization, you will notice that they are perfectly capable of managing a set of self-contradictory rules. It is actually an amazing power. At least, this was my experience.

For a mathematician used to logic with the law of the excluded middle, it may seem impossible, since you can prove anything from "p and not p". Yet these organizations manage not to derive arbitrary things from contradictory inputs, but somewhat sane things (some of the time).


> they are perfectly capable of managing a set of self-contradictory rules

This is caused by managers' and middle-managers' performance metrics being tied to the wrong outcomes.

The top layer of management's target is the desired outcome. The bottom layer's (the worker's) target is also clear: it's to do whatever their manager says to do.

In the middle is where it often breaks down; middle managers' objectives are short-sighted, and while that works out for their own careers, it rarely benefits the top-level target. At best, it introduces a large inefficiency, waste of resources and unnecessary busywork ("bullshit jobs" becomes relevant here), and at worst it goes completely against the target set at the top level.

Imagine it as an eventually-consistent system: if you don't change anything and give the system enough time, it will eventually achieve its goal. The problem is that the "time" we're talking about is measured in years, if not decades, and during that period the system is stuck in a contradictory state. Endless restructurings and other external factors often reset this "timer", so the system is even more likely to stay contradictory forever (some other "systems" depend on this system staying in the contradictory state), despite theoretically advancing towards the end goal.


> At best, it introduces a large inefficiency, waste of resources and unnecessary busywork ("bullshit jobs" becomes relevant here)

If the COVID-19 lockdown has taught us anything, it's that probably a full third of our economy is bullshit jobs. We're at best self-perpetuating economic 'nice-to-haves', at worst parasitic rent seekers, probably somewhere in between, whose real role is simply to distribute GDP more widely, while pretending to add value to society.


There are a ton of nice-to-have professions that are not strictly necessary, like musicians and decorative fountain builders. That is a good thing! It's clearly possible to live in a giant grey barrack and eat nothing but Soylent, but it's not what 99% of the population would prefer. If anything, it would be good for society to make the percentage of not-essential-but-nice-to-have jobs as high as possible.


I don't think the parent comment's bullshit jobs are the same as your definition of nice-to-have jobs, at least assuming the parent is talking about David Graeber's use of the term.

Bullshit jobs are ones that could disappear and the impact on an organisation's output would be unnoticeable or minimal. Nice-to-have (for society?) professions still have value, otherwise they wouldn't be nice: if all the musicians disappeared, people would definitely notice.

I'd also contest the idea that cultural labour is unnecessary. Lots of people in lockdown have depended on all kinds of music, TV, film, etc to maintain their mental health. This seems to go beyond preference, even if not everyone needs the same or as much cultural produce to survive healthily.


It’s roughly the same concept. Those people were hired for a reason, even if it wasn’t a fully rational one. Just as musicians are the quality of life improvers of society at large, so too are code janitors to an IT team.


The idea of bullshit jobs started a few years ago, based mostly on the observation that the people doing these jobs themselves felt they were bullshit jobs, and therefore did not feel fulfilled.


Douglas Adams got there in the 80s with his Golgafrincham B Ark storyline.


>Douglas Adams got there in the 80s with his Golgafrincham B Ark storyline.

If COVID-19 has taught us nothing else, it's that (as Adams warned us) "telephone sanitizers" are absolutely essential.


If anything, I'd wager that bullshit jobs make an organization less efficient.


There's various kinds of "unnecessary work" jobs. Some do really contribute directly to the common welfare: artisans and producers of culture; providers of small conveniences, etc.

There's also a whole lot of administrative, artificial, make-busy-work jobs, where we build complicated systems of bureaucracy to prevent losses that cost orders of magnitude more than the losses they prevent, and that in turn impose layers of bureaucracy on others.


To be fair, nobody knows which parts of a complex bureaucracy are useless and which are important. There is a pretty decent chance that the truly useless parts of an organisation are tied to complying with regulations.

Most of the parts of the bureaucracy that I think are useless are the parts that are non-negotiable from a legal perspective.


> Most of the parts of the bureaucracy that I think are useless are the parts that are non-negotiable from a legal perspective.

When we look at, e.g., higher education, as a place that's particularly guilty of this:

Administrative costs have grown to be approximately equal to instructional costs at universities, from about a quarter of expenditures in 1980.

In essence, our desire to prevent waste and to ensure that money is spent the way we'd like has created an ever-ballooning set of administrative work. It might be better to tolerate a somewhat greater share of instructional and research expenditures being wasted or spent in non-ideal ways.

Any time there's a small bit of waste or money spent in ways we don't approve of, the impulse is to create rules and administrative procedures to prevent it from happening again. But this effectively creates a perpetual overhead for what may have been a one-time, very minor problem.


“There is nothing as necessary as the unnecessary” - from the movie Life is Beautiful.


I suppose that is a third way to deal with automation.

I usually say, there are two ways to deal with productivity increases (from automation and industrialization): you can either attempt to do more as a civilization (build moon bases, increased consumer goods, etc) or you can have everyone work less (3 or 4 day work weeks, 5 hour work days, etc).

I guess you can also just create lots of bullshit jobs so everyone is still "working" 40+ hours a week without actually doing anything.


I believe your third answer is indeed the reason that, despite so many things being automated, we're not seeing the benefits in terms of reduced workloads on people. The way I see it, as long as creating bullshit jobs is easy, the market will force everyone to work 40+ hours a week to survive. And as long as creating bullshit jobs that just shuffle money around is easier than creating socially valuable jobs, we won't see any Moon bases either.


It's not just a matter of whether creating bullshit jobs is easy.

On the one side we have supply constraints on necessities. If there isn't enough good housing for everyone and we're not allowed to build more then people will have to work 40 hours so they can outbid other people on the good housing and not get stuck in the bad housing. We could solve this by building more good housing, but only if we actually do that.

On the other side it's a question of where the surplus goes. You have a company that makes billions of dollars -- far more than it needs to operate. What happens to the rest of the money? If the managers get to use it for empire building, you get bullshit jobs. That happens a lot.

Ideally we'd solve both, but even just one would be progress.

If you don't have to work 40 hours to afford necessities then you're not going to choose a 40 hour bullshit job. You're either going to choose a real job or a job with lower hours so that you can spend more of your time doing something you want to do, which might very well itself be a real job (i.e. starting your own business).

Whereas if we can figure out how to transfer more wealth away from organizations and into the hands of real people, that gets rid of bullshit jobs too, because most of them come from the principal-agent problem and the misalignment between managers and owners/taxpayers. Transferring the money to any person or entity without middle management would be a net improvement.


Look at it from another angle: would people work bullshit jobs if they didn't have to? Fix the incentives, pay people a minimum income where they don't have to work, and if you absolutely need labor you're going to have to pay someone who doesn't have to be there if they don't want to.

If people's basic needs were met, I think you'd see a lot of labor shift to work that is valuable, but currently not compensated for.

Also, you have to adjust constraints to deal with legacy systems: the Federal Reserve works to maximize employment through monetary policy. This is suboptimal, when work will expand to consume the time allowed for it [1]. You need to use a one way policy ratchet to ratchet down the hours per week worked as productivity has increased, otherwise workers will never see productivity gains and society will be stuck on the 40 hour labor work week forever.

[1] https://en.wikipedia.org/wiki/Parkinson%27s_law


> You need to use a one way policy ratchet to ratchet down the hours per week worked as productivity has increased, otherwise workers will never see productivity gains and society will be stuck on the 40 hour labor work week forever.

The feasibility of this (in the US) is strongly dependent on health insurance not being tied to having a full-time job (often defined as just under 40 hours a week).

Plenty of employers have demonstrated that they can be organized to use mostly part-time labor, but their incentive to do this has been avoiding the obligation and expense of contributing to health insurance and other benefits, rather than giving employees a productivity dividend.


I agree. Healthcare must no longer be tied to employment.


The current losses in public education will be felt on the decades timescale. I think the perspective here on HN is probably shifted due to the tech demographic, where VC and ads have provided a buffer.

Systems have inertia, stuff continues working until it hits a wall. The unemployment systems written in COBOL kept churning until now, for example.

I think the cuts in jobs might give the impression that "we never needed them in the first place," and maybe that's true in some cases, but it's hard to distinguish right now from the effects that take years to surface.


I think it's a little unfair - you will always have jobs that are important in the immediate, short, medium, and long terms. Society can function perfectly fine without teachers, engineers, scientists... for a bit. Some teams can function fine with reduced manning, but in the long term they need full manning or more, for training, taking care of admin, ensuring sanity, and doing more than the bare minimum.

Not that I disagree that some jobs are for expensive seat warmers to ensure warm fuzzies


I don't really understand what COVID would tell us about "bullshit jobs." It tells us that restaurants and bars and concerts, etc, aren't "necessary." Duh? Leisure activity has never been necessary, but it's still something we enjoy and value.

The existence of those leisure jobs and the venues they operate in certainly adds value to my life.


A lot of businesses have used revenue loss to justify layoffs.


> The top layer of management's target is the desired outcome.

In my working life my experience has been the opposite. I have spent my career in large corporate environments, and often top management is removed from reality. As an example, I once had a CIO who thought he could replace a 20-year-old database that had several thousand tables and countless stored procedures. The first step of such a plan is a POC, and guess who gets to implement the POC while making sure the application actually bringing in the money keeps working: the middle manager. Consultants are brought in, but they end up taking the middle manager's time because the middle manager has to explain the current application to them. Millions later, the project is scrapped.


> The top layer of management's target is the desired outcome.

Kinda. Top managers tend to have much shorter time horizons than shareholders. The incentives are aligned only when all shareholders are of the active-trading kind (if that ever happens at any company).


Isn't there also an additional misalignment caused by shareholders who hold shares only to speculate on the stock market? Those shareholders don't care about the long-term goals of the company, but only about short-term stock performance they can use to sell their shares high and then buy some other shares low.


Short-term speculators have no direct influence on management (other than through the share price, i.e. the money they promise long-term shareholders in exchange for their shares), because they don't hold the shares long enough to vote at shareholder meetings. The long- and mid-term shareholders can order the board on what the priorities should be, or replace the board if it's not aligned with those goals; and there's a class of activist investors who buy shares in order to do just that, to steer the company somewhere else. But in order to do that, you can't be a short-term speculator: you have to buy, vote, and wait before selling.


I still have never seen a concrete example of a job which is clearly a bullshit job. Any suggestions?


I think of bullshit jobs as those in rent-seeking organizations that don't provide value and don't need to exist.

For example, Elsevier has this monopoly on publishing, so they need to hire staff to manage everything. Those staff do real jobs (accounting, sales, IT, etc.) but no value is created. So I might call that bullshit.

Layer upon layer of real functions and interdependencies of people and groups, all costing money and resulting in salaries. But if you entirely removed every Elsevier job from existence, it could be replaced by scihub with minimal disruption.

Poof all the jobs could disappear. I think of meaningful jobs as those that create some objective value.


Where does scihub get the papers from, in this scenario?


From the researchers.


Has always seemed like a rhetorical device to me. In effect no job is actually bullshit, since if it were it would be eliminated. The truest "bullshit" job is one where you are absolutely irreplaceable for a single, trivial piece of knowledge, e.g. I'm the only one who can print the records at the end of the year, and otherwise I do (practically) nothing. That's how I've always thought of it, but others may disagree.

Oftentimes it feels like a way for creatives and engineers who make the product to express some (rightful) resentment towards the folks who manage people and business.


Corporations are not effective, and useless positions are not necessarily eliminated.


There's also jobs that exist as the result of Moloch traps, e.g. many corporations need to keep a lot of lawyers on staff because their competitors are litigious. If they all agreed to be less litigious, they could all realise a huge net savings. I imagine much of advertising is similar, albeit in a less straightforward way.



Totally a tangent, but: the law of the excluded middle ("tertium non datur") doesn't state that "a and not a" is always invalid, it states that "a or not a" is always valid. This is a somewhat important distinction since there are logic systems (intuitionism) where "a or not a" is not a tautology, yet I don't know of any logic that accepts "a and not a".


I'm confused. Or maybe you're confusing AND with OR? How can "A and not A" ever be true, let alone tautologically so? Also see https://en.wikipedia.org/wiki/Law_of_noncontradiction


I didn't claim that it could be true; in fact I stated that I know of no logic that accepts "A and not A". The argument was solely about what the law of the excluded middle is. IOW, the law of the excluded middle != the law of noncontradiction. Classical logic accepts both of these laws, intuitionism only the second.

edit: looking at Wikipedia does seem to indicate that there might be systems of logic in which "A and not A" can be tolerated without entailing everything (paraconsistent logic), but I haven't really looked into that, and I don't know how much such systems are used.


[this was misleading]


Are you sure you're not confusing AND and OR? https://en.wikipedia.org/wiki/Intuitionistic_logic#Relation_...

Literally copying from Wikipedia: The system of classical logic is obtained by adding any one of the following axioms: ϕ ∨ ¬ϕ (Law of the excluded middle), ...

where ∨ means OR, not AND... right?


Yes. In classical logic, p OR !p is always true. You have to add these axioms to get classical logic. Intuitionistic logic doesn't have that. p OR !p is not necessarily true in intuitionistic logic.
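The classical half of this can be checked mechanically: in two-valued semantics, a truth table over both assignments of p confirms that "p or not p" always holds and "p and not p" never does. (This only illustrates the classical side; intuitionistic logic is not two-valued, so no truth table can capture it.) A minimal sketch:

```python
# Enumerate both classical truth assignments of p.
excluded_middle = [p or (not p) for p in (True, False)]    # law of the excluded middle
noncontradiction = [p and (not p) for p in (True, False)]  # "p and not p"

print(all(excluded_middle))   # True: "p or not p" is a classical tautology
print(any(noncontradiction))  # False: "p and not p" is never satisfied
```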


> A and not A are not necessarily contradictions

Nobody was saying p OR !p is necessarily true. The argument was about p AND !p. It really seems to me like you're confusing AND and OR...

The intuitionist will accept that "A and not A" cannot be true https://en.wikipedia.org/wiki/Intuitionism#Truth_and_proof


Quantum states can be A and not A.

Unknown questions are A and not A.

There’s interesting alternative logic systems that try to tackle this.

We get into interesting territory when using English and then "is" as a statement. Most English statements are assumed to carry a truth value because of the verb "to be".

English is almost inherently flawed with logic bugs and assumptions.

E-Prime may help with this.

1. https://en.wikipedia.org/wiki/Catu%E1%B9%A3ko%E1%B9%ADi

2. https://en.wikipedia.org/wiki/Jaina_seven-valued_logic

3. https://en.wikipedia.org/wiki/Infinite-valued_logic


Okay but I still don't see how the law of the excluded middle says that "a and not a" is always valid.


Grandparent said that the law doesn't say that "a and not a" is always INvalid. (The law also doesn't say that "a and not a" is sometimes valid. It makes no comment at all, but only claims that "a or not a" is always valid. According to the grandparent. Now I need to lie down for a bit....)


Yes, this.

The point was that you can reject the law of the excluded middle without getting into the absurdity that would result from accepting "a and not a".


Ahh! Now I see I misread your comment, I see what you were trying to say now.


> you will notice that they are perfectly capable of managing a set of self-contradictory rules. It is actually an amazing power.

That's how life is, though—a jumble of rules that don't all agree and aren't all consistent. So you get used to juggling them all, making them work together as well as you can, so you can get your job done and move on.


Yes, but to me this (the contradictory nature of a large set of rules in life) is a nuisance, not a feature to be welcomed. Sometimes it is unavoidable or very expensive to clean up, but we should discourage adding complex rules that contradict our existing set if possible. My 2c.


Consider the state and intent of a group of humans as a spin glass, and you can apply a little bit of vaguely applicable analysis.

I've tried for most of my life to internalize that most of the difficulty in life is caused by different people having different viewpoints and goals and conditions. Any endeavour only works as well as the sum of its implementors.


Light behaves as both a wave and a particle. This goes well beyond human eccentricities. The universe does not care that our simple brains want to model it simply.


The wave and particle models are our own. They don't necessarily respect the true nature of reality. A bit like the old anecdote about blindfolded people feeling an elephant.


The dual, contradictory model predicts reality better than a unified one can. A set of contradictory rules might create better outcomes than a set of straightforward rules.


I don't think this is downvote-worthy, even if it is something of a personal opinion. Heck, half of what passes for discourse on HN technical topics boils down to personal opinion. On days when I am being cynical, I think that the phrase "Best practices say that" is a veiled synonym for "I prefer that", for example.

This thing -- that different rules get layered in inconsistent ways that cannot easily be algorithmized -- is indeed an inconvenient part of adulthood. But it is a subset of an even more difficult problem. Shermer put it this way: "smart people are very good at defending beliefs which they arrived at by non-smart reasons."

Perhaps the most jarring studies proving this happen on split-brain patients -- people who, because of for example seizures, had a medical procedure where their corpus callosum was severed, so the two halves of their brain can no longer talk to each other. Some studies of such patients document prompting the right half of the brain to do something, then verbally asking the left half why they are doing that thing. The shocking thing is that the experimenters usually do not get an "I don't know" response from the left brain; they apparently usually get a comprehensive justification of the action, complete with all sorts of details except for the crucial one ("I did it because you asked me to"). So if the behavior was standing up, the patient will explain how they felt uncomfortable about sitting too long and needed to stretch their legs, when the actual reason was that the experimenters asked the other half of their brain to stand up and it felt agreeable to that suggestion.

Justification is in other words something that we generally backtrack to find. And this in many ways kindly resolves the problems of inconsistencies.

Because a better way to look at it is that we have various values which our actions can either serve or betray. Those values are not perfectly orthogonal, and thus alien to any sort of conflict: rather, certain actions will invite you to trade off your value of (say) living an ambitious daring life of adventures with the value of (say) being compassionate to others. So if you were to have an affair you will need to evaluate a moderate betrayal of your value of simplicity/honesty and a large betrayal of your compassion (and if you have this value a massive betrayal of your humility and self-vanishing) for a minor benefit to your curiosity and a moderate benefit to your risky ambitiousness. All of these values come in to speak a separate statement over that choice, and the judgment must be made by you-as-judge to weigh whether the benefits are worth the costs. [This particular schema of five values was in fact the result of a large amount of introspection in my more vulnerable years but now I am not sure that I find it correct -- but it is still useful for this discussion I think.]

In my case my judgment is simply that I'd lose way more than I'd gain, so cheating is immoral -- and not just immoral "for me" but that in my ideology pretty much everyone will lose way more than they'd gain, so that "all other things being equal" it is immoral to cheat on your spouse. And indeed if these values are objective, as we have some reason to believe they would be -- like, human biology just functions better if we can generate societies, and those societies just function better if they can have values such as these -- then objectively cheating is immoral.

So the fact that we can backtrack from our actions to our values and weigh those actions with those values really frees us from having to somehow "orthogonalize" our values into nonoverlapping Kantian precepts that might hopefully never conflict with each other and generate a logical derivation of what we should do: "People should never lie unless they are saving folks from Nazis or they are preserving the surprise of a surprise party or... or... or..."


There's a whole class of formal logic systems that accept "a and not a".

They are called Paraconsistent Logics (https://en.wikipedia.org/wiki/Paraconsistent_logic), which allow contradiction without explosion.

Ideal for mathematically representing the kind of reasoning that bubbles up from contradictory positions and goals as you described, where organisations manage to do sane things anyway, sometimes!


It's easy to make this non-contradictory by making the rules apply only to future contracts, or only when the decision maker wants them to apply.


this guy manages!


Maybe it doesn’t make sense to think about large organizations as monolithic entities. They’re composed of numerous sub entities that may follow different rules. A platoon may be ordered to shoot, but it’s conceivable that not all soldiers will comply. This is not at all contradictory.


I can't help but think of the legal system.

You would think it was rigorous and logical and would have parallels in science in some way, but when you can "exclude evidence" you realize they are different animals.

as to large organizations... Look, it's just Game of Thrones all the way down.


Ruthlessly applying logical rules onto reality tends to create singularities :)


Organizations are composed of many people making independent decisions and forgetting about old decisions. There is nothing difficult about an organization being inconsistent; the difficult leadership task is to make it consistent.

Inconsistency is the natural state, even when we ignore the fact that all humans are internally inconsistent, including mathematicians.


I mean, the reality of any large institution (or even a single human being) is that there will inevitably exist many contradictory "rules" that must be handled. I'm not sure how it is amazing. It's a fundamental necessity for their existence.


Policy compliance is a constant battle of wills in most large organisations. Enough of a battle in some places that they have compliance staff.

Just because they sign up to do something in a particular way doesn't mean everybody in that org is going to follow those rules.


Just like the humans that make them, then.


> they are perfectly capable of managing a set of self-contradictory rules. It is actually an amazing power.

Not amazing. Rules are for the ruled. The executive is not bound by them.


> 2. No author will be required to relinquish copyright, but instead will be provided with options that enable publication while also providing authors with generous reuse rights.

I've published a few papers (and have worked closely with Elsevier in a business context). When we published in a paywalled journal, we had the option of paying an APC [0] to make the particular article open access. So in that sense, we were not 'required' to relinquish copyright. It was just costly to retain it.

I suspect this is how some publishers argue that their contracts do not violate these maxims, because an APC option is fairly common, even at Elsevier, I think. Many journals also permit authors to post pre-edited versions of their papers on their personal websites.

I also think that, in line with what a commenter below said, a large organization like MIT is not a coherent actor. There are political battles and compromises we don't see. So publishing this list could be someone's tactic for winning the internal struggle against negotiating with Elsevier.

Chris Bourg, who is a prominent advocate for open access at MIT, is my best bet for the driving force behind this:

https://twitter.com/mchris4duke/status/1271094535297339399

[0] https://en.wikipedia.org/wiki/Article_processing_charge


That's like saying that getting a prison sentence doesn't really remove your liberty, because it's just costly to regain it (with a period in prison).

The point is whether the publisher respects the author's right to republish their own work, which is enshrined as an inalienable right in [several jurisdictions](https://aisa.sp.unipi.it/attivita/diritto-di-ripubblicazione...) like the Netherlands. Even better, the publisher can avoid asking for exclusive rights, which are wholly superfluous for the operation of a journal.


> When we published in a paywalled journal, we had the option of paying an APC [0] to make the particular article open access. So in that sense, we were not 'required' to relinquish copyright. It was just costly to retain it.

It's worth noting that some (maybe even many) journal open access options do not allow the authors to keep copyright, even if they pay money! Open access ≠ keeping copyright.


good point. I just double-checked and ours says:

> Copyright: © Cambridge University Press 2018

> This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

So I guess Cambridge is the copyright holder but it's available under CC, which I think qualifies as "generous reuse rights," as originally stipulated.


I don't think that was an "or" clause, I think it was an "and". The author will retain the copyright, and they will have generous re-use rights (i.e. they will not agree to a restrictive exclusive license).


But Elsevier's lawyers would presumably argue during negotiations that the option to have CC0 distribution of text meets this stipulation, whether in spirit or letter.

I would bet that MIT's choice to divorce from Elsevier came from a combination of ideological, budget, and reputational concerns (e.g. the UC system divorced from them as well, and MIT now stands in solidarity). So I don't think this issue was likely to be decisive.

I think that if the price had been right, all parties would have found a way to argue that the stated terms had been met -- or to simply not address them.


Chris Bourg sent the email announcing it.


As far as I understood it, #2 isn't about open access per se. It's just completely absurd that, e.g., if I publish a paper and then want to reread it in a year, I have to buy my own paper. Same when I want to give it to an intern to read, and similar.

So it's about the author being able to reuse it for themselves, not about open access.


But in practice "reuse it for himself" means "upload to arXiv."


I honestly don't understand why JSTOR, Elsevier and others like them still need to exist.

Top universities should just found a non-profit, per subject, with a single paid facilitator and a single paid editor (per journal) to find peer reviewers and edit the papers into a monthly journal.

Modern tech has made it ridiculously easy to type, edit and publish such a thing if the inputs are LaTeX, Word, Markdown files or a Google Doc. And if you want it printed, there are shops that can do that for you for a small fee as well.

This should be 100% open access to everyone, extremely cheap and could be 100% funded by those who are still willing to pay for paper versions or by tiny contributions from the top 100 universities.


I'm an academic.

For years, my academic niche has tried to break free from the likes of Springer/Elsevier. Here are the bottlenecks:

* There are wonderful "pre-print" servers like arxiv and eprint.iacr.org. However, these do not maintain the "archival quality" document storage that is needed for academic scientific literature. In day-to-day, all researchers use these to stay informed on recent results. But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old? How to guarantee that these documents are available 75 years from now? I'm sure that many of you can devise solutions to this, but they will be costly, and they will need constant labor to implement. How do you pay for this? It is OK when 20,000 researchers in a field are downloading papers every once in a while, but what happens when every student in the world wants to read these? The bandwidth charge becomes non-trivial. It seems like it needs to be outsourced, and some commercial entity with experience handles it.

* The tenure process is slow to change. Many academics need publications in prestigious journals with "high impact factors" in order to get tenure because the upper-level tenure committees in older institutions use these metrics to evaluate cases. These people are not stupid: it is just hard to evaluate cases across a university when you are not an expert. Instead, you assume that certain journals represent "the highest quality work" and thus use the presence of those publications to judge researchers. This means that the top papers still end up in Elsevier/Springer journals.

When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away. I wonder how it is going to work now that Elsevier's catalog won't work this way...


I'm a random industry software engineer. :)

> In day-to-day, all researchers use these to stay informed on recent results. But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?

Printed versions + digitally signed and timestamped PDFs. This is a solved problem in the world, at least up to the level that Springer can solve it.

> How to guarantee that these documents are available 75 years from now?

I trust MIT and Harvard to keep PDFs and printed versions available much more than I trust Elsevier or Springer to be around in 75 years.

> Many academics need publications in prestigious journals with "high impact factors" in order to get tenure because the upper-level tenure committees in older institutions use these metrics to evaluate cases. These people are not stupid: it is just hard to evaluate cases across a university when you are not an expert. Instead, you assume that certain journals represent "the highest quality work" and thus use the presence of those publications to judge researchers. This means that the top papers still end up in Elsevier/Springer journals.

I don't disagree. This is why the change and the first wave of papers will likely come from already-tenured professors, who still publish high impact papers.

> When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away. I wonder how it is going to work now that Elsevier's catalog won't work this way...

Now imagine the same situation, except you don't need your IP to be from MIT.


Just pointing out that Elsevier is ancient in business terms: its origins as a publisher go back to the mid-16th century, and the modern version of the company dates from around the 19th century. I'd be surprised if the company isn't around when I die.

In addition to publishing, they (RELX) are one of the biggest companies you've never heard of. They provide information systems to governments all over the world and span multiple market segments. I guarantee you're in a dozen of their databases right now, and that your local, state, and federal taxes all funnel into their pockets in one form or another, along with some of the money you pay for various insurances throughout the year. When you buy a house, rent an apartment, get a job, or basically have any major event in your life, they get paid.


> Just pointing out that Elsevier is ancient in business terms: its origins as a publisher go back to the mid-16th century, and the modern version of the company dates from around the 19th century.

The 16th century publisher has nothing whatsoever to do with the current one, which shamelessly pirated and stole everything to plagiarize the prestige (so its ideas of business ethics go right back to its founding). Apparently, it worked.


It's not so much that they aren't an established company; it's that their business model has been broken/bypassed by technology. They've been reduced to being a middle-man that obstructs value rather than providing it.

The only parts of the business model left are "prestige" (very fickle), "customer lock-in / inertia" (which is already going away re: OA), and lobbying government to prop up/expand their monopoly (ever-extending/expanding copyrights, which is the one thing they don't seem likely to ever lose on, because every other technologically bypassed, broke-ass publisher spends tons on it too).


I disagree. It seems like you only know Elsevier as a publisher of journals, but that's only about 1/3 of their overall business. They (RELX) provide a lot of useful services to companies and governments.

About half their revenue and profits come from Risk and Legal services, which are not things you hear about in the news. They offer services for police, airlines, legal firms, insurance companies, and accounting firms. Hell, they have an analytics tool for agricultural businesses. They also have enough money to throw around in these spaces to prevent any startups from getting large enough to be a threat.


It tends to be ignored, but the process of extracting a profit has costs, both internal and external. Sometimes the external costs imposed exceed the profit extracted by a large amount.


The life expectancy of long-lived companies is shorter than you might expect:

> Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age.

https://doi.org/10.1098/rsif.2015.0120
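
A constant hazard rate means company lifetimes are approximately exponentially distributed: the chance of dying in the next year is the same whether the company is 5 or 150 years old. A quick sketch of what that implies (the 0.07/year hazard rate is an illustrative assumption, not a figure from the paper):

```python
import math

def survival(t, lam):
    """Survival probability under a constant hazard rate lam: S(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

def half_life(lam):
    """Time by which half of a cohort of companies has died: exp(-lam * t) = 0.5."""
    return math.log(2) / lam

lam = 0.07  # assumed hazard rate, per year
print(round(half_life(lam), 1))     # → 9.9 (years)
print(round(survival(75, lam), 3))  # → 0.005 (chance of lasting 75 more years)
```

On an assumed rate like that, the odds of any given publicly traded company surviving another 75 years are well under 1%.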


I don't have to imagine it: https://scihub.wikicn.top/


this site doesn't even load.


Might want to change your DNS to 1.1.1.1 or 8.8.8.8


>this site doesn't even load.

Probably time to use an ISP that is less willing to mess with your Internet access, or at least use a VPN and a different DNS server.


Loads in Belgium ( no censorship )


Everyone here is imagining all the technical ways to replace publishers. That's quite feasible as you and others point out. I think there is also real work needed to solve social (people) problems, for example:

- explain to the stakeholders by preparing various text and other media about how your format/venue/website is different and better, and convince them that this solves a real problem they should care about

- solicit requirements from universities, funding agencies, various governments, about archiving and metadata requirements. Consider security, accessibility, long-term preservation, financial model, etc.

- respond to the questions and pushback from numerous stakeholders about problems (real or not) about your proposed solution, debate them in a polite and professional way in semi-public forums, converge on a solution that's acceptable (or at least not overly repulsive) to the key stakeholders. Deal with any PR backlash, response from existing publishers, etc.

- inform authors, potential authors, readers, journalists, universities administrators, students, etc. that there is a new publishing format/venue/website and that it is well managed and has a plan to be around for a long time

- coordinate and schedule a team of people to work on this with you, to figure out policies (author plagiarism, recruiting editors if needed, dealing with potential lawsuits, bad actors, copyright and IP issues, etc.)


The same way Wikipedia and Reddit manage to provide quality platforms that provide tools to address these social issues, built for longevity, and have strong community moderation.


> However, these do not maintain the "archival quality" document storage that is needed for academic scientific literature. In day-to-day, all researchers use these to stay informed on recent results. But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old? How to guarantee that these documents are available 75 years from now?

I have never had a link to Arxiv or Biorxiv break, and I have never had difficulty finding a copy of a paper on them either, going back to Arxiv's founding in 1991. On the other hand, on a daily basis, I struggle to get a copy of a paper, often just years or decades old, from these 'archival quality' publishers like Elsevier, and they break my links so frequently that I spend some time every day fixing broken links on my website (and for new links, I have simply stopped linking to them entirely & host any PDF I need so I don't have to deal with their bullshit in the future). I guess "archival-quality publisher" is used in much the same way as the phrase "academic-quality source code"...


Hey Gwern, big fan of your GPT2 work. I notice I'm surprised to hear you say you struggle daily to fix broken links to the Elsevier catalog at ScienceDirect, because the links are used by libraries all over the world & they don't have the same feedback. Would you have a few examples available for me to send to the folks responsible?


Nature does it all the time. Here's one I fixed just this morning when I noticed it by accident: http://www.nature.com/mp/journal/vaop/ncurrent/full/mp201522... (Note, by the way, how very helpfully Nature redirects it to the homepage without an error. That's what the reader wants, right? To go to the homepage and for Nature to deliberately conceal the error from the website maintainer? This is definitely what every 'archival quality' journal should do, IMO, just to show off their top-notch quality and helpful ways and why we pay them so much taxpayer money.) Oh, SpringerLink broke a whole bunch which I am still fixing, here's two from yesterday: http://www.springerlink.com/content/5mmg0gmtg69g6978/ http://www.springerlink.com/content/p26143p057591031/ And here's an amusing ScienceDirect example: https://www.sciencedirect.com/science/article/pii/S000632071... (I would have loads more specifically ScienceDirect examples except I learned many years ago to never link ScienceDirect PDFs because the links expire or otherwise break.)


Isn't this exactly the intended use-case for the DOI?

Your first article has the DOI 10.1038/mp.2015.225, and the resulting link (https://doi.org/10.1038/mp.2015.225) properly directs to the article's present location.


DOIs link to paywalls or temporarily-unembargoed papers, have to be hunted down (many places hide the DOIs in tabs or, like JSTOR, actually bury it in the HTML source itself!), and break things like section links as well. Adding yet another level of indirection is not my idea of a solution and hardly speaks well of 'archive-quality publishers' that we have to resort to third parties to work around their hideously broken websites which, like Nature, go out of their way to make links not just break but actively misleading.


To solve your immediate problem, just grab the DOI here: https://apps.crossref.org/SimpleTextQuery They also have an API from which you can fetch DOIs in various ways.

DOIs are a solution to the issue of having persistent, publisher-independent links that will always resolve, even if a journal changes publisher or goes out of business. Academia uses them because link rot is unavoidable across the web, but there must always be a link to the publication that resolves so that when someone in 2070 wants to follow a citation in the references of a work published today, they can do that. It's the same thinking that underlies people pointing to the internet archive in Wikipedia citations. It's a layer of redirection, but in a way that preserves accessibility for the long term. It's also the same thinking that underlies DNS. There shouldn't be one company that controls how to resolve an IP address to a domain name, and likewise you shouldn't have to go through one publisher to resolve a reference to a research article.
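
The indirection itself is deliberately trivial: a DOI resolves by prepending https://doi.org/ to it. A minimal sketch (the example DOI is the Nature article mentioned above; the function name is mine):

```python
from urllib.parse import quote

DOI_RESOLVER = "https://doi.org/"

def doi_to_url(doi: str) -> str:
    """Build the resolver URL for a DOI; percent-encode reserved characters,
    keeping '/' since DOIs contain a prefix/suffix separator."""
    return DOI_RESOLVER + quote(doi, safe="/")

print(doi_to_url("10.1038/mp.2015.225"))
# → https://doi.org/10.1038/mp.2015.225
```

Whoever controls the journal at any point in time just has to keep the registered target of that DOI current; every citation that used the DOI keeps resolving.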

As a side note, Crossref is staffed with exactly the sort of web geeks that you would see at an Internet Archive get-together (#).

So I hear your frustrations, but I think you're giving DOIs short shrift.

(#) I mean, just look at this. A dump of all journal metadata on Academic Torrents. Is that not cool? https://www.crossref.org/blog/free-public-data-file-of-112-m...


Journals do transfer among publishers, go out of business, etc., so you shouldn't expect a direct link like that to be stable. The recommended practice is to use the DOI. Would using a DOI meet your needs?


Do Nature's spinoffs have any prestige any more? Anything in a Nature spinoff related to batteries comes across as PR Newswire level material. If that.


Digital archival of PDFs weighing a few hundred KBs to a few MBs is definitely a solved problem. And there are already arXiv overlay journals out there, and platforms supporting them. Tim Gowers' (Fields medalist) blog posts on this topic are quite informative:

https://gowers.wordpress.com/2015/09/10/discrete-analysis-an...

https://gowers.wordpress.com/2019/10/30/advances-in-combinat...

Highlights: $10 per submission, plus some fixed costs, including archival with CLOCKSS. No Elsevier extortion ring needed.

Impact factors are of course kind of a chicken and egg problem. Need to have enough high profile journals move off Big Publishing, or have enough high profile ones started.

> When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away.

When I was a grad student at <institution of similar caliber>, or an undergrad at <another institution of similar caliber>, accessing papers was rather painful off campus. One either has to use EZproxy, which might decide to block you if it doesn't like your IP range (say in a foreign country), or use some godawful proprietary VPN client that I would stay the hell away from unless necessary.


Today it's much easier: practically all universities participate as Identity Providers in SAML federations, and digital libraries participate as Service Providers. So you can just enter your institutional login credentials on your university's identity provider page. The service provider receives a signed SAML assertion that, well, asserts that you belong to your university and that you are, say, a student. The most popular software in the academic field is Internet2's Shibboleth (both IdP and SP). It all works very well and has for some time.

In the country where I live, you get access to office365, (physical) books, digital libraries (including Elsevier :)) and a wide variety of other services all via your institutional login.


> But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?

Trusted timestamping.

https://www.ietf.org/rfc/rfc3161.txt

It also happens to be very widely deployed and supported by well-established companies, because it's an integral part of executable cross-signing. That is, this exists right now.


Wouldn't saving the hash of the document on some blockchain be a more simpler solution to proving the integrity of a document?


No, because you need to convince 3rd parties to participate, and you need to convince a majority of these parties to be honest.


I’m also an academic. Hash each paper, then hash the hashes. Publish the result with the proceedings. After year one, include the hash of the prior year(s). Problem solved.
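
That scheme is essentially a yearly hash chain, and it fits in a few lines of Python (function names are mine, for illustration only):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def proceedings_digest(papers: list, prior_digest: str = "") -> str:
    """Hash each paper, sort the hashes (so submission order doesn't matter),
    then hash them together with the prior year's digest."""
    paper_hashes = sorted(sha256_hex(p) for p in papers)
    return sha256_hex((prior_digest + "".join(paper_hashes)).encode())

year1 = proceedings_digest([b"paper A", b"paper B"])
year2 = proceedings_digest([b"paper C"], prior_digest=year1)

# Altering any year-1 paper changes year1, and therefore year2 as well.
assert proceedings_digest([b"paper A (altered)", b"paper B"]) != year1
print(year2[:16])  # publish this with the proceedings
```

Because each year's digest commits to all prior years, tampering with an old paper would require republishing every subsequent proceedings.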

Recently I downloaded one of my old peer-reviewed papers. The “archival” service added a spammy logo to the bottom left corner of each page.

I’ve been meaning to find the original and put it on my web page. Honestly, I might just add a list of all my papers with links to SciHub instead.

I’m allowed to post them on my personal web page according to every copyright agreement I recall signing.


>Problem solved

Not at all. There are corrections and amendments. It isn't as complicated as in law, but still.


Corrections and amendments are separate (but related) documents. Preventing them from being retroactively applied to the original version of the source document is the specific thing that an archival-quality document storage is supposed to do (as opposed to a non-archival-quality storage, which only needs to protect against data loss (as in turn opposed to a cache, which can rely on a backing storage)).


Looks like you agree that an archival-quality document storage assumes a bit more than hashing papers. That was my point.


If that was your point, it was very poorly made, since you appeared to be claiming that archival-quality document storage required much more than hashing papers.

Archival-quality document storage requires two things: 1, hashing papers; 2, guaranteeing that the preimages of those hashes (ie the papers) remain available despite accidental and deliberate forces toward their destruction.

Non-archival-quality document storage already requires thing 2; we just want to add more nines of reliability to those guarantees, which is a fundamentally technical endeavor that the likes of Springer/Elsevier don't particularly help with.


Signed declarations of amendment, then amended papers being added as new ones with proof of amendment and a link to the original. Kind of like how a keyserver deals with revoked keys.


> Many academics need publications in prestigious journals with "high impact factors"

It wouldn't surprise me if this is a massive part of the problem. Any new system to replace Elsevier may be perfect in lots of ways, but it doesn't count as prestigious, which means everybody will still want/need to publish with Elsevier. How do you magically grant a new publishing platform this 'prestige'?


It changes when they mess up. When you look at old institutions and powerful people, sometimes their rule ends abruptly because of scandal, bad decisions, or corruption. Bear Stearns, Enron, and Nixon are examples of this. For a new publishing platform to succeed, the old one needs to die. For an organization built on prestige to die, it needs to be mired in a scandal, wrapped up and packaged in the political zeitgeist of the moment, that not only affects its small community but also draws the ire of the entire society. At that point a new platform will emerge, likely backed by, and inheriting its prestige from, another institution.

Edit: I realize, unfortunately, this post doesn’t give anything actionable that anyone can enact. It at least offers hope that things can change.


"Flipping" journals is an option, but doesn't happen often because it's risk for the editors with little personal benefit.

The answer of the project I volunteer for [1] is that the prestige of a journal comes from the researchers who submit to or review for it, so we can also employ their reputations without the middleman, by having them endorse works, thus having their names instead of the journal's name attached to the works.

[1] https://plaudit.pub/


When the prestigious expert editorial board resigns at the same time and creates a new journal. It has happened several times, e. g. Glossa for linguistics.

Also JMLR in machine learning is independent and still well regarded.


> But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?

Mostly you should stop worrying about this. Other people have explained various countermeasures that could be used, which are very cheap, but mostly nobody cares.

And already today, without anybody altering anything, it is very common for papers to use misleading citations. You take a paper that found some clowns like cake, you write "Almost all clowns like cake" and you cite that paper. It's possible a reviewer will notice and push back, but very likely you will get published even though you've stretched that citation beyond breaking point. Why "hack in" to change the paper when you can just distort what it said and get away with it?


I don't mean to be rude, but I think you are greatly exaggerating the technical and cost considerations behind this effort.


Just about the bandwidth costs: You can rent a server at Hetzner.de for 40 EUR with 1 Gbit/s. Let's say each PDF is around 100 KB; then you can serve 1000 PDFs per second. Say there are 50 million active research students in the world; then the single 40 EUR server can serve them about 10 PDFs/week on average.
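
The arithmetic holds up; run with the same figures (100 KB per PDF, a 1 Gbit/s link, 50 million students), it actually comes out a bit above the comment's rounded-down numbers, before accounting for protocol overhead:

```python
# Sanity check of the comment's figures (decimal units; overhead ignored).
link_bits_per_s = 1_000_000_000      # 1 Gbit/s
pdf_bytes = 100_000                  # ~100 KB per PDF
students = 50_000_000
seconds_per_week = 7 * 24 * 3600

pdfs_per_second = link_bits_per_s / (pdf_bytes * 8)
per_student_per_week = pdfs_per_second * seconds_per_week / students

print(round(pdfs_per_second))           # → 1250
print(round(per_student_per_week, 1))   # → 15.1
```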


With those requirements you can go even cheaper. 1000 PDFs of 100 KB each is just 100 MB/s, which you can put on a 1 Gbit VPS for just $1.5 to $5/month.

https://www.serverhunter.com/?search=PLJ-9TW-TWN

A dedicated box would cost just $7/month.


From my brief time working at Springer, seeing how their business model shifted towards services and processes aimed at enabling as many publications as possible -

I think basing tenure decisions on the fact that papers were published there rests on archaic notions and is misguided.


These are valid concerns, but in 2020 they are very easy problems to solve. There's little reason why a small consortium of institutions couldn't build a very robust system to accomplish all that. Use digital signatures and distributed mirrored storage, and the problem is solved. Charge a very modest fee to members to cover fixed costs and make it free to the public. Heck, a few well-organized S3 buckets with a search engine attached would be better than a lot of what's out there today.

Not to pick on academics but the commercial publishing houses basically prey on the stubbornness of the academic community here. In the pure private sector someone would come along tomorrow and make Elsevier and others obsolete and they would go bust quickly. MIT is making the moves that might just whip something into shape to remove Elsevier’s role in the market.

On “high impact” if the top universities in the world en-mass unsubscribe from the commercial players that will change quick.


I heard the archiving argument before.

But I can't imagine that this is an expensive role for an organization like the Library of Congress or similar to take on. Many countries have a national library of sorts.

Bandwidth/storage costs are limited, we're talking about PDFs.


> I wonder how it is going to work now that Elsevier's catalog won't work this way

MIT alum here. In my experience you can always request a copy directly from the author by e-mail. There is ResearchGate which aims to make this easier, but doesn't because the fundamental problem is academics don't have time to respond to every e-mail, much less every ResearchGate notification. So yes sometimes you have to ping them by e-mail about 2 or 3 times.

I think ResearchGate -- or even Google Scholar -- should add a feature to allow manuscript requests to be auto-replied with a copy of the document instead of waiting for the author to manually send a copy.


This is the first blockchain example that makes sense.


> how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?

You publish hashes of all the documents. It's trivial to distribute lists of hashes. You can even put them into existing blockchains, which guarantee they won't change.


Re: hacking, preservation - - does Elsevier make reasonable assurances in this space beyond being a strawman that can be torn apart in a lawsuit? Is there a reason an open access platform could not make technological assurances that are as good or better?

This really seems like a good and fairly low-risk opportunity for universities to form a consortium of sorts to make their own publishing platform. And then maybe "impact" would be built in because they'd all be getting high on their own supply. But these institutions are strangely prone to silos despite the collaborative spirit of academia writ large so I don't see that happening.


I think there are field-specific differences. In biology / bioinformatics world the "high impact factors" journals are the norm for tenure or even confirmation of work. But highly influential computing related papers are rarely from those journals. Bioinformatics is an interesting exception because it's bridging these fields and you'll see references from both highly reputable journals and biorxiv.


> But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?

I can't believe I'm going to say this, but this sounds like an actual problem well-suited to a blockchain solution.


>But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?

And how do Springer/Elsevier guarantee that? Is it written into contracts?


Even if it is... who enforces it? Who checks it? What is that guarantee worth? Who really cares about papers getting numbers changed...

90% of them are absolute crap, and at most 10% of the remaining 10% replicate based on the text of the paper alone.


>When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away.

Isn't this predatory pricing?


> I honestly don't understand why JSTOR, Elsevier and others like them still need to exist.

I suspect you're being rhetorical here, but just in case: your premise is wrong; they don't need to exist, in fact they need to not exist; preferably they need to die in a fire.


Oh dear, the burden of organizing peer-review and of consolidating some sort of "quality" stamp (I said "some sort of") is much more expensive than "nothing".

I am not pro-Elsevier, I am just stating a fact.


The only people getting paid in the reviewing process are the journals that are only coordinating the reviews. Actual reviewers (aka other researchers in the same field as the paper) are working for free.


That is largely correct; however, the editors are very key to maintaining this process, and especially the standards. A prestigious academic will not blindly accept any review request without having a level of trust in the process and also the coordinator.

Reviewing others' work is rather tedious and I think it will be a challenge for any fully open platform to demonstrate that it will not be a waste of time to do peer reviews on them.


I don't think that's true 100% of the time. There are review-as-a-service solutions offered (and charged for) by the publishers.


Perhaps this is the real change that's needed. Getting a review structure that rewards really thorough reviews, monetarily. Those reviewers then become like YT stars where yes, everyone can review, but these reviewers are top-notch. The payment structure would depend on fees from accessing the works or fees for subscription to access.

That might finally break (or finally justify) Elsevier and their ilk.


"Rockstar reviewers" seem like a cure that's almost as bad as the disease. Some scientific fields already have a problem with groupthink, with a few well-defined and vigorously-opposed schools of thought. I would vastly prefer a broader reviewer pool to the usual suspects from the same few labs.

Everybody likes money, but I'm also not sure that's the way to go either. It would be great if reviewing directly impacted people's academic/research careers; I suspect the ability to review well is highly correlated with the ability to successfully run a research group. However, there are lots of thorny issues involving power and interpersonal relationships.


I know: I just stated that "organizing" it was expensive.

I do know that reviewing is free (I have done it).


Organizing is not expensive.

What is expensive is paying the typesetters to place the movable type in various places, and to create plates with the various graphics. Oh wait, we don't need to do that anymore.

At this point, there is no rational justification for what Elsevier is doing now except greed. They actually have some other services that make sense, but this lock on academic papers is simply a historical accident that is no longer relevant.


It's time-consuming, and that is money (granted, Elsevier still gets more money than what is paid to editors).


OK, maybe I exaggerated when I said nothing.

Per subject, per journal, the same person who edits a journal today at Elsevier could do the same thing for the same salary at a university consortium-backed non-profit.


Yes, per subject per journal but then how many non-profits do you need? How do you organize them? How do you get a coordinated best-effort, etc...

I mean: corporations do not exist in a vacuum, they (usually) DO provide benefits to the society also.

I insist: I am not trying to defend abuses, I am trying to clarify that a for-profit corporation dealing with those many editorial issues is not bad per se.


Wikipedia editing doesn't cost much. Open, public review is free anyway. We would need a prestige-setting institution; I'm sure we can come up with a substitute.


Why, though? It's not like you don't have access to high-quality, cheap talent in the form of RAs/TAs, etc. Why can't that part be done by students? It would also actually help them learn their subjects.


And yet much of the work of peer review is done on a volunteer basis.


Organizing it and providing the "quality stamp" is what I was referring to.


> Organizing it and providing the "quality stamp" is what I was referring to.

Lol that's also done by volunteers.


Then you'll have solved the problem of making research results actually available somewhere. But what it doesn't solve is the problem of a) deciding what research to read, and b) deciding which researchers to hire.

Note that the current system, which relies on the brand name of the journals in which works (or an author's works) are published, is very flawed, but it's what people use, and is therefore what's making people refrain from actually publishing in those journals the universities would found.

(Disclosure: I volunteer for https://plaudit.pub, a project that aims to contribute to solving the mentioned problems to enable transitioning to Open Access journals.)


I love the idea of Plaudit; it would be interesting to tie into dblp or Semantic Scholar. As it is now, I have to see if a researcher tweets paper recommendations. Are you working with either?

I am sure you are aware, posting for the wider audience.

Availability is the hard part: formats, indexing, a handle so that it can be referenced. We already have an awesome model for this with the e-print archives [2]-[4].

As for what to read, this is what overlay journals are for! [1]. By splitting the mechanics of submission, serving, basic vetting, etc. any other group of people can create as many overlay journals as they deem necessary. Sorting, ranking and clustering of the research now is not coupled to getting the knowledge recorded.

This excellent article [5], linked from the Wikipedia entry, has the perfect description of the concept:

>>> The Open Journal of Astrophysics works in tandem with manuscripts posted on the pre-print server arXiv. Researchers submit their papers from arXiv directly to the journal, which evaluates them by conventional peer review. Accepted versions of the papers are then re-posted to arXiv and assigned a DOI, and the journal publishes links to them.

[1] https://en.wikipedia.org/wiki/Overlay_journal

[2] https://arxiv.org/

[3] https://www.biorxiv.org/

[4] https://en.wikipedia.org/wiki/Category:Eprint_archives

[5] https://www.nature.com/news/open-journals-that-piggyback-on-...


> I love the idea of Plaudit; it would be interesting to tie into dblp or Semantic Scholar. As it is now, I have to see if a researcher tweets paper recommendations. Are you working with either?

I'm not, unfortunately, but if you have any contacts there, please do point them my way (Vincent@plaudit.pub) :)


Well, despite the parasitic nature of the modern commercial journal world (and originally they did come from more benevolent aims, but got consumed by corporations) -- they do serve an important filtering and quality control mechanism.

A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied. One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors? There's also the problem of how to create a new journal that has the prestige of an old established one. Which new journal will we select to have the prestige?

But yes, they have become parasites, who prey on the free labor of eager young academics, take their work and sell access to it, enforce copyrights on knowledge created by taxpayer money, and bundle useless journals in with important ones so everyone has to pay more.

It's in the public interest for academic fields and the universities to come up with a reasonable alternative.


> Well, despite the parasitic nature of the modern commercial journal world (and originally they did come from more benevolent aims, but got consumed by corporations) -- they do serve an important filtering and quality control mechanism.

No they don't. Their editors do, not the entire organization, and really it's the selected (volunteer) peer reviewers who do.

> A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied.

Agreed.

> One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors?

That's why I think universities should be the founders. The top professors in a certain field can nominate a good editor, who will be paid full-time.

> There's also the problem of how to create a new journal that has the prestige of an old established one. Which new journal will we select to have the prestige?

Prestige comes from being relevant and innovative. Also, who said this has to be a new journal? Why not convert an established one?


The majority (70% or so) of submissions are desk-rejected without even being sent for review, and the ability to do that well is something that's learned over time with extensive detailed knowledge of the particular field served by the journal. Note that there are more kinds of editors than just academic editors, too, even at places like PLOS & eLife.


> "A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied."

Establishing reputation is the central challenge for a lot of the internet. Sorting spam from mail, sorting useful search results from SEO, sorting legit programs from malware on app stores.

"Let's just have a small handful of people manually review everything" is not a terrible first approach! It is the naive solution, and will work if you don't have to scale. It even worked for search for a couple years.

And you might argue that it's ok for journals to keep doing that because they don't have to scale. They don't have to review, rate, and publish everything good. They can have a very, very tiny output and it's ok.

But there is some cost to rate limiting scientific output.

So I'm surprised there hasn't at least been a good competitor incorporating what we've learned from other domains. It wouldn't be the same, but at least trying to use some things like citation counts and reader behavior for an initial guess at what deserves review.

All the arguments that "we need a small group of professionals curating these" lose a little weight in a replication crisis.

If you really wanted to try this, you might want to go after low hanging fruit. Someone should make a nutritional science journal, using purely algorithmic data to score proposals. Not much to lose there.

https://io9.gizmodo.com/i-fooled-millions-into-thinking-choc...
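For illustration, "using purely algorithmic data to score proposals" could start as something as simple as a log-weighted triage score over citation counts and reader behavior. A toy Python sketch; every field name, weight, and number here is invented, not taken from any real system:

```python
import math
from dataclasses import dataclass

@dataclass
class Preprint:
    title: str
    citations: int           # citations accrued while on a preprint server
    downloads: int           # crude reader-behavior proxy
    median_read_secs: float  # another behavior proxy

def review_priority(p: Preprint) -> float:
    """Triage score with log-damped counts, so one viral paper
    doesn't drown out everything else. Weights are arbitrary."""
    return (2.0 * math.log1p(p.citations)
            + 1.0 * math.log1p(p.downloads)
            + 0.5 * math.log1p(p.median_read_secs))

papers = [
    Preprint("heavily cited", citations=40, downloads=5000, median_read_secs=300),
    Preprint("viral but skimmed", citations=2, downloads=90000, median_read_secs=20),
    Preprint("barely seen", citations=0, downloads=50, median_read_secs=600),
]

# Highest-priority candidates for human review come out first.
for p in sorted(papers, key=review_priority, reverse=True):
    print(f"{p.title}: {review_priority(p):.1f}")
```

The point isn't the particular formula but that such a heuristic only rate-limits the human reviewers' attention; it doesn't replace their judgement, and it inherits all the gaming problems of any reputation signal.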


> A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied.

How does this follow? "Open" doesn't mean "anyone can publish", it means "anyone can read".

Funding for editors and webhosting should come from the universities themselves. Replace Elsevier with a nonprofit consortium funded directly by universities, and a lot of these problems just go away.


> One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors?

I've been a reviewer and editor for various IEEE and other engineering publications and have never been paid. Of course funding for editors is helpful, yet it may be like open source where some are willing to put in work for free.


> Well, despite the parasitic nature of the modern commercial journal world (and originally they did come from more benevolent aims, but got consumed by corporations) -- they do serve an important filtering and quality control mechanism.

I generally agree with you here

> A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied. One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors?

But the editors of the majority of journals are already volunteers. They might get some minor amount of money for their work (we are typically talking maybe $100 a month max), but that's it. The only journals that have full-time editors are the highest-impact journals like Nature and Science, but it has been shown again and again that they are not really domain experts and are not necessarily acting in the interest of science. I have actually heard a Nature editor say "our business is to sell journals, not to publish the best science".

>There's also the problem of how to create a new journal that has the prestige of an old established one. Which new journal will we select to have the prestige?

Well if the big universities and funding agencies would push, this would happen quite fast.

> But yes, they have become parasites, who prey on the free labor of eager young academics, take their work and sell access to it, enforce copyrights on knowledge created by taxpayer money, and bundle useless journals in with important ones so everyone has to pay more.

> It's in the public interest for academic fields and the universities to come up with a reasonable alternative.


"Top universities should just [fund]..."

I want something more crazy, daring.

Kickstarter, meets blogging, meets X-Prizes, meets startup incubators, meets moon-shots, meets those MacArthur genius grants, meets Y Combinator classes (cohorts?).

Start with a fund.

Recruit some lunatics, err, mavericks, err, battle weary scientists to judge proposals.

Make a couple lofty problem statements. Maybe one per year.

"Create next generation peer review system."

"Invent open access research thingamajig."

"Launch competitive journals for emerging disciplines."

"Deploy FOSS collaborative content management system for science reporting."

Define some semi-plausible victory conditions. Number of papers refereed. Number of research projects hosted. Number of citations.

Each applicant does a pitch.

Fund a reasonable number of groups each round.

Beer and pizza.

Wet, lather, rinse, repeat.


>I honestly don't understand why JSTOR, Elsevier and others like them still need to exist.

I'm going to guess they exist because, despite decrying these companies, scientists still want to snag that spot in Nature for the same reason that writing a front-page New York Times article is considered a bigger deal than publishing the same story on your blog, even if the content is identical.


I believe peer review should be supplemented or even replaced by social review methods, where not only arbitrary reviewers but the whole scientific community has a chance to judge, discuss, and comment on any paper. Online. The logic and safeguards may not be that easy to create in the first place, but in my opinion it would be worth the effort eventually!


We will never be all experts on all subjects. Peer review is by peers, not laypersons. From the STEM perspective, a democratic solution would be a disaster. Two immediate reasons come to mind, the loss of peer expertise in the noise, and brigading.


I wrote "scientific community". What I meant was the "relevant scientific community"; apparently that wasn't evident.

Also, the selection of reviewers already depends only partially on expertise; several other aspects affect it quite a lot. Not to mention: why should one particular selection be the right one and not another? Why shouldn't the relevant community choose the reviewers? Just because not every detail is finely carved out, the idea should not be dropped. (I have participated in certain peer review processes where I was almost an outsider and very far from being an expert; I have little conviction that the current system works well.)


Imagine that you have voting rights to review a paper (à la Slashdot, where random people got the opportunity to tag something as insightful, interesting, etc.).

Now imagine that there comes an article in X subject (say, Agent Based Modeling).

When you "vote" on that article, the "weight" of your vote is proportional to your "impact factor" in that subject (i.e., say you published 20 articles in ABM and you got 10 "votes" on them; then each of your votes counts as 10 votes). On the contrary, if your impact factor is negative, your vote doesn't count. That way, people who are considered "knowledgeable" in their subject will be able to peer-review other articles.

Another method would be something like what StackOverflow has: initially everybody gets 1 vote (or 10, or 1 every month, or whatever), and you "transfer" it by voting for an article (maybe to the first author, or distributed evenly). Because the "votes" are scarce, people will care about them. And people whose articles are most voted can themselves vote more.

There are plenty of systems that could work. And the beauty of it is that they could be "layered" on top of Arxiv with a Chrome extension or similar.

Mhmm, sounds like a good weekend project.
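The impact-weighted tally described above does fit in a few lines. A minimal sketch; the reputation numbers, subject label, and the weighting rule itself are all made up for illustration, not taken from any real system:

```python
from collections import defaultdict

# voter -> per-subject "impact factor" (invented numbers)
reputation = {
    "alice": {"ABM": 10},   # e.g. 20 articles with 10 votes received on them
    "bob":   {"ABM": 1},
    "carol": {"ABM": -2},   # negative impact: votes don't count
}

def vote_weight(voter: str, subject: str) -> int:
    """Each vote counts as many votes as the voter's impact in that
    subject; negative or unknown impact contributes nothing."""
    return max(reputation.get(voter, {}).get(subject, 0), 0)

def tally(votes, subject):
    """votes: iterable of (voter, article_id) pairs.
    Returns article_id -> weighted score."""
    scores = defaultdict(int)
    for voter, article in votes:
        scores[article] += vote_weight(voter, subject)
    return dict(scores)

votes = [("alice", "paper-1"), ("bob", "paper-1"),
         ("bob", "paper-2"), ("carol", "paper-2")]
print(tally(votes, "ABM"))  # {'paper-1': 11, 'paper-2': 1}
```

The hard part, as the thread notes elsewhere, isn't the tally but sourcing the reputation table in a way that resists citation rings and brigading.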


I think what you are describing has a name: Reputation Systems (https://en.wikipedia.org/wiki/Reputation_system).

There seem to be more effective ones and less effective ones.

Unfortunately sometimes economic incentives cause them to be gamed against real quality. E.g. think of shill online shopping reviews, and circular voting to boost reputation.

In academic terms, that would be "citation rings" to promote their rankings. Like web rings, there are nice and friendly ones, and there are heavy, spammy clones of sites. I would expect the rise of junk-article, plagiarism-from-elsewhere, machine-learning-assisted-plausibility mutual citation rings if there were no good controls to detect and prevent that sort of thing.


You can already judge, discuss and comment on any paper you want - create a blog and do it.


Hm, perhaps HN should be closed as well and everyone should have their own blog instead?... Or should we ask the peer reviewers to discuss and judge papers on their blogs instead?


Not sure what you mean, sorry?

When a conference chooses which papers to accept, that's them accepting the paper.

If you don't want to accept the paper then you don't have to, but yes you don't get a right to veto some other conference doing it.


But you are aware of journal articles as well, aren't you? And of how standardized and comprehensive the peer selection is? (It is not, in general; there are huge variations.)


> But you are aware of journal articles as well, don't you?

Yes.

> And how standardized and comprehensive the peer selection is? (it is not, in general, with huge variations)

Yes.

So what?

My point was you can accept whatever papers you want. You can't make or stop someone else accepting. Seems like a fine situation to me? Anyone can recognise the papers they want to.


Elsevier is a for-profit publisher, but JSTOR is a non-profit University consortium.


Why does MIT pay anyone for software?


An interesting side-effect of the shift to open access will be that anyone who's still publishing in closed-access journals will be at a severe disadvantage in getting cited compared to people who publish open access and make preprints available.


I believe this is called "FUTON bias" (https://en.wikipedia.org/wiki/FUTON_bias).


Some software I wrote got cited in a paper once. They had used it because it was readily available on the net. I didn't think anyone actually noticed.


Well crap there are pictures of it in a book too now.


What is the name of the software?


It was a simple voxel data viewer I wrote circa 1997? Called pkvox.

It could load, slice, and iso-surface up to about a gig of data with only 128 MB of RAM. I provided C++ source to write the file format.


Thanks for your contribution! I can't find it in the usual open archives. You might want to deposit the code on Zenodo, so it gets long-term preservation and a citable DOI. https://zenodo.org/


I'll look, but I'm not sure I even have the code any more. That was a long time ago. It had a software triangle rasterizer that pulled from a 3D voxel array to slice it. I always wanted to update it to store the data in 3D z-order so it would perform well on all three axes, but never got around to it.
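For anyone curious, 3D z-order (Morton order) interleaves the bits of x, y, and z so that voxels close on any axis stay close in memory, which is why slicing then performs evenly on all three axes. This is not the original pkvox code (which may be lost), just a generic sketch of the indexing in Python:

```python
def part1by2(n: int) -> int:
    """Spread the low 10 bits of n so there are two zero bits
    between each original bit (standard magic-mask trick)."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8))  & 0x0300F00F
    n = (n | (n << 4))  & 0x030C30C3
    n = (n | (n << 2))  & 0x09249249
    return n

def morton3(x: int, y: int, z: int) -> int:
    """Z-order index for the voxel at (x, y, z), 10 bits per axis."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

# Slicing along any axis now touches memory in runs of nearby indices,
# instead of strided jumps the size of a whole plane.
print(morton3(1, 0, 0))  # → 1
print(morton3(0, 1, 0))  # → 2
print(morton3(0, 0, 1))  # → 4
print(morton3(3, 3, 3))  # → 63 (low two bits of each axis, interleaved)
```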


This seems to be the mentioned paper (program name was in https://news.ycombinator.com/item?id=23489882):

"Robust invisible watermarking of volume data using 3D DCT", doi:10.1109/CGI.2001.934699 , https://www.researchgate.net/publication/3904386_Robust_invi...


Probably not, as 1) most academics are just as familiar with extra-legal ways to access papers, and 2) there are likely a few specific papers you need to cite, maybe from your PI, a partner lab, or another person pretty close to your network who is working on the same type of stuff you're building on.


I have seen a recent trend in my field of people citing recent papers for things that were literally known in the 1800s. However this is probably due to fraudulent citation rings and relationships, so I'm not sure if it contributes to my point.


In one particularly embarrassing example, in 1994, a diabetes care journal published an article which described a novel method for estimating the total area under a metabolic curve:

https://care.diabetesjournals.org/content/17/2/152

(PDF: https://math.berkeley.edu/~ehallman/math1B/TaisMethod.pdf)

It wasn't until some months later that a response pointed out that the author had rediscovered the trapezoidal method, which was known to Newton and is typically covered in precalculus courses.

https://care.diabetesjournals.org/content/17/10/1224


This... doesn't seem crazy to me? Somebody might have known it in the 1800s, but any given modern reader may not.

You could cite the original source, but it might be inaccessible (and possibly not very complete). A modern article could be easier for the reader, especially if it's a review. Sometimes you might just want a few examples of a phenomenon too, so the choice doesn't matter much.


No, I'm talking about when the fact was fully understood and widely known in the 1800s.


This is google scholar optimization 101. I doubt it can be considered "fraudulent" at this point.


That's what I do. If I want to say something obvious, I Google for the first citation which says that thing in a way which is readable, understandable, and correct.

Honestly, the problem isn't with whom I cite, so much as:

1) Academia deciding to use metrics inappropriately (as well as expecting citations on well-known obvious things). I'm not going to adjust my work style because academics decided to adopt dumb metrics and build massive, dumb incentive structures around those metrics.

2) Reviewers expecting citations for everything, and not caring about quality of those citations. "The sky is blue" doesn't work. "The sky is blue (Blitzerman, Tinkledorf 2012)" works fine, even if Blitzerman, Tinkledorf 2012 is complete nonsense.


No, you got that wrong. The way you do it is:

"As previous research indicates, the sky is blue ([1], [2], [3], [4], [5], [6], [7], [8], [9])."

Of course you haven't actually looked at all those 9 sources, except maybe the abstract of the first three.


Actually, it's not the academics who have adopted (or pushed) these metrics. They essentially came with the general trend of trying to make everything measurable (usually pushed by MBAs).


Also:

It's difficult to know whether a publication is open or closed when you download it from a research institute or a university (access is automatically granted based on the IP address).

You only realize how much it costs when you try to download it from home.


This is already happening! Not quite "severe" disadvantage, but OA papers are getting cited more often.

https://www.timeshighereducation.com/home/open-access-papers...


Further indications that Aaron Swartz didn't die completely in vain...


Let's not get carried away here.

MIT administrators still use Swartz as an example to threaten potential whistleblowers. Faced with that, people back down. For all the public repentance, this was intentional and malicious by at least parts of the Central Admin.

MIT has good guys and bad guys. The good guys won here, rejecting Elsevier. The bad players on the Swartz front are still there, still making decisions like the ones they made in the Swartz case.

People back down much more quickly with that example held up as what happens if you don't play ball.


Came here to see that Aaron Swartz was mentioned, for people to remember and to learn about him and his incident.

I don't see it as MIT "doing the right thing" so much as virtue signaling. As an institution and bureaucracy, it tries to self-preserve, either by saying "we are against technology being used for piracy" or "we are for the open spread of information". Of course I am cynical, but I don't think MIT decided this out of the goodness of their heart.


Just came across his name yesterday when I was looking for python markdown parsers!

Thinking about what happened to him, I'm a bit ambivalent. He was a great dude, for sure, but sneakily downloading the entire JSTOR database does sound a bit out there. It's definitely a Robin Hood move, but Robin Hood also had an arrest warrant out on him. Expecting no less of a retaliation would have been naive at best.

Many other hackers have been arrested, spent time in jail, and come out of it to go on with their lives. I suppose you have to know which side of the law you're on (agreeable or not) when you go the hacker route. He unfortunately elected to end his life over this. That is a great, great loss, but I'm not convinced we should start judging things in the world because someone killed themselves over it.


There are two pieces here: How the legal system behaved, and how MIT behaved. What you're saying makes 100% sense for an aggressive prosecutor. On the other hand, MIT was behaving in a way which was pure evil.

To go back to the Robin Hood analogy, I would expect the Sheriff of Nottingham to go after Robin Hood with perfect dedication -- that's his job. On the other hand, if Friar Tuck made it his life's work to go after Robin Hood, that'd be a different story.


I don't totally understand why there's so much hate on MIT for this, so consider this more of an inquiry rather than an outright defense of what happened and let me know if I'm missing something critical:

Based on what I know, Aaron Swartz - someone with no affiliation to MIT - abused MIT's open campus/network policies and tried to download all of JSTOR by hiding a laptop in a closet. This is, at the least, a sketchy thing to do for someone with no connection to the university. MIT discovered this, turned the matter over to the police, Swartz got caught, and the (overzealous) prosecutors took it from there. MIT did not "make it [its] life's work to go after [Swartz]". You can certainly criticize it for not trying to help him, especially given MIT's hacker-friendly culture and the fact that his actions did not hurt anyone, but I think it's unfair to have expected MIT to take a stand against the prosecutor/criminal justice system, or to have predicted Swartz's suicide after he rejected the 6-month minimum-security prison deal and decided to go to trial instead.

I'm all for open access, but Swartz's methods were at least a bit questionable and he should have expected some repercussions should he get caught (I'm sure he did, hence the hiding of the laptop). And in the end, blaming MIT for being neutral in a politically-charged case and for Swartz's death seems unfair, it is an academic and research institution after all, not a public defender.


A few corrections:

1) Saying Aaron had no affiliation to MIT does not reflect the reality of the situation. MIT, at the time, had an open door policy. There were a lot of people who hung out at MIT -- accepted, participating, contributing members of the MIT community (often actively participating in running MIT classes or doing MIT research), who just happened to not be in a formal role (student, faculty, etc.). MIT has clamped down on that since, but it's a lot of what made MIT awesome in its heyday. The reason the MIT community was so offended by the MIT administration is because it was an attack by the administration on a member of the community.

2) Saying MIT took a neutral role is also false. JSTOR took a neutral role. MIT actively pressed charges.

3) As unreasonable as Swartz's actions seem in 2020 mainstream culture, they were not out of line with MIT culture of the time. People were encouraged to actively push boundaries, and property was a bit more communal. As an undergrad, I might go into a lab I had no affiliation with and use equipment to build something. I wouldn't do that if it was indicated that wasn't okay, but for the most part, there was an expectation that if the Institute had a classroom no one was using, you could use it to run a community activity. If there was a lab with equipment you needed, then unless there was a sign posted to the contrary, you just made sure you left it better than you found it (and, if it was something like a bandsaw, had the safety training you needed). I was trained on equipment in several labs I had no formal affiliation with, and regularly used them for personal projects. This was 100% okay and everyone knew about this.

4) I don't have any reason to believe Swartz hid a laptop in a closet. He left a laptop in a closet. There were some things he did -- like spoofing IPs -- which were less transparent. But there were plenty of times I'd left equipment connected to random places on the MIT network for long-running network operations, never nefariously. It's an unlocked closet with an ethernet drop. No one in the community would think twice about using it for e.g. a large download overnight.

A lot of these things would not be done at 2020 MIT. Not in a million years. The administration's handling of Swartz was part of this culture change, and a lot of MIT's soul died in the process. It has had a continuing chilling effect on the culture of the MIT community. What's really evil is that the MIT administration continues to use Swartz as an example to intimidate community members into toeing the line.


> an attack by the administration on a member of the community

Prosecuting someone for breaking into your network and using it for illegal activities is attacking them?

> What's really evil is

Every time I see someone use the word evil in a debate, I just ignore everything else they say, because there's no rationalizing with emotions. "Evilness" is subjective, which is why we have laws. I tend to stick to the laws and not demonize people for doing things that were according to it, since if it were truly evil the people would at least demand the law be changed.

If we all really cared enough about open access journals, there would be a world-wide boycott of science education until a national law was passed blocking all public funding to anything but open access. But that's not going to happen, and we all know why: it's not evil enough for us to drop everything and use collective action, but we still want to pose it as evil because we're really angry, and we're really angry because we don't know enough about how it all works to find a better solution.


> Prosecuting someone for breaking into your network and using it for illegal activities is attacking them?

It depends on where you draw the boundaries between "yours" and "ours." My family can walk into my closet. A stranger can't. From the perspective of most of the MIT community, Aaron didn't break into the MIT network. He had both legal and moral access to use it, and he did that like many other community members. People plug things into MIT network jacks all the time. I did that too. I never hid that I did that, and no one thought I was breaking, stealing, or doing anything else wrong. I saw others do likewise. It's how the place worked at the time.

He did bypass protections on JSTOR's systems. If someone had grounds to press charges, it was JSTOR, not MIT.

> Every time I see someone use the word evil in a debate, I just ignore everything else they say, because there's no rationalizing with emotions. "Evilness" is subjective, which is why we have laws. I tend to stick to the laws and not demonize people for doing things that were according to it, since if it were truly evil the people would at least demand the law be changed.

That's a very culturally narrow point of view. It places you as most likely being either of Western European descent or Japanese, though there are a few more cultures which define morality in terms of following the law. I think this is the point at which you check out from the conversation -- most monocultural people get very uncomfortable with foreign things, but:

That worldview completely breaks down when Hitler makes it a crime to shelter Jews, when you have BLM protests in the US, or democracy protests in Hong Kong. It breaks down in more subtle ways across the board.

And yes, people did push for change. Aaron's Law was almost passed, but ultimately it was blocked by Oracle. Oracle is a pretty evil corporation too, although for the most part it does a good job following the laws. That's your cue to check out, I guess.

But the point is that the laws are at least as subjective and arbitrary as people's cultural biases about what constitutes good and bad. If you want more rational opinions of good versus evil, you can start with utilitarianism and other philosophies of morality.


Thanks for taking the time to write this. A few questions/comments on your points:

1) Ok, from that perspective Aaron Swartz was part of the MIT community, but I don't agree that his case "was an attack by the administration on a member of the community". MIT did not speak against (or in support of) Swartz, did not press charges, told the prosecution it should not think MIT wanted a jail sentence for Swartz, and really was not involved in the trial. As I said before, you can definitely criticize MIT for not actively supporting Swartz, but I get the sense that people think MIT was out there encouraging the prosecution and pushing for some severe punishment, which it was not.

2) MIT did not press charges, MIT called the police. I just took a look at the Abelson report and it in fact states multiple times that MIT did not press charges, although it also points out that MIT was not opposed to charges. I would call that being neutral. I believe the report is fairly objective in its fact-reporting, but of course we have to acknowledge that it comes from MIT itself, so please let me know if you think Abelson/the administration is outright lying when they say they did not press charges.

3) I agree that people at MIT are/were actively encouraged to push boundaries, and that MIT resources were for everyone in the community to use, although your description of using random lab equipment does strike me as a bit odd. Granted, I was not part of the makers community, but while I had friends who actively used the hobby shop for personal projects, I never heard of anyone just going into random labs while people weren't there to play around with equipment and build something. Not saying it's wrong, just wondering how common this actually is at MIT.

4) Maybe he just left a laptop in a closet, but I would argue Swartz had at least some idea he was doing something questionable - I'm not saying something illegal or even wrong - but somewhat inappropriate. When he was caught near Central Square, the MIT PD officer identified himself and said he wanted to speak with him, after which Swartz dropped his bike and started running away. If I'm a member of the MIT community working on a project while connected to the MIT network, I don't drop my stuff and run away when MIT PD approach me asking to speak with me if I don't think I'm doing anything wrong.

I agree that the culture at MIT has changed over the past decade (in mostly a negative way), in large part because of the administration, but in this specific instance I feel like MIT is getting more hate than it should.

    What's really evil is that the MIT administration continues to use Swartz as an example to intimidate community members into toeing the line.
Do you have any sources for this? I haven't heard of it, but if that is the case, that's a pretty shitty thing to do


1/2) I stand corrected. I will reread about what happened.

To answer your questions, though: In the generic case, the MIT administration lies or at the very least badly misleads in these reports (see the Epstein report, for example). In this specific case, MIT chose Abelson to author the report precisely since he has unimpeachable integrity. I'd trust whatever he signed his name to, as would everyone else in the MIT community.

3) It sort of depends. I wouldn't e.g. walk into a random biology lab where I knew no one, and no one knew me, in the middle of the night and use a centrifuge. On the other hand:

* There were plenty of times when I walked into 38-501 lab, which was a big undergraduate EE lab, and made things, even long after I graduated (and other alumni did too), waving "hi" to the desk workers if they were around (who knew me and knew I had no affiliation). It was pretty normal. And I think that extended to any member of the MIT community. Cheap things like resistors were also free. More expensive things, unaffiliated people were respectful of (indeed, more respectful than the people managing the lab expected them to be).

* I used a few machine shops around MIT, where I was safety trained by the staff but had no formal affiliation with the shop or lab. I'd walk into these and use them casually. When the people who managed these came in, we'd usually have a friendly chat about what I was making.

* There was one makerspace in the Media Lab where I used equipment regularly without asking. I knew the prof in whose jurisdiction it was, and it was sort of symbiotic. I reverse-engineered a lot of his equipment in the process, which was useful to his research group. There were plenty of hangers-on too, doing likewise.

* Institute-wide communal resources, like classrooms or network drops, you'd just use. You definitely didn't ask anyone. I can't imagine I'd think twice about leaving a laptop connected (except for having it stolen; theft was not uncommon). Indeed, I'd likely look for a place like a network closet where it wouldn't be as likely to be casually stolen.

And there were plenty of places which were restricted-use. For example, Edgerton shop made it clear it could only be used for specific uses. I didn't use that one. You got a feel for the specific lab.

That's actually a lot of what made MIT great. If you wanted to make something (virtually anything) you had the resources at your disposal to do it. I learned a lot from more experienced people who were hanging out making things when I was an undergrad, and I think undergrads learned a lot from me once I was an alumnus.

To be clear, that culture is dead now. I couldn't go to MIT and use a machine shop or EE lab right now, at least without paying an annual alumni membership fee.

And to be clear, there were people doing the same who weren't alumni too, but who were accepted as members of the community.

4) re: Using Swartz as an example: I can say with 100% certainty that this happens, but these things wind up under NDA, so I don't know of public sources.

re: Doing something wrong: I don't disagree he knew he was doing something sketchy, but it comes back to who the victim is. It was JSTOR, not MIT.


> On the other hand, MIT was behaving in a way which was pure evil.

And they then proceeded to whitewash it, adding insult to injury.


It's a shame that our expectations for prosecutors are so low.

(To be clear, I share them. But it's a shame.)


The current copyright regime is too strict. As someone else pointed out, JSTOR went open access without much ado. So basically the whole thing led to the death of some dude because he dared to show how little what he did really mattered. It's not like universities just stopped paying for access because there's SciHub. And it's not like academics stopped paying journals for publishing because old stuff is open access.


Obligatory "for those who haven't seen the documentary about Aaron's life": https://m.youtube.com/watch?v=9vz06QO3UkQ


Thank you so much for posting this; I hadn't heard of this documentary before, and the name Aaron Swartz was just something that sounded vaguely familiar. Just watched it, and I'm shocked and saddened.


This, alongside JSTOR's declaration of open access[1], shows that all major players in the Aaron Swartz case had other choices.

RIP

[1] https://about.jstor.org/oa-and-free/


While this is really good news, it's still mind boggling to me why giving taxpayers free access to the research they paid for is still a controversial thing at all in 2020. Not to mention the benefit to science of giving everybody free access to everything.


When thinking about the cost of providing the services journals provide, I’m always struck by the example of PLoS - non-profit, set up with the mission of promoting open access, and just about surviving by charging authors $2000-$3000 per article.

I can only conclude that managing journals is not as cost-free as most people in this thread seem to think, for whatever reason.


You should not be getting downvoted.

I am an early-career academic; there are some new non-profit open access journals that I would consider publishing in that charge ~$500 to publish a paper. Everything is on arXiv, so I cannot justify the expense (perhaps if I were late in my career with lots of grant money I would help to support these journals...).


That’s interesting. I always thought that it was early career researchers who had to publish in journals to establish their reputations. Does it not work that way for you?


In some fields you can probably just put up preprints if they are important enough and get noticed; otherwise I assume the OP meant they are publishing in for-profit, non-open-access journals that cost less for the author (by charging everyone else).


What!? 2-3K? For... what? Where does that money go?

I mean referees don't get paid. And there's nothing else to do really if it's a digital journal.


And PLoS has, unfortunately, been running into financial problems because Nature started a competitor that provides exactly the same service (I think slightly more expensive, but I might be wrong there), but with the Nature brand name.

Which provides a hint as to what the most important product is that publishers sell.


Better to get institutional support up front and make scientific knowledge freely available than to not charge authors and then limit access with noisome subscription fees.


Maybe Aaron will get to see his dream come true posthumously -- I sure hope so.


This is great news! Going from prosecuting Swartz to ending a contract with Elsevier.


Recently (I believe around a year ago or so) my university lost access to many publisher services because of some negotiations. The negotiations were about a new Norwegian law requiring all publicly funded research to be publicly available. I don't remember exactly what happened with the negotiations, but I presume they somehow found a solution.

EDIT: Found a relevant website about open access in Norway https://www.openaccess.no/english/


The Norwegian case and many others are listed in the SPARC cancellation tracker: https://sparcopen.org/our-work/big-deal-cancellation-trackin...


Great news. Elsevier is a parasite.


It is a rare pleasure to see people taking their principles seriously: "... the MIT Framework is grounded in the conviction that openly sharing research and educational materials is key to the Institute’s mission of advancing knowledge and bringing that knowledge to bear on the world’s greatest challenges".

Take note, corporations: this is how you live a mission statement.


You should check out https://www.knowledgefutures.org, a non-profit founded by the MIT Press and the MIT Media Lab to build open authoring/publishing tools and a distributed knowledge platform.


That's great. It's good to see MIT standing up for open access principles.


Finally. MIT is several years late on this one, but better late than never.


Snapshot for those who get a blocked message:

https://archive.is/jISGG


Thanks for that.


The publishing industry is ripe for disruption. Publicly funded research should be publicly available. Any 'fees' charged by publishers should be proportional to the value those publishers add. They can't claim the review process is part of that monetary value-add when reviewers are almost never paid.


Woohoo! First UC, now MIT! The arc of the moral universe is long, but every so often, it does indeed bend towards justice.

RIP A.S.


I'm curious (playing devil's advocate): Does this mean that MIT can now do research and not be required to publish this anywhere, not even journals?

And, if somebody from MIT does publish something under this framework, can they claim copyright and disallow any use of the contents of the papers if so convenient? Move away from patents and straight into copyright. We all know how well the US copyright system works, right?

I mean, doesn't Elsevier guarantee that the paper will be free from the copyright of the original institution/country, available to any other research institution that is part of their network? Like, sharing the knowledge with those connected.

Do these moves away from Elsevier mean a more open-access, or a more-copyrighted-access to papers? I see no commitment for MIT to relinquish copyright, nor any commitment to make everything open access.


I don't really get your questions. Elsevier was never the one who demanded that MIT publish research, so this shouldn't change anything there.

When a researcher publishes with Elsevier, the common practice when not transferring copyright is to give them a perpetual licence to publish the work.

Additionally, presumably MIT requires their researchers to publish their work as Open Access, i.e. with a licence that allows re-use and distribution for everyone - so they cannot disallow use of the contents of the papers arbitrarily. Elsevier is not needed for that.


> Does this mean that MIT can now do research and not be required to publish this anywhere, not even journals?

They were never required. Scientists do research and share their results so they can be improved and combined and provide insight. It started out with private letters between scientists in the early days, then journals appeared that would distribute the incoming letters to other interested parties. Over time it became a formalized system with metrics, incentives, publish or perish etc. But the original goal was to share and announce your results.


Pretty much every research grant that feeds university researchers (as opposed to industry researchers) will require that the results of the research must be published; often with some more specific criteria - e.g. a peer-reviewed journal with an impact rating in the top quartile, not just on your webpage.


The state university I attended here in Texas has started publishing a list of their subscriptions and the costs.

https://library.unt.edu/collection-management/transparency-l...

What's notable is which publishers have tried to put non-disclosure clauses in their subscription contracts. In my opinion, a state university should not be legally allowed to enter into a contract that is not fully available for public review.


MIT has its own publishing initiative that suits modern science - online, interactive, and open source. PubPub[1][2] is closer to something like Authorea[3] and Overleaf[4].

[1] https://www.pubpub.org/

[2] https://github.com/pubpub

[3] https://authorea.com/

[4] https://www.overleaf.com/


Elsevier does what exactly? Publish articles? Can't the various education / research organizations set up their own?

Other than existing horrible contracts, what exactly is stopping them?


Come a long way since Aaron Swartz. Let’s hope the reform continues.


>"MIT has long been a leader in open access. Adopted in 2009, the MIT Faculty Open Access Policy was one of the first and most far-reaching initiatives of its kind in the United States. Forty-seven percent of faculty journal articles published since the adoption of the policy are freely available to the world."

Could someone comment on the figure of 47%? What are the reasons this isn't higher, given that the policy has been in effect for over a decade now?


Question: is there a consequence to MIT's library system? Do they lose access to that publisher's journals for the patrons of their libraries? Or are their subscriptions to the journals a separate business deal? How does this all work?


As a former employee of Elsevier, you really do love to see it.


I don't know, does it matter at this point? All papers are available for free now anyway, thanks to a Russia-based operation...


Yes, but not legally available for free to everyone.


The world is slowly becoming a better place.


Why can't technical papers use something similar to the GitHub/git pull-request model?


Can you elaborate? Do you mean curating papers via pull request?

If that's your question, the answer is that there's absolutely nothing in the way technologically: academics could easily form shoestring "journals" around GitHub README files with links to arXiv.

As others have said, the real issue is historical: for better or worse the currency of science is still peer-reviewed publications in prestigious journals, and a lot of those journals are still owned by Elsevier.

Obviously this is starting to change, but old traditions die hard.


I think you're looking for something like the Journal of Open Source Software [0].

[0] https://joss.theoj.org/



This would make Aaron Swartz proud.


Welp, apparently we aren’t shifting away from toxic publishing mechanisms any time soon.

