Peer Review (rodneybrooks.com)
188 points by haltingproblem on May 24, 2020 | 112 comments



I don't consider myself an expert in computer networks research, but I still have 10 accepted (and ~20 rejected) papers with ~100 citations in journals, conferences, and workshops from my 3 years working in R&D. Hence my comments below derive from a non-academic, more development-oriented background.

Peer review is completely broken nowadays for the following reasons:

-Reviewers might not have relevant expertise to judge whether a paper is good or not. I am not talking about top-notch conferences, mainly lower-tier ones and workshops. I have seen a lot of professors pushing paper reviews to their PhDs, or even students. I was also invited to review a paper as soon as I submitted my first one (and it was rejected, btw)! In another workshop I was invited to review papers just because I was a colleague of one of the organisers.

-Old-school academia. This might not apply to all fields or all of academia, but I have had good papers rejected because they didn't have a lot of math or simulations! My paper examined an actual implementation of an SDN (software-defined networking) protocol, or a strategy with platforms and orchestrators that require weeks to set up and implement (OpenDaylight, OpenStack, OpenMANO, etc.), with actual experiments on real users' video activity, only to be rejected because I didn't provide any simulation. Jeez, novelty does not come only from theory; somebody has to implement and test these things on actual machines.

-Politics. I won't say much on this aspect, other than that a colleague of mine got 1,000 citations in the same period just because there was a "mafia" (his word, not mine) of professors, lecturers, and PhDs reviewing and accepting all of the group's papers, loading them with each other's references and co-inviting one another to each other's conferences and workshops.


Your points are mostly fair, though I'll push back on one. For context, I'm a late-year PhD student in ML, but I primarily do theory.

> Reviewers might not have relevant expertise to judge whether a paper is good or not...

Finding enough quality reviewers is hard. There are so many papers (heck, you put out 30 papers in 3 years!) and there is not much incentive to review. I review maybe 5x more papers than I write each year. It's doubly hard to get good reviews out of senior people. For this reason, I think grad student reviews are not a bad solution. In my experience, grad students put in more effort and write decent, careful reviews. The very worst, low-effort reviews seem to come from established people who can't be bothered. A lower reviewer bar at workshops is fairly common and reasonable. Workshops are not the main publication venue (at least in my areas), so they mostly just want to filter out papers that are bad rather than only accept papers that are good.

> Politics...

I can't say too much about this since I primarily work in theory, but my perception from the outside is that the more "lab"-dominated a field is (i.e., accepted papers tend to come from some group with 20 PhD students and 10 postdocs) the more likely there are to be "mafias". Maybe this is a function of the kind of people who want to head up a lab, maybe it's a function of the kind of work that gets produced in factory environments, or maybe it's baseless prejudice from me.


You are probably right about postgrads. Although I would not consider myself the best reviewer, I really invested time and effort to produce a fair and honest review with the "limited" knowledge of the field I had, and I was probably a better reviewer than "experts" who would produce one-line reviews or generic comments like "add more references". However, I would beg for highly technical reviews providing harsh theoretical and technical feedback, which I guess would come from reviewers more experienced or expert than postgrads or myself. And by expert I mean people who have knowledge and publications on that specific paper topic; e.g., you might not be able to provide expert feedback on P2P or distributed networks just because you are an expert in QoS (network buzzwords here, please ignore :) ). Which is probably an effect of what you mentioned: the number of conferences/workshops/journals and papers submitted, and the lack of incentive for people to review.

The ~30 papers were an outcome of collaborative R&D projects with universities and companies; most of them just had my name in the author list and required me to quickly read and comment on them, rather than spend effort producing the results presented. From a third-person perspective this somewhat illustrates the third point of my original post on how these mafias are created and maintained. The thing is that I didn't care for citations or more paper invitations, because my position and wage didn't depend on them (I was just doing the research for the implementation and the trips to the conferences where my papers were accepted), and in the end I am not doing R&D anymore. However, I can imagine how unfair this whole setup is for people who want to follow an academic path.


> However, I would beg for highly technical reviews providing harsh theoretical and technical feedback, which I guess would come from reviewers more experienced or expert than postgrads or myself.

Grad students are plenty capable of writing these reviews. They might even be responsible for most of them. But the problem is still that there's very little incentive to write these kinds of reviews. As a reviewer, I don't even know how the author will receive it -- will they engage with the criticism and re-work their paper? Or will they ignore it, mildly re-write the introduction, and then re-submit to the next conference? I have seen enough examples of the latter to blunt my enthusiasm as a reviewer for providing detailed constructive feedback. Unfortunately, I think the most useful feedback may come after acceptance, when you give the talk/present the poster and people actually pay attention. It's much easier to interact in good faith when the person is right in front of you.

I sympathize with relative "outsiders" to a field who can only get expert attention through reviews. The above issue makes things difficult. Attending workshops can be useful for this.

> The ~30 papers were an outcome of collaborative R&D projects with universities and companies; most of them just had my name in the author list and required me to quickly read and comment on them, rather than spend effort producing the results presented.

Ah, this is not the norm in theory. Our author order is alphabetical, so the working assumption is that every co-author has seriously contributed to at least one (and hopefully all) of 1) original idea for the paper 2) technical work (proving a result, implementing an algorithm) 3) actually writing the paper. There are still people who publish 20 papers a year, but this is rare.


Yeah I’m in cryptography and our papers typically have 1-3 authors. Meanwhile the broader computer security conferences routinely have papers with 8+ authors and I’m thinking “How can 8 people write a paper and not produce total garbage?” Well it’s because most of them didn’t actually write anything in the paper, they just contributed to a code base or data collection.


While peer review has its problems, I have the feeling that the problems we're talking about here are not primarily caused by peer review per se, but by a set of perverse incentives that surround academia these days. I'm talking about the "publish or perish" culture that results from the (sometimes) automated metrics on which promotions are decided, grants are handed out, and academic staff are assessed. Isn't it these incentives that create attitudes like "Who cares? It's one more publication!" (as quoted by another commenter in this thread)? And isn't the overwhelming amount of mediocre journal submissions that results from this one of the reasons peer review is under pressure, qualified reviewers are rare, and politics are so common?


From the original piece: I don’t have a solution, but I hope my observations here might be interesting to some.

I have a partial solution: researchers "publish" papers to arXiv or similar, then "submit" them to the journal, which conducts peer review. The "journal" is a list of links to papers that it has accepted or verified.

That way, the paper is available to those who find it useful. If a researcher really thinks the peer reviewers are wrong, they can state why, and why they're leaving it up despite the critiques. Peer-review reports can be kept anonymous but can also be appended to the paper, so that readers can decide for themselves whether the peer reviewers' comments are useful or accurate.


This is how many CS conferences already work (even with public anonymous peer review, go to openreview.net to see it in action). It does seem to be better, although the frustration level for researchers is not really less.

Edit: also, in CS there is a norm that reviewers will not go out of their way to figure out if a submitted paper has been previously posted on arXiv. I don't think it's possible to have perfect anonymity without forbidding researchers from publicly talking about work in progress, so this seems like a reasonable compromise.


If the paper is on the arXiv then it will not be anonymous to the reviewers.


I'm in philosophy, though personally working in formal philosophy, so this is quite different from CS. In my area practically all top journals reject more than 95% of all submissions, and if you get accepted, it takes about a year until the article is published. Many journals get so many submissions that they do not provide any reviews or feedback - not even a paragraph or line - if they reject a paper.

Although I have been able to stay a fully paid postdoc for more than 10 years without problems and get a decent number of publications out, I'd agree that the peer review system is broken and seems to be strongly biased in favor of mediocrity over originality. It's particularly bad in fields like philosophy in the humanities, where you could argue that there are no clear-cut criteria for evaluating the content other than merely formal issues. (Philosophy is not a science.) There are lots of people who game the system; I'd say many of my colleagues are doing it, though not (yet) the majority. By gaming the system I mean writing articles they barely endorse themselves, tailored to get past reviews. I've literally had a very successful colleague say "Who cares? It's one more publication!" when I raised doubts about a joint paper, because it involved too much hand-waving and didn't seem to work.

Long story short, I have the impression that in the humanities whether you're accepted or not mainly depends on the writing style and on whether you win the "reviewer lottery." One and the same paper can get accepted in one journal and rejected in another, for journals with similar reputation. I also believe that there are certain niche areas in the humanities that are (almost) pseudo-sciences and based on a small group of researchers colluding with each other and passing opportunities for publication to each other. I won't mention the disciplines, though.

I'm a bit dismayed that you describe CS in a similar way; I've always thought it's much more rigid there and easier to evaluate CS papers.

Having said all that, I don't think there is any alternative to the peer review system. Any alternative I've seen so far or come up with myself seems to be even more prone to manipulation and gaming the system. I believe that it's not the peer review system itself, but over-reliance on objective indicators by university administrators, that has led to most of the problems. But the humanities have certainly changed for the worse; at least in philosophy I'm certain that many seminal and famous articles from the 1970s and earlier would not make it past peer review today.


Sadly, CS is also about the reviewer lottery. However, because it uses conferences rather than journals, turnaround is much faster at least. And there is a culture of preprints, so having a paper rejected once doesn't delay getting it out into the world.


Rodney Brooks's subsumption architecture revolutionized robotics. Back in the 90s you could replicate his work with Attila and Genghis and recreate hexapod robots for under $1000, which was a pittance for a robot.

He was at MIT when he submitted those papers. The churlishness of those reviews is staggering. What if he had given up and gone on to something else?


What if he had given up and gone on to something else?

Maybe he would have accomplished even more?


What would have happened if he had moved on to another field? Subsumption architecture would eventually have been discovered by someone else: standing on the shoulders of giants, the eventuality of all discoveries, and so on.

Or it might have taken another 50-100 years. Fundamentally unknowable.


> What if he had given up and gone on to something else?

Science isn't supposed to be easy, and it's the faithful ones that thrive. If anything, I wish reviewers were more critical of stuff before it's published.


Why shouldn't science be easier? We should work at making the doing of science as easy as possible. Getting ideas out there is a boon, suppressing them is of no value. Great ideas should percolate up because they work well, not because someone that has never tested them doesn't like the author. There is no moral superiority in making ideas harder to propagate. Science itself is tough enough as is.


In fields that are more theoretical and far removed from application, the “works well” signal will only be weak and delayed. Then a filtration process is needed to separate coherent ideas from incoherent ones.


Innovative ideas are indistinguishable from crazy ideas until they are sufficiently developed. "Suppressing" them in the early stages does serve to cut down the amount of "science spam", which is at record levels already...


Because science isn’t just science, it’s also a career. Humans respond to incentives and scientists are no exception. Great work doesn’t just magically happen; it takes the right environment with the right incentives and cultural norms to induce people to put in the irrational level of effort required for great work.


Peer review is broken. More than once I've learned that papers get through peer review more easily if you cite a reviewer's paper (of course you don't know in advance who will review it, so it's basically luck). I often got comments about missing related work "from relevant authors".


I'm not sure that's exactly the reason it's broken. Usually an editor will give a paper to people who are specialists on your topic, and thus they will likely have written papers related to the topic as well.

One major issue is that, because of this, they will sometimes have a vested interest (conscious or not) in blocking the publication of your paper if it shows flaws in their own.


Many journals I'm familiar with allow you to suggest referees, as well as suggest which researchers should be avoided as referees. If I'm publishing something that is competing in some sense with the work of other people, I will usually suggest to the editor to leave them out from the peer review process.


This has only been occasionally available in my experience, and itself introduces perhaps worse problems. If I'm explicitly arguing against someone's theory, they are likely to be a relevant judge of the merits/demerits of the arguments.

Having a good editor who can discern the difference between a reviewer's relevant responses and 'turf warfare' is probably the best one can wish for.


And now your ability to exclude reviewers is a significant factor in the acceptance of your paper... Not sure if this is really what we should wish for.


The quickest path toward realizing that peer review means almost nothing is... reading peer-reviewed papers.

Of course, take my semi-anonymous random internet opinion with a grain of salt (I've dropped out of college not once, but _twice_, so what do I know?). However, I'm moderately good at computer science topics, and have spent the last year reading and implementing papers in detail in pursuit of a fairly niche topic, and what I was dismayed to find is that most papers are absolute, complete and utter garbage. Like, honestly embarrassing. And that's before you even get into the fact that _you can't reproduce their results_.

Encountering this has been something of a crisis in my entire belief system, to be honest. Although, maybe I've just been exposed to what "real" science looks like: a big, gross, often wrong mess of assumptions about the world that takes a long, long time to correct.


I suspect if you think this of most papers you've either stumbled into a really bad cluster, or you've missed something about the purpose of the papers you've read.

I would agree that some papers are terrible though, I've seen a few I would never have let through review including in the very top journals.

I like the Plos mega journal approach, but we can't do away with peer review altogether; otherwise, what separates science from the conspiracy posts that circulate on Facebook?


"From relevant authors" is such a cowardly review. Either give me the citation to the work you want me to cite or keep quiet.

So often as a reviewer I have avoided asking for citation of my own work, even if highly relevant, as I perceive that to be an abuse of power.

I've also started a comedic collection of terrible reviews I've received, though it took me 15 years to gain the confidence to know when these are an indication of the reviewer's failings, not mine.


> often as a reviewer I have avoided asking for citation of my own work, even if highly relevant, as I perceive that to be an abuse of power.

That's noble, but isn't it counter-productive? If you're reviewing a paper whose readers would ultimately benefit from a citation of your work, then would the paper not be improved by citing it?


Sure if it's worth breaching anonymity for. Last time this happened I recommended about 5 things one of which was mine. I guess it's the more borderline cases I lose sleep over.


I'm surprised that many people automatically assume "you should cite this paper" means "you should cite my paper". I tell reviewers to cite papers all the time. They're almost never my own papers. They're just relevant papers.

I think there is some aspect here of just finding reasons to dislike the reviewer (though I've certainly had reviewers where those reasons were pretty easy to find).


Agreed, I certainly don't think any request to cite X means the reviewer wrote it. But when there's a vague request to cite more without any indication of what (and sometimes a vague request to cite more on a rather niche topic tangential to the paper being reviewed), I would suspect the reviewer of trying to get their own work mentioned without saying so directly. Because if you know the field well enough to know what literature is missing, it takes only a moment to paste a couple of citations into your review.


There are two types of peer review.

1. Pre-publication peer review: reviewers examine the work before it's presented to the rest of the community.

2. Post-publication peer review: papers are posted on an online forum where others can read, comment on, and cite them. Papers can be accepted into journals and go through peer review later.

Post-publication peer review does not mean that authors can't use pre-publication peer review. Scientists can (and should be encouraged to) ask their colleagues to review papers before posting them online, to spot errors.

Some fields have a long tradition of publishing and circulating working papers, manuscripts, drafts, posters, etc. years before they submit the final paper for publication and pre-publication peer review. That works as well.


I think peer review is quite important.

Note the word “peer,” though.

I would not peer review papers on robotics or AI, as those are not my forte. I could review papers on some generalizations of software engineering or the Swift programming language.

Editing is a different matter. My mother was a scientific editor, and absolutely brutal. She edited a book I wrote on writing software for a certain specialty of Web site, and it was quite humbling. She knew nothing at all about the technology, yet provided an immensely valuable service.

The book is long past its “sell-by” date, yet stands as maybe the only “perfect” artifact of my life, thanks to her.

Then there are comment threads in venues like this one, which some might equate to “peer review.”

There was an old Dilbert comic on Peer review. I don’t think it can be found online, but was in “Build a Better Life by Stealing Office Supplies.”

It was entitled “The Joy of Feedback,” and probably applies to comment threads.


Peer review is flawed, but I don't think it's broken. For context, I'm a late-year PhD student in machine learning theory, and I submit almost entirely to conferences, which is the norm in my field.

To me, the big problem is that there is little incentive for experts to review papers. I write a few papers every year, and I review 5x that number. Slightly more senior people -- postdocs and junior faculty -- may review more like 30-40 papers. Most papers I review are not good papers. This is especially true at the big machine learning conferences like NeurIPS, ICML, and ICLR, where a disconcertingly large fraction (1/3?) of submissions are outright bad. Most of them are at best "meh". So there's not much benefit to the reviewer beyond some notion of community service.

I think my ~colleagues (PhD students, postdocs, young faculty) have similar opinions. We've certainly discussed these issues in the past. One positive step NeurIPS has taken is giving "best reviewer" awards that come with a free registration. That's a nice benefit, even if it won't on its own incentivize good reviews. Some people have suggested outright paying people for reviews at some quasi-competitive rate, maybe $100 per review (note that this is not great -- a good review takes at least several hours if the paper is not terrible). NeurIPS also added a requirement this year that all submitting authors must review, which is an interesting experiment.

My impractical wish is that we also had some kind of "worst reviewer" sticker for all the reviewers who just paraphrase the abstract, add some milquetoast feelings on novelty, and weak-reject. Less facetiously, some kind of StackOverflow reputation system might be useful. As it is, there's so little external signal about being a good or bad reviewer that the incentives are pretty wrong. Some kind of points system might help.

But my overall point might be: reviewers by and large aren't malicious, jealous people who are only rejecting your work because they want to protect their own turf. They are more likely good faith actors who are trying to balance time spent on a pretty thankless but societally useful job (reviewing) with time spent on much more personally enjoyable and rewarding activities (research, family, hobbies, etc). There aren't really obvious solutions to tilt the balance toward better reviews that can still handle the volume of papers.


You’re fortunate to be in a field of theory that is (1) relatively close to a source of feedback from objective reality and (2) formal. Then theoretical discussions can center around what will work and whether the math makes sense. The horrible politics and vitriol happen when you have neither (1) nor (2).


Dunno about paying. Remember Amazon Turk? Folks just spammed it with zero-effort submissions and 'closed' thousands of contracts with bots. Instantly gamed, so some jerk could walk off with thousands, $1 at a time.

The shaming idea, that could work.


My understanding of the payment strategy is that it would be targeted at people who already qualify as experts but are on the fence. For example, I know postdocs and young faculty who would like to review more as a kind of community service but can't quite justify it in the face of other demands on their time. The money might be enough to tip the scales.

I don't think it will work on the other more senior pool of expert reviewers. On the other hand, I think most of the really bad reviews come from this senior pool, so maybe that's fine.


Mechanical Turk is still useful, though. And there’s a big difference between paying a bored rando $1 to identify pictures of frogs and asking an academic who is highly invested in the relevant ideas to review a (hopefully interesting) paper.


But is it different in the nobody-will-scam-it way? Remember, we're talking starving academics. They don't have a great history of philanthropic behavior.


Amazon Turk is still around, and if it had that problem they fixed it pretty quick. The poster reviews the submission before you get paid, and the same system would be in place in this scenario.


That's peer review from 3+ decades ago. Things have changed substantially. PIs don't have enough time to review all that stuff; a lot of it is delegated to students, and reviews can be hasty, with reviewers who may latch on to a specific detail and let the rest pass through. It's not even that they are not trying hard enough -- peer review is not doable in 2020. If you honestly want to scrutinize and improve every detail of a manuscript, you're better off posting it on Reddit or Twitter.

> clamor for double blind anonymous review

That's not even possible in most fields. You can easily tell which lab it comes from (and you can often even guess who the reviewer is).


Any ideas for how the system might work differently? Could peer review learn anything from the open source community?


Currently, peer review fulfils three distinct functions, to the detriment of all of them:

1. Acting as a filter for what research is worth reading.

2. Providing input to the author on how to improve their work.

3. Influencing the author's professional evaluation.

The first is the most blunt instrument: when something is not considered worth reading, it doesn't get published at all. This is problematic in the cases where the peer reviewers misjudge, or when an article is evaluated on more than its academic merit, e.g. whether it's "groundbreaking" enough for a particular journal.

The second is useful, but made less so due to the power imbalance - an author cannot judge for themselves whether the feedback makes sense in the context of their work. This is especially problematic when someone has to take their work to multiple journals and/or gets conflicting feedback.

And that power imbalance is the result of the third point: it can be more important to the author to get that stamp of approval by the reviewers than to meaningfully contribute to science, because that's the only way they can stay in academia.

Ideally we'd split them up, but that does require aligning the academic community - which is a hard problem, especially considering there are strong vested interests in the status quo.

(Disclosure: I do volunteer for a project (https://plaudit.pub) aimed at splitting evaluation from publication, because I think it's an important issue.)


Plaudit is nothing more than an upvote, right? I don't think this is really helpful. We already have a very similar metric, which is citation count. Which is definitely a useful metric.

At first, I thought that Plaudit adds some post peer review platform, or discussion forum, for each paper. Similar to OpenReview but for any paper. I think this might actually be a great idea.

I think that https://fermatslibrary.com/librarian actually provides a similar functionality. But it has not seen much adoption.


It's not a metric, it's an endorsement. Think of it like being published in a journal: it's not about in how many journals an article gets published, but in which ones. Likewise, what's important here is who has endorsed it. Instead of a journal name acting as a proxy for "someone I trust thinks this work is worth reading", you can directly see the name of that person.

(That said, a big challenge with citation counts is that they take a long time to accumulate.)

As I mentioned, I think the three functions should be separated. Although Plaudit by itself does not facilitate giving feedback, it does not prevent it either. Ideally, if someone comes across e.g. an error in an article, they provide that feedback to the author (e.g. through an email, or using Hypothes.is - a project that I think you might like). There's no need to only contribute to improving scientific literature through a formal journal-assisted peer review process.

And who knows: if the author incorporates that feedback, the giver of the feedback might decide to endorse that article using Plaudit :)


As another comment implies, there is a fourth function that has been saddled on the process against its original intent: provide a “stamp of approval” for journalists to write misleading articles about “peer-reviewed research”


I'd include that under 1., but I can certainly see why you'd list it as a separate point too.


Interesting list!


I think a first step is to make reviews public, in particular also rejects. The big journals like Science and Nature seem to rely on a very small set of reviewers, and papers often get reviewed by non-experts; we had a paper rejected when a reviewer contradicted scientific fact that is in textbooks. When we protested the decision they just sent it back to the same reviewer, who stood by his claim. I think if these reviews were public, the journals would get much clearer feedback on how bad some of the reviewers are.


> I think a first step is to make reviews public, in particular also rejects.

As a reviewer, I have no objections towards my reviews going public. However, I have often had to tell authors that their paper is dumb (not in those terms, obviously), and I'd rather keep those reviews private. Being rejected is hard enough without the entire world knowing how much your paper sucks.

Of course, there are terrible reviewers too. But IMHO that's better solved via better area chairs than with a Twitter mob. I would rather not bring 4chan into the review process.


From the article:

Peer review grew up in a world where there were many fewer people engaging in science than today. Typically an editor would know everyone in the world who had contributed to the field in the past, and would have enough time to understand the ideas of each new entrant to the field as they started to submit papers. It relied on personal connections and deep and thoughtful understanding.

That has changed just due to the scale of the scientific endeavor today, and is no longer possible in that form.

I don't know how you fix that. But it points to why peer review worked well at one time and became some kind of gold standard and why it's failing to be some kind of gold standard today.

I think you have to parse what was going right about the system that created peer review in order to have any hope of parsing how to replace it at scale for a world that's grown vastly larger in terms of sheer population numbers.


Yes, peer review should be an informal process that happens after publication, not a formal one that turns a 40 page paper into a 100 page one and takes six years.

Many fields have developed work arounds to peer review in the form of working papers, preprints and conferences. That this is necessary suggests that peer review be burned to the ground and the ashes sown with salt.

https://www.stat.cmu.edu/~larry/Peer-Review.pdf

> The refereeing process is very noisy, time consuming and arbitrary. We should be disseminating our research as widely as possible. Instead, we let two or three referees stand in between our work and the rest of our field. I think that most people are so used to our system, that they reflexively defend it when it is criticized. The purpose of doing research is to create new knowledge. This knowledge is useless unless it is disseminated. Refereeing is an impediment to dissemination.

...

> Some will argue that refereeing provides quality control. This is an illusion. Plenty of bad papers get published and plenty of good papers get rejected. Many think that the stamp of approval by having a paper accepted by the refereeing process is crucial for maintaining the integrity of the field. This attitude treats a field as if it is a priesthood with a set of infallible, wise elders deciding what is good and what is bad. It is also like a guild, which protects itself by making it harder for outsiders to compete with insiders.


You’re creating a false dichotomy. It’s all about maximizing signal to noise, and expert reviewers are absolutely required (at some point in the process) regardless of whether they’re perfect.


What false dichotomy? If you want to keep journals revert to editorial review. If the editor thinks the paper isn’t garbage they publish it. Sometimes they’re wrong but you don’t get papers published six years after they’re written.

Maximizing signal to noise is a terrible goal to put so much weight on. We should be trying to maximize the growth of knowledge, not to minimize the number of wrong papers, which is what max(s/n) leads to.

Follow the editorial philosophy of Max Planck, who edited Annalen der Physik when it was the pre-eminent journal in physics, during the most productive epoch for physics research ever.

> To shun much more the reproach of having suppressed strange opinions than that of having been too gentle in evaluating them.


> We should be trying to maximize the growth of knowledge, not to minimize the number of wrong papers, which is what max(s/n) leads to.

IMO, filtering out noise is a big part of maximizing the growth of knowledge. I cannot read every paper in my area that goes up on Arxiv, let alone every paper in related areas. That's why it's useful, albeit imperfect, to have gatekeeping peer-review organizations to force experts to look at papers and pick out the ones they think are most useful.

There is a lot of noise on Arxiv.


> That's why it's useful, albeit imperfect, to have gatekeeping peer-review organizations to force experts to look at papers and pick out the ones they think are most useful

Gatekeeping and review do not require anything like the current peer review system. The current peer review system retards the flow of knowledge, or it would if anyone relied on journals to know the state of the field.

Gatekeeping can be done by editorial review. If you really want to, you can have accept/reject only, with no revise-and-resubmit cycle, like Sociological Science. They aim for 30 days from submission to publication.

Refereeing wastes more than $50m a year in value in Economics alone. Burn the system to the ground.

> IZA DP No. 12866: Is Scholarly Refereeing Productive (at the Margin)?

> In economics many articles are subjected to multiple rounds of refereeing at the same journal, which generates time costs of referees alone of at least $50 million. This process leads to remarkably longer publication lags than in other social sciences. We examine whether repeated refereeing produces any benefits, using an experiment at one journal that allows authors to submit under an accept/reject (fast-track or not) or the usual regime. We evaluate the scholarly impacts of articles by their subsequent citation histories, holding constant their sub-fields, authors' demographics and prior citations, and other characteristics. There is no payoff to refereeing beyond the first round and no difference between accept/reject articles and others. This result holds accounting for authors' selectivity into the two regimes, which we model formally to generate an empirical selection equation. This latter is used to provide instrumental estimates of the effect of each regime on scholarly impact.

https://www.iza.org/publications/dp/12866/is-scholarly-refer...


I agree that the multi-year publication timelines for journals are in general silly. I agree that keeping up with a field by reading journals doesn't work. If by editorial review you mean "the editor alone decides", I don't think that's a good replacement.

I think the computer science publication model of a few conferences per year with ~3 month submit-to-decision timelines, based on 3-5 reviews per paper, is pretty good. But it's probably not appropriate for fields with deep contributions that take substantial time to verify, like math.


The ostensible purpose of peer review is filtering: making sure that the signal to noise ratio in published journals remains high enough for the journals to be useful to other scientists.

That logic probably made sense back when journals were the primary means of communicating scientific results. But the Internet has changed the situation drastically. Scientists don't need to wait for a journal to publish things; they just go to arxiv.org. Nor do they need to wait for a journal to filter things through peer review; they have plenty of other ways of judging papers for themselves.

So at this point I don't think we need a "system" at all for what peer review used to accomplish.


I can tell you that this system would make many things even worse. The problem is that so much gets published that it's very hard to follow everything, so peer review is a good first filter to reduce numbers.

If we move to a system where everyone just publishes on e.g. Arxiv, what happens is that people will just follow certain researchers that they deem good (you can already see that happening in some fields). So you have cabals developing who only read each other's articles, and it becomes very hard for new researchers to break in.


> peer review is a good first filter to reduce numbers

This is only true if (a) it filters properly, and (b) scientists actually pay attention to what it filters.

I'm not sure either of those is actually true now (though they might have been true in the past).

> So you have cabals developing who only read each other's articles and it becomes very hard for new researchers to break in

If "cabals" that are hard to break into are an issue, it's not because of lack of peer review; it's because governments are now pretty much the sole source of funding for scientific research, so there is stiff competition for grants and many people will do whatever it takes to elbow out others, including forming cabals.

The way to fix that is to not have governments be the sole (or even the primary) source of funds for scientific research.


Why not a system somewhat like GitHub? Anyone can publish any code they like on GitHub and anyone can submit an issue. It also has a built in reputation system.


I disagree. A better approach would be to formalize the contributions made in a paper so that people can more effectively filter down to their needs.


What other ways are you referring to?


Their own judgment on reading the paper, talking to other scientists they know, going to conferences, watching or participating in online discussions, etc.


This works if your time is infinite.


Not even then.

I used to be on a health list that was ostensibly looking for a "cure" or treatment for cystic fibrosis. They frequently posted links to abstracts of papers and discussed it on list and I had mixed feelings about that. I often didn't understand the medical terminology and it was just a blurb and yadda.

At some point, I talked on the phone with a guy with a PhD that I knew through an entirely unrelated email list. And he told me to not bother to read the abstract if I couldn't get hold of the full paper because it often said something different from what the paper said. I was relieved to hear this and not at all surprised, so I quit reading those abstracts and trying to pretend to participate in those discussions or whatever.

Some years later, a woman on that same list offered to co-author a paper with me and give me credit and yadda. And then she changed her mind and began trashing me on list and unsubscribed.

Her son died of CF (at the age of 17) years and years before, and she was in search of some pill to make it all better, obsessed with finding a cure because of his death. I use a lifestyle-based approach. She was a smoker. I think the implication of my ideas is that it's partly her fault her son died. And I think her guilt over that possibility is why she's obsessed with finding an answer, but it has to be an answer that somehow absolves her of guilt and doesn't in any way suggest it's her fault. It needs to do the opposite for her to be satisfied.

I was homeless at the time that she offered to write a paper with me. Once she kicked me to the curb, I didn't really have any place else to go.

Anyway, regardless of your process, you need some signals of some sort. I relied on the opinion of the guy with the PhD in part because he had a PhD. And I think my life has been better for that.

It's good to have a so-called "bullshit detector" but one of the ways we filter information is by relying on our understanding that some people can be believed even if we don't understand what they are saying because it is over our head.


> our understanding that some people can be believed even if we don't understand what they are saying because it is over our head.

First, while this might apply to lay people trying to evaluate scientific claims, it should not apply to scientists trying to evaluate scientific claims in their own field.

Second, peer review does not even pretend to be the kind of filter you are talking about. The fact that a paper has been peer reviewed is not evidence that it is correct or that its claims should be believed. Indeed, many peer reviewed papers later turn out to be wrong. (For example, all of the papers that were part of the "replication crisis" that got a lot of recent press were peer reviewed.) All peer review signals as a filter is that someone presumed to have relevant expertise thought the paper was worth publishing. I think recent experience has shown that this signal is virtually worthless now, though it might have had value in the past.


My general understanding is that most people do take peer review to be such a filter.

I'm "just a lay person" but I'm also someone solving a real serious scientific problem and I'm dependent on trusting that people with PhD's who answer questions of mine about cell biology or whatever can be trusted because, unlike me, they had the capacity to pursue a PhD. I simply can't. I'm too busy solving a scientific problem like my life depends upon it because it does.

And I'm sure this is a really stupid comment to leave. I've written several drafts and I don't know a good way to engage your comment.

I primarily replied to the comment about "infinite time" because I sort of have "infinite time" in that I'm an unemployed bum with a bit of unearned income who also occasionally tries to earn a few bucks and whose primary focus is on understanding a scientific question so that I don't die and all that.

And as someone with "infinite time" to devote to reading articles and what not because reading those articles and what not is a higher priority than almost anything else since my life depends upon it, I find that I still need filtering mechanisms.

Now that I've dug my grave deeper, I shall endeavor to tear myself away from the computer and try to get a little shut eye.

Have a good evening.


> My general understanding is that most people do take peer review to be such a filter.

If they do, then they're wrong, since peer review explicitly disclaims any indication that a paper is correct or should be believed. And this is supported by the fact that a large fraction of peer-reviewed papers turn out to be wrong.

> I'm also someone solving a real serious scientific problem and I'm dependent on trusting that people with PhD's who answer questions of mine about cell biology or whatever can be trusted because, unlike me, they had the capacity to pursue a PhD.

It depends on what questions you're asking them and what their basis is for their answers. If their basis is solid experimental knowledge of whatever aspect of cell biology you're asking about, then yes, their answers are probably trustworthy. If their basis is speculation about something we don't have much experimental knowledge of, then no, their answers are probably not trustworthy. In either case, you're not going to find that out by looking at whether what they're saying has been peer reviewed. You're going to find out by asking them how they know the things they're telling you and critically evaluating their answers.

> I'm too busy solving a scientific problem

> I sort of have "infinite time"

You can't be "too busy" if you have "infinite time". It's just a question of how you choose to employ your time. It would seem to me to be a better idea to employ it in becoming an expert yourself, so you can critically evaluate what other experts are saying, than to look for a shortcut filter, whether it's peer review or anything else. Asking other experts questions can certainly help with that; but it might be helpful to view that not as looking for answers from them, but as using what they say to help you build your own expertise.


It's just a question of how you choose to employ your time.

I'm going to try one last time to clarify what I meant and then I am done here because this is going all kinds of weird places I never expected.

I'm medically handicapped. I have a very serious and deadly condition and failing to die of it takes all my time.

That fact has prevented me from finishing my BS in Environmental Resource Management, a major chosen to further my career goals. My interest in and need for information on cell biology is driven by the fact that I was diagnosed somewhat late in life with a very serious genetic disorder.

So I wouldn't be interested in pursuing such knowledge if I weren't medically handicapped. I would be a city planner or something, like I wanted to be.

The fact that my medical condition has stomped my life into the ground and made me unemployable is why I have a lot of time on my hands and I choose to prioritize learning knowledge pertinent to addressing my health issues under circumstances where, no, it's simply not possible for me to do something like pursue a PhD in biology.

And I am getting results, so your opinions about how I should spend my time aside, I'm fairly satisfied with the choices I am making (all things considered), though I wish I didn't have this personal challenge. I wish I were healthy and life had gone differently. But it didn't.


I'm very sorry about your condition and that you're having to go through all you are going through.

> it's simply not possible for me to do something like pursue a PhD in biology

Becoming an expert is not the same as getting a credential. It's perfectly possible to become an expert in whatever particular condition you personally have, without getting a PhD in biology, or indeed any relevant degree. Indeed, since you say you are getting results, I suspect you are becoming an expert in your particular condition, whether you planned to or not. My wife has a chronic illness and she has been forced to become an expert in it (with no relevant degrees at all) even though she would much rather have spent the time on other things.

> I am getting results

I'm glad you are; that's the most important thing.


> Could peer review learn anything from the open source community?

I think that's a great question. Imagine papers with pull requests, issues, and additional experiments contributed from anyone with relevant experience. And imagine those experiments being open data and up for third-party verification before publication. That could be powerful.


Yes, and a reputation system built in but still democratic enough that any modern-day "Einstein" could break through.


An article detailing this and similar ideas can be found here, for those interested: https://doi.org/10.12688/f1000research.12037.3


I think we can learn from the OS community, but this requires changing the incentive system of academia. Two ideas:

* Research papers should be living documents that are properly versioned; updates should get reviewed again. This obviously means that incremental improvements to papers and reviewing need to result in something tangible for one's CV. The versioning makes sure one can still point to a specific revision of a paper. Of course, updates should consider the implications of changes carefully (as is the case for software). For example, I might get familiar with a paper over the course of a month, and then use it as the cornerstone of one of my own works. Then I want the paper to be stable and not constantly mess with important formal definitions, et cetera. Otherwise, people won't be able to agree on the implications the paper has for the community.

* It might be better to make reviews and reviewers fully visible and move to a more interactive review process. In practice, "double-blind" review does not really work, because if you're a visible member of your community and submit a paper, most people can guess that it's yours. If we moved from "one-shot" (or: one-shot plus rebuttal) reviews to more interactive, public, non-anonymous discussions, we would get rid of at least some of the social back-channeling that is only available to the bigshots in the community.

Of course, I don't think we will get there quickly.


One issue is that reviewers themselves don't get rated. They also don't have any incentives to do quality reviews. I think the StackOverflow model is the only one I know of that is able to scale this process. It's not without flaws, but its content usability and quality are far higher than any other collaborative filtering model out there. Academic reviews could perhaps be scaled with a similar model. Imagine each paper is a question and each review is an answer. People vote on which answers and questions were useful, and everyone accumulates reputation that in turn can be used for career development. You can see everyone's questions, answers, and votes. People with enough karma get moderator privileges to be able to knock off bad content. People will then have an incentive to do the right thing, improve quality, and avoid favoritism.
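To make the analogy concrete, here is a minimal sketch of what such a reputation scheme could look like. It is purely illustrative: the class names, point values, and moderator threshold are assumptions, not an existing system's rules.

    # Minimal, illustrative sketch of a StackOverflow-style reputation scheme
    # for reviews. Point values and the moderator threshold are made-up numbers.
    from dataclasses import dataclass

    UPVOTE_POINTS = 10          # assumed reward when a review is judged useful
    DOWNVOTE_POINTS = -2        # assumed penalty for an unhelpful review
    MODERATOR_THRESHOLD = 500   # assumed karma needed for moderation rights

    @dataclass
    class Reviewer:
        name: str
        reputation: int = 0

        @property
        def is_moderator(self) -> bool:
            return self.reputation >= MODERATOR_THRESHOLD

    @dataclass
    class Review:
        author: Reviewer
        paper_id: str
        text: str
        score: int = 0

        def vote(self, up: bool) -> None:
            # A vote adjusts both the review's score and the author's reputation,
            # so writing careful reviews pays off over time.
            self.score += 1 if up else -1
            self.author.reputation += UPVOTE_POINTS if up else DOWNVOTE_POINTS

    # Usage: a reviewer whose reviews get upvoted eventually earns moderator rights.
    alice = Reviewer("alice")
    review = Review(alice, paper_id="paper-123", text="Detailed, constructive review.")
    for _ in range(60):
        review.vote(up=True)
    print(alice.reputation, alice.is_moderator)  # 600 True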


A couple of ideas I've been mulling over for a while:

- the quicker but less rigorous way: set things up as a forum like HackerNews; let the crowd-sourced upvote and comment system work its magic, and make the karma/reputation points act as a proxy for the quality and relevance of the article, its authors, as well as the commenters.

- the slower way: let articles "bubble" up from smaller but more specialized workshops toward more generalized and prestigious conferences and journals. Have the technical committee recommend the best articles to those more reputable venues, along with whatever it is the authors need to work on to re-submit the extended version at that "better" venue. This is kind of already the case with workshops or conferences that have an associated Special Issue of a journal; I'm advocating for a generalization of the practice to all venues and more than a 2-level hierarchy. The reputation of the lower venues could be backpropagated by the number of papers they successfully "bubble" up, PageRank style, so that would be the incentive the committees would have in recommending their papers to the other venues.
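For what it's worth, here is a rough sketch of how that PageRank-style backpropagation of venue reputation could be computed. The damping factor, iteration count, and venue names are assumptions for illustration, not part of any existing proposal.

    # Rough sketch: a venue's reputation grows when papers it recommends
    # "bubble up" to venues that themselves have high reputation.
    # Damping factor and iteration scheme are assumptions borrowed from PageRank.

    def venue_reputation(venues, bubbles, damping=0.85, iterations=30):
        """venues: list of venue names.
        bubbles: (source, target) pairs, one per paper that a source venue
        successfully recommended upward to a target venue."""
        rep = {v: 1.0 / len(venues) for v in venues}
        incoming = {v: 0 for v in venues}  # papers each target accepted via bubbling
        for _, target in bubbles:
            incoming[target] += 1
        for _ in range(iterations):
            new_rep = {v: (1 - damping) / len(venues) for v in venues}
            for source, target in bubbles:
                # Credit flows backward: each bubbled paper passes a share of the
                # target venue's reputation to the venue that recommended it.
                new_rep[source] += damping * rep[target] / incoming[target]
            rep = new_rep
        return rep

    # Hypothetical example: two workshops feeding one conference.
    venues = ["workshop_a", "workshop_b", "conference_x"]
    bubbles = [("workshop_a", "conference_x"),
               ("workshop_a", "conference_x"),
               ("workshop_b", "conference_x")]
    print(venue_reputation(venues, bubbles))  # workshop_a ends up ranked highest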


Your option 2 might work if the bubbling process is somehow made informed and fair, but option 1 is an invitation to promote charismatic loudmouths. Using pure populism as the means for seeking truth clearly doesn’t work very well, as the rising tide of beloved trolls in media and politics attests.


Fair point. There's a sweet spot to be found between full-on crowd-sourcing and giving all the power to 2-3 blind reviewers. Maybe limit the participants to be the members of the technical committee? Or maybe expand it to include members of the given conference?


This paper proposes a return to editorial review: https://sci-hub.tw/https://doi.org/10.1007/s11017-012-9233-1


This topic comes up again and again. (Here some references from a quick search: https://news.ycombinator.com/item?id=10531374 https://news.ycombinator.com/item?id=18523847 https://news.ycombinator.com/item?id=8731271 https://news.ycombinator.com/item?id=18595074 https://news.ycombinator.com/item?id=22251079)

But there are not many good solutions.

Most seem to agree that we should move over to a more open system, like OpenReview, and also that all publications should be open for everyone afterwards, and not behind some paywall. But these are only two aspects of the system, and this is less about the peer reviewing itself.

The community already has a trend of publishing more and more directly on ArXiv, just in addition to submitting to a conference. If the conference peer review filters a paper out for some unlucky reason, it's still open for everyone to see. That might already be an improvement over what we had before. But probably it's not really the optimum yet. But what is the optimum?


There are good solutions, but existing publishers like Elsevier like the current situation because it makes them rich (https://www.universityofcalifornia.edu/press-room/uc-termina...). Any existing individual academic is strongly incentivized to work within the rotten system, even though most know its pathologies and want to defect—but one person defecting alone will screw up their own career while likely leaving the system intact. The equilibrium is bad even though the solutions aren't complicated. https://news.ycombinator.com/item?id=23291869


I love this topic of peer review. Feels like arguing about methodologies, software QA & test, learning organizations back in the 90s.

Peer review and the reproducibility crisis are the absolute bleeding edge of the human experience. Science, governance, transparency, accountability, progress, group decision making. All of it.

I encourage all the critics and thought leaders to also consider how to apply these lessons elsewhere.

Everything said about peer review also applies to policy making, product development, news reporting, investigative journalism.

Peter Drucker, way back when, stated that management is a technology. Same applies to peer review and reproducibility.

For whatever reason, this post prompted me to think "sociology of scientific progress". I encourage critics to also factor in the collective human psychology. Covered by so many books, like the classics Structure of Scientific Revolutions and The Diffusion of Innovation. (Sorry, none of the more contemporary titles, a la Predictably Irrational, are jumping into my head at the moment.)


Brooks's initial papers like "Intelligence without representation" and "Elephants don't play Chess", and robots like Genghis and Attila, changed the robotics landscape. It was a leap akin to going from a mainframe to an Apple II.

Brooks was extraordinarily well placed to lead this switch in robotics. He was at MIT, a place with very talented engineers - mechanical, electrical, etc. - who could work on the robots. MIT also had limitless funding and a culture of disruptive research with long payoffs. This might not have been possible at other places like CMU, where robotics has more industrial applications. I remember watching the movies of Attila and Genghis over and over again, downloaded over an FTP connection!

https://www.youtube.com/watch?v=-6piNZBFdfY


How is it that peer review hasn't changed in 30 years? With the number of people affected, has nobody had the time and/or influence to improve the process?

There are no doubt lots of smart people concerned by this...


Science did well before radical ideas could be voted down by peer review.

Would Einstein and Darwin have made it through peer review? I doubt it.

What current day people like that are held up by their "peers"? I fear we'll never know.


> Would Einstein and Darwin have made it through peer review? I doubt it.

What are you talking about? Einstein's first outstanding contributions (in his 1905 Wunderjahr) were all published in Annalen der Physik, a peer-reviewed journal. Even years after becoming a worldwide-known name, he received rejections (which he was not too fond of). Einstein was always part of the conventional publication system of physics.

Darwin's and Wallace's original papers were read before the Linnean Society in London and were published in its Zoology journal. You can read it here: http://darwin-online.org.uk/content/frameset?itemID=F350&vie...

In the history of modern science, let's say from the mid-19th century on, almost the entirety of solid scientific contributions have been published via the regular channels.


Einstein sent his papers to Planck, and Planck, as an editor of Annalen der Physik, decided to publish them in the journal.

Here's what Einstein had to say about the modern peer-review process in which a draft is reviewed by anonymous referees:

> We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it is printed. I see no reason to address the—in any case erroneous—comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.

https://physicstoday.scitation.org/doi/10.1063/1.2117822

On the other hand, Einstein was really into what we might today call post-publication peer review. A significant chunk of Einstein's publications were reviews of other scientists' publications.


No journal had anything like the modern peer review system in 1905. That system is maybe 40 years old. You can think of it as a response to the vastly bigger volume and specialization of papers.

Back then, if the editor thought the paper was decent, it got published.

Perhaps the biggest problem with peer review is that your peers are often also your competitors/rivals. Imagine if each Apple product had to be approved by Google, and vice versa.


> Would Einstein and Darwin have made it through peer review? I doubt it.

Einstein worked in a patent office for years, unable to get a job in academia like he wanted. There were two attempts to prove his Theory of Relativity. They occurred years apart because it involved photographing a total eclipse of the sun; while eclipses typically happen every six months in pairs (one solar and one lunar, though sometimes there are three in a row), most of them are partial eclipses, not total ones. It had to be a total eclipse to prove his theory.

The first attempt to make the photograph failed. There were only two teams and it was during World War I. One was stopped or arrested or something because they were assumed to be spies. The other failed due to rain.

After this failure, Einstein realized there was an error in his math and redid his formula. Had the first attempt at taking photographs succeeded, we might have never heard of the man because the world might have decided he was a quack since his math didn't hold up, so, "obviously," his theory was bunk.

He probably had one and only one shot at this unusual peer review process.

By the time they made the second attempt, it was something like six or eight teams of photographers trying to get a photograph of the total eclipse of the sun. Only one succeeded. I think the rest were again rained out.

Nicely, it was a team that had also tried several years earlier that actually succeeded.

Once there was photographic evidence supporting his theory, he became world famous overnight and then was finally able to get a job in academia, IIRC.

> What current day people like that are held up by their "peers"? I fear we'll never know.

Me. But just saying that is likely to get me mocked. No one is trying to prove I know anything. No one wants to peer review (or otherwise check) my ideas. People mostly just want me to shut up and go away.


> Einstein worked in a patent office for years, unable to get a job in academia like he wanted.

Actually, while the patent office was his day job, he was working on his Ph.D. when he published his classic 1905 papers including the one on Special Relativity. He got the Ph.D. that same year, and got his first academic appointment three years later, in 1908. He didn't start working on General Relativity in earnest until about 1911 (although he had the initial ideas as early as 1907).

> There were two attempts to prove his Theory of Relativity.

More precisely, two attempts to verify one particular classic prediction of General Relativity, that of bending of light by the Sun. (The second, in 1919, btw, is now generally agreed to not have actually been a valid proof: the observations weren't actually good enough to see the effect when you take into account the appropriate error bars. But they were accepted at the time, probably in large part because of the influence of Eddington, who spearheaded the process.)

A second classic prediction, the extra precession of the perihelion of Mercury, was already known in the 19th century, but was a mystery to physicists until General Relativity explained it.

The third classic prediction of General Relativity, gravitational redshift, had to wait until the 1950s before reasonably accurate experiments were done to confirm it.

> Once there was photographic evidence supporting his theory, he became world famous overnight and then was finally able to get a job in academia, IIRC.

As noted above, he actually started working in academia well before the 1919 eclipse expeditions. But the results of those expeditions were indeed what made him world famous.


Einstein published on Brownian motion, the photoelectric effect and relativity in 1905.

He was awarded the Nobel Prize primarily for his work on the photoelectric effect, not relativity. That alone would have made him famous, so I'm not sure what the thread on proving relativity is on about.

https://www.nobelprize.org/prizes/physics/1921/summary/

(His work on relativity was probably based on his patent reviews of European train schedule calculation filings.)

While I'm at it, Maxwell's equations as we know them were written down by other scientists later on. Maxwell wrote 20 formulae in longhand, and a group of scientists, including Oliver Heaviside, put them into the more compact four-equation vector format later on.

https://en.wikipedia.org/wiki/Oliver_Heaviside

An American physicist working as a military advisor updated one of Maxwell's equations fairly recently to correct for a minor mistake discovered during antenna research for the US government.

Also, a PR piece on extending Maxwell's equations to the nanoscale:

https://isn.mit.edu/cheers-maxwell%E2%80%99s-electromagnetis...


> He was awarded the Nobel Prize primarily for his work on the photoelectric effect, not relativity.

Yes, the Nobel committee still thought relativity was too controversial to merit a prize.

> That alone would have made him famous

Yes, two years after he became famous because of the 1919 eclipse expedition.


> I'm not sure what the thread on proving relativity is on about.

I don't see it as being "on about" anything. I think it's worth observing that passing peer review, in itself, does not make a scientific claim correct, and that testing scientific claims experimentally is often much more difficult, time consuming, and technologically challenging than developing them theoretically.


The larger number of the original Maxwell equations has little significance; it is due to the fact that they were written out for the individual components, since vector notation was not yet in use. What is important is that the original Maxwell equations were in integral form, so they are applicable to more general cases than Heaviside's differential equations. Even if the Heaviside equations are what most people know as the Maxwell equations, they are not equivalent to the original Maxwell equations. The Heaviside equations follow from the original integral equations only in certain special cases (no discontinuities, no movement). Heaviside did a good job of simplifying the teaching of Maxwell's equations, but at the same time his approach left many students with a superficial and incomplete understanding.
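To make the distinction concrete, here is Faraday's law written both ways, in standard textbook notation for a fixed surface S with boundary ∂S (illustrative only, not a quote of Maxwell's or Heaviside's original papers):

  \oint_{\partial S} \mathbf{E} \cdot d\boldsymbol{\ell} = -\frac{d}{dt} \int_{S} \mathbf{B} \cdot d\mathbf{A}   \quad \text{(integral form)}

  \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}   \quad \text{(differential, "Heaviside" form)}

The integral form only requires that the circulation and the flux exist, so it still applies across material discontinuities where the spatial derivatives in the differential form are not defined.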


> An American physicist working as a military advisor updated one of Maxwell's equations fairly recently to correct for a minor mistake discovered during antenna research for the US government.

What are you referring to? I'm not aware of any such thing.


There was one article published on this, I'll link to it when it shows up again.

Likely he's one of the few people with access to that much antenna data of various designs and powers.


No, Einstein got his PhD in 1905 and he couldn't find academic jobs, partly because he was Jewish and partly because he had burned bridges with a few professors, which is why he had to work at the patent office. His special relativity paper was published after he got his PhD. At the time, there were very few journals and even fewer scientists publishing papers. You basically mailed in your paper, one editor would check it, and if it was not very bad, it simply got published. The journal Annalen der Physik had a 90-95% acceptance rate in 1905. The journals assumed the role of aggregating as opposed to filtering. During those years the US barely even had any PhD programs, and people were expected to go to Germany to get a PhD in physics. The process only became stricter as more PhD programs started humming, more papers flowed in, more universities popped up, and a journal could only send so many printed pages to its subscribers.

In more recent years, the number of printed pages is no longer the constraint, but the number of oral/poster slots at physical conference venues remains limited while the quantity of papers keeps growing. The number of papers people can read is also limited, which means people look for selection filters to find quality content. There are also too many people trying to publish just to have a badge, as opposed to people who truly want to advance the science. This tension results in paranoia and a lack of trust, and hence a review process in which reviewers assume a paper is guilty until proven innocent.

https://theconversation.com/hate-the-peer-review-process-ein...


> Einstein got his PhD in 1905 and he couldn't find academic jobs, partly because he was Jewish and partly because he had burned bridges with a few professors, which is why he had to work at the patent office.

Einstein started working at the Swiss patent office several years before he got his Ph. D. He could not find an academic job after graduating from the Zurich Polytechnic School in 1900; that's why he ended up at the Swiss patent office.

> His special relativity paper was published after he got his PhD.

They were both in the same year, 1905; the actual publication date of the paper may have been after the date he was awarded his Ph. D., yes.


Have you fact checked yourself?

*On 30 April 1905, Einstein completed his thesis*

https://en.m.wikipedia.org/wiki/Albert_Einstein#First_scient...


How is what I said inconsistent with that?


> After this failure, Einstein realized there was an error in his math and redid his formula. Had the first attempt at taking photographs succeeded, we might have never heard of the man because the world might have decided he was a quack since his math didn't hold up, so, "obviously," his theory was bunk

While the failure of the first expedition might have been a factor, it was certainly not the only one. Einstein did not like a number of things about that version of his theory and probably would have revisited them anyway; his writings during this period show him going back and forth between liking the theory and thinking it's obviously wrong and needs to be patched up.

The key thing that appears to have gotten Einstein on the right path to the final 1915 version of General Relativity was a visit with David Hilbert in Gottingen in the summer of 1915. Ironically, it also got Hilbert started on a path that led to the same final field equation that Einstein found, but by a different route (a much more elegant one mathematically), and which has spawned a long-running dispute about priority [1] (though as far as I can tell, not one that either Einstein or Hilbert ever participated in--AFAIK Hilbert always credited Einstein for the main physical insights that led to General Relativity).

[1] https://en.wikipedia.org/wiki/Relativity_priority_dispute#Ge...


> Would Einstein and Darwin have made it through peer review? I doubt it.

Einstein did make it through peer review; his classic 1905 papers were published in Annalen der Physik, the most prestigious physics journal in the world at that time, after being refereed and accepted.

Darwin didn't technically go through "peer review" since he published his results in a book, not a scientific journal. But when he did so, what we now call scientific journals were extremely rare.


From what I've read the vast majority of Einstein's work was not peer reviewed.

http://michaelnielsen.org/blog/three-myths-about-scientific-...

> How many of Einstein’s 300 plus papers were peer reviewed? According to the physicist and historian of science Daniel Kennefick, it may well be that only a single paper of Einstein’s was ever subject to peer review.


Peer review, in the modern sense, only became common about 40 years ago. In Einstein's time, the journal publisher decided on their own what to publish.

Or so I'm told. I know that when I went through a pretty serious physics education in the early 80s, I never heard about peer review.


Einstein was never published in a peer reviewed journal. Annalen der Physik had editorial review. The only time one of his articles was sent out for peer review he withdrew it from the journal in question.

> For instance, the Annalen der Physik, in which Einstein published his four famous papers in 1905, did not subject those papers to the same review process. The journal had a remarkably high acceptance rate (of about 90-95%). The identifiable editors were making the final decisions about what to publish. It is the storied editor Max Planck who described his editorial philosophy as:

> To shun much more the reproach of having suppressed strange opinions than that of having been too gentle in evaluating them.

...

> We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorised you to show it to specialists before it is printed. I see no reason to address the – in any case erroneous – comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.

— Einstein

https://theconversation.com/hate-the-peer-review-process-ein...


In Einstein's time there was very little peer review.

There is a famous anecdote: the first time one of his papers was actually peer-reviewed (perhaps in the '50s) and sent back to him with instructions on what to change, he wrote back something along the following lines:

"there must be a mistake there, I sent in this paper for publication, not to get an opinion on how to improve it"


Editorial review is not peer review, and it's arguably the system we should be moving towards. Preprint servers mean you could appoint yourself as an editor and more or less DIY. The problem is the way the journal system is set up: nobody wants to pay for this, and it would be career suicide to do it for free.


Darwin sorta did go through peer review, in one instance, giving us the immortal peer reviewer revise-and-resubmit suggestion that Darwin rewrite On The Origin of Species to focus on pigeon fancying, because "Every body is interested in pigeons."

https://en.wikipedia.org/wiki/Whitwell_Elwin#Works


Peer review is something I've been thinking about a lot lately as an outsider. I've never gone through a formal peer review process or worked in academia, but it's pretty obvious that the process has significant problems.

However, the problem that peer reviews are trying to solve (conferring legitimacy on a paper by the scientific community) is so important that we can't give up on it. Sure, pre-print servers mean science can move faster, but it also means anything can be "science." If we normalize publishing papers as preprints, there's very little stopping conspiracy theorists from publishing a paper and having it look plausible.

The sudden rush of media coverage for COVID research has really exacerbated the problem. Journalists should cover preprint studies responsibly, but frankly most don't, and who can blame them when even the scientific community is struggling to review the papers itself?

I volunteer with a group[0] that's trying to address this problem in a small way by creating visual summaries/reviews of popular preprints for mainstream readers. This is a side project, so the review process is just a shared Google doc for each paper, but what I'd like to see:

As a reviewer:

  * View diffs between paper revisions
  * Comment on papers inline, like a pull request
  * +1/-1 a submission, or just leave questions without a decision
  * Gain "karma" for good reviews to build my reputation in the community (like Stack OverFlow)
As a paper author:

  * Reply to comments to understand and explain context
  * Post revisions
As a reader:

  * See visual summaries of papers, with inlined reviews (like Research Explained)
  * Vote on both studies and reviews to surface the good ones
  * Browse profiles of authors and reviewers to understand their qualifications/contributions
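To make the wish list above concrete, here is a minimal sketch of the data model such a platform might use. All type and field names are hypothetical; this is not an existing system, just one way the pieces (revisions, inline comments, votes, karma) could fit together:

  // Hypothetical data model for a preprint review platform (illustrative only).
  interface Revision {
    id: string;
    submittedAt: Date;
    documentUrl: string;   // each revision keeps its own document so diffs can be computed
  }

  interface InlineComment {
    reviewerId: string;
    revisionId: string;
    location: string;      // e.g. a section/paragraph reference or character range
    body: string;
    authorReply?: string;  // authors can reply to explain context
  }

  interface Review {
    reviewerId: string;
    revisionId: string;
    vote: 1 | 0 | -1;      // 0 = questions only, no accept/reject decision
    comments: InlineComment[];
    karmaEarned: number;   // reputation for good reviews, Stack Overflow-style
  }

  interface Paper {
    id: string;
    authorIds: string[];
    revisions: Revision[]; // kept in order, never overwritten, to support diffs
    reviews: Review[];
    summaryUrl?: string;   // visual summary for mainstream readers
    readerVotes: number;   // readers vote to surface good studies and reviews
  }

Keeping every revision rather than overwriting the paper is what makes the diff and inline-comment features possible; everything else hangs off a specific revision.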
I do think https://distill.pub is on the right track. Their papers are significantly more visual/readable than most, and they require contributors to open a pull request against the distill repo. However, their process isn't approachable for non-technical fields.

[0]: https://www.researchexplained.org


The problem also works in reverse. Probably the biggest example is Plate Tectonics - this idea took decades longer to be accepted than it needed to because the generation of expert geologists who were the gatekeepers for the journals refused to consider it seriously, and the guy who came up with the idea wasn't a geologist [0].

Scientists are just humans, after all. Subject to human flaws. And the scientific method works "in the end", not immediately.

[0] - https://en.wikipedia.org/wiki/Continental_drift


I have only the slightest introduction to actual peer review (pre-qual computer science PhD student) but I have to say that my views have shifted dramatically after moving from “outsider” to “insider”.

The problems you mention seem more about how the public consumes and interprets scientific research, rather than how scientific communities share knowledge and keep their fields moving forward. These are two different challenges and it seems unreasonable to expect the peer review process to solve both of them.



