Authors’ names have ‘astonishing’ influence on peer reviewers: study (nature.com)
437 points by respinal on Oct 11, 2022 | 307 comments



When I was an undergrad in college, I helped design a study on the role of gender perception in expertise.

We had a piece of text that the subjects (undergrads) would read and rate for expertise. Everyone got the same text, but we randomized whether the author's name was a commonly male name, a commonly female name, or initials.

We also randomized if they'd watch a clip of men's sports beforehand (men's basketball), women's sports beforehand (women's basketball), or no sports. And finally we randomized whether the person giving the subjects instructions would be a young woman (one of my classmates) or a young man (me).

Our small study showed what you'd expect: students, both men and women, rated male writers more highly than female writers, and initials fell right in the middle, though those responses tended to be more like the responses for male names.

The tester's gender made no difference that we found.

The sports thing made a measurable difference, but it didn't reverse the skew.

My 18/19 year old self was somewhat skeptical we'd find a difference, but I was totally wrong. It taught me a lot about bias and perception, which this study also shows.


Another interesting bias along those lines: https://en.m.wikipedia.org/wiki/Women-are-wonderful_effect

I imagine these biases swing in all sorts of directions depending on the context. Some are intuitive, many are not.


I'm not a psychologist (I have nothing but that BA in the field), but if I had to make a guess, my guess is that "wonderful" is not the same as "competent".

Even at the time, there were studies that showed traits associated with women were rated more positively by men and women than traits associated with men.

The way you study this is you'd give a list of words:

gentle, caring, assertive, stubborn, aggressive, loving

(ideally you'd randomize the word order too)

And then you'd have a group of subjects rate them as more associated with men or women on a scale.

Ideally you'd get a big sample and replicate this study.

Then either in the same study at a different time, or another study, you'd take those same words and you'd ask your subjects to rate them as positive or negative.

That's where you see effects like the one mentioned in the wikipedia article.
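If you wanted to sketch that analysis in code, it could look something like this (the words are the ones above; the ratings are simulated, purely to illustrate the shape of the analysis, not real data):

    import random
    import statistics

    # Toy sketch of the two-phase design described above, on simulated data.
    # The words come from the comment; the ratings are invented.
    words = ["gentle", "caring", "assertive", "stubborn", "aggressive", "loving"]

    def collect_ratings():
        order = random.sample(words, len(words))  # randomize presentation order per subject
        # A real study would record each subject's answers; here we just fake a 1-7 rating.
        return {w: random.uniform(1, 7) for w in order}

    # Phase 1: 1 = "strongly associated with men" ... 7 = "strongly associated with women"
    gender_assoc = collect_ratings()
    # Phase 2 (same subjects later, or a different sample): 1 = "very negative" ... 7 = "very positive"
    valence = collect_ratings()

    # A "women-are-wonderful"-style effect would show up as a positive correlation
    # between female-association and positive valence across the word list.
    xs = [gender_assoc[w] for w in words]
    ys = [valence[w] for w in words]
    print(statistics.correlation(xs, ys))  # Python 3.10+; near zero here since the ratings are random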

That kind of thing's been done a bunch of times.

But studies on gender and competency have been done too, and at the time they showed the same pattern as we found.

I say "At the time" because this is >20 years ago.


>my guess is that "wonderful" is not the same as "competent".

Right. They confer two different sets of benefits that help or hurt someone in different ways. Being seen as competent when standing trial can work against you, while being seen as wonderful will reduce the risk of a conviction and reduce the sentence if you are convicted.

We have identified the "competent" bias and are taking steps to correct it, but we need to do the same with the "wonderful" bias in other systems. For starters, we need to recognize how strong that bias is in certain fields. For one example, there are specific crimes that people would bet are extremely gendered in nature, and the crime statistics show they would be right. But when you interview the population at large and query victims, including those who never went to the police, who were turned away by the police, or (even worse) who couldn't legally be victims because of how biased even the laws are, the gender component goes away. The rates of men victimized by women and women victimized by men are nearly a 50/50 ratio (I think 49.8 to 50.2).

Even the extent of the research measuring the impact of the "wonderful" effect is lacking compared to the research measuring the "competent" effect (which is itself likely a result of the "wonderful" bias).



Not GP, but:

From your wikipedia link

> There are two distinct numbers regarding the pay gap: non-adjusted versus adjusted pay gap. The latter typically takes into account differences in hours worked, occupations chosen, education and job experience.[1] In the United States, for example, the non-adjusted average woman's annual salary is 79% of the average man's salary, compared to 95% for the adjusted average salary.[2][3][4][5]

The remaining 5% could be from the "women are wonderful" positive attributes not being the narrow selection of ones that are highly sought after in well-remunerated jobs, or (spitballing here) from psychosocial factors (expectations of a pay gap driving negotiation behavior, or something).


Not needed, because "women are wonderful" is to some degree mostly orthogonal to "men are more professional / deserve more / ..." The wonderful, good attributes are not what's needed, and to some degree even run counter to "high performance" (though it depends a lot on where you look, sure!).

Also, the pay gap likely has something to do with the current bias towards men as well? I wonder if it would be the same if 90% of the people in these deciding positions were women.

In the end these are all my assumptions. What I actually want to say is: I don't think those two contradict or relate much at all, which GP already put well with "biases swing in all sorts of directions depending on the context".


They're two related but ultimately very different phenomena, they don't necessarily need reconciliation.


good point.


That reminds me of a bias study where the researchers were investigating the effects of stereotypes.

They told one group of Asian women that “women are worse at math than men” and another group of Asian women that “Asians are better at math than non-Asians”. Both are common stereotypes.

They then measured how well the two groups did on subsequent math exercises.

Interestingly, they found that the latter group (positive stereotype) performed better than those in the former group (negative stereotype).

As I recall, other research found similar results with Black males and golf scores when told “white people are better at golf” as opposed to “black people are better athletes”.

Not only do stereotypes influence our perceptions of “the other”, but they influence the performance of the other.


That’s called “stereotype threat”, and it’s been caught up in the replication crisis. In short, the effect is hard to reproduce and tends to be small. It’s been known for some time now, here’s an article from 2015: https://www.psychologytoday.com/us/blog/rabble-rouser/201512...

I wish people would stop bringing up studies that don’t reproduce, they’re no better than anecdotes.


The problem with that is, at least in my experience, that you read such things once from a reputable source and never check it again.


When I was studying undergrad psych, Psychology Today was falling into disrepute.

Are they still a reputable source, today?


Psychology Today is not great in general, but the article I linked to is written by a social psychology PhD (Lee Jussim).


Ok, good to know, thanks!

Setting this aside for a deeper read, but it appears at first glance that the concerns revolve around statistical technique rather than methodological soundness?


Isn't that one of those clickfarm blogs?


It can't be "still reputable" after "falling into disrepute".


“was falling” in the imperfect tense.


The experiment you discuss is called "stereotype threat" and it belongs to the huge set of psychological results that cannot be replicated. [1] It seems that there is a bias against publishing studies with null results, which skews the overall picture both among the researchers and in the media.

[1] https://www.tandfonline.com/doi/full/10.1080/23743603.2018.1...


Interesting, good to know, thanks.


In other words, when society attaches a label to people, those people start behaving accordingly.

”You call me an addict and refuse prescription? Yeah sure whatever doc, I’ll just buy that fentanyl off the street since I’m already an addict.”

We need to end prohibition.


> Our small study showed what you'd expect: students, both men and women, rated male writers more highly than female writers, and initials fell right in the middle, though those responses tended to be more like the responses for male names.

How much more? That's really important.


We didn't have a huge study, <40 participants, so whatever we found wasn't publishable, but I remember the results surprised me.

At the time, I felt like "sexism is dead", and the university I was at was two-thirds women to one-third men, so I figured we'd never find a bias.

As a young man (18/19) I learned to question my assumptions. I thought "It's the 90s, sexism is dead.", but it just wasn't true.


Yes, but also, is that really "what you'd expect"? Why?

Most things I read, I have no idea of the sex of the author. Do people even look at author names before they start reading (online, say)? And even then, they can be noms de plume, or non-gendered names (sometimes surprisingly).


We announced the name of the author. The script went something like "We're going to ask you to read this piece, written by J. Smith.".

During debriefing, we would tell the subjects what we were actually looking for (perceptions based on gender) and most reported not remembering/caring about the gender of the author, but nonetheless the results were clear that there was a gender bias, regardless of whether they reported remembering or caring about the gender of the author.


Sorry, but there are several things problematic about that study, if your description is accurate.

Just one of the issues is the specific perception regarding that particular sport (basketball) and relative perceptions between sex differentiated similar sports (men’s and women’s basketball are not the same sport, albeit similar). A sport should have been chosen where there are as few differentiating perceptions as possible, which is nearly impossible, because men are inherently more competent at sport due to physiological realities. It bakes in a bias towards men, which is what you likely actually confirmed with the study you described.

Introducing sports alone essentially corrupted that research and likely biased the responses for relative innate competency of males in sports.

For example, ignoring its own challenges, what could have been done is to show video of a sport that is relatively broadly and positively evaluated (not basketball), plus an activity built around relatively broadly and positively evaluated, female-clustered competencies, e.g., effective child rearing, communicating effectively, conflict resolution, creative outputs, or even beauty pageants, along with the inverse, i.e., poor performance by men in the same sport and poor performance by women in whatever activity is chosen, e.g., screaming and yelling at poorly behaved children.

It is in the past, and you are by no means the only one who has long engaged in this type of poorly executed "research" that merely confirms researcher biases and thereby often does immense damage to society, but maybe my illustration of a few of the several issues with what you described will cause some changes in thinking.

There is a real hidden epidemic of not only single-order thinking, but what really should be called negative-order thinking in research, where research is not only not finding the truth, but even doing damage through confidence in false findings.

I say these things with an expensive relevant background, in the trenches of bad and … frankly … destructive research, if you will.


Why are you ignoring the stronger non-sports-related results that PP reported? Your bias seems stronger than the bias you are complaining about.


If it is known that most discoveries are made by males it is reasonable to give larger weight to new male-done research, if the only thing that is known about the researchers is their gender and if the reader is not enough of an expert on the subject matter to treat it 100% on its own merit.

I don't see this "bias" as inefficient or counter-productive. It is just an artifact of the way you designed your flawed experiment.

To find any real bias you would have to assure the subjects that the researchers were equally accomplished.


Even if one sex were 10x as likely as the other to produce good research, it would still be sexist bias to judge research on the basis of the author's sex. Fairness and meritocracy mean everyone gets a chance; no one should be stopped from succeeding because they belong to a low-performing group.

I think it's safe to say that this study can be treated as an anecdote because we don't know the effect size or exact methodology and it was never peer-reviewed. And I'd agree with your point if we were talking about the Nobel prize study, which evaluates people as individuals. But arguing that 'real bias' is bias that doesn't come from empirical evidence doesn't make a whole lot of sense- all bias comes from empirical evidence of varying quality.


> To find any real bias you would have to assure the subjects that the researchers were equally accomplished.

What do you mean? I think you didn't understand the experiment. Subjects were shown the same text with, at random, a male name, female name, or initials. These names were not real researchers, nor familiar to the subjects.

> If it is known that most discoveries are made by males it is reasonable to give larger weight to new male-done research

I'm sure you can see the two different fallacies in this short sentence:

- There have been more male researchers than female, therefore more "discoveries" were made by men than women. No shit. This does not mean male researchers are better than female, obviously.

- There are more men being researchers therefore men's research should be given more weight therefore there are more men researchers therefore...


You will find it beneficial to learn what a "prior probability distribution" is.


It's actually you who doesn't understand probabilities. If a woman is twice as likely to make a discovery but women are only 5% of researchers, then it's both true that:

1. most discoveries are made by males

2. it is reasonable to give larger weight to new female-done research

contradicting your point.
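To make the base rates concrete, here's that hypothetical worked out (the 5% and 2x figures are just the made-up numbers from above):

    # Made-up numbers from above: 5% of researchers are women, and each woman
    # is twice as likely as each man to make a discovery.
    frac_women, frac_men = 0.05, 0.95
    rate_women, rate_men = 2.0, 1.0  # discoveries per researcher, arbitrary units

    by_women = frac_women * rate_women  # 0.10
    by_men = frac_men * rate_men        # 0.95

    share_by_men = by_men / (by_men + by_women)
    print(f"share of discoveries made by men: {share_by_men:.1%}")  # ~90.5%

    # Yet the expected discovery rate of a randomly chosen woman-authored paper
    # is still higher, so the reasonable prior would favor the female name here.
    print(rate_women > rate_men)  # True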


But you pulled those numbers out of thin air. The fact of the matter is that if I pick one research paper at random out of all the papers written by men, and another research paper at random out of all the papers written by women, the former will be of higher average quality.


Did you just pull that out of thin air?


You will find it beneficial to learn about "selection bias".


> If it is known that most discoveries are made by males it is reasonable to give larger weight to new male-done research

No it isn't. Don't you see this is just circular reasoning?


An experiment like OP's would 100% show bias. There weren't multiple researchers, each reader read the exact same paper, but with a different gendered (or non-gendered) name as the author.


the sports thing seems like a weird addition. did the text have some relevance to sport? if not it seems like the kind of thing that would skew results just by reminding the participants they’re in a weird study


Would be interesting to hear from OP, but I assumed it was a reminder to check one's stereotypes -- e.g. 'remember women can be good at traditionally male pursuits' (would work for some sports only).


It wasn't anything so complex as that.

The simpler version of the study had been done before, and the class I was in was a study design class. Our class assignment was to design, execute, and analyze a study.

We were simply combining two existing studies into a new study.

It bears mentioning- this was ~1996/1997 (I don't remember), I was a freshman, it was a single credit supplement, and we never published the results.


Can you elaborate on the sports thing? What effect did showing the sports have, and do you have a theory on the mechanism?


This was a really long time ago, but if I remember, we read a study about how sports affected perception. In the other study they did the male sport vs female sport thing, but not the name part, so we were just combining two previous studies.


What p value did you get?


From a one credit course I did in 1997, I don't remember :)


It’s one thing to observe a pattern in a certain subset of people. However, why do you think there is this apparent bias or perception?

If there is a bias, is it good, bad, or neutral? Are there biases in other directions?


The example described is the textbook definition of bias. Given the exact same text, the participants rate it higher if a male name is attached.


> If there is a bias, is it good, bad, or neutral? Are there biases in other directions?

Bias isn't inherently good or bad; "good" and "bad" are normative judgements of distance from rationality.

Some biases are probably evolutionarily advantageous to the individual but harmful to society or others.

Sticking with the group is great, except when it isn’t, for example.

Or, a “gut check” is great, but you can’t always make decisions based on your gut:

https://en.m.wikipedia.org/wiki/Affect_heuristic


Maybe someone in your field would know: is there any actual difference in the reliability/reproducibility of research done by men versus women? Maybe you are simply measuring a real phenomenon, but in a roundabout way?


The OP mentioned it is the same text, which is rated differently depending on the gender of the author.


This was one of the few factors we kept the same- same text.

I think if it had been a larger study (not a one credit class run by 4 undergrads but an actual study with funding) we might have had multiple texts and randomized them, but we had one text- I don't even remember what the topic was.


As a former researcher with multiple high-impact publications I can 100% confirm it:

- when I was a researcher in Italy at an unknown lab, all of our articles were generally heavily scrutinized and went through years of reviews before publication

- when I worked in Michael Grätzel[1]'s laboratory, things got published much more easily in higher-impact journals with less scrutiny. Not only did most of the publications not really add much to scientific knowledge, they didn't even meet the very high standards required when I was in Italy

Why is this bad? You get more funding the more you publish... So labs that publish more easily due to the names involved get much more money... which they can use to publish even more and get even more money... which they can use to publish even more...

There are excellent scientists everywhere, really. But funding, fame and politics are extremely asymmetric in academia.

[1]https://en.wikipedia.org/wiki/Michael_Gr%C3%A4tzel


I don't understand this. Don't you have blind peer review?

In my discipline, most journals even have double blind peer review. I thought that's the standard.


> Don't you have blind peer review?

It's fundamentally impossible, because when you publish, the topics are going to be, in one way or another, niche enough that you know all of the people working on them, and conferences make those circles even more public.

Say that you are studying, e.g., "helical peptides". On one side that's a vague and broad term; on the other, all the people who work on specific aspects of helical peptides (say, using them to bind surfaces that are very different in polarization, e.g. batteries) know each other, so blind review would add nothing, really. Even when it is blind, you likely know who's reviewing it most of the time.

There are topics that are of course wider.

To make a comparison with programming: imagine you're involved in an open source community with a language that is niche enough that you know most of the people involved in it (it is common for many lisp dialects).

You could likely be given a program in that language and figure out who wrote it from their interests, programming or API design style, use of static types, some kind of linting that you know only a handful of people use, background, etc.

The huge difference is that in science the circles are much smaller: you know all your peers are in academia, you know which of them has specific instruments (not everyone has the same lab equipment) and which doesn't, and you may know that some of your peers have their own niches and "main branches" of research, so you figure these things out much more easily. Also, in those circles you know who is involved in your "code reviews", so if you know that A and B aren't reviewing it (they would have told you), it can only be C, D, or E, and the feedback requesting this kind of experiment could only have come from C...


But this is contradictory. Can you explain?

On the one hand are famous scientists who have wide influence, on the other hand are small academic circles working on obscure, niche topics where everyone knows each other so well that blind review is impractical.

How can one scientist be both so renowned that their name tilts the balance of grant money unfairly away from other less-known researchers, and at the same time have research so specific that only a very few others can even comment on their work?


It is likely a relative matter. "Famous scientists" still tend to be famous within a rather small subset of even other scientists who are in a similar niche, no matter how large that niche is. It is the intensity of engagement of that community that likely makes it nearly impossible not to tell exactly who wrote something, simply from the writing style.

An analogy may be how heavy readers can immediately tell which author in their particular topic area they are reading, based on a short excerpt. It is what makes alternative noms de plume almost impossible, especially now with systematic and documenting AI.


I moved from CS to an experimental field where there is no blind review, and I don't buy these arguments. Most experimental fields are sufficiently large that when you submit an article to e.g. Nature, reviewers don't necessarily know who you are.

Even some Ivy League universities are now requiring blinded applications for faculty positions, and they have double-blind reviews to avoid bias, e.g. hiring a graduate from Oxford instead of from a no-name university.


I can name a dozen people who, for anyone in my field, are extremely famous - to the point that if you meet and talk to them at a conference, it'll be the talk of the lab for weeks afterwards, and everyone will gush about how cool it is that you got to meet them and talk to them.

If I walk two corridors over - still in my department, but just a slightly different aspect of the field - they won't even recognize the name.


>It's fundamentally impossible, because when you publish, the topics are going to be, in one way or another, niche enough that you know all of the people working on them

Even if you know the people in the field and you can guess the authors, double blind does not hurt. Actually, I don’t see any downside of double blind.

Actually, names are very often used to keep you out of the field if you are not part of the usual family…


You can't see any downside of double blind?

There are many systems we use which provide anonymity, and they always result in some form of abuse: phone numbers (scam calls), the Internet (cyber harassment, scams, viruses), cryptocurrency (all the scams). In any system where humans can gain an advantage from abusing anonymity, they do so.

I'm not an academic so maybe there are already safe guards, but from my understanding people already cut any research into as many small papers as they can. Truly double blind peer review would likely encourage this. Maybe it would also further encourage people to steal research or peer review outside of their expertise.

The benefits may outweigh the negatives, but I'm sure people will find a way to abuse it too, consciously or not.


I don't understand what you're trying to insinuate and how negative effects can arise. We have double blind review in all major journals.

1. If a reviewer recognizes the author without declining the review, the result cannot be worse than without blind review. A difference would only be if the names of the reviewers are also not anonymous, but this creates much worse problems and would seriously jeopardize the review process.

2. If a reviewer doesn't recognize a dupe or paper very similar to one already published, then the reviewer is not competent and the journal has a problem with the reviewer pool or reviewer selection. That is a problem in any case, and is the reason why bad quality journals exist.

3. If the reviewer is incompetent, unfair or even insulting, then the author will complain to the editorial office who will then investigate and possibly get a third reviewer. The editor in chief or area editor can see the reviewer replies, just not the names of the reviewers and authors.

4. Your point about stealing research is not an issue. The publication is not anonymous; theft will be discovered very quickly by the scientific community (which will discover it more likely than two reviewers anyway). However, I'm pretty sure many top journals also use anti-plagiarism software.

I really don't see how double blind review could be abused more than non-blind review.


Stealing research is not a problem specific to double-blinded reviewing, but it is a problem.

Two examples from last year: * https://www.google.com/url?sa=t&source=web&rct=j&url=https:/... * https://www.universitetsavisa.no/etikk-forskningsetikk-forsk...


>Truly double blind peer review would likely encourage this. Maybe it would also further encourage people to steal research or peer review outside of their expertise.

How so?

For smaller papers, if the papers still stand up to review even when the research has been cut up, that seems a neutral effect at worst. It might even be seen as a benefit, as each paper is more focused.

As for stealing results, the names will still be attached before printing is done. Only the review process will be double blind, so any ability to detect stolen research remains.


If you are worried about "stealing research", you have no right to public grant funding.


Even with blind reviews, many people cite their prior work extensively, so it can be hard to truly obscure who the paper authors are.


Mathematics doesn't have double-blind peer review, unfortunately. It would help in a large number of cases, even if it falls short in some (for example, if you do experiments on well-known software which only one lab works on, then double blind will have a limited impact).


As the linked article points out, practices can vary widely across fields (and in fields where preprints are available, maintaining author anonymity in a double-blind setting furthermore relies on reviewers not having come across a preprint of the paper being reviewed, which in turn can easily happen in smaller research communities).


Well in various CS fields people preprint on arXiv these days, so it is easy for reviewers to unblind themselves. Sometimes accidentally, because preprint timing and subsequent Twitter advertising coincide so well with conference deadlines...


Why doesn't arxiv allow name-blind preprint posting?


That would be a good idea, and then have the names revealed in a future version like the way the paper itself can be updated. I don't think it is possible though. If it is, it certainly isn't used.


Without peer review, publications without names would be untrusted and untrustworthy, and would probably represent a security issue


The main point of arXiv is not to substitute for actual publication. Some people use it that way, which could still occur. But most intend to submit their work for review to a venue with a supposed double blind process. The use of arXiv is for flag planting and ease of open access.

Often the arXiv upload will be right around the time of a conference deadline in fact. For these people I see no problem with an OpenReview-esque setup where the arXiv page remains anonymous until the authors opt to reveal themselves.

If people don't care for true double blind review then they should stop pretending they do. Conferences like ICLR and NeurIPS are far from blind. It would be one thing if it were just a matter of trusting reviewers to not Google the paper title, but it goes beyond that. There are currently no restrictions on the use of social media to advertise works under review nor on the timing of arXiv uploads, which can trigger alerts to those in the field right around review time.


You can easily find the authors based on the references they use and the topic.


Sure but the idea is that you don't try to find out who the author is, and if you recognize the author anyway, you decline the review. I review regularly and that's what I do whenever I recognize an author.


Don't you recognise authors on topics you're most competent to review on?


Not very often in a way that causes me to decline to review. What happens sometimes is that you have a suspicion but it's not definite. It could be one out of a dozen authors, or a newcomer you've never heard of. I'm in the humanities, however, things probably look different in the natural sciences.


Not everyone seems to have your ethics.


"double blind" is not close to achieved, given differences in writing style, paper structure, graphics, choice of citations, and and and...


What is your discipline? I'm in physics and double blind peer review is not the norm.


Heard from colleagues working in academia that their profs were absolutely strict about the layout, the way graphs look, the colors which needed to be used, etc. All so reviewers could decode who's actually behind a paper.


Not every conference or journal is blinded, no.


I suspect the mechanism goes something like this:

Reviewers prioritize their time above all, and after that correctness, leaving novelty in last place. If a paper comes from an unknown lab, it tends to get greater scrutiny, because a famous name is a subconscious stand-in for correctness. You spend less time reviewing famous authors because you think they are less likely to have bugs.


Reviewing a paper is effectively unrewarded: you need some kind of reviewing service on your CV, but the amount and quality is barely measured, let alone considered for career progression.

Reviewing a very good paper is quick and easy ("LGTM"). Really terrible papers are not too bad if they're obviously awful, but reviewing something subtly flawed takes a lot of work: you need to identify the flaws and describe them in a way that's compelling enough to convince the authors--or at least the editors. Ideally, you'll also explain how to address them, which is more work now and down the road when you review the authors' often-grudging implementation of your suggestion.

In such a world, people may be increasingly reluctant to review papers without some indication of their quality (for which name is a rough proxy). My solution to this is to somehow make reviewing more valued. It's an important part of science and deserves more than a checkbox.


I think this is the best point here.

Reviewing is hard work with very few rewards - and you have to decide at the start whether it is worth your time to embark on it. Ideally we would like to learn something from the paper.

I and most people here would most likely strongly prefer to review a paper written by someone good at what they do.

In reality most papers are pretty bad - thus when we prefer reviewing papers written by Nobel prize winners we are not "biased" and "humbly bowing" before their greatness - we are just trying to make it worth our while.


I think "aligned with reviewer's biases" is more important than correctness or novelty.


Double-anonymous/blind reviewing is completely standard in most areas of computer science, e.g. networking (SIGCOMM, NSDI, IMC, HotNets, MobiCom, MobiSys), systems (OSDI, SOSP, USENIX ATC, HotOS), security (Usenix Security, S&P), machine learning (NeurIPS and ICML), graphics and HCI (SIGGRAPH and CHI), and at least some top-tier theory conferences (like FOCS).

So I think most computer scientists would agree with the article's conclusion that double-anonymous reviewing, while flawed, is better than the alternatives. I don't think I've done a non-anonymous submission in 10 years, and as a reviewer, usually I don't have a strong guess about who wrote the paper (and when I think I know, often I turn out to be wrong). It's a little annoying that this news article in Nature Magazine ignores the longstanding widespread prevalence of this practice in a conference-driven (but... we like to think important) academic discipline. :-)

But: (1) I don't think the big benefit of double-anonymous reviewing is that a lousy paper from a Nobel laureate (or, from CMU/Berkeley/MIT/Stanford) isn't let in unfairly. To me the big benefit seems to be that reviewers have to review every paper as if it might be from their friends or a famous person, and consequently have to review each paper with due care and try sincerely to understand its contribution, which maybe they wouldn't do if they knew it's from some random place/author they've never heard of or don't think highly of. I think the way it affects judgment may be more in equalizing time/effort spent to read and understand a paper (and the generosity you give a paper because "maybe" it was written by your friend or somebody you respect), rather than a straight-up bias towards liking whatever the faculty at a famous university are writing about this year.

(2) While I do think it's true, and a good thing, that double-anonymous reviewing helps "marginalized groups of authors who often struggle to have their work see the world" as the lead researcher says, we should probably acknowledge that authors are not the only beneficiaries of a scientific publication. The interests of the reader matter too -- the journal or conference has some duty to serve them. On the margin, maybe some readers would be more interested to learn what Albert Einstein is thinking about these days, or would like to see a well-balanced conference program that includes a good talk by a known-provocative speaker, instead of one more random (but adequate!) paper from a nobody. I'm not saying we should give a huge weight to this -- it's fine to make people eat their vegetables, but, I don't think we should act like scientific publication is only to give authors a line on their CV and the readers' preference is 100% irrelevant. A scientific journal shouldn't exclusively serve the authors. (Other kinds of media care way too much about what the reader wants, e.g. Facebook giving you whatever it thinks will keep you clicking things on Facebook, but there is probably a happy medium somewhere.)

(3) The challenging frontier may be in grant submission and reviewing, where proposers are typically not anonymous to the reviewers, which surely leads to some biases. I have heard about government programs where they did use double-anonymous reviewing and it seemed weird to me. (Probably this is a situation where track record should matter, yet trying to summarize your own track record while remaining effectively anonymous seems really hard...)


In my experience, at least in the field of solid-state physics, double-blind review is quite rare. It would also be challenging to implement, since it's often very clear which group submitted a paper. The community is not that big, everyone knows everyone, samples and techniques are quite particular to each group, and a paper typically cites a lot of previous work to avoid large repetitions. It's not uncommon to correctly guess the reviewers, either.


The point is that there are no downsides to double-blind peer review, and it is hence possible that with double-blind peer review, less well-known people might get more journal articles published. This might lead to more diverse research techniques being used, since you don't need a big name to publish anymore. And then, in a few years, you might find it hard to guess the groups from the techniques, because a single group might have multiple different techniques.


> authors are not the only beneficiaries of a scientific publication. The interests of the reader matter too -- the journal or conference has some duty to serve them.

> I don't think we should act like scientific publication is only to give authors a line on their CV and the readers' preference is 100% irrelevant

In many fields, the reader can go and read what they want in arXiv or a similar repository. And in the fields where this is not the case, it should be. The elite authors that you mention, in particular, shouldn't have any problem to get their papers read by linking them in social networks, starting a blog, etc.

While this wasn't the case 50 years ago, right now almost no one reads journals from front to back, people just search for individual papers, and the main purpose of the peer-review process of conferences and journals is basically gatekeeping and providing some signal for career evaluation, i.e., to "give authors a line on their CV". Thus, I don't think there is any reason to judge anything but the paper contents.


I’m confused about your second point: isn’t the reader the one who gains the most from reading the best papers, independently of who’s the author?


It's more like how some people would prefer to watch the next episode of the show they are already watching, even if some episode of another show is objectively better.

Or some would rather read a mediocre Tweet from somebody they follow than a good tweet from somebody they don't know.

That's why OP calls it "eating your vegetables", forcing people to read the "best" articles rather than the articles they "want" to read.


For the most part, sure, but at the margin I'm not sure the "best paper" can be completely well-defined within its four corners, independent of the author. E.g. the reader might rationally care (to some degree) about what a paper says about the future of the field. If an underfunded non-famous researcher describes marginal preliminary results by pursuing some approach, that's one thing, but if my very successful colleagues at Berkeley/Washington/Tsinghua or Google/MSR/Alibaba publish a paper that's like, "We have gotten pretty excited about this technique for a bunch of reasons, some quantified but some more conjectural, or qualitative, or just more difficult to measure so not done yet, but here are some interesting preliminary measurements that should whet your appetite, and because of who we are, our track record of success, and our many grad students/researchers digging for gold, we'll probably find more to report in due time," well, am I wrong to be a little more interested in the second paper on the grounds that I'd like to keep up with the field?

(You might also reasonably believe that the job of an editor or program committee is to assemble the most edifying program or issue when considered as an ensemble, rather than each paper individually.)

I also don't think it's completely crazy to consider to a tiny degree what the reader "wants," separate from their true interest in an omniscient sense. When you click around on the Internet, do you only read the "best papers"? Probably not. This is similar to why the New York Times (and Hacker News) don't use 90% of their word count to remind readers to eat healthy, stop smoking, maintain friendships and regular activity, sleep regular hours, and give away most of their money to buy malaria nets.


I think grandparent is suffering from the current academic environment where the main(!) purpose of a paper is getting career points.


Established experts have a certain "weight"; if they decide to endorse a fringe or cutting-edge scientific view, that would be in many people's interest to know.

But if that same paper in support of a fringe theory is reviewed under an anonymous name, it might not get published, even if it's perfectly well-argued, simply because "the topic is fringe and not very relevant to our readers" (circular reasoning).

And even if it's not fringe, how we view an author may influence whether their paper passes peer review for perfectly legitimate reasons. If a random scientist is the 100th one to weigh in on any old debate (let's say Penrose-Lucas), it may get dismissed as non-notable. But Penrose giving his updated thoughts would be notable.

There's costs to anonymous peer reviews. Credibility and notability can't be fully detached from the author.


I think the parent commenter is saying in (2) that sometimes readers want to follow a particular author, like some cartoonists prefer to read XKCD rather than Dilbert, even though they generally study all cartoons of their genre.

I’ve often seen celebrity authors write a “Letter to the Journal”, for those kind of readers, and I think that that might be the safer way for big names to be recognized.

It may give the author space to collaborate with interested parties without necessarily overwhelming the peer-reviewed content.

At the same time, some authors are truly just prolific.


When talking about peer review and open access, the reference point is life sciences, not computer science. CS is very different when it comes to the significance and process of those. In life sciences double blind is practically impossible because people know the people in their narrow field


Double-blinding is not common in PL theory and implementation.


I can't speak to every PL venue, but all four places that csrankings.org lists for PL are double-blind these days (PLDI, POPL, ICFP, and OOPSLA).


Sorry, my comment is based on out-of-date information. The first three of the conferences used to be single-blinded.


It blows my mind that peer reviews aren't done blind. Not only is there this gender bias, but also the bias of "I know that person, they're famous, I'm sure their work is good" or "I've rejected their paper in the past".

Seems like an easy fix would be to make all peer reviews blind.

In fact, for any study that gets federal funding, they should have to publish their hypothesis ahead of time into an escrow system, submit their paper to the same system, and then get blind peer reviews (blind in both directions) where the reviewer gets to see only the initial hypothesis and the paper with the names/institutions removed.

And of course all the papers should be available for free. Maybe the government could pay reviewers directly for their time, but I haven't thought that one through yet.


> It blows my mind that peer reviews aren't done blind.

I've co-authored a few papers and blind reviews have some surprising consequences, like discussions along the lines of "you can't say that you already wrote about this issue elsewhere and put a link in the paper, because that would unblind it". I was a bit uncomfortable with that, because I like to "cite my sources" (even if the source was myself in this case).

This also points to another issue: The more specialized the paper is the less likely the blinding will work. If you know a field by heart then you know who works on what, and probably can guess most authors based on that.

I'm not saying I'm against blind review, but while it sounds obvious, it has some issues in practice.


Whoever told you that about your cites is probably right -- you shouldn't cite yourself while calling out that you are citing yourself. You should cite yourself the same way that you cite any other work.

The second part is probably unavoidable. If you're working on something super specialized when all your reviewers are the six other people who work on it then sure, it'll be unblinded. Not much we can do there sadly.


I disagree; the style to write as a "neutral" observer (often writing in passive voice) is frowned upon nowadays for good reason.

Part of that is also that whether or not the authors themselves wrote a cited work can be important information for a reviewer. For example it is unfortunately quite common that authors publish results in a salami tactic to maximize the number of publications. There can be a significant difference in impact between a citation saying "this is important to work on" which is written by the authors themselves and one which is written by someone else.

Generally we should not write to hide information, and that includes whether the authors wrote other work that relates to the work being reviewed. We should not adjust our writing to double-blind review (and I would argue the advice the author was given is wrong). Double-blind review is imperfect anyway; I can often tell who the authors are just from the topic and, e.g., writing and figure styles, so if a reviewer really wants to know the authors, they can. We should still do double blind, though.


Writing about your own work in third person has nothing to do with using the passive voice. You can use the active voice just fine: "Papers X,Y,Z show that ..." works regardless of the identity of the authors of X,Y,Z. So the style argument is wrong.

The other argument you have is also questionable. The identity of the citer doesn't matter at all when using citations to argue that a topic of research is important. If you cite 20 papers and they are mostly from the very same author, it doesn't matter if you're the author: any reviewer will realize the claim is shaky -- or not, if all those papers happen to be actually outstanding.

Double blind is imperfect but miles better than single blind. And we shouldn't list made-up defects that don't stand to scrutiny to it.


> Writing about your own work in third person has nothing to do with using the passive voice. You can use the active voice just fine: "Papers X,Y,Z show that ..." works regardless of the identity of the authors of X,Y,Z. So the style argument is wrong.

I agree that writing in third person about your work is not the same as writing in passive voice. It is part of the same style trying to give an impression of objectivity despite the fact that you did the work.

Essentially you are trying to hide the information that you authored the papers and did the work. Just compare "In paper X,Y,Z the authors show the importance of proper citing" with "In paper X,Y,Z we show the importance of proper citing". Don't tell me that you would not evaluate the two sentences differently.

> The other argument you have is also questionable. The identity of the citer doesn't matter at all when using citations to argue that a topic of research is important. If you cite 20 papers and they are mostly from the very same author, it doesn't matter if you're the author: any reviewer will realize the claim is shaky -- or not, if all those papers happen to be actually outstanding.

Sure if there are 20 citations it's very obviously shaky, but often enough cases are not quite so clear cut. I still believe one should not deliberately hide information.

> Double blind is imperfect but miles better than single blind. And we shouldn't list made-up defects that don't stand to scrutiny to it.

Just so I don't get misunderstood: I'm not arguing against double blind; we should always do it, and I have been advocating for it in several settings. I'm just saying we should not suddenly change the way we write papers so as not to accidentally reveal our identity to the reviewers. That approach would make papers more difficult to read and write, with questionable benefit.


> the style to write as a "neutral" observer (often writing in passive voice) is frowned upon nowadays for good reason.

Could you be more specific about who's frowning upon it? Because I've never heard this before in my field (Comp. Ling., where double blind is the rule) and would like to look more into it.


Many style guides now say to write in active voice (Nature is one of them, but many others as well). I don't have the books in front of me, so can't find the citation, but many publications on scientific writing essentially recommend direct language.

The reasoning is that the work was "subjective", i.e. carried out by you. By using "detached third person language" you are trying to give a false impression of objectivity. This is similar to management/PR double speak like "we are forced to raise our prices", "we are unable to compensate you"... (I don't assign malice in the case of scientists though).


Like the sibling comment, I'd also like more information on this; I can't find anything immediately compelling with a light search. As this post suggests, obviously there's a use to hiding information: sometimes a bad reputation is undeserved or irrelevant to the work, and sometimes there are subtle biases in play.

The reputation of the author and their behaviour w.r.t. the citations used should be considered; I agree with you that it's important information. Maybe the reviewer of an individual paper shouldn't be the one considering it, though; maybe it should primarily be considered in a second review stage or in the context of meta-reviews? Idk, but the spirit of using passive voice in the context of research makes more sense to me.


> For example it is unfortunately quite common that authors publish results in a salami tactic to maximize the number of publications

Why is this unfortunate? I'd argue that splitting results into multiple publications is a) riskier for authors (higher chance of rejection) and b) more convenient for readers (each paper requires less mental load, being focused on a single aspect). So, even if there's a payoff for authors, it doesn't come for free.


The tactic is more advantageous to authors, because they get more articles (which is used as a metric to evaluate scientific success), and I would argue it's less risky. Say you split your results up into 3 papers: your chance of any one paper being rejected might be higher, but your chance of all of the papers being rejected is lower.
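A toy calculation (acceptance probabilities invented purely for illustration) shows why splitting lowers the chance of everything being rejected:

    # Suppose a single combined paper would be accepted with probability 0.5,
    # while each of three thinner "salami" papers is accepted with probability 0.3.
    p_combined = 0.5
    p_split, n_split = 0.3, 3

    p_all_split_rejected = (1 - p_split) ** n_split     # 0.343
    p_at_least_one_accepted = 1 - p_all_split_rejected  # 0.657

    print(f"P(combined paper accepted)           = {p_combined:.3f}")
    print(f"P(at least one split paper accepted) = {p_at_least_one_accepted:.3f}")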

In terms of being more convenient for readers, you discount the mental load required to find the papers. That's in fact one of the biggest problems in many scientific fields at the moment: there are so many papers being published that it is very hard to keep up with the field. Reading the same amount of results also requires a much higher load, because if authors split the results into 3 papers, the individual papers are not a third of the length; the overall page count is typically almost 3 times that of a paper that would have put everything into a single one.


In some cases the issue is that you can't really "blind" it, because it is a physical continuation of the earlier work - not just building upon the idea, which anyone could do, but using the exact same experimental device (a unique one, like the hadron collider), continuing the analysis of the exact same set of patients or animals, or improving the previously made software tool.


This happens in larger fields too. For example in AI, which is a large research field, you can be blind-reviewing a paper, but it uses a proprietary google dataset, so you know where it comes from. Blind review is not the answer, I fear.


For me the most important advantage of blind review is not that 100% of papers are effectively blind, but that if you're a no-name author you have the right to be blind and not have your paper looked down on just for that reason. That alone justifies double-blind review.

Regarding papers that cite proprietary datasets that no one can access, in fields like AI where it is perfectly possible to release datasets (if there's a specific reason it's a different issue), as far as I'm concerned they should be outright rejected due to lack of reproducibility and inability of the reviewers to check the correctness of the claims. Although I know this is a minority viewpoint and it won't happen.


Another factor in my field as well is linking to software -- papers that introduce algorithms or use computational analyses are rightfully expected to include the code (usually github) and reviewers are expected to at least check that the code exists and is reasonably documented. I can remove my name from the paper, but the github page will say which lab it's coming from.


Academia, just like every other aspect of human life, is full of politics, and this includes peer review as well. Papers regularly get panned for a variety of reasons that have nothing to do with the actual content of the paper, and maybe even worse, sometimes because the reviewer happens to be working on something similar and wants to secure a bigger grant for themselves.


That's false. Trust the science.


"Trust the science" is an oxymoron. Please do substantiate why grandparent's claim is false. There is a massive literature on the shortcomings and, yes, even potential for malice, of current peer-review practices.

https://sigbed.org/2022/08/22/the-toxic-culture-of-rejection...

https://www.nature.com/nature-index/news-blog/research-misco...


The sarcasm went way over your head lol

The user you're replying to is reminding us we just endured ~2 years of Faucism.


Peer reviews are usually blind (authors don’t know who their reviewers are). What you are referring to is double blind (reviewers don’t know who the authors are). The challenge with double blind is anonymizing content that often relies on cited prior work.


> The challenge with double blind is anonymizing content that often relies on cited prior work.

You're the second person to bring this up, but I'm not sure why it's a problem. Just don't say "based on my own previous work" and instead say "based on previous work". I.e. cite yourself the same way that you'd cite anyone else's work.


But as a (final) reader I would weigh supporting evidence from the same author/author group/lab far more weakly than supporting evidence from unrelated sources! That being said: one could get around that by swapping the citation style for the review version only. There are typically changes anyway, and given the amount of fights I have had with editors over pure typesetting or "improvements of English", that is one change that would actually be helpful to make.


As the final reader you're expected to do some due diligence and at the bare minimum read the reference list. Once you do, you can easily see which papers are self citations and which ones are not. If you're not doing this due diligence you probably don't care too much about this paper anyway and there is no harm done if you think a reference is not a self citation when it actually is.


That doesn't help if references are given as First Author et al. or even as [number], and while I agree in principle, it simply doesn't happen in practice.

But then it just weakens your statement, making it worse. And I'm not even sure it makes a difference for what the article talks about, because the reviewer needs to actively go looking. At least to my understanding, these effects are not due to people going out of their way to misjudge others, but rather are an effect of subconscious prejudices. And for the latter, breaking the obvious connection is probably enough.


If the previous work is a single paper, you are right. However, it usually consists of multiple papers, and it then becomes more complicated to hide that all these papers share one (or more) authors.


Speaking from personal experience, it can be really hard to write a paper that anonymizes yourself... especially if you are known for a particular approach to a specialized problem. You actually have to begin excluding cites, turning them into placeholders (withheld due to anonymous review requirements), rather than just speaking about the work as if someone else did it.


I like where you're coming from, but a LOT of PhDs are so narrow in scope that it's almost impossible to be anonymous, because the set of peer reviewers is very small. Only the large, wide fields like medicine and some parts of the humanities are big enough to do this.


If you're working on something super specialized when all your reviewers are the six other people who work on it then sure, it'll be unblinded. Not much we can do there sadly. But most people aren't working on something that specialized, and most would probably benefit from having reviewers in adjacent fields and not just the other experts in their field. So at least those adjacent ones would maybe be anonymous.


Interesting, I hadn't thought of that. I wonder if that's okay though — if you're that familiar with someone and their work I'd expect your bias towards that specific person overrides any broad gender bias.


As a rule I'm generally not a huge fan of taxation, but I'd be happy to see my tax dollars go towards a system like you propose here.


I recently commented on a similar discussion here about a month ago[1] to the effect that double blind review is a cheap and idealistic shortcut that runs into its own issues (research areas are simply too specific to pretend that we're operating "blind") and doesn't get you what you really need, which is to address the deeper issues in the publication system and research system.

These issues? Off the top of my head:

1. Quantity over quality

2. Normalised sensationalising of one's research

3. Neglecting good or even necessary collective scientific practices such as replication studies and data and code sharing and openness.

    a. Here, valuing the actual work of peer review comes in as a fundamental aspect of the scientific process that should be respected, rewarded and published rather than being some silent aristocratic duty that eventually gets navigated around in the sensationalism rat race many researchers pursue.
4. Selecting for "impact" and "novelty" rather than the quality of a researcher's/scientist's administrative and leadership skills, scientific method and integrity, and teaching skills (including, importantly, the teaching of graduate students)

     a. Though novelty and impact are important in research, IMO, they're outcomes that are hard/impossible to select for largely because research breakthroughs are often serendipitous and the kind of true genius that will "hit targets no one else can see" is rare and frankly everyone knows it when they see it (provided they're good researchers and not just good salespeople).  A very senior academic once told me in a private context that you can't predict where the breakthroughs are going to come from as an inexperienced researcher playing around is just as likely to make a breakthrough as a senior researcher with many credentials.
5. A personal belief of mine ... resisting the necessary professionalisation of research, by which I mean that the conception of a researcher is still based very much on the same model we had centuries ago, i.e. a lone genius researcher left to their own devices to find the truth. Scientific research, at least, is now too complex, too hard and involved for this to be true. Collaboration and more and more specific roles are necessary to make the industry work well. The resistance of the "industry" to recognising the importance of software developers in research is a perfect example. Same goes for statisticians and consultants from adjacent fields. My personal favourite for such an "associative" role would be quasi-theoreticians (outside of physics) who can aid in aggregating, reviewing and critiquing the literature in real time without necessarily having a horse in the race.

~~~

[1] https://news.ycombinator.com/item?id=32832836


Very common for anyone who has ever spent time in academia. When I was still in grad school I noticed a trend in some major mathematical and CS journals where a lot of the time it would be the same N people writing articles. This was driven home when my advisor told me "without such and such's name this paper has no chance of making it past review".

Nonsense. However, for all the people who say "trust the science", this is the science you're working with. It's politics all the way down, just like everything else. For example, if Knuth decided to post some utter garbage, chances are it would be accepted based on his name alone.


"Trust the science" is a silly slogan and should be dismissed.

But you should also remember that "science" doesn't guarantee that _every_ paper is good. No one can. "90% of everything is crap" is a law of the universe as solid as the second law of thermodynamics.

Science is a _process_ that is able to weed out the crap more efficiently than anything else we have tried. Any big name can put out crap, true, but as soon as they do, there is an eager army of people who jump at the opportunity to "prove X wrong". If the original paper was crap, that would be a pretty easy thing to do.

Note that I said "more efficiently", not "efficiently" in general. There will always be an army of people "trusting the science" (the authority, really) and so it might take time for the crap to be weeded out. But it will eventually, because science is like evolution: if nothing can be built on top of the crap, it will be progressively abandoned, because it can't reproduce (pun intended).


> But you should also remember that "science" doesn't guarantee that _every_ paper is good.

That's correct and I agree. Actual science doesn't require trust. As you aptly pointed out, it's excellent at weeding out crap most of the time. The issue is that the academic politics involved make it harder to determine whether the process has been weakened. The problem, of course, comes when a major result was not reviewed appropriately and it becomes the standard until someone is brave enough to write another paper failing to reproduce it. That's the scary thing - even if someone is brave enough to write a paper saying it's not reproducible, they still have to get through the review committee to have their voice heard. Sure, they could post to arXiv or a blog if all else fails...but none of that will ever get picked up where it is needed most.

> There will always be an army of people "trusting the science" (the authority, really)

This is really the crux of it. It's appeal to authority. Even higher order thinking people (that is, those outside pop-sci nonsense) fall victim to this because it's human nature. The authority is also how mediocre work gets past reviewers by attaching a name to it.


You trust science because it has the highest probability of being accurate. What's the alternative? Most people aren't capable of analysing scientific research to see if it's correct.

What makes something political? If new research shows that product X is dangerous, and Republicans, to whom that company donates, come out and disagree with the research, they have just made it political. Now someone like you can just say "this research is political".

You've created a way to dismiss scientific research simply by disagreeing with it


Jesus Christ, the point of science is to REMOVE trust - not be the least worst option. Science that cannot exist without trust is faith.



You are confusing trust and faith. Science is perfect; that's why it's the least worst option.


> Science is perfect

Science is a process and it is never complete. You are confusing a process with the results of the process, which are by definition imperfect. What makes science good is that its results are fundamentally imperfect and tainted, and that it respects that.

When you go around saying 'science is perfect' you disrespect its core principles.


You are completely misinterpreting the meaning of GP's comment. First, they are not talking about government politics, but rather interpersonal politics. Second, they are defending science, and saying that it shouldn't be dismissed for political reasons, rather than the opposite.


There's a lot of middle ground between blindly trusting all 'science' and blatantly disregarding everything that rubs you the wrong way.

It's possible to both have a generally high amount of faith in 'the science' while still remaining critical of its flaws, or rather, the flaws of the system in which it's conducted.


I don't think this is the case. The point of science, as one commenter pointed out, is a lack of trust (or faith). Hence the need for pages and pages of sources, concerted efforts on reproducibility, etc. In certain fields reproducibility is extremely important, and a single scientific article should be taken with extreme skepticism until it's reproduced several times. I think this is the thing that's often skipped. A single article being published in a journal is not a sign of changing times. If there are no attempts to reproduce it, we have an idea of something, but no idea if it really works. Oftentimes this lack of reproducibility is associated with so-called corporate "science" (Philip Morris, from another example I posted).

I hesitate to use the word faith at all, even when describing the scientific process itself. Having faith in the process leads to (for lack of a better phrase) a failure to trust, but verify. IMO, it is one thing to approach scientific literature with a trusting demeanor and still walk through the proper processes to verify the results, and another thing entirely to literally take the author's word. Oftentimes, especially when big names are attached, we aren't trusting-and-verifying; we are simply taking their word. That's a problem that weakens the credibility of science.

I don't want to come off as someone who disregards everything he doesn't like. However, I do approach science with great pessimism and have since graduate school. Not because I hate it, or something is wrong with the scientific method, but because approaching it pessimistically allowed me to write better work by forcing me to actually perform the method instead of p-hacking my way to a result.


I want to disagree with a small part of this. Faith is belief in something that you can't prove. That is, even if you became an expert in a field, read all the papers, etc., you couldn't prove the existence of God; hence, faith.


Trusting the outcomes produced by science is inherently faith for most people who aren't an expert in that particular field of research, and since no one is an expert in every field everyone has to take certain scientific outcomes on faith.


How? If I'm critical of something what do I do?

If there's a vaccine and the majority of people in that field say it's safe but you want me to be critical what do I do?


> For example, if Knuth decided to post some utter garbage, chances are it would be accepted based on his name alone.

You mean like Elon Musk? When he came up with that giant box to extract kids from a cave with passages so narrow that rescuers couldn't wear their air tanks on them and had to push them ahead of themselves, Reddit and HN and Twitter were full of people defending him. There were even people defending him when he accused the rescue leader of being a pedo, simply because he was British and living in Thailand.

Or how about the CEO of Brave, who apparently is an expert in public health, viruses, vaccines, epidemiology, etc? Endlessly defended here on HN any time his name comes up.


How about Bill Gates, who apparently is an expert in public health, viruses, vaccines, epidemiology, etc?

Smart people like Bill, Elon and the CEO of Brave are polymaths and can cross domains quite easily.


They can't.

Musk is just a guy getting rich via insider trading. Gates is most likely using his foundation to push his interests.


Apparently you haven't seen the news coverage of Elon's texts to CEOs and investors


First of all, Knuth would never publish something "garbage" because his reputation would be on the line for everyone to read (forever).


I mean, it was an example I chose because of the name recognition on HN. I would doubt Knuth would publish garbage. Then again, how would you know? If you've never been in academia it's understandable to think that way. But if you have, you realize the immense pressure, self-doubt, and scrutiny you'd face even making a bold, well-researched attempt to prove someone like Knuth wrong, even if it was obvious he was. I recall several instances in mathematics where this happened and it bordered on starting a war between mathematicians.


> I would doubt Knuth would publish garbage

Rather, Knuth could publish some mediocre paper that would be instantly accepted by all the top journals, widely read, discussed and cited instead of more salient works on similar topics.

That's the problem, really: excellence is rare while mediocrity is plentiful, and academia is a numbers game. Mediocre researchers with prodigious output will get the cites, tenure, TAs, lab assistants and budgets to do research, and perhaps stumble onto some other mediocre result, reinforcing the cycle.


Or he would never publish something "garbage" because people are convinced by his reputation that it will not be garbage.

Emperor's new clothes, while the rest need to be whiter than white and all that. These things happen because people are blinded by titles in a complex world with complex people.


Your response perfectly illustrates the point OP was making


Even great mathematicians sometimes try to put out garbage in their final years, unaware of their own decline. I've seen it personally, and the reviewer made contact with a friend of the author, who volunteered to act as a pre-filter to avoid future embarrassment.


The point is that a famous name gives an edge in peer review to a (potentially mediocre) paper written primarily by somebody else with the famous name appended.


A while back I subscribed to The Economist on a whim. I'm not really into economics, but I felt they provided a nice alternative view of what was going on elsewhere in the world compared to what the local news here in Norway reported.

One thing I quickly noticed was that the articles were never signed by the authors directly. I found this striking, but also refreshing. It caused me to pay more attention to what was said. And, as I later found out, that seems to be the main intention[1]:

The main reason for anonymity, however, is a belief that what is written is more important than who writes it. In the words of Geoffrey Crowther, our editor from 1938 to 1956, anonymity keeps the editor "not the master but the servant of something far greater than himself…it gives to the paper an astonishing momentum of thought and principle."

[1]: https://www.economist.com/the-economist-explains/2013/09/04/...


Norway really has a huge issue of sensationalism in local papers, and it does not help that even the government funded NRK is deep into sensationalism as well.

Opening NRK right now gives "Now the government must see the madness in this", "Electricity prices can make going to the cinema more expensive", and "He is so tired" as the top articles for today.

I follow so much international/American news thanks to publications like Reuters, AP, and Qz. But no matter how much time or money I spend, I cannot follow my own country's news. I definitely am not alone in this, and it terrifies me how unworried most of Norway seems to be about this fact as well.

It's a ticking time bomb.


On the other hand, if an author is consistently bad, signing would allow you to skip the content.


This reminds me of a bias I saw on Quora, where some popular contributor gets tons of upvotes by fans regardless of whether their answer is any good or not (often you have to dig deep into the comments to find out that it's not a good answer). (I should qualify that I don't know if it's still that way; I started using it when it was first launched and then quit 6-7 years ago once it started turning into a fanboi and you-upvote-me-I-upvote-you club.)


It's probably inherent to any site that allows you to follow individuals. For Twitter that's not much of a problem because most people know better than to rely on Twitter for life advice or product reviews. Letterboxd, on the other hand, manifests exactly the problem you're talking about, and the whole point of the site is to assign purported ratings to film and TV shows.

Reddit and IMDb are mostly immune. Which is not to say that they're immune to all problems (astroturfing, brigading, personal biases, etc.).


Reddit seems to try and change that, though? I don’t really follow what development happens on new reddit, but it seems more user-oriented?


They have been _trying_ to make it more user oriented for a while but even in the new design, users aren't really that important or recognized any more than they always have.

The main change is you can post posts to your profile rather than a particular subreddit. But that's not a big feature. Back in the day people used to just create a subreddit of their username.


There was a study some time back where they set folks up in groups, then had them listen and vote on bands / music.

Each group trended towards catapulting a single band / musician, but it always just depended on which one in the group got the momentum first.

I wish I could find the study again but it's hard to google for.

Pretty eye opening.
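
(A quick toy sketch of that rich-get-richer dynamic in Python - purely my own illustration, not the actual protocol of whatever study it was: near-identical "bands", listeners who partly copy earlier listeners, and the winner varies from run to run.)

    import random
    from collections import Counter

    def run_world(quality, n_listeners=5000, social_weight=0.8, seed=None):
        # One "world": each listener picks a band based partly on intrinsic
        # quality and partly on how popular the band already is.
        rng = random.Random(seed)
        downloads = [1] * len(quality)
        for _ in range(n_listeners):
            total = sum(downloads)
            scores = [(1 - social_weight) * q + social_weight * d / total
                      for q, d in zip(quality, downloads)]
            pick = rng.choices(range(len(quality)), weights=scores, k=1)[0]
            downloads[pick] += 1
        return downloads.index(max(downloads))  # the band that "won" this world

    # Ten bands of nearly identical quality, twenty independent worlds.
    quality = [0.50 + 0.01 * i for i in range(10)]
    print(Counter(run_world(quality, seed=s) for s in range(20)))
    # Different worlds tend to crown different winners, despite the same quality,
    # because the social signal swamps the small quality differences early on.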


Salganik MJ, Dodds PS, Watts DJ, "Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market", Science, 2006.

https://www.science.org/doi/10.1126/science.1121066


Thank you for finding this, I've been asked for the source before and could never dig it up!


It's no different from Reddit or H.N. in the sense that most people who vote never bothered to read the entire post, and the same thing is true of peer review. This isn't only the case in soft science, as the Bogdanov affair showed.

They skim, see if it looks okayish and give it their approval.


HN doesn't allow you to follow users--that makes a big difference in putting the focus more on the content than the users.


Peer-reviewers also don't follow particular names around; they see the name above the submission and it influences them, as on H.N. no doubt.

But it's far worse: even without a recognized name, most votes are cast without proper reading, and I'm fairly certain also by the least intelligent subsection, given how often submissions are upvoted on H.N. and Reddit that are pure clickbait and demolished in the comments by people who actually read them. People who vote by and large only read the title or the first sentence and make up their mind from there.


> Peer-reviewers also don't follow particular names around; they see the name above the submission and it influences them, as on H.N. no doubt.

Agreed. I was referring to Quora.


Well, that's because Quora has distribution mechanics like the home feed and email digests that can amplify content by popular contributors. It's not like Reddit or HN or StackExchange where answers/comments compete directly against each other on the question page. I don't think the situation is analogous.


It was worse on Digg, before it "pivoted" into whatever it is now.

https://www.wired.com/2012/07/mklopez-digg-power-user-interv...


You'll see this on SO where "rockstars" get their answer checked even though a better answer precedes theirs.


We have to come to terms with this.

Academia was bought and paid for long ago and the money was used to build an incredibly broken and overly political bureaucratic engine of scholarly and scientific work that doesn't get anywhere near as much peer-review scrutiny as it should and commands far more respect in politics and legal proceedings than we should allow.

Universities and experts are the best we can do sometimes so we have to rely on it, but it doesn't mean it's truth or absolute and people like to use it as if it is to sell ideas like global warming instead of educating people on climate change.


The 2 academics I know aren’t bought and paid for. Either could make far more money in the private sector than they make now. They work in their fields out of love of research.

This was my impression of most of my professors years ago too, especially as we saw so many quit for high paying engineering jobs at companies year to year — the ones who stayed aren’t in it for the money

I suspect it’s the same at the top: most senior administrators I bet would make more in senior Fortune 500 jobs


Agree. It's stagnant, fearful and bureaucratic. "Trust the science" never changed anyone's mind.

However, I think your climate change example is a bit strange. If anything, it's big oil that has been trying to sell the idea that we don't have to stop using fossil fuels: hiding evidence and spreading confusion by paying lobbyists and scientists. Global warming was proved beyond reasonable doubt decades ago.


Climate change is one of the most politicized fields at the moment.

Questioning it, even slightly, means being banned from grants and academia.

It's also interesting that most climate models are NOT open source. Most recorded data from satellites is also NOT open source. So everyone works with a pre-cleaned data set.

It's also worth pointing out that data sets like HadCRUT have never been audited by any respected scientist or group of scientists. This data was collected at stations not meant for long-term measurements, and it has a lot of errors. Just download it yourself and see. (Climate scientists are not really data experts, since they go from clean data sets in school to "clean" data sets in real life.)
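
(If you do want to poke at it, a minimal pandas sketch - the filename here is only illustrative, use whichever summary CSV the Met Office HadOBS site currently serves:)

    # Minimal first look at a HadCRUT summary series; the filename below is
    # illustrative - download the current CSV from the Met Office HadOBS pages.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("HadCRUT.analysis.summary_series.global.annual.csv")
    print(df.columns.tolist())          # exact column names vary by release
    print(df.head())
    df.plot(x=df.columns[0], y=df.columns[1])   # roughly: year vs. global anomaly
    plt.show()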

Calculating global temperature is also one of those things that is done in quite an obscure way, extrapolating too much IMO.


> Most recorded data from satellites is also NOT open source.

This is false. NASA and ESA science data is free.

There is often an embargo period for very novel sensors, and always a delay of hours-to-weeks to allow processing to catch up, but it's free.

If it's the source code of the analysis pipeline you mean -- even though you said data - that's a harder lift, because the processing is complex. But even that is changing (https://science.nasa.gov/open-science-overview).

Even in the absence of the open science initiative above, today you can always get the raw data ("Level 1 radiances") or sometimes even uncalibrated straight-off-the-sensor data ("Level 0"), if you want to process it. (https://www.earthdata.nasa.gov/engage/open-data-services-and... -- "All EOS instruments must have Level 1 Standard Data Products (SDPs)")
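
(For example, pulling granules is only a few lines these days - a sketch with the earthaccess Python library; the product short name is just one example, not a recommendation:)

    # Sketch using the earthaccess library (needs a free NASA Earthdata login).
    # MOD021KM (MODIS/Terra Level 1B calibrated radiances) is only an example product.
    import earthaccess

    earthaccess.login()  # reads ~/.netrc or prompts for Earthdata credentials
    granules = earthaccess.search_data(
        short_name="MOD021KM",
        temporal=("2022-01-01", "2022-01-02"),
        count=3,
    )
    files = earthaccess.download(granules, "./data")
    print(files)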

And if you want to look into how the processing works, there are detailed documents ("ATBDs") that explain how the pipeline works for each data product. Also free.

> Climate scientists are not really data experts, ....

Dreadfully wrong. Do you work in this area at all?


You seem to be much more knowledgeable about this than myself. But I was under the impression that while large climate models are used to infer the extent and impact of a changing climate, they are not actually an important component in determining climate change caused by carbon emissions to be a real phenomenon.

No other hypothesis in geophysics is required to produce realistic and fault-free simulations of the planet's climate over several centuries before being generally accepted. Why should we apply such extreme prejudice to the hypothesis of climate change caused by the greenhouse effect? The basic mechanism is quite simple and well understood, and there is a variety of measurements supporting the claim that the temperature of the planet is increasing (meteorological temperature measurements, glaciers disappearing, etc.). Full understanding of all the geophysical processes and feedback loops involved is not necessary, and very likely impossible.

I also have a hard time understanding the motives for such an enormous scam. Who would stand to gain from this except a relatively small number of researchers and the renewable energy industry? On the other hand, it's well documented that the fossil fuel industry has tried to sabotage climate science for the purpose of limiting political action on the issue.


Do you have any evidence of this?


HN isn't a journal and so academic sources of evidence really are overkill. However, I'll throw you a bone. Go to your favorite major journal and start looking at the "conflicts of interest" and "grants" section.

Once you take money from someone (except for NIST, in my experience) you're basically beholden to try your hardest to get the results they're looking for. Some scientists are moral enough to still return bad results. A lot of scientists aren't. There's a lot of garbage out there, and the worse the journal, the more garbage it gets. Famously, Philip Morris studies "passed" the scrutiny of several major journals. It's amazing what greasing a few palms will get you.


"HN isn't a journal and so academic sources of evidence really are overkill"

I disagree; you should back up your accusations or statements (unless widely accepted) with some reputable source. This person claimed that science was bought and paid for. Did they mean all of it? Or most? That's an insane accusation that requires evidence.

"Go to your favorite major journal and start looking at the "conflicts of interest" and "grants" section. Once you take money from someone (except for NIST, in my experience) you're basically beholden to try your hardest to get the results they're looking for"

This doesn't mean people falsify data. You're simply providing motive.

Everyone wants money; you are using greed to claim mass fraud in science.


Is it really that insane of an accusation? This is how almost everything in the world works.

This isn’t only about greed either. People want their research published for reasons other than greed. For example, they want to move up in their career or achieve recognition.

After looking at a lot of medical studies related to COVID during the last couple of years, I have seen first hand how biased and inaccurate many of them are. Some of these studies are even mentioned in major news outlets despite their obvious flaws once you actually begin to scrutinize them. Think big pharma providing research grants for studies that conclude their products are effective.

The OP never said that people falsify data as a result of receiving grants from interested parties. They often don’t have to. They can simply design the experiment in a way that doesn’t account for specific variables or behaviors then use the resulting data to reach a specific conclusion.

I remember seeing an article related to AI research on HN a little while ago that somewhat explained this problem. The grant money all goes to people researching deep neural networks which creates a reinforcing feedback loop. Since all the money goes to one branch of research, it creates very few opportunities to research competing ideas. I believe it was this one:

https://nautil.us/deep-learning-is-hitting-a-wall-14467/


I just looked at the first 10 articles in Nature.

Most declare no conflicts of interest. One author of one paper seems to have started a company based on similar technology: potentially a bias, but also potentially putting one's money where their mouth is. One other author lists some consulting work for a few companies.

As for grants, I doubt people are bending their results to appease the NSF or NIH. There's certainly groupthink in what gets funded. We're still throwing money down the Abeta-for-Alzheimer's hole, for example. That eventually shapes what topics get published, but maybe not the specific results. The recent Abeta articles are pretty negative, for example.


Fairness and efficiency are often at odds. As a reviewer, I was sometimes asked to review papers from unknown authors at unknown institutions. My level of effort was likely proportional to how well I knew the quality of the institution from which the paper came. There is a great deal of time involved in understanding a paper well enough to do a full review.


It's true, unfortunately. When I was a TA and graded homework I'd learn which students did well over time and generally would put less effort into checking their work because generally it was more likely to be correct. They were also more likely to self-correct, and to present work well, etc. Reviewing papers isn't really too different.


I agree. I'm in an applied computational field that attracts a lot of computational and mathematical researchers who don't have knowledge of the applied system. I've learned from experience that researchers new to the field often make a whole lot of poor assumptions and modeling decisions, even if the mathematical part is fine. For better or worse, I'm going to scrutinize the details of a paper from a lab I haven't heard of more heavily than one that I believe has a strong track record. That doesn't mean I'll give the latter a pass; I'll still read it with a critical eye. But certain types of trust build up over time.


I'm surprised that there are still non double blind publications that count for tenure cases.

This does explain some of the recent and embarrassing [lack of] retractions of Nature papers though.


Besides the sibling comment, in a lot of fields, people just know which group a paper is coming from even without doing any sort of searching or looking at preprints.


Yeah it's often pretty obvious that a paper extends / is incremental on a previous body of work (maybe the same code, maybe same dataset) such that it could only have come from one lab.


Yeah, but good CS conferences have diverse reviewer pools, including ones from industry or different subdisciplines. Such reviewers often haven't heard of the lab, or paid attention to which grants are currently funded, the movement of students into postdoc positions at other labs, etc, etc.


It is nearly impossible to enforce double blind peer review because of the existence of online pre-prints (hosted on arXiv, SSRN, or just the authors' websites). A referee can copy and paste from the text into DuckDuckGo and find the authors.


I think you are overestimating how much reviewers care which grad student wrote which paper, or who their advisor is.

Reviewers are generally overworked (for other reasons), and the main goal is to be constructive and somehow not sound like an idiot (in front of the other reviewers, who might be subject experts and definitely know who you are!) in any of the pile of reviews you need to write after reading a pile of papers.

Also, reviewers declare conflicts ahead of paper assignments, which makes accidents less frequent.


In this case an easy 50% solution seems better than nothing. Some people would just comply with the guideline, some would not think of your workaround and it would make peer reviews a little more equitable even if many broke the rule.


I've seen many arguments against double-blind which were very unconvincing, and they were all from the people that already had a big name, the ones that decide how journals operate.

In any case, this is the tip of the iceberg. The entire structure surrounding academic publication is ill-designed, and few people in that world are even remotely interested in safeguarding veracity.

The overwhelming majority of peer-reviewed scientific results are infotainment for other scientists, with no party having any material stake in their veracity, which is why almost no one bothers to replicate them: accuracy is irrelevant so long as the work is interesting to read.

The moment a company has bet a sizable stake on its accuracy, they suddenly check, and double-check, and have another party independently verify it, because they do not want to lose money, obviously. But most science is nothing like that.


Just to add to the siblings: research is inherently social. In a small field you can even guess the reviewers. Sometimes there is the "oh yeah, that professor talks like that, it's definitely her writing me this review" moment, and reviews are supposed to be anonymous in most venues. Unfortunately the social dynamics and trust in some cases extend too far -- a lot of sloppy research gets published due to trust or favoritism. Double-blind makes this a bit better, but beyond that, at a certain point you would be changing research from a social enterprise into something much more result-oriented.


Even with double blind reviews, it’s often very clear who wrote the paper.

Authors regularly cite work by themselves or their team, so a statement like “In a previous study[1] we established a relationship between X and Y” renders double blind pointless.


Peer reviews are blind, but in a lot of fields, once you've read enough papers it's obvious from subject and style who wrote the paper


This is true even when grading school exams: teachers recognize the names of "bright" students and lean towards more positive grades.

That's why anonymous grading - and, in scientific publishing, double-blind peer review - is so important. It's part of the scientific process just as much as replication, the attempt to reproduce the results of a study post-publication by other groups (I wish papers' PDFs had a QR code to a web page that said "double-blind review by x people, replicated by y groups" - the latter changes over time so it's better tracked externally).


It seems to me that some subjects are so specialized that the group that makes up one's 'peers' is fairly small. How do they have real anonymized reviews when it becomes easy to recognize the writings of the author? The more papers that someone writes, the easier it would be for their peers to recognize the writing style and other quirks that would give the author away.


It might happen unconsciously. But in my experience it's not that easy for a reviewer to consciously use writing cues for anything useful.

Sometimes it seems very obvious that a given paper is by specific authors (either because of style, or because of how familiar it is with their previous work) but I've had many experiences where I later learned that my supposition was completely wrong. Similarly, when you encounter a paper that doesn't have any obvious cues (which is the overwhelming majority of them) then it's pretty much impossible to tell whether it's an author you admire or someone you've never heard of -- and this is a good thing.

Some conferences don't use blind submissions, and yes: I have felt an awful lot of influence there. "Surely [famous Turing-award winning authors] don't need me to double-check their proof."


I have a friend who is somewhat famous in his little research niche. Even when submitting papers for blind review he will do things like deliberately spell some words British and others American to make it harder for peers to figure out who the paper is from. The peer group is indeed small and rivalries happen quickly. Sometimes you know what other labs are doing very similar research and it's a race to publish first.


> "he will do things like deliberately spell some words British and others American to make it harder for peers to figure out who the paper is from."

This is a good initiative, but a catch is that if he's the only person deliberately spelling some words British and others American, the spelling choice becomes a unique identifier.

Though, it could work as long as no one within the group knows who is using the varied spelling.


This happens often with coauthored papers as well, so not a total giveaway, necessarily.


This happens all the time in the field I'm in (phages, phage therapy), and the same names appear over and over. Peer reviews are anonymous but you can easily tell who's reviewed your paper or grant.

I'm not sure how to fix this problem though. In our phage newsletter we try to avoid using names and universities to focus on the paper/topic/finding itself, but I keep finding myself looking at author names and affiliations before diving into any paper.

I know it's "wrong" and I recognize myself doing it, but I still do it all the time.


From what I hear, in many fields the "peers" are better thought of as "rivals".


The nice part is that this can be gamed, and should be gamed.

People should copy the writing style of famous authors to expose the system and keep people honest, because then everyone knows there are copycats.


After reading the article, I feel like the HN title should rather be « Authors’ status » instead of « Authors’ names ». I expected the study to be about gender inequalities, but it's actually about author prestige (is the author famous or not). I often just read the HN title without reading the underlying article, and I wonder how many times I've gotten my interpretation of the core information wrong.


How is double blind not the norm? There is just too much room for bias.


If you want a review to be truly double blind, you have to start censoring the paper -- things like "in previous work [1-10] we showed..." makes it obvious who the authors are even if you remove the names from the top of the paper.


There are some computer science conferences that do this. If you cite your prior work, you are supposed to blank out the names in the citation and then attach an anonymized version of the prior work so that the reviewers can reference it if necessary. If the paper is accepted, you de-anonymize the citation in the camera-ready version.


This does not necessarily help. In many fields the reviewers will be familiar with the prior work and therefore recognize it, anonymized or not.


You do not have to censor: you simply refer to yourselves in the third person to make your shared identity plausibly deniable. A work that is worthy of publication should largely be able to stand on its own anyway. This may have the effect of reducing minimum-publishable-unit (MPU) CV-spam papers as a bonus.


I'm talking about editors censoring papers. Authors could avoid this problem by writing differently, but as a reader I'd prefer they don't; knowing that a body of work is closely related is very useful, especially when it comes to deciding which references to follow up.


The double blind process only requires that this be done during the review drafts. The final camera-ready version for publication can use the first person without compromising the integrity of the review process. I don't think the editors would remove self-referential text from a final draft after the paper had been accepted.


Usually if you count who has the most citations you get the author of the paper (or the advisor, or the leader of the team). Sometimes it's a lifehack to get more citations, but most of the time it's just natural, because someone on the team was working on tool X and another member on tool Y, and now you are adjusting all the details to make X and Y work together and get a new result.


"This work is based on the ideas of [1-10] which showed..."

(Also, it is a major red flag to cite 10 papers from one group and no related work from anyone else. Either the topic is completely irrelevant, or you didn't do a cursory literature search - at least skim the references from papers you cite!)


Past performance is a reasonable indicator in a lot of situations. I've had this discussion in the context of conference submittals. While there are lots of reasons to be inclusive of new presenters, the reality is that I also want to be aware as a conference committee member of applicants who have been very popular in the past.


I've found the opposite. I don't want to hear version 10 of a paper from 2005.

Besides, invited speakers are a thing for a reason.


In my field that would be impossible... people work in collaborations and you know pretty much all of the experiments.


That's usually a consequence of too much nepotism, or some super expensive piece of equipment.

Double blind helps the former, though not the latter.

In a part of my field that is capital intensive, some well funded newcomers have recently invested a lot, and "broke in", while some incumbent testbeds went away; the "expensive equipment" problem is usually temporary if the field is expanding.


It's just that experiments in astroparticle physics are expensive and have lots of people on them. Like, how is the IceCube Collaboration supposed to write an anonymous paper? Even the most cursory description of the detector would give it away...


Well, maybe there will be many IceCubes later? (Probably not, but I'm routinely shocked when I learn about new giant testbeds in my field...)

Alternatively, when IceCube 2 comes out, the old IceCube crowd might be focusing on other stuff, and not paying attention to the IceCube 2 politics. That makes them great peer reviewers (no horse in the race, but knowledgeable).


The proposed IceCube Gen2 is mostly a superset and only slightly disjoint with IceCube (for example, I am not an IceCube collaborator but I am on Gen2...). But the point is that for experiments that are larger than a few PI's, anonymity of authors is basically impossible (since all papers have everyone on them).


It also could just be a small specialty.


I'm not sure what we're supposed to find astonishing about academics (like most of humanity) being vulnerable to reputation biases in their reasoning.

As in, that would be the null hypothesis. It would be astonishing if most academics overcame it.

EDIT: apparently, the size of the effect (factor of six more likely to be accepted if it came from a Nobel Prizewinner) is larger than anticipated.


Are people in general biased based on reputation?


Yes. In fact, people are so likely to make assumptions about future performance based on past performance that we invented a word for that social construct: "reputation."


Yes, because it's easier to do that than assess people on their merits. This is especially true when you are reviewing a technical paper (often for free) when you have competing demands on your time and mental energy (research, teaching). It takes hours and hours to properly review a paper, and that is still trusting that the author is acting in good faith - to reproduce their work might in some cases require a large amount of time and money. These are the same reasons that letters of recommendation (written by people you've heard of) are so useful in academia. Lots and lots of bias problems, but the bias isn't the point, as I think is sometimes implied.


In general? There is not a single person who ever lived without that bias.


I saw a tweet by a researcher who sent her work for peer review and got feedback that she should read more of xxxxx's work and revise her own.

Great feedback, she thought, because she was xxxxx.


Sometimes the peer reviewer is the one asking you to cite his or her work, not anybody else's...


This is an interesting article but it contributes a bit to terminological confusion about reviewing modes.

First, the article talks about "double-blind reviewing" as being "logistically hard" because reviewers can "find the paper in a Google search", and talks about a "price tag". By contrast, many conferences around me implement so-called "lightweight double-blind" reviewing, which takes zero effort and just means that the authors and affiliations are not mentioned on the copy of the paper that reviewers read. I think this form of double-blind is a no-brainer, eliminating some bias with essentially no downside; and that discussions about the cost and complexity of double-blind reviewing are a distraction from this immediate improvement.

Here is a typical paragraph from a call for papers (here, STACS 2023) describing the policy:

  As in the previous two years, STACS 2023 will employ a lightweight double-blind reviewing process: submissions should not reveal the identity of the authors in any way. The purpose of the double-blind reviewing is to help PC members and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For example, authors may post drafts of their papers on the web, submit them to arXiv, and give talks on their research ideas.

Second, the article talks about "open review" as meaning "everyone’s identity is public". This is sometimes what the term means, but not always -- for instance the OpenReview.net platform <https://openreview.net/> supports forms of open reviewing where the reviewers are still anonymous. Here "open" means "the discussion happens in the open, and everyone can post comments about the paper and reviews and contribute to the discussion". Here again, I feel that the discussion about "open reviewing" with non-anonymous reviewers is drawing attention away from this model which looks like a net improvement over the status quo.


Of course it does. Professors doing high-caliber research run a tight ship, and their labs publish consistently high-quality work.

It does not mean that reviewers lower their bar when assigned a paper from a famous professor; they just know a priori that the work will not be a complete scam.


I’m a bit conflicted about the goal of separating the “hard, cold, objective” science from “soft, warm, subjective” things such as reputation and trust. It may be the case that some publications can be perfectly evaluated based purely on the text of the manuscript, but in most of the work I’ve read and written there are countless little details that are not discussed in the text. The manner in which the authors have handled such details can greatly affect the quality of the research. How carefully did they write the one-off script that produced the figure? How closely did they follow the study protocols? How much did they look out for issues that might undermine the results? Such an evaluation may only be possible after following the work of an author over time and perhaps through direct collaborations.

In practice, we may get better outcomes if peer reviewers completely ignore these aspects and evaluate all papers purely based on what’s on the page. But I don’t think it is obvious that any reliance on trust and reputation should be derided as bias to be eliminated.


This is not surprising of course. What has changed probably is that science over the past 20 years has become more "social", but not in a good way. Since there are so many good scientists everywhere, people get ahead by using any means to make their science more public, mostly through conferences, organizing conferences, pursuing various committees and subcommittees, befriending journal editors, the press, and of course social media plays a part in this, but not alone. Antisocial people will have a hard time.

Where to start here? What do we want the purpose of publishing and peer review to be? Out of all this publicity dance, which part gets distilled into the solid foundation of science? I do think this whole journals/publishing/conference/apply-for-funding thingy is a bit too ancient and incremental, and more radical solutions would be nice to try. I think fundamental to this is the structure of funding, do we really want small funds going to individual small PIs, or maybe more independence, or less independence ...


I find it ironic that this is published in Nature. In my experience, Nature journals are amongst the worst with respect to peer review.

In my field they seem to draw from a very limited set of reviewers (often senior academics who have not worked in the specific field in quite a while). We have been criticised with very outdated information. This is made worse because the editors are not experts: we had a reviewer contradict textbook-established science, and when we asked for an additional reviewer, they sent it back to the same person.

Even worse, their goal is not publishing good science; they want to sell journals. While they will never publicly admit it, I know that they take the reputation of an author into account in their decision to send a paper out to reviewers (for those who don't know, in high-impact journals like Science or Nature, the biggest hurdle is typically getting the editors to accept the paper and send it out to review).


https://en.wikipedia.org/wiki/SCIgen#Schlangemann

> In 2008, in response to a series of Call-for-Paper e-mails, SCIgen was used to generate a false scientific paper titled Towards the Simulation of E-Commerce, using "Herbert Schlangemann" as the author. The article was accepted at the 2008 International Conference on Computer Science and Software Engineering (CSSE 2008), co-sponsored by the IEEE, to be held in Wuhan, China, and the author was invited to be a session chair on grounds of his fictional Curriculum Vitae.


Also relevant: "cognitive ease" from "Thinking, Fast and Slow." We often treat information as higher quality if we're able to recall related information with more ease. Importantly, this happens regardless of the quality of the information. One study they point out found that people exposed to random nonsensical words later ranked the words they saw most frequently as "better" than the ones they saw less often.

It sounds like a related bias of automatic thinking here. There's some cognitive ease in recognizing the author, so your bias kicks in that this is better than if you didn't recognize the author.


This is such a strange headline. It is not the authors' names that have the effect, it is the authors' past work.

For example, having done work in the past that was the subject of a Nobel Prize might have an "astonishing" influence. Having the name "James" would not likely have an astonishing influence.


Skimming through the article, it fails to mention that journals have some power to assess systematic reviewer biases based on their past history. If reviewers are systematically more permissive towards specific people's work, then good science is not their objective.

So, essentially it can be viewed as a diversity problem in the reviewer space. The journals have an important responsibility here, and they cannot present themselves as neutral observers of this phenomenon who say "oh, every peer-reviewing model has problems" and blame the ethos of reviewers whom they themselves choose.


Isn't the article confusing correlation with causation? I read it as there being a correlation between acceptance rate and the author's pedigree, but the article sounds like the pedigree is the cause of the high acceptance rate.

I'm no expert, so please correct me if I'm wrong. But for example, how likely is it that a Nobel prize winner produces research worth publishing? Or how likely is it that they are simply good at the skill of paper writing?


I wonder if anyone changed their name to something like "Maxwell Einstein" or "Shannon Turing" for an advantage


That's odd. I don't do a lot of peer review, but when I do, the authors' names are masked. Sometimes I can figure it out based on the citations, but I haven't reviewed any papers where the author's name is listed.

That being said an author isn’t published randomly in Nature, so I expect subsequent papers from an author to be better, on average, than non-Nature published authors.


Good that the study was done and got published. But the result is obvious and well known. The terrible part is that the solution is also well known (a double-blind process), but there is no interest in implementing it at most journals. Editors have to defend their field from outsiders… publishers do not care as long as money keeps flowing in…


"A Nobel prizewinner is six times more likely than someone less well known to get a thumbs-up for acceptance, finds study."

Lifehack discovered: legally change your name to that of a Nobel prizewinner if pursuing academia.


Hey, it's me, Niels Bohr.


I'd love to see this tested: identical full name; only the first name identical; only the last name... it would be funny to see the results, if this research is any indication.


People kept dissing me as Einstein so much I went with Nikola. People now keep asking me if I make electric cars for some reason.


Old news from 2016

Jobseekers with Anglo-Saxon, easy to pronounce and common names are the most likely to get to the interview stage compared to candidates with unfamiliar names, according to research by the Australian National University published in the Oxford Bulletin of Economics and Statistics.

https://www.independent.co.uk/news/business/news/unusual-nam...


Surely they didn't mean actual Anglo-Saxon names, right? Those names are so challenging to the modern English-hearing ear.

https://www.behindthename.com/names/usage/anglo-saxon


Æðelwulf wrote a heck of a quicksort on the whiteboard under the gun last week.


What would Žižek say?

"No no no, within which historical context does it even make sense to ask this question? No? You see?"


"Nobel and Novice: Author Prominence Affects Peer Review" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4190976


I was really confused because when I last looked at academic job postings it was in Medieval Studies.

“Jobseekers with Anglo-Saxon” means “candidates who speak Anglo-Saxon.”

And I’m like “I thought that was just a medieval studies thing, but I guess the older languages really do help …”


priors matter, right?


Priors are bigotry, sorry.


This shouldn't be surprising to anyone who has ever worked in academia. Certain names have a much easier time publishing than others. It's a self-fulfilling prophecy. Double-blind reviews aren't common in all fields (e.g. everything IEEE), which leads to a lot of bias right from the start: seniority, country, university, ...


That's what we want in society, right? Perks for doing hard work and then being recognized for it. That's what "street cred" is. Maintaining a reputation is hard work, and the fast pass and acknowledgment from broader society is your reward.

As in this person has more than proved himself, let’s not vet him as much.


This is literally how one of the biggest scandals in academic publishing occurred[1], so... no? Maybe it's a nice way to treat experts when asking them for their opinion, they are experts after all, but we absolutely don't want to give celebrities more leeway than unknowns in research papers where new claims are being made. You better have done the work properly, again, and again, and again. No free passes, no blind trust.

[1] https://en.wikipedia.org/wiki/Diederik_Stapel


In this case it looks like his good reputation was falsified from the start? They were unable to prove misconduct in his dissertation only because the data had been destroyed. So consistently missing his misconduct would be more to blame here than riding on a reputation.


But that ignores the years prior where he had enough reputation to be scrutinized less, which lets him publish more, which made him more famous, which got him scrutinized less, allowing him to keep publishing, boosting his reputation, [etc].

Right up to the very end, this man was able to publish like a celebrity because no one questioned it, because he was famous. So let's not bake that into the process: scrutinize all papers equally. Experts don't get a free pass; if they have new claims to make, those claims are just as "I don't believe you yet" as anyone else's.


No. You get gatekeeping and laziness and stale group think and unstated quid pro quo and discrimination. It's worse science.

A famous name getting the same paper published more easily is a failure of peer review. The whole point of science is that ideas and evidence stand on their own merits, not on celebrity or seniority or power or any other axis that doesn't matter.


Linus Pauling won _two Nobel prizes_. He also, particularly in later life, got into weird quackery.

You don’t want to go on reputation.


I seriously wonder how people would receive the Go language if they had no idea about the people behind it.


This is known. Hence, when we were building the peer-reviewed moderation/curation function for finclout, we chose to show only the content. While in selected cases the poster is identifiable, for most users the decision is made based only on content.


Well, yeah. At least in science, it's a small world and reviewers often know the authors personally.

And your reputation follows you. If it’s a big-name lab who has an amazing track record, you’re going to review the paper in the context of their entire body of work.


Nothing Nature publishes now can be trusted as science; they added new rules that they won't publish anything counter to their political opinion.


I've done research at one of the top universities for NLP in Europe, and there's a commonly held belief amongst many of my peers that conference acceptance has more or less degraded into a random process.


This is how VC firm halos work. When Sequoia or Benchmark invest, tons of investors want to squeeze into the round -- almost regardless of the fundamental attributes of the company.


Wow. So branding works on human scientists. Shocking.


Every paper Albert Einstein submitted to anonymous peer review was rejected.

(He had only one anonymously peer-reviewed paper, and it had an error.)


I'm confused because of your second sentence but according to this site https://mindmatters.ai/2020/05/einsteins-only-rejected-paper...

". Albert Einstein only had one anonymous peer review in his career — and the paper was rejected2. This happened in 1936."

How is just one paper submitted anonymously being rejected an indication of a trend?


There was no trend.


“Association of popular/celebrity angels with your startup increases the odds of your Series_A fundraise round.”


Wait a minute, does this mean that the entire conceit that this culture produces a superior form of truth is a hollow lie?

Duh


Is that surprising?! It's a kind of optimization, with not always optimal results.


I would expect a much higher than sixfold increase in paper acceptances by winning the Nobel Prize. Why wouldn’t there be a massive lift? Even if you can’t personally see why this result is important you know that one of the biggest contributors to the field thinks it’s interesting. That’s a pretty credible signal in a noisy world.


Elon Musk tweets something stupid and it gets discussed. You or I do it and it's ignored. Celebrity is a thing, nothing odd going on


Discussed is not the same thing as reviewed, though. I agree it is not odd; it is still a good thing to know about and be mindful of. It might be advantageous in certain circumstances, e.g. you can skim a review from a more trusted developer.


Who wasn't mindful that the concepts of bias and popularity exist?


I think many people. Everyone knows. Not everyone is mindful.


Is anyone surprised by this?


Why weren't you?


Much of academia seems to have become an outwardly transparent, no-holds-barred status battleground. So this just seems to align exactly with my expected outcome, given that assumption.

I'm not saying there aren't people in academia who are driven by doing good research, but I certainly don't see that as the driving force in US academia.


Do you have proof of this?


Yes I've got objective proof of my qualitative opinions on sociological topics. Let me just dig that right up for you.

I didn't realize that HN was an academic journal where all statements should come with proof.

Do you have proof that I'm wrong?


Names also have an astonishing influence on getting invited to a job interview.

I had a friend who changed their name on their CV and their success rate went to 90%, from close to 10% before.


Greg Rutkowski and Alphonse Mucha are about to become top-tier scholars.


That an economist finds human behaviour that every normal human would expect "astonishing" is itself not remotely astonishing.


Peer review is not supposed to be a Twitter popularity contest, so these numbers are indeed astonishing.


Yet most non-economists would have predicted exactly this. If it's astonishing to you, well, perhaps you've had the misfortune of learning to think like an economist (i.e. poorly, and very muddled about normative vs. actual).


It would be possible to respond to your comment in earnest but ad hominem like this is more of a psychological cushion than an invitation to genuine discussion.


In what world is a daft attempt to psychoanalyze your interlocutor not ad hominem?

My comment was snarky but not remotely playing the man. I meant it quite literally as a criticism of the thinking at issue. Economistic thought is riddled with confusion, and being surprised that scientists don't behave normatively (i.e. not as those engaged in peer review are "supposed to") exhibits perfectly one variant of said confusion.

I'm not 'astonished' that non-blinded peer reviewers are influenced by social status. No one I know would be surprised at all, let alone 'astonished'.


> I'm not 'astonished' that non-blinded peer reviewers are influenced by social status.

I am astonished, in the same way I would be astonished to find out that papers that smell like fish are more likely to be accepted for publication at the majority of peer reviewed venues, and that the reason is that reviewers for those publications let their house cats stack rank the submissions.


Right, nothing is really. The parent is saying that this is a common human flaw and exists everywhere


Yes, quite. And that the blind spot is characteristic of economistic thinking.


Is it any different to podcasts, concerts, and books?

People will choose what they listen to, watch, or read heavily based on who the performer is.

A performer who has consistently given good content will obviously have a bigger pull than a nobody still trying to get their first break.


Podcasts, concerts, and books are things we review subjectively. Scientific articles, one would expect, would be reviewed objectively.


The data in a paper can be objectively measured; the methodology used to conduct a study (sample size, cohort composition, cohort groups chosen for comparison) could go either way; and the conclusions drawn are often subjective. If two of the three component parts of a paper are arguably subjective, it's not surprising that Nobel Prize winners get a pass on the quality of their papers.


The data is objective, but the quality of the research is subjective. For example, if my sample size was 150 for some study, whether that size was sufficient is a subjective judgment.


Peer review doesn't verify the correctness of data. At best reviewers will flag flagrantly fabricated data that doesn't pass the sniff test, but attempted replication of the results is not what peer reviewers are doing.


Not true. There are objective statistical tests regarding sample sizes.


There are objective measures of statistical power: given an effect size specified a priori, you can estimate the power of a particular procedure. The trouble is that what a "reasonable" effect size may be is subjective and requires prior knowledge; post-hoc power calculations are widely regarded as misleading, conveying little additional information beyond a p-value.
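
As a minimal sketch (my own illustration, not from the study or the comments above): Python's statsmodels can do the a-priori calculation, but only after you commit to an assumed effect size, which is exactly the subjective step. The numbers below are hypothetical.

    # Hypothetical numbers, just to show that power is computable
    # once an effect size is chosen up front.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    effect_size = 0.5    # assumed Cohen's d -- the subjective input
    alpha = 0.05         # significance level
    n_per_group = 75     # e.g. a 150-subject study split into two arms

    # Power achieved with that design (two-sample t-test)
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0)
    print(f"power ~= {power:.2f}")

    # Or invert it: subjects per group needed for 80% power
    n_needed = analysis.solve_power(effect_size=effect_size, power=0.8,
                                    alpha=alpha, ratio=1.0)
    print(f"n per group for 80% power ~= {n_needed:.0f}")

Run the same kind of calculation after the fact with the observed effect size and you get the misleading post-hoc power mentioned above.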


The only people who could possibly think that are the ones who have never gotten anything published.


That expectation goes against human nature, although I too wish it were true.


Yes. The purpose of science is supposed to be finding the truth.

The identical paper being rejected more often based on the celebrity of the author isn't that.


Honestly, this is the least of academia's problems.

I've seen it from the inside. Probably read 80-100 widely-cited papers during my PhD (before dropping out), and maybe half a dozen of them were written by people who had any interest in discovering truth and pushing mankind forward.

Seriously cannot overstate both the willful ignorance of established scientists, and the extent to which this is enforced onto the next generation.


Let me guess, the others wanted to push their political view and that's why I shouldn't trust experts?



