
The issue is that the authors of bad papers still participate in the peer-review process. If they are the only expert reviewers and you do not pay proper respect to their work, they will squash your submission. Because of this, papers can propagate mistakes for a long time.

Personally, I'm always very careful to cite and praise work by "competing" researchers even when that work has well-known errors, because I know that those researchers will review my paper and if there aren't other experts on the review committee the paper won't make it. I wish I didn't have to, but my supervisor wants to get tenured and I want to finish grad school, and for that we need to publish papers.

Lots of science is completely inaccessible for non-experts as a result of this sort of politics. There is no guarantee that the work you hear praised/cited in papers is actually any good; it may have been inserted just to appease someone.

I thought that this was something specific to my field, but apparently not. Leaves me very jaded about the scientific community.




What is it that makes you have a nice career in research? Is it a robust pile of publishing or is it a star finding? Can you get far on just pure volume?

I want to answer the question "if I were a researcher and were willing to cheat to get ahead, what should be the objective of my cheating?"


I suppose it depends on how you define "nice". If you cheat, at some point people will catch on, even if you don't face any real consequences. So if you want prestige within your community, then cheating isn't the way to go.

If you want to look impressive to non-experts and get lots of grant money/opportunities, I'd go for lots of straightforward publications in top-tier venues. Star findings will come under greater scrutiny.


Not outright cheating, but cooking results to seem better/surprising and publishing lots of those shitty papers is the optimal way to build a career in many fields. In medicine, for example.


Cooking results seems like outright cheating to me.


It's more complicated than that. It can look something like this: https://xkcd.com/882/
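
Roughly, the dynamic that strip illustrates: run enough underpowered tests on noise and something "significant" pops out eventually, no fabrication required. A toy simulation, purely for illustration (hypothetical code, not from any paper; the helper name null_experiment is made up):

    # Twenty "jelly bean" studies on pure noise, each tested at ~p < 0.05.
    import random
    random.seed(0)

    def null_experiment(n=100):
        # Two groups drawn from the SAME distribution, so any "effect" is noise.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        mean_a, mean_b = sum(a) / n, sum(b) / n
        var_a = sum((x - mean_a) ** 2 for x in a) / n
        var_b = sum((x - mean_b) ** 2 for x in b) / n
        z = abs(mean_a - mean_b) / ((var_a + var_b) / n) ** 0.5
        return z > 1.96  # roughly p < 0.05, two-sided

    hits = sum(null_experiment() for _ in range(20))
    print(f"'Significant' findings out of 20 null studies: {hits}")
    # Chance of at least one false positive across 20 tests: ~1 - 0.95**20, about 64%.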


For grants and tenure, 100 tiny increments over 10 years are much better for your research career than 1 major paper in 5 years that is better than all of them put together.

If you want to write a pop book, be on TV, and sell classes, you need one interesting bit of pseudoscience and a dozen followup papers using the same bad methodology.


This sounds inseparable from the replication crisis. The incentives are clearly broken: they are not structured in a manner that achieves the goal of research, which is to expand the scope and quality of human knowledge. To solve the crisis, we must change the incentives.

Does anyone have ideas on how that may be achieved - what a correct incentive structure for research might look like?


Ex-biochemist here, turned political technologist (who's spent a few years engaged in electoral reform and governance convos).

> the goal of research, which is to expand the scope and quality of human knowledge.

But are we so certain this is ever what drove science? Before we dive into twiddling knobs with a presumption of understanding some foundational motivation, it's worth asking. Sometimes the stories we tell are not the stories that drive the underlying machinery.

For example, we have a lot of wishy-washy "folk theories" of how democracy works, but actual political scientists know that most of the ones people "think" drive democracy are actually just bullshit stories. According to some, it's even possible that the function of these common-belief fabrications is that their falsely simple narrative stabilizes democracy itself in the mind of the everyman, due to the trustworthiness of seemingly simple things. So it's an important falsehood to have in the meme pool. But the real forces that make democracy work are either (a) quite complex and obscure, or even (b) as yet inconclusive. [1]

I wonder if science has some similar vibes: folk theory vs what actually drives it. Maybe the folk theory is "expand human knowledge", but the true machinery is and always has been a complex concoction of human ego, corruption and the fancies of the wealthy, topped with an icing of natural human curiosity.

[1]" https://www.amazon.ca/Democracy-Realists-Elections-Responsiv...


> I wonder if science has some similar vibes: folk theory vs what actually drives it. Maybe the folk theory is "expand human knowledge", but the true machinery is and always has been a complex concoction of human ego, corruption and the fancies of the wealthy, topped with an icing of natural human curiosity.

The Structure of Scientific Revolutions by Thomas Kuhn is an excellent read on this topic - dense, but considered one of the most important works in the philosophy of science. It popularized Planck's Principle, paraphrased as "Science progresses one funeral at a time." As you note, the true machinery is a very complicated mix of human factors and actual science.


See also the Myth of the Rational Voter https://www.goodreads.com/book/show/698866.The_Myth_of_the_R...


Modern real science is driven by engineering that is driven by an industry that is driven by profit and nature. If you are reading a paper that isn't driven by that chain of incentives, then the bullshit probability shoots way up. If someone somewhere isn't reading your paper to make a widget that is sold to someone to do something useful, then you can say whatever you want.


I've thought about it a lot, and I don't think it can be achieved.

The trouble is that for the evaluators (all the institutions that can be sources of an incentive structure) it's impossible to distinguish an unpublished 90%-ready Nobel prize from unpublished 90%-ready bullshit. So if you've been working for 4 years on minor, incremental work and published a bunch of papers it's clear that you've done something useful, not extraordinary, but not bad; but if you've been working on a breakthrough and haven't achieved it, then there's simply no data to judge. Are you one step from major success? Or is that one step impossible and will never be achieved? Perhaps all of it is a dead end? Perhaps you're just slacking off on a direction that you know is a dead end, but it's the one thing you can do which brings you some money, so meh? Perhaps you're just crazy and it was definitely a worthless dead end? Perhaps everyone in the field thought that you're just crazy and this direction is worthless but they're actually wrong?

Peter Higgs was a relevant case - IIRC he said in one interview that for quite some time "they" didn't know what to do with him as he wasn't producing anything much, and the things he had done earlier were either useless or Nobel prize worthy, but it was impossible to tell for many years after the fact. How the heck can an objective incentive structure take that into account? It's a minefield.

IMHO any effective solution has to scale back on accountability and measurability, and to some extent just give some funding to some people/teams with great potential and see what they do - with the expectation that it's OK if it doesn't pan out, since otherwise they're forced to pick only safe topics that are certain to succeed and also certain not to achieve a breakthrough. I believe the European Research Foundation had a grant policy with similar principles, and I think that DARPA, at least originally, was like that.

But there's a strong pressure in entirely the opposite direction from the key stakeholders holding the (usually government) purses: their interests lean more towards avoiding bad PR for any project with seemingly wasted money, and that results in a push towards these broken incentive structures and mediocrity.


I would go a step further and say that the value of specific scientific discoveries (even if no bullshit is involved) often cannot be evaluated until decades later. Moreover, I would argue that trying to measure scientific value is in fact an effort to quantify something unquantifiable.

At the same time, academics have increasingly been evaluated by metrics meant to show value for money. This has led to some schizophrenic incentive structures. Most professor-level academics are spending probably around 30% of their time on writing grants, evaluating grants and reporting on grants. Moreover, the evaluation criteria also often demand that work should be innovative, "high risk/high reward" and "breakthrough science", but at the same time feasible (and often you should show preliminary work), which I would argue is a contradiction. This naturally leads to academics overselling their results. Even more so because you are also supposed to show impact.

The main reason for all this, IMO, is the reduced funding for academic research, particularly considering the number of academics that are around. So everyone is competing for a small pot, which makes those who play to the (broken) incentives the most successful.


Well, perhaps we can learn from how the startup ecosystem works?

For commercial ventures, you also have the same issue of incremental progress vs big breakthroughs that don't look like much until they are ready.

As far as I can tell, in the startup ecosystem the whole thing works by different investors (various angels and VCs and public markets etc), all having their own process to (attempt to) solve this tension.

There's beauty in competition. And no taxpayer money is wasted here. (Yes, there are government grants for startups in many parts of the world, but that's a different issue from angels evaluating would-be companies.)


Startups are at an entirely different phase and have something research does not - feedback via market success. The USSR already demonstrated what happens when you try to run a process that depends on price signals without them, with its dead-end economic-theory attempts to calculate a globally fair price.

"You get what you measure" applies here. Now if we had some Objective Useful Research Quality Score, it could replace the price signals. But then we wouldn't have the problem in the first place; we could just promote based on OURQS.


Let people promote with their own money, based on whatever subjective useful research quality score they feel like.


Startups have misaligned incentives in a monopoly-ruled world? "Build a thousand messenger variations to get acquired by Facebook" comes to mind. So economic thinking might be harmful here?


Your comments are mostly dead. I didn't see anything wrong with them at a cursory glance.


Why? If that's what society values, that's what society gets. Who are we to judge?

A 0.1% chance to build an app that's gonna be useful to hundreds of millions of people is better than what most career scientists manage.


> Does anyone have ideas on how that may be achieved - what a correct incentive structure for research might look like?

Perhaps start with removing taxpayer money from the system.

Stop throwing good money after bad.


You don't make a nice career in a vacuum. With very few exceptions, you don't get star findings in a social desert. You get star findings by being liked by influential supervisors who are liked by even more influential supervisors.


There's a book called Science Fictions that pretty much goes over the standard bullshit packages in modern science.


> I want to answer the question "if I were a researcher and were willing to cheat to get ahead, what should be the objective of my cheating?"

"Academic politics is the most vicious and bitter form of politics, because the stakes are so low."

https://en.wikipedia.org/wiki/Sayre%27s_law


>Lots of science is completely inaccessible for non-experts as a result of this sort of politics

As a non-expert, this is not the type of inaccessibility that is relevant to my interests.

"Unfortunately, alumni do not have access to our online journal subscriptions and databases because of licensing restrictions. We usually advise alumni to request items through interlibrary loan at their home institution/public library. In addition, under normal circumstances, you would be able to come in to the library and access the article."

This may not be technically completely inaccessible. But it is a significant "chilling effect" for someone who wants to read on a subject.


If your main interest is reading papers and not being political about it, just use sci-hub to read the papers.


Having skimmed the Wikipedia page on it, I'm unsure about the legalities and potential consequences.


Some journals allow you to specify reviewers to exclude. True, there is no guarantee about published work being good, but that is likely more about the fact that it takes time to sort out the truth than about nefarious cabals of bad scientists.

I think the inaccessibility is for different reasons, most of which revolve around the use of jargon.

In my experience, the situation is not so bad. It is obvious who the good scientists are, and you can almost always be sure that if they wrote it, it's good.


In many journals it's an abuse of process to exclude reviewers you don't like. Much of the time this is supposed to be used to declare conflicts of interest based on relationships you have in the field.


Why do people need to publish? The whole point of publishing was content discovery. Now that you can just push it to a preprint server or to your blog, what's the point? I've written papers that weren't published but still got cited.


I need money to do research; available grants require achieving specific measurable results during the grant (mostly publications fitting specific criteria, e.g. "journal that's rated above 50% of the average citation rating in your subfield" or "peer-reviewed publication that's indexed in SCOPUS or Web of Science", definitely not a preprint or blog), and getting one is also conditional on earlier publications like that.

In essence, the evaluators (non-scientific organizations who fund scientific organizations) need some metric to compare and distinguish decent research from weak research, one that's (a) comparable across fields of science; (b) verifiable by people outside that field (so you can compare across subfields); (c) not trivially changeable by the funded institutions themselves; (d) describable in an objective manner so that you can write up the exact criteria/metrics in a legal act or contract. There are NO reasonable metrics that fit these criteria; international peer-reviewed publications fitting certain criteria are bad, but perhaps the least bad compared to the (even worse) alternatives like direct evaluation by government committees.


Simple cetacean count of the paper itself is probably a better metric than journal, though it’s certainly not perfect either.

(I am leaving cetacean cunt in because it’s a funny autocorrect.)

(And now I’m leaving the above in, because it’s even funnier. Both genuine.)


When you are looking for a job, are up for promotion/tenure, or applying for grants, a long publication record in prestigious journals is helpful.


Metrics. You can’t manage what you can’t measure!


"When a measure becomes a target, it ceases to be a good measure." - Marilyn Strathern


At some point, there's not going to be enough budget for both the football coach and the Latin philology professor. We should hire another three layers of housing diversity deans just to be safe.


What’s crazy to me is that nothing should stop an intelligent person from submitting papers, doing research, etc. outside the confines of academia and without having a PhD. But in practice you will never get anywhere without such things, because of the politics involved and the incestuous relationship between the journals and their monetarily uncompensated yet prestige-hungry army of researchers in thrall to the existing system.


If you add 'self-funded' to this hypothetical person, then it would not matter whether they play any games. Getting published is really not that hard if your work is good. And if it is good, it will get noticed (hopefully during the hypothetical person's lifetime). Conferences have fewer of these games in my experience and would help.

Also, I know of no researchers personally who are enthralled by the existing system.


Can you name a single person with only a high school or BS degree published in Nature or other high-impact journals? If not, why is this the case?


I think one of the most famous examples is that of Gosset, who published his work on statistical significance under the pen name "Student".[0] I wish I could give you a more recent example, but I don't pay much attention to authors' degrees, unless a paper is suspicious and from a journal I am unfamiliar with.

If I am reading between the lines correctly, you are implying there are few undergrads publishing in high caliber journals because of gatekeeping. As a reviewer, I often don't even know the authors' names, let alone their degrees and affiliations. It is theoretically possible that editors would desk reject undergrads' papers, but: a) I personally don't think a PhD is required to do quality research, especially in CS, and I know I am not the only person thinking that; b) In some fields like psychology and, perhaps, physics many junior PhD students only have BS degrees, which doesn't stop them from publishing.

I think that single-authored research papers by people without a PhD are relatively uncommon because getting a PhD is a very popular way of leveling up to the required expertise threshold, and getting research funding without one is very difficult. I don't suspect folks without a PhD are systematically discriminated against by editors and reviewers, but, of course, I can't guarantee that this is universally true across all research communities.

0. https://en.wikipedia.org/wiki/William_Sealy_Gosset


I believe that “good” research, i.e. the kind that would be referenced by other “good” researchers, useful in obtaining government grants, reported in the press, and so on, is indeed gatekept. Some subjects, such as mathematics and computer science, have seen much progress via preprints, where anyone can publish anonymously and make a mark. But the majority of subjects are blocked to anyone not already connected, especially soft sciences like sociology, psychology, and economics.

I think the entire academic enterprise needs to be burnt down and rebuilt. It’s rotten to the core, and the people who are providing the most value - the scholars - are simultaneously underpaid and beholden to a deranged publishing process, a rat race that accomplishes little and hurts society. Not just in our checkbooks but also in the wasted talent.


The status quo isn't perfect, but I think you are severely exaggerating how bad things are. The fact that nearly all scientific publishing is done by people who are paid to do research (grad students, research scientists, professors, etc.) isn't evidence of gatekeeping. It just means that most people aren't able/willing to work for free.

It also isn't any sort of conspiracy that government grants are given out to people with a proven history of doing good research, as evaluated by their peers.


I personally, holding only a BS, published a paper in a top-6 NLP conference along with an (at the time) high school senior. I had no help or assistance from any Ph.D. or institution.

Maybe not quite as prestigious as Nature, but NLP is pretty huge, and the conference I got into has an average h-index of, I think, 60+.

Proof: https://www.aclweb.org/anthology/2020.argmining-1.1/


In the same vein though, can you think of any person who wants to publish research and is actively being denied the ability to publish?

The people you mention are probably making YouTube videos and writing blog posts about their findings, and are reaching a broader audience.


I know a person who got published in high school. They did so by working closely with multiple professors on various projects. You don't have to do a PhD to do this especially if you're a talented and motivated youngster.


I have personally recommended for publication papers written by people who do not have a master's degree. In most cases I did not know that at the time of review, but it did not occur to me to care about it when I did.


The Myers-Briggs test is an example: a pseudoscientific test with questionable origins.

"Neither Myers nor Briggs was formally educated in the discipline of psychology, and both were self-taught in the field of psychometric testing."


I can name several high school students who conducted studies and led first-author papers at leading HCI venues. They were supervised by academics though. Would that suffice?


This has nothing to do with gatekeeping. I agree that the current publication and incentive system is broken, but it's completely unrelated to the question of whether outsiders are being published. The reason you see very little work from outsiders is that research is difficult. It typically requires years of full-time dedicated work; you can't just do it on the side. Moreover, you need to first study and understand the field to identify the gaps. If you try to identify gaps on your own, you are highly likely to go off in a direction that is completely irrelevant.

BTW I can tell you that the vast majority of researchers are not "enthralled" by the system, but highly critical of it. They simply don't have a choice but to work within it.


I think this is a bit naive. One thing that stops a smart person doing research without a PhD is that it takes a long time to learn enough to be at the scientific frontier where new research can be done. About a PhD’s length of time, in fact. So, many people without a PhD who try to do scientific research are cranks. I don’t say all.


Some quality journals and conferences have double blind reviews now. So the work is reviewed without knowing who the work belongs to. It's not so much the politics of the system as the skills required to write a research paper being hard to learn outside of a PhD. You need to understand how to identify a line of work in a very narrow field so that you can cite prior work and demonstrate a proper understanding of how your work compares and contrasts to other closely related work. That's an important part of demonstrating your work is novel and it's hard to do (especially for the first time) without expert guidance. Most students trying this for the first time cite far too broadly (getting work that's somewhat related but not related to the core of their ideas) and miss important closely related work.


It's time to start over with some competing science establishments!


There's lots of good science done in the commercial sector.

(There's lots of crowding out happening, of course, from the government subsidized science. But that can't be helped at the moment.)


Sure, but I'm dreaming of a whole parallel reformed "New-niversity" system that replaces outdated and wasteful practices with systems that are more productive.

It will probably have to be started by some civic minded billionaires. I don't think the established system can reform itself.


Thanks for sharing.

What you've described sounds like something that is not, in any sense, science.

From your perspective, what can be done to return the scientific method to the forefront of these proceedings?


You're like that Chinese sports boss who was arrested for corruption and complained that it would be impossible to do his job without participating in bribery. Just because you stand to personally gain from your corrupt practices doesn't excuse them. If anything, it makes them morally worse!


I don't tell lies about bad papers, I only give some perfunctory praise so that reviewers don't have ammunition to kill my submission. E.g. if a paper makes a false contribution X and a true contribution Y, I only mention Y. If I were to say "So-and-so claimed X but actually that's false" I would have to prove it, and unless it's a big enough issue to warrant its own paper, I don't want to prove it. Anyway, without the raw data, source code, etc. for the experiments, there is no way for me to prove that X is false (I'm not a mathematician). Then the reviewers would ask why I believe X is not true when peer review accepted it. Suddenly all of my contributions are out the window, and all anybody cares about is X.

The situation is even worse when the paper claiming X underwent artifact review, where reviewers actually DID look at the raw data and source code but simply lacked the attention or expertise to recognize errors.

I'm not taking bribes, I'm paying a toll.


I think you're artificially increasing the citation count of those papers, which puts all their competitors at an unfair relative disadvantage.


I don’t really buy the comparison entirely. Presumably the sports boss is doing something patently illegal, and obviously there are alternative career paths. OP is working in academia, which is socially acceptable, and feels that this is what is normal in their academic field, necessary for their goals, and isn’t actively harmful.

I wouldn’t necessarily condone the behavior, but what would you do in the situation? To always whistleblow whenever something doesn’t feel right and risk the politics? To quit working in the field if your concerns aren’t heard? To never cite papers that have absolutely any errors? I think it’s a tough situation and not productive to say OP isn’t behaving morally.



