Personally, I'm always very careful to cite and praise work by "competing" researchers even when that work has well-known errors, because I know that those researchers will review my paper and if there aren't other experts on the review committee the paper won't make it. I wish I didn't have to, but my supervisor wants to get tenured and I want to finish grad school, and for that we need to publish papers.
Lots of science is completely inaccessible for non-experts as a result of this sort of politics. There is no guarantee that the work you hear praised/cited in papers is actually any good; it may have been inserted just to appease someone.
I thought that this was something specific to my field, but apparently not. Leaves me very jaded about the scientific community.
I want to answer the question "if I were a researcher and were willing to cheat to get ahead, what should be the objective of my cheating?"
If you want to look impressive to non-experts and get lots of grant money/opportunities, I'd go for lots of straightforward publications in top-tier venues. Star findings will come under greater scrutiny.
If you want to write a pop book, appear on TV, and sell classes, you need one interesting bit of pseudoscience and a dozen follow-up papers using the same bad methodology.
Does anyone have ideas on how that may be achieved - what a correct incentive structure for research might look like?
> the goal of research, which is to expand the scope and quality of human knowledge.
But are we so certain this is ever what drove science? Before we dive into twiddling knobs with a presumption of understanding some foundational motivation, it's worth asking. Sometimes the stories we tell are not the stories that drive the underlying machinery.
For example, we have a lot of wishy-washy "folk theories" of how democracy works, but actual political scientists know that most of the mechanisms people "think" drive democracy are actually just a bullshit story. According to some, it's even possible that the function of these common-belief fabrications is that their falsely simple narrative stabilizes democracy itself in the mind of the everyman, due to the trustworthiness of seemingly simple things. So it's an important falsehood to have in the meme pool. But the real forces that make democracy work are either (a) quite complex and obscure, or even (b) as-of-yet inconclusive.
I wonder if science has some similar vibes: folk theory vs. what actually drives it. Maybe the folk theory is "expand human knowledge", but the true machinery is and always has been a complex concoction of human ego, corruption, and the fancies of the wealthy, topped with an icing of natural human curiosity.
The Structure of Scientific Revolutions by Thomas Kuhn is an excellent read on this topic - dense but considered one of the most important works in the philosophy of science. It popularized Planck's Principle paraphrased as "Science progresses one funeral at a time." As you note, the true machinery is a very complicated mix of human factors and actual science.
The trouble is that for the evaluators (all the institutions that can be sources of an incentive structure) it's impossible to distinguish an unpublished 90%-ready Nobel prize from unpublished 90%-ready bullshit. If you've been working for 4 years on minor, incremental work and published a bunch of papers, it's clear that you've done something useful: not extraordinary, but not bad. But if you've been working on a breakthrough and haven't achieved it, there's simply no data to judge. Are you one step from major success? Or is that one step impossible and never to be achieved? Perhaps all of it is a dead end? Perhaps you're slacking off in a direction you know is a dead end, but it's the one thing you can do which brings you some money, so meh? Perhaps you're just crazy and chasing a worthless dead end? Or perhaps everyone in the field thinks you're crazy and the direction is worthless, but they're actually wrong?
Peter Higgs was a relevant case - IIRC he said in one interview that for quite some time "they" didn't know what to do with him, as he wasn't producing much, and the things he had done earlier were either useless or Nobel-prize-worthy, but it was impossible to tell for many years after the fact. How the heck can an objective incentive structure take that into account? It's a minefield.
IMHO any effective solution has to scale back on accountability and measurability, and to some extent just give some funding to people/teams with great potential and see what they do - with the expectation that it's OK if it doesn't pan out, since otherwise they're forced to pick only safe topics that are certain to succeed and also certain to not achieve a breakthrough. I believe the European Research Foundation had a grant policy with similar principles, and I think that DARPA, at least originally, was like that.
But there's a strong, entirely opposite pressure from the key stakeholders holding the (usually government) purses: their interests lean more towards avoiding bad PR from any project with seemingly wasted money, and that results in a push towards these broken incentive structures and mediocrity.
At the same time, academics have increasingly been evaluated by metrics meant to show value for money. This has led to some schizophrenic incentive structures. Most professor-level academics spend probably around 30% of their time writing grants, evaluating grants, and reporting on grants. Moreover, the evaluation criteria often demand that work be innovative, "high risk/high reward" and "breakthrough science", but at the same time feasible (and often you should show preliminary work), which I would argue is a contradiction.
This naturally leads to academics overselling their results. Even more so because you are also supposed to show impact.
The main reason for all this, IMO, is the reduced funding for academic research, particularly considering the number of academics around. Everyone is competing for a small pot, which makes those who play to the (broken) incentives the most successful.
For commercial ventures, you also have the same issue of incremental progress vs big breakthroughs that don't look like much until they are ready.
As far as I can tell, in the startup ecosystem the whole thing works by different investors (various angels and VCs and public markets etc), all having their own process to (attempt to) solve this tension.
There's beauty in competition. And no taxpayer money is wasted here. (Yes, there are government grants for startups in many parts of the world, but that's a different issue from angels evaluating would-be companies.)
"You get what you measure" applies here. Now, if we had some Objective Useful Research Quality Score, it could replace the price signals. But then we wouldn't have the problem in the first place; we'd just promote based on OURQS.
A 0.1% chance to build an app that's gonna be useful to hundreds of millions of people is better than what most career scientists manage.
Perhaps start with removing tax payer money from the system.
Stop throwing good money after bad.
"Academic politics is the most vicious and bitter form of politics, because the stakes are so low."
As a non-expert, this is not the type of inaccessibility that is relevant to my interests.
"Unfortunately, alumni do not have access to our online journal subscriptions and databases because of licensing restrictions. We usually advise alumni to request items through interlibrary loan at their home institution/public library. In addition, under normal circumstances, you would be able to come in to the library and access the article."
This may not be technically completely inaccessible. But it is a significant "chilling effect" for someone who wants to read on a subject.
I think the inaccessibility is for different reasons, most of which revolve around the use of jargon.
In my experience, the situation is not so bad. It is obvious who the good scientists are, and you can almost always be sure that if they wrote it, it's good.
In essence, the evaluators (non-scientific organizations who fund scientific organizations) need some metric to distinguish decent research from weak, one that's (a) comparable across fields of science; (b) verifiable by people outside that field (so you can compare across subfields); (c) not trivially gameable by the funded institutions themselves; (d) describable in an objective manner, so that you can write the exact criteria/metrics into a legal act or contract. There are NO reasonable metrics that fit these criteria; international peer-reviewed publications fitting certain criteria are bad, but perhaps the least bad among the (even worse) alternatives, like direct evaluation by government committees.
(I am leaving cetacean cunt in because it’s a funny autocorrect.)
(And now I’m leaving the above in, because it’s even funnier. Both genuine.)
Also, I know of no researchers personally who are enthralled by the existing system.
If I am reading between the lines correctly, you are implying there are few undergrads publishing in high caliber journals because of gatekeeping. As a reviewer, I often don't even know the authors' names, let alone their degrees and affiliations. It is theoretically possible that editors would desk reject undergrads' papers, but: a) I personally don't think a PhD is required to do quality research, especially in CS, and I know I am not the only person thinking that; b) In some fields like psychology and, perhaps, physics many junior PhD students only have BS degrees, which doesn't stop them from publishing.
I think that single-authored research papers by people without a PhD are relatively uncommon because getting a PhD is a very popular way of leveling up to the required expertise threshold, and getting research funding without one is very difficult. I don't suspect folks without a PhD are systematically discriminated against by editors and reviewers, but, of course, I can't guarantee that this is universally true across all research communities.
I think the entire academic enterprise needs to be burnt down and rebuilt. It’s rotten to the core and the people who are providing the most value - the scholars - are simultaneously underpaid and beholden to a deranged publishing process that is a rat race that accomplishes little and hurts society. Not just in our checkbook but also in the wasted talent.
It also isn't any sort of conspiracy that government grants are given out to people with a proven history of doing good research, as evaluated by their peers.
Maybe not quite as prestigious as Nature, but NLP is pretty huge, and the conference I got into has an average h-index of, I think, 60+.
The people you mention are probably making YouTube videos and writing blog posts about their findings and are reaching a broader audience.
"Neither Myers nor Briggs was formally educated in the discipline of psychology, and both were self-taught in the field of psychometric testing."
BTW, I can tell you that the vast majority of researchers are not "enthralled" by the system, but highly critical of it. They simply don't have a choice but to work with it.
(There's lots of crowding out happening, of course, from the government subsidized science. But that can't be helped at the moment.)
It will probably have to be started by some civic minded billionaires. I don't think the established system can reform itself.
What you've described sounds like something that is not, in any sense, science.
From your perspective, what can be done to return the scientific method to the forefront of these proceedings?
The situation is even worse when the paper claiming X underwent artifact review, where reviewers actually DID look at the raw data and source code but simply lacked the attention or expertise to recognize errors.
I'm not taking bribes, I'm paying a toll.
I wouldn’t necessarily condone the behavior, but what would you do in that situation? Always whistleblow whenever something doesn’t feel right and risk the politics? Quit working in the field if your concerns aren’t heard? Never cite papers that have any errors at all? I think it’s a tough situation, and it's not productive to say OP isn’t behaving morally.