How to Make More Published Research True (plosmedicine.org)
59 points by phreeza on Oct 28, 2014 | 31 comments


I did research at a lab at Harvard Medical School. I can verify that the "publish-or-perish" mindset of academia drives researchers to do anything to get that significant p-value – even at some of the most highly regarded and established institutions.

Science today is full of bureaucratic nightmares: the publishing of a large volume of trivial results, and the transformation of the most experienced scientists into uninvolved managers.


I have the same experience.

Until university positions and research grants stop being given out based on prior research results, we won't be able to trust the research performed there.

There are millions of dollars on the line for the researchers involved. It is the difference between a well-paid career and a life of destitution as a slave to huge student debt. It's no wonder they are blind to the flaws in their research.

I believe strongly that teaching positions and research grants should be given out based on criteria that are only incidental to research results.

Evaluate profs and grants based on:

1. Domain knowledge (test the applicants)

2. Math skills (test the applicants)

3. Motivation and leadership

4. Prior and current research proposals (but ignore the results, and especially whether or not they were published)

5. Other skills, such as written and oral communication

Universities should not rely on journals to evaluate their professors. This corrupts the whole system. Journals have different goals. They want to publish well done research with interesting results. Universities should hire researchers that do well done research with interesting _questions_ regardless of the results.

If universities keep giving out jobs based on having generated interesting results in the past, they are going to keep getting researchers that ignore biases and publish whatever results are interesting whether they are true or not.


I don't know that the criteria you're proposing are all that different from the current situation. Domain knowledge and math skills are tested by your education; motivation and leadership by committee work; other skills by teaching. The only difference is your focus on proposals rather than results, and this is already the case in some fields.

I come from a "search for physics beyond the standard model" background, where other than the neutrino mass (from the SNO collaboration, which I was part of) there hasn't been a positive result in decades. So there is already a good deal of focus on proposals rather than results, and yet almost all the issues I see in the biosciences (I jumped ship to genomics in the mid-00s) are also present in that area of physics.

Ergo, empirically, I'm doubtful that focusing on proposals rather than results will make much difference.

The difficulty is that science never makes economic sense for an individual. I spent a decade of my life measuring zero to higher and higher precision, and I know people who have spent entire careers doing so: putting new limits on branching ratios to exotic (which sounds so much better than "nonexistent") decays and so on. It was fun, although I took a year off in the middle to do some medical physics and imaging, which was even more fun because I actually got to measure phenomena that exist.

So when I read things about the paucity of "breakthrough discoveries" I think that mostly the low-hanging fruit have been picked and genomics turns out to be a whole lot harder and more of a slog than people expected, with a vast amount of uninteresting material to be waded through for the sake of a slow accumulation of knowledge that we are still a century away from putting to any very good use.

I don't know what an economically rational model for reward in such an environment is, and it's good that the article raises the issue and explores some alternative approaches, but I don't think there is any easy fix for the problem because I don't think science makes any economic sense. Just moral sense.


That way you'll get loads of crap research too. In fact if you evaluate scientists on criteria such as "motivation and leadership" you make it more political not less. In your system there is absolutely no incentive to actually do the research, so you'll give all the money to a bunch of people who are great at writing proposals but who don't actually do science. Every second spent doing science is a second not spent writing the research proposals.

There is a very simple solution to the problem that completely eliminates the gaming of research results and publication bias. Require that the statistical methodology be completely specified prior to any data acquisition. The paper is written before the data is acquired, with some blank spots where the data will be filled in by a method that is completely mechanical (e.g. a computer program that processes the data and spits out the figures used to fill the blanks). Journals should decide whether to publish a paper based on the version without the data.
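
To make that concrete, here is a minimal sketch of the mechanical fill-in step; the file name, column layout, and choice of test are my own assumptions, not part of the proposal:

    # prereg_analysis.py -- hypothetical pre-registered analysis script,
    # frozen and timestamped BEFORE data collection, then run unchanged.
    import csv
    from statistics import NormalDist, mean, stdev

    DATA_FILE = "trial_data.csv"  # assumed layout: one "outcome" column

    def main():
        with open(DATA_FILE) as f:
            outcomes = [float(row["outcome"]) for row in csv.DictReader(f)]
        n = len(outcomes)
        # Pre-specified test: one-sample t against a null mean of zero,
        # with a large-sample normal approximation for the p-value.
        t = mean(outcomes) / (stdev(outcomes) / n ** 0.5)
        p = 2 * (1 - NormalDist().cdf(abs(t)))
        print(f"n = {n}, t = {t:.3f}, p = {p:.4f}")  # pasted into the blanks

    if __name__ == "__main__":
        main()

The point is that nothing downstream of data collection involves a human decision: the script is part of the submitted paper, and the numbers it prints are the only thing added afterwards.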


>but ignore results

That is certainly an interesting proposal. How do you intend to assess competence in generating novel ideas (i.e. not testing for knowledge of existing work) if you ignore the candidate's track record?


If you evaluate faculty and grants on research proposals but not the results, what stops a researcher from making lots of big-idea proposals but never actually doing any work? At some point, someone needs to actually do the experiments. In your system there is no motivation to do that.

What I think we need to do is reward negative results as much as positive results.


Student debt? As for grant review (I have reviewed NSF proposals before), I'd say that one of the biggest challenges is that there are a lot of people chasing a rather small amount of money (in the physical sciences). I think our committee had several strong people on it for the field, and even when people were not listed as having a conflict of interest, they would volunteer one if they had it and tried to be fair.

However, when you have a small pot of money, you do have to think about how you make awards. On the one hand, you ask the question: what are the chances that this would work, and if it did, would it be "transformational"? That is, it may turn out to be a loss, but if it works, it could really advance the field. In our committee, we did try to fund those kinds of proposals over incremental advances.

Now, as to the question of reputation, I'm going to have to disagree. If you're going to give someone funds, how do you gamble? If they're young, you can just look at their idea, their resources, and some indication that they have a chance at success. However, a more senior researcher has a track record. If they've received funds in the past and haven't accomplished anything with them, why would you keep giving them money? If someone is publishing interesting results, other people will try to duplicate and extend them. If someone's research consistently fails these tests, their reputation will suffer.

As for domain knowledge/math skills, I have to say that I think this is relatively useless. Do you have any feeling for how many grant applications come in (along with multiple proposers)? And you want to test them across many subfields? These people managed to get their PhDs, so if they are not competent, that should have shown up earlier, or in their publications.

I think there's a lot of merit in judging people by their results rather than simple tests which could be gamed.


> If universities keep giving out jobs based on having generated interesting results in the past

Worse, they are being given out almost exclusively based on where those past results are reported...


And yet, I have high hopes for science. It is not the most efficient system, but it is the only system that consistently works well to advance our knowledge.

We definitely have a lot of room to improve though.


As someone inside academic science, I think the only good thing it does is give people a place to think for a few years with relatively few distractions before they can go into industry or form a startup and actually get things done.

If you want to consider a problem in great depth before launching a startup to attack it, definitely go to grad school and do science for 4-5 years. If you think you have a good idea forget about grad school and just launch. (Obviously doesn't apply to things like bio where you need a lot of equipment, but as DIY science becomes more tractable this will go away.)


This article presents several provocative ideas. Changing the "culture" of research to encourage replication of findings, increase applicability of outcomes and reduce the corrosive effects of publication as competitive currency has been discussed quite a bit, though little seems to have come of it.

Especially interesting are the authors' ideas about the "reward system" in the current research world. If "value positions" like academic rank become irrelevant or even reduce reward, benefits would flow to research with greater scientific merit.

Effective science is much more a shared or collaborative effort than about proving who got there first. Too often authors stretch to make findings amount to an overarching explanation of the phenomena. Scientific quality would probably be improved just by scaling back conclusions and letting the data speak for itself.


One exciting thing is happening in my field: http://proceedings.dtu.dk/fedora/repository/dtu:233/OBJ/DOIu...

The TL;DR is that one international user facility is creating a universal locator for data that you can cite in your publications to point people to the raw data, which is curated by the facility and openly accessible. This is extremely cool! It may be of limited use without metadata about how the experiment was performed, but has the potential to be rather useful.

The DOE in the US is moving to put more of the research published with its grant money into the public domain--either with preprints, or if the publishers can be encouraged, the final articles (this seems to be under development).

For some of my friends in statistics, there seems to be a move towards reproducible research where you try to preserve the toolchain that was used to reduce the data--this is considerably harder.
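
For what it's worth, the easy half of that problem (recording which toolchain was used, as opposed to actually preserving it) takes only a few lines. A minimal Python sketch, with the output file name being my own invention:

    # snapshot_env.py -- record the analysis toolchain next to the results.
    # A minimal sketch; containers/VMs capture far more of the real toolchain.
    import json, platform, sys
    from importlib import metadata

    snapshot = {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {d.metadata["Name"]: d.version
                     for d in metadata.distributions()},
    }

    with open("analysis_environment.json", "w") as f:
        json.dump(snapshot, f, indent=2, sort_keys=True)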

So, I think there's progress being made, but change takes time. Also, unless there are requirements from funding institutions, it's hard for individuals to justify the time involved. While most of my raw data is on the web and I have put out a number of my reduction tools, maintaining the whole toolchain would be rather difficult.


I've been at Harvard and MIT for the last 10 years doing biomedical research and have published in Nature, Science, and Cell (top journals) numerous times. I can tell you academic publishing is completely f*cked with no hope.

Henry Kissinger said the only thing you need to know about academia:

"Academic politics are so vicious precisely because the stakes are so small."


I was at a top-10 grad school until two months ago. An assistant professor there wrote a paper that was a minimal increment over her PhD work, something you would expect to see in PLOS ONE, but it got published in Cell. I know of people at lesser-ranked, less prestigious schools who have trouble getting higher-impact papers accepted.


What are you doing now?


> I can tell you academic publishing is completely f*cked with no hope.

Care to elaborate? Because in my experience (non-bio, non-med), this is not true. It's certainly not perfect, but overall the peer-review system (again, based on my experience with it) results in scientific progress.


What rate of progress? Academia is glacial.


As the author points out in the abstract, "Optimal interventions need to understand and harness the motives of various stakeholders who operate in scientific research and who differ on the extent to which they are interested in promoting publishable, fundable, translatable, or profitable results." Yes. This is fundamentally a human behavior problem, and economics, "the dismal science," reminds us that people respond to incentives. If the incentives are not aligned to produce better research, all the ideas in the world about best research practices and best publication practices will not help. The author suggests testing interventions experimentally whenever possible, and that is the best way to learn what actually changes human behavior in the direction of producing more true research findings.


The economics of research are worthy of more investigation.

The goal would be to reward those who produce the most research with the highest impact. However, it often takes years or decades to determine which of the fashionable ideas circulating at any one time are the foundations upon which further knowledge will be built, and which will fade away.

A similar problem exists when discussing the mechanisms of rewarding Wall Street traders, bankers and politicians.

Since the impacts of the work performed today are so distant from realistic mechanisms of evaluation, proxies are instead used. In academia, this amounts to counting the number of papers a person publishes and the journals in which they are published.

So, are there alternative mechanisms that could be used? In what way could we harness the mechanisms of industry and capitalism to further "true science"?

A possibility, alluded to in the linked article, might be for researchers to make "bets" on the outcome of their work. For example, they would lodge their predictions at some central place (perhaps using the mechanisms currently used for grant proposals), and if their predictions are correct, they get a reward. So, one would submit a grant application, some funding would be given, and if the results are correct, an extra tranche of money is delivered to the scientist (not their institute).

It would be necessary to ensure that they are not betting on results they already have (~insider trading).
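
To make the mechanism concrete, here's a toy sketch of such a payout rule; the stake, multiplier, and field names are all invented for illustration:

    # research_bet.py -- toy payout rule for outcome-linked grant tranches.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        claim: str
        stake: float        # award portion the researcher puts at risk
        confirmed: bool     # set by independent replication, not the author
        bonus_multiplier: float = 2.0

    def tranche(p: Prediction) -> float:
        """Extra payment to the scientist if the lodged claim holds up."""
        return p.stake * p.bonus_multiplier if p.confirmed else 0.0

    bet = Prediction(claim="effect size > 0.3 in replication",
                     stake=10_000, confirmed=True)
    print(f"bonus tranche: ${tranche(bet):,.0f}")  # -> bonus tranche: $20,000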


http://hanson.gmu.edu/gamble.html for the latter proposal.


>>> So, one would submit a grant application, some funding would be given, and if the results are correct, an extra tranche of money is delivered to the scientist (not their institute).

That seems like a perverse incentive to me.


It's similar to a bonus for making the correct prediction. Obviously one would need independent validation that the scientist's prediction was correct.

It wouldn't be sufficient for scientists to make predictions based on the work of others, since the tools and so forth available to the candidate would not necessarily be widely accessible. However, once a scientist has 1) made a prediction, 2) demonstrated that the prediction holds up, and 3) had it independently verified, then they should be rewarded at a rate above that of their peers whose predictions don't hold up.

Edit: "prediction" in the above is easily replaced by "hypothesis".


Best article I have seen on how to improve science. The author describes a whole spectrum of possible improvements, not just one (such as more replication), and not just the problems (such as misaligned incentives). It gives some hope, though as the author admits, progress will be difficult. Lots of good references too, although often behind paywalls.


I've spent considerable time in and around academia. It is a system that "works" at all in spite of the process only because of the talented and passionate folks involved, and it would benefit from some large structural changes.

I think that academia could make large steps forward by adopting some of the wonderful collaboration tools created by open source projects. Basic things like wikis, version control, accountability through open peer review, and open standards for data if required could really change the quality and the pace of innovation.

The current model was set up for a far smaller community, where everybody literally knew each other and the economics were different. What is needed now is something more like a GitHub for results and collaboration, plus Python notebooks for analysis so others can reproduce and test.


These are all great, but not realistic given the current competition and the academic mindset I have seen here. Many of my fellow graduate students consider robotics to be the future of how science will be done, as most, if not all, of the work done in the lab can be done by robots. We need cheap, open-source liquid-handling robots with a well-documented platform. The labs of the future will not require human labor except to think, which means scientists will be asked to publish a standardized protocol file along with any plugins they used; all other labs will have to do is run that file and provide the materials. That will make replication incredibly easy and cheap.
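
As a very rough sketch of what such a shareable protocol file and runner might look like (the format, step names, and reagents are entirely hypothetical; no such standard exists today):

    # run_protocol.py -- hypothetical runner for a shared lab protocol file.
    # This only illustrates the "publish the file, run the file" idea.
    import json

    PROTOCOL = json.loads("""
    {
      "title": "plasmid miniprep",
      "plugins": ["pipettor", "centrifuge"],
      "steps": [
        {"op": "dispense", "reagent": "buffer_P1", "volume_ul": 250},
        {"op": "spin", "rpm": 13000, "seconds": 60}
      ]
    }
    """)

    def run(protocol):
        print("running:", protocol["title"])
        for step in protocol["steps"]:
            print("  step:", step)  # a real runner would drive the hardware

    run(PROTOCOL)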


This is applicable only to a limited subset of a handful of fields. There's more kinds of research than -omics and organic synthesis/combichem.


I agree it would be a lot easier and more plausible for the biomedical sciences and for material/organic/inorganic synthesis. Certainly the theory work and the design of experiments will be done by scientists. Hmm, I wonder, for people developing equipment, if we could write a whole bunch of plugins that allow a robot to construct different equipment modules from raw materials and a built-in 3D printer. Again, theory and design by the scientist, but without spending hours building and testing each component. There are very few scientists I have met or heard speak (and I am at a top-10 grad school) who are doing any method or technology development that is a huge leap from pre-existing science and technology.


Nobody is going to allow their work to be commoditized in the way you describe. The issues are 100% about human psychology. There just aren't any equitable and widely-adoptable standards so far for evaluating work and the scientists that do it.

Remember, anyone can create a Facebook clone, but it will never be Facebook. There is a very abstract currency in the human mind that no robot will ever solve.


I'll believe that when a robot can, for example, euthanise and dissect a rat to extract a specific part of a specific tissue, without mistakes or contamination. There's a lot of lab work that would need some pretty advanced AI to replace human control.


Give the rat a label that binds specifically to those tissues and guide the robot to the tissue using that. Self-guided surgical robots are being developed, and if I remember correctly, there is currently one available for a very minor form of neurosurgery that involves sticking a very thin needle into the skull. What you need a robot for is very much possible, and FYI, they will be able to do it with better precision and less contamination.


That's a really good point, I hadn't considered the translational benefits from surgical automation.

I still think we're a long way off from automated dissection, though. An AI performing these tasks would need a very high level of situational awareness to interpret the internal structure of a moving animal.



