My mother is a GP, but I'm a software engineer, so let me offer a software metaphor.
The experts have come up with best practices and agile methodologies. If you follow the Agile processes rigorously and use industry-standard tools ... often the results are total crap, like the military or the State of Virginia spending $300M on an accounting software project over 5 years and then just throwing it away. I guess we don't know anything about software, do we?
Well, some of us do. Some of us are realistic about how much various techniques help, what the trade-offs are, and the inherent uncertainties. Some get good results fairly consistently. And it isn't by rigorously following Agile methodologies; it takes more than that. You have to be thoughtful about the code itself.
And medicine is often like debugging a large, complicated, messy system. It takes time, and many practitioners are a bit lazy. They have a lot on their plate; they don't have the time to really dig in and figure out each case. They guess, patch in a workaround, and move on.
But, frankly, western medicine has been massively useful, and I think we all know that.
EDIT: and of course there's the hype cycle: everyone, especially managers and customers rather than practitioners, is looking for the secret, the trick to getting good results. Before Agile, it was object orientation, and so on...
We could debate for hours on that subject, but I dare say that someone who believes most M.D.s can do real science is very wrong. Statistics done by M.D.s is akin to a junkie cooking meth: he knows how to make it, but has no idea why it sometimes goes wrong. Moreover, you drastically underestimate the dishonesty of the "experts" you are citing.
I don't usually "do science" the statistical way either. I usually come up with a plausible theory for a bug, fix it, and use the results to confirm or refute my theory of that one bug. Usually it's something dumb, simple, and obvious in hindsight, not a whole new algorithm or technique.
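A toy sketch of that loop in Python (an invented example, not from any real codebase): form a theory, write the failing case the theory predicts, fix, and re-run to confirm.

    # Theory: last_n breaks when n == 0, because items[-0:] is items[0:],
    # i.e. the WHOLE list. Dumb, simple, obvious in hindsight.
    def last_n(items, n):
        return items[-n:] if n > 0 else []  # the fix; it was just items[-n:]

    assert last_n([1, 2, 3], 2) == [2, 3]
    assert last_n([1, 2, 3], 0) == []  # the case the theory predicted would fail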
Now, try fixing a large and complex software system that you didn't write and don't have the source for, only by observing inputs and outputs.
That's not even going into the fact that everyone will be running a slightly different version, which might also be influenced by a million factors such as environment, eating habits, and past and current medications...
It is virtually impossible, even for moderately large software, even without considering the latter.
Recognizing this fact led to studies in patient safety.
It's a really interesting read, especially the chapter "Basic Concepts in Patient Safety".
I find it notable that greater safety is achieved by avoiding reliance on memory. The amount of memorization in medicine is astounding. When I tell my friends there's no shame in looking something up, they look at me like I'm some kind of madman. "What will the patient think? If you can just look something up, why have doctors at all?" Yet this article clearly advocates the use of checklists to make sure nothing is forgotten.
But doctors screw up all the time. Sometimes it's dumb mistakes, like instruments left in the body after surgery, or the wrong limb operated on. (Like failing to check length when copying a buffer, perhaps? Sometimes it's just a bit embarrassing, sometimes catastrophic.) More often, it's just not finding the real cause of the "bug", applying a "fix" or "cleanup" that doesn't fix it.
For example, if a pilot screws up, people die, lots of people die.
Unfortunately, there are a whole bunch of professions where it's not really OK to say "Oops, my bad". And those are the professions that I would say need the most rigor (yet with enough flexibility to catch unknown corner cases) in general.
So it's not really debugging but more like a post-mortem. =(
Anyhow I agree with you 100%.
I can't help but notice, however, that we're killing a ton of people through the inefficiency of our research system. Fifty or sixty years ago, if we had simply taken people who were going to die soon and had them voluntarily submit to A/B testing? How many millions would have been saved by now? How much better would it have been to die knowing you were directly helping in a simple experiment that would one day save all of those lives?
Instead we spend tens of billions, drugs take decades to get approval, and we have people dying of infections for which we have no drugs to address.
We may have reached an inflection point here, folks. Instead of perfect safety, a better metric might be the most medical progress over the shortest amount of time -- in an ethical fashion, of course.
People like to feel there is meaning to their life, some sort of story. Given a chance to directly contribute to science in an understandable way in their last days? I think for many it is the most humane thing to do. (But not all, of course.) Note that the key here is understandable, which would eliminate double-blind studies.
Related -- https://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning
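For the curious, the statistical core of such a "simple experiment" is just a comparison of outcome proportions between two arms. A minimal sketch in Python (the counts are invented purely for illustration):

    # Two-proportion z-test: did the treatment arm survive at a higher rate?
    from statsmodels.stats.proportion import proportions_ztest

    survivors = [42, 29]    # treatment arm, control arm (made-up numbers)
    enrolled = [100, 100]   # patients per arm

    z, p = proportions_ztest(survivors, enrolled)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p means the difference is unlikely to be chance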
Are they fellow MDs? What gives you the authority to determine whether their work constitutes legitimate science or not? Can you be more specific about their shortcomings?
When I finished my studies, I was extremely motivated and wanted to do research (clinical and translational). My first few projects were quite horrible: very little supervision, a huge amount of time and paperwork, very little result (2 peer-reviewed papers). I thought I was the problem, although my fellow junior M.D.s did not seem to fare much better. So I began studying science (a lot): informatics, statistics, physics. I did not become good at it, but I learned a huge amount.
After a few years of that, I found myself unable to collaborate on new projects, and that is actually a sad result. You see, M.D.s, having no science background, view statistics and physiology the way they view medicine: as a bunch of facts that you must learn by heart. I cannot begin to describe the statistical heresy I witnessed in clinical trials.
You should also know that professional statisticians are rarely involved in medical research, because they are expensive. Add to that a good amount of dishonesty motivated by the refusal to admit that nothing positive came out of a dataset that took so much hard work to collect, plus enormous pressure to publish, and you have a recipe for disaster.
TL;DR: MDs with no specific scientific background will not magically be able to do valid science without additional education, even if they are full professors.
To your comment, though. In the PhD portion, you have to perform as a normal PhD student, i.e., you have to publish papers for your PI. The coursework, already two years of very intense study, leaves no time to continue into comp-sci or physics, nor do the students have the training in the math. To take the comp-sci and physics classes with a modicum of understanding, you must come in with at least multivariable calculus, matrix algebra, and differential equations: a total of 6 extra semesters of classes. Most MD/PhD people I have interacted with never took calculus to begin with. The hill to climb is very long and steep, and unless there is a much larger prize than a possible faculty position at Wherever State Univ., where you still have to publish or perish, few people are going to go after it rather than just opting into private practice.
Also, I have worked with a number of MDs, and yes, there is a Grand Canyon of misunderstanding between bio-peepz and the docs. Neither party really has the time to cross it, so we just end up trying to use each other. Bio-peepz try to use the docs' names as leverage for increased grant funding from the NIH, and MDs are trying to get the bio people to patent something with their names on it to make more money. In the end, it's all about the money, or lack thereof.
Being an anesthesiologist myself, I must however admit that I am quite skeptical regarding current automated systems. In time, anesthesiology, surgery (which I think will be the first to go completely), and most of medicine will be performed by machines. But the current rate of major complications related to anesthetic care is very, very low, which also means that the population sample required to demonstrate a benefit from automation will be very, very big. Until they can demonstrate a benefit on those grounds, I would be wary of such machines.
Now, if you are evaluating that on grounds of cost-effectiveness, I am quite sure we could already replace most anesthesiologists and surgeons, at the price of many, many more deaths.
What do you consider a well-known, credible medical professional? A Harvard professor? I happen to have worked with world-renowned clinical researchers, and I have nothing but disrespect for their scientific abilities. I do, however, value their clinical teachings very much, and I am thankful for the training they provided in the clinics.
I am frankly not motivated enough to look for papers about it, and I clearly speak from my own experience. I do know that the proportion of research considered as valid is very difficult to estimate, though. If you are interested in "evidence-based medicine", I am sure you know PubMed and related websites. At this point however, I find it extremely difficult to believe any medical paper containing statistics.
- deficiencies in the reporting methodology
- a bit of incorrect retraction
- a fun one on systematic reviews
- regarding missing data
- a paywalled abstract about power
- on sample size calculations
- aaand a fun one on selection bias
All found within 10 minutes...
We could go on like this for hours with this sizing contest. I do not expect to convince you. You will, if you put the time and effort into it, find other studies saying the opposite (although, being less sexy for publishers, they will be harder to find). If you are somewhat knowledgeable in the field of statistics, please take a look at the numbers, as I am quite sure you will find them appalling (19% of study populations missing outcome data and 27.9% of studies underpowered, anyone?)
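To make "underpowered" concrete, here is a minimal sketch (Python with statsmodels; the effect size is an assumption chosen for illustration) of the sample-size calculation a properly powered trial starts from:

    # Patients needed per arm to detect a given effect with 80% power.
    from statsmodels.stats.power import TTestIndPower

    n = TTestIndPower().solve_power(
        effect_size=0.3,  # assumed standardized effect (Cohen's d)
        alpha=0.05,       # significance level
        power=0.8,        # conventional 80% power
    )
    print(f"~{n:.0f} patients per arm")  # about 175 for these inputs

Recruit substantially fewer than that and a real effect will routinely go undetected, which is exactly what an underpowered study does.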
What the OP is referring to isn't unique to MDs, it's endemic to much biomedical research today. I blame it on lack of tenure protections and science-as-university-income, which in many cases ultimately stems from indirect costs charged to federal grants, or the current grant system.
For me, the concerns mentioned in this thread about scientific research, and MD training in particular, bring up bigger issues pertaining to the culture of hierarchy in medicine and its implications for quality of care and competition in service provision and training models.
Evidence-based medicine. An idea that is really less than 50 years old, and still struggling to gain widespread acceptance.
I pretty much literally facepalmed; luckily she interpreted that as agreeing with her...
It seems to me that the problem here is that our industry, built around throwing expensive drugs at problems, paying for results, and lobbying governments and insurance companies, is ripe for abuse of "science".
That said, "pure" science is about more than published papers: it's about taking the data and observations you have and constructing the most likely theories and explanations around the observed evidence. If we had a capability to separate funding (and emotions) from research we might be able to produce good results given enough open data (which is itself a challenge).
As an uninformed software developer, I think medicine is a field where machine-learning-based tools will shine: ethical and regulatory (HIPAA) issues aside, we might eventually be able to feed all observed data, from diagnosis to results years after treatment, into computer systems that could make sense of it and let us draw conclusions unbiased by personal and business incentives.
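As a sketch of the kind of thing I mean (toy Python/scikit-learn; the records, columns, and outcome are entirely made up):

    # Learn outcome patterns from (hypothetical) de-identified patient records.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    records = pd.DataFrame({
        "age":           [54, 67, 43, 71, 60, 38, 55, 49],
        "dose_mg":       [10, 20, 10, 20, 10, 20, 10, 20],
        "comorbidities": [1, 3, 0, 2, 1, 0, 2, 1],
        "improved":      [1, 0, 1, 0, 1, 1, 0, 1],  # outcome years after treatment
    })

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(model, records.drop(columns="improved"),
                             records["improved"], cv=2)
    print("cross-validated accuracy:", scores.mean())

At real scale, and with real validation, a system like this would at least be indifferent to who funded the trial.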
Obviously our current economic and political climate is strewn with roadblocks, but I just wanted to put it out there that science doesn't have to lead us down this path if done right.
It's also a flawed notion. There has been a concerted push in the last 50 years, but applying the scientific method to medicine is considerably older. Koch's Postulates, for example, were published in 1890.
Evidence-based Medicine is applying the scientific method to the practice of medicine.
Please read the link before spreading misinformation.
You could ask for clarification before simply assuming I don't know what I'm talking about.
Furthermore, I'd suggest that understanding disease and understanding the practice of medicine are far more entangled than your division suggests.