My mother is a GP, but I'm a software engineer, so I'd like to make a software metaphor.
The experts have come up with best practices and agile methodologies. If you follow the Agile processes rigorously and use industry-standard tools ... often the results are total crap, like the military or the State of Virginia spending $300M on an accounting software project over 5 years and then just throwing it away. I guess we don't know anything about software, do we?
Well, some of us do. Some of us are reasonable about how much various techniques help, what the trade-offs are, the inherent uncertainties. Some get good results fairly consistently. And it isn't by rigorously following Agile Methodologies; it takes more than that: you have to be thoughtful about the code itself.
And, medicine is often like debugging a large, complicated, messy system. It takes time, and many practitioners are a bit lazy. They have a lot on their plate; they don't have the time to really dig in and figure out each case. They guess, patch in a work-around, and move on.
But, frankly, western medicine has been massively useful, and I think we all know that.
EDIT: and of course there's the hype cycle: everyone, especially those who are managers or customers rather than practitioners, is looking for the secret, the trick to getting good results. Before agile, it was object-oriented, etc...
We could debate for hours on that subject, but I dare say that someone who believes that most M.Ds can do real science is very wrong. Statistics done by M.Ds is akin to a junkie cooking meth: he knows how to make it, but has no idea why it sometimes goes wrong. Moreover, you drastically underestimate the dishonesty of the "experts" you are citing.
I don't usually "do science" the statistical way either, I usually come up with a plausible theory for a bug, fix it, and use the results to confirm or refute my theory of that one bug. Usually it's something dumb, simple, and obvious in hindsight, not a whole new algorithm or technique.
Now, try fixing a large and complex software system that you didn't write and don't have access to the source for, only by observing inputs and outputs.
That's not even going into the fact that everyone will be running a slightly different version, which might also be influenced by a million factors such as environment, eating habits, and past and current medications....
It is virtually impossible, even for moderately large software, even without considering the latter.
Recognizing this fact led to studies in patient safety.
It's a really interesting read, especially the chapter "Basic Concepts in Patient Safety".
I find it notable that greater safety is achieved by avoiding reliance on memory. The amount of memorization in medicine is astounding. When I tell my friends there's no shame in looking something up, they look at me like I'm some kind of madman. "What will the patient think? If you can just look something up, why have doctors at all?" Yet this article clearly advocates the use of checklists to make sure nothing is forgotten.
But doctors screw up all the time. Sometimes it's dumb mistakes like instruments left in the body after surgery, or the wrong limb operated on. (Like failing to check length when copying a buffer, perhaps? Sometimes it's just a bit embarrassing, sometimes catastrophic.) More often, it's just not finding the real cause of the "bug", applying a "fix" or "cleanup" that doesn't fix it.
For example, if a pilot screws up, people die, lots of people die.
Unfortunately, there are a whole bunch of professions where it's not really OK to say "Oops, my bad". And those are the professions that I would say need the most rigor (yet with enough flexibility to catch unknown corner cases) in general.
So it's not really debug but more likely post-mortem. =(
Anyhow I agree with you 100%.
I can't help but notice, however, that we're killing a ton of people because of the inefficiency of our research system. Fifty or sixty years ago, if we had simply taken people who were going to die soon and had them voluntarily submit to A/B testing? How many millions would be saved by now? How much better would it have been to have died knowing you were directly helping in a simple experiment that one day would save all of those lives?
Instead we spend tens of billions, drugs take decades to get approval, and we have people dying of infections that we have no drugs to treat.
We may have reached an inflection point here, folks. Instead of perfect safety, a better metric might be the most medical progress over the shortest amount of time -- in an ethical fashion, of course.
People like to feel there is meaning to their life, some sort of story. Given a chance to directly contribute to science in an understandable way in their last days? I think for many it is the most humane thing to do. (But not all, of course.) Note that the key here is understandable, which would eliminate double-blind studies.
1. Related -- https://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning
Are they fellow MD's? What gives you the authority to determine whether their work constitutes legitimate science or not? Can you be more specific on their shortcomings?
When I finished my studies, I was extremely motivated, and wanted to do research (clinical and translational). My first few projects were quite horrible: very little supervision, huge amount of time and paperwork, very little result (2 peer-reviewed papers). I thought I was the problem, although my fellow junior M.Ds did not seem to fare much better. So I began studying science (a lot): informatics, statistics, physics. I did not become good at it, but I learned a huge amount.
After a few years of that, I found myself unable to collaborate on new projects, and that is actually a sad result. You see, M.Ds, having no science background, view statistics and physiology as they view medicine: a bunch of facts that you must learn off by heart. I cannot begin to describe the statistical heresy I witnessed in clinical trials.
You should also know that professional statisticians are rarely involved in medical research, because they are expensive. Add to that a good amount of dishonesty motivated by the refusal to admit that nothing positive comes out of the dataset, given the hard work done to collect the data in the first place, and a huge amount of pressure to publish, and you have a recipe for disaster.
TL;DR: MDs with no specific scientific background will not magically be able to do valid science without additional education, even if they are full professors.
To your comment, though. In the PhD portion, you have to perform as a normal PhD student, i.e., you have to publish papers for your PI. The classes, already two years of very intense study, leave no time to continue into comp-sci or physics. Nor do the students have the training in the math: to take the comp-sci and physics classes with a modicum of understanding, you must come in with at least multivariable calculus, matrix algebra, and differential equations, a total of 6 extra semesters of classes. Most MD/PhD people I have interacted with never took calculus to begin with. The hill to climb is very long and steep, and unless there is a much larger prize than a possible faculty position at Wherever State Univ. (where you still have to publish or perish), few people are going to go after it instead of just opting into private practice.
Also, I have worked with a number of MDs, and yes, there is a Grand Canyon of misunderstanding between bio-peepz and the docs. Neither party really has the time to cross it, and so we just end up trying to use each other. Bio-peepz try to use the doc's name as leverage for increased grant funding from the NIH, and MDs are trying to get the bio people to patent something with their names on it to make more money. In the end, it's all about the money, or lack thereof.
Being an anesthesiologist myself, I must however admit that I am quite skeptical regarding current automated systems. In time, anesthesiology, surgery (which I think will be the first to go completely), and most of medicine will be performed by machines. But the current rate of major complications related to anesthetic care is very, very low, and until such machines can demonstrate a benefit on those grounds, I would be wary of them. This also means that the population sample required to demonstrate said benefit will be very, very big.
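That last point can be made concrete with a back-of-the-envelope calculation. Here's a minimal sketch of the standard two-proportion sample-size formula; the complication rates below are hypothetical numbers I picked for illustration, not figures from the comment:

```python
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided z-test
    comparing event rates p1 and p2."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # critical value for the test
    z_b = z(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical rates: halving a 1-in-10,000 major-complication rate
n = two_proportion_n(1e-4, 5e-5)
print(f"{n:,.0f} patients per arm")  # on the order of half a million
```

With a baseline that rare, even halving the complication rate requires hundreds of thousands of patients per arm, which is why such a trial is unlikely to ever be run.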
Now if you are evaluating that on grounds of cost-effectiveness, I am quite sure we could already replace most anesthesiologists and surgeons, at the price of many, many more deaths.
What do you consider a well-known, credible medical professional? A Harvard professor? I happen to have worked with world-renowned clinical researchers, and I have nothing but disrespect for their scientific abilities. I do, however, value their clinical teachings very much, and I am thankful for the training they provided in the clinics.
I am frankly not motivated enough to look for papers about it, and I clearly speak from my own experience. I do know that the proportion of research considered as valid is very difficult to estimate, though. If you are interested in "evidence-based medicine", I am sure you know PubMed and related websites. At this point however, I find it extremely difficult to believe any medical paper containing statistics.
- deficiencies in the reporting methodology
- a bit of incorrect retraction
- a fun one on systematic reviews
- regarding missing data
- a paywalled abstract about power
- on sample size calculations
- aaand a fun one on selection bias
All found within 10 minutes...
We could go on like that for hours with this sizing contest. I do not expect to convince you. You will, if you put the time and effort into it, find other studies saying the opposite (although, being less sexy for publishers, they will be harder to find). If you are somewhat knowledgeable in the field of statistics, please take a look at the numbers, as I am quite sure you will find them appalling (19% of the study population missing outcome data and 27.9% of studies underpowered, anyone?)
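For anyone who hasn't run such numbers, "underpowered" has a concrete meaning. Here's a minimal sketch of an approximate power calculation for a two-proportion z-test; the trial size and effect sizes are made up for illustration, not taken from the studies above:

```python
from statistics import NormalDist

def power_two_prop(n, p1, p2, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    with n subjects per group."""
    nd = NormalDist()
    se = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)
    # chance the observed difference clears the critical value
    # under the alternative hypothesis (ignoring the far tail)
    return 1 - nd.cdf(z_crit - abs(p1 - p2) / se)

# Made-up trial: 50 patients per arm, 30% vs. 20% event rate
print(round(power_two_prop(50, 0.30, 0.20), 2))  # well below the usual 0.8
```

A trial like that will miss a genuine 10-percentage-point effect most of the time, yet trials of roughly this size get published constantly.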
What the OP is referring to isn't unique to MDs, it's endemic to much biomedical research today. I blame it on lack of tenure protections and science-as-university-income, which in many cases ultimately stems from indirect costs charged to federal grants, or the current grant system.
For me, the concerns mentioned in this thread about scientific research and MD training in particular, bring up bigger issues pertaining to the culture of hierarchy in medicine and its implications for quality of care and competition in service provision and training models.
Evidence-based medicine. An idea that is really less than 50 years old, and still struggling to gain widespread acceptance.
I pretty much literally facepalmed; luckily she interpreted that as agreeing with her...
It seems to me that the problem here is that our industry, built around throwing expensive drugs at problems, paying for results and lobbying governments and insurance companies is ripe for abuse of "science".
That said, "pure" science is about more than published papers: it's about taking the data and observations you have and constructing the most likely theories and explanations around the observed evidence. If we had a capability to separate funding (and emotions) from research we might be able to produce good results given enough open data (which is itself a challenge).
As an uninformed software developer, I think medicine is a field where machine-learning-based tools will shine: ethical ("HIPAA") issues aside, we might eventually be able to feed all observed data, from diagnosis to results years after treatment, into computer systems that could make sense of it and let us draw conclusions unbiased by personal and business incentives.
Obviously our current economic and political climate is strewn with roadblocks, but I just wanted to put it out there that science doesn't have to lead us down this path if done right.
It's also a flawed notion. There has been a concerted push in the last 50 years, but applying the scientific method to medicine is considerably older. Koch's Postulates, for example, were published in 1890.
Evidence-based Medicine is applying the scientific method to the practice of medicine.
Please read the link before spreading misinformation.
You could ask for clarification before simply assuming I don't know what I'm talking about.
Furthermore, I'd suggest that understanding disease and understanding the practice of medicine are far more entangled than your division suggests.
Scientism: observe. Generate hypothesis. Apply.
It's not just medicine, but basically anything that tries to assert credibility by being "science-based", rather than "proven".
Things are proven mathematically within a single model, nothing in reality is proven. There is only a scale from models that are more useful and likely to give us what we want, and models that are less likely.
But, things can be disproved. So if we have a single case against a model, we know it does not hold 100%. The model is then insufficient.
In that sense, what we need to know is: are the medical models presented to us known not to be the most useful and most likely to help us? If they're not, then they're still valuable. We should be happy that we're improving them at a high pace, even though that gives the impression that previous models were wrong. Models are not wrong, just more or less useful and likely to help us; if a model was still the best we had, your chances were still better following it.
Preach it didibus! So few people understand this!
This is the sticking point with nutrition and medical studies, though. With the existence of IRBs and medical ethics practices, it's challenging at best and impossible at worst to run medical experiments up to scientific standards. Maybe rightly so, but it does put handcuffs on the ability to do hypothesis-based testing for nutrition.
On the other hand, the continual pressure to publish "complete" stories, early and often, absolutely affects the quality of research and I think finding ways to realign researchers' incentives would be awesome.
There is indeed bad medical research. Typically, animal research that focuses on just one marker and then extrapolates to what it means for humans. You should not rely on these; at best, they give you cues on what might work for humans. But there are also much stronger studies: ones that follow 5 million people over 20 years, tracking their health as well as their habits, with confounding variables adjusted for. In such studies, if you discover that green tea decreases the risk of type 2 diabetes, the evidence is strong. Much, much stronger than the simple animal study where mice were given green tea.
The biggest problem is not medical research. It's the army of bloggers, writers from the NYTimes, and, overall, a general population hungry for the latest superfood or cure for cancer. They think that throwing billions of dollars at the problem will find a revolutionary food, herb, or medicine for cancer. So medical researchers present their results in a way that's pleasing. And then most bloggers are just too gullible. That's amplified by social media, or even boards like Hacker News (see the upvotes for the effect of coffee on mortality yesterday: that study is non-conclusive, but the story got hundreds of upvotes. People are just too happy to discover that their daily drink will extend their lifespan, even though the evidence is limited).
Not only that, but to add insult to injury, they use mice or rats that are known to be prone to growing tumors due to factor X (which is thereafter completely ignored and never mentioned).
Sad, very sad. I wasted a lot of energy & focus in my life on this.
As for dose relationships- a lot of drug dose responses are linear over a range. But they tend to not be linear outside that range; the error is in not correcting for low-dose effects.
Technically, going from "megadose is lethal" to "smaller doses are harmful" is an interpolation, not an extrapolation.
It has to be kept in as similar conditions as the group under treatment and have genetically similar makeup.
Most importantly, the composition of both groups and the conditions the experiment is ran under should be rigorously described.
None of this research passes even the most basic statistical sniff-test, and even if it did, the hazard ratios are so small as to be easily influenced by noise or confounders.
Changing the diets of millions of children at once should require airplane-construction levels of confidence, at the very least. Instead the FDA and FNS are running off low-confidence garbage and acting as if they have an authoritative standing, playing with potentially trillions of dollars of future utility differences spread across tens of millions of schoolchildren.
The safe default for airplane construction is not to build it (or not fly it).
There is no "safe default" for a diet...it's not like you can stop eating. So they'll serve their best guess of a good school lunch.
What if the original dietary recommendation lacked iron-clad evidence?
In fact, with nutrition it probably can't, which means advice should be both well-researched and more nuanced than it currently is. Neither of which changes the point: making recommendations with little to no quality research is more likely to make things worse than better, and you'd be better off just staying where you are until you understand the problem better.
It's medical advice, Quack Miranda Warning be damned.
Also, most people follow news organisations that are fastest instead of following the organisations that are slower but rigorous. It doesn't matter if the advice is wrong; it still gets more eyeballs.
What makes you trust one ancient knowledge over other ancient knowledge?
Biology is so complex, and we know so little about it, that normal logic often fails: even if A -> B and B -> C, you still need to do experiments to show A -> C. Errors accumulate, and other unknowns come in.
If you have to draw conclusions by logic in biology, look at the error bars and the sample size. Then stretch the error bars by the square root of the sample size. Then stretch them again 3 times. If the conclusion still looks obvious to you, proceed with caution.
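Taken literally, that rule of thumb looks like this. A toy sketch of the commenter's heuristic, with made-up numbers; it's a skeptic's sanity check, not a statistical procedure:

```python
import math

def skeptics_interval(mean, error_bar, n, extra_factor=3):
    """The rule of thumb above, taken literally: widen the
    reported error bar by sqrt(n), then by a further factor of 3."""
    half_width = error_bar * math.sqrt(n) * extra_factor
    return mean - half_width, mean + half_width

# Made-up numbers: a reported effect of 1.2 +/- 0.1 from n = 25
low, high = skeptics_interval(1.2, 0.1, 25)
# the +/- 0.1 bar becomes roughly +/- 1.5; is 1.2 still "obvious"?
```

If the effect survives even that kind of deliberate abuse, it's probably real; most reported effects in nutrition studies would not.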
Perhaps the best way to look after the body will turn out to be as Aubrey de Grey suggests: regular servicing to repair accumulated damage. But we aren't there yet. Plus we urgently need new antibiotics.
Then again, it's hard to pick a complex system that we do model properly: the economy, the climate, the body, etc.
Much the same way that the three body problem is chaotic whether there are people involved or not.
I should have said that weather is chaotic.
including quotes / sections:
- It has been known since antiquity that fresh foods in general, and lemons and oranges in particular, will cure scurvy. Starting with Vasco da Gama’s crew in 1497, sailors have repeatedly discovered the curative power of citrus fruits, and the cure has just as frequently been forgotten or ignored by subsequent explorers.
- 1747, James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. [..] the experiment involved two sailors eating oranges for six days. Lind went on to propound a completely ineffective method of preserving lemon juice (by boiling it down), which he never thought to test.
- Knowing that citrus fruits deferred scurvy, but not knowing why, assuming it was to do with acidity, and switching to a cheaper British source of 'limes' instead of a foreign source of 'limes' - and accidentally switching from effective fruit to ineffective fruit, without noticing.
- Knowing that citrus fruits avoided scurvy, but not knowing that copper breaks down Vitamin C, and keeping the juice in copper containers on ship.
- the time Pasteurized milk was found to be better for preventing bacterial infection in infants, so rich parents switched to it. And the heating denatured Vitamin C so their children developed scurvy. Poor children, being breast fed, didn't get scurvy, only richer children.
- The sickness could be fitted to so many theories of disease—imbalance in vital humors, bad air, acidification of the blood, bacterial infection—that despite the existence of an unambiguous cure, there was always a raft of alternative, ineffective treatments. At no point did physicians express doubt about their theories, however ineffective.
- Finally, that one of the simplest of diseases managed to utterly confound us for so long, at the cost of millions of lives, even after we had stumbled across an unequivocal cure. It makes you wonder how many incurable ailments of the modern world—depression, autism, hypertension, obesity—will turn out to have equally simple solutions, once we are able to see them in the correct light. What will we be slapping our foreheads about sixty years from now, wondering how we missed something so obvious?
And plenty of other things they thought about ptomaines, contaminated tinned food, stuffy air, lack of light, poisons to avoid, and all without good experiments and with plenty of grabbing a ray of hope and committing everything based on it.
Highly recommended if you're interested in the space
"Should we be eating more polyunsaturated fats? Should we be avoiding saturated fats? The honest answer is: I don’t know. Given my review of the evidence, I stand by my previous recommendations, which essentially focus more on foods and less on nutrients. I think the state of nutrition research in general is shockingly flawed."
Given what the author says in the article regarding processed foods (and I'd disclose that I agree with his summation) I have to wonder about the whole fad for Soylent. It seems to me a technofix beloved by food haters, but some part of me suspects that, in the fullness of time, we'll find out that it is an incredibly bad idea.
- Roman playwright Terence, circa 200 B.C.
Still good wisdom, twenty-two hundred years later.
(of course, once you start looking at how restaurants use saturated fat, it's easy to see that this advice is violated constantly.)
If you're not saying that, why not? Given that "credible evidence is nonexistent in the medical field."
We detached this subthread from https://news.ycombinator.com/item?id=13423680 and marked it off-topic.
The 'So Doctor' was unfortunate. I was framing a response from a fictitious patient walking into his/her surgery, but the wording doesn't work and I can see how it appears aggressive.
Nonetheless, yes when there are organisations such as https://www.nice.org.uk and http://www.cochrane.org trying their hardest to pull together the best evidence out there, this broad dismissal is irritating.
What I can say is that, knowing nothing better, I rely on the very likely abysmal quality of medical evidence on the subject to say that vaccines do not cause autism. You can quote that as my official opinion.