In view of other comments, it seems useful to point out that credible evidence is nonexistent in the medical field. I am an M.D. with 7 years' experience in the clinics, have done my share of research, and I can assure you that "scientific evidence" in medicine is a humongous s*load of tampered data written by people who have absolutely zero idea of what science is.



I'm also an MD. I agree with many of your points. Always interested to get in touch with colleagues with similar ideas. Would be interested to hear more about your opinions on the matter. You can email me at albin.stigo@gmail.com.


Coincidence, I happen to also be an anesthesiologist :-)


Come on now, there is a good amount of scientific evidence, and doctors do know some things ... it's just that, if you're not a decent doctor, you can't tell the bullshit from the useful information.

My mother is a GP, but I'm a software engineer, so I'd like to make a software metaphor.

The experts have come up with best practices and agile methodologies. If you follow the Agile processes rigorously and use industry-standard tools ... often the results are total crap, like the Military or the State of Virginia spending 300M on an accounting software project over 5 years and then just throwing it away. I guess we don't know anything about software, do we?

Well, some of us do. Some of us are reasonable about how much various techniques help, what the trade-offs are, and the inherent uncertainties. Some get good results fairly consistently. And it isn't by rigorously following Agile Methodologies; it's more than that: you have to be thoughtful about the code itself.

And medicine is often like debugging a large, complicated, messy system. It takes time, and many practitioners are a bit lazy. They have a lot on their plate; they don't have the time to really dig in and figure out each case. They guess, patch in a work-around, and move on.

But, frankly, western medicine has been massively useful, and I think we all know that.

EDIT: and of course there's the hype cycle: everyone, especially those who are managers or customers rather than practitioners, is looking for the secret, the trick to getting good results. Before agile, it was object-oriented, etc...


M.Ds know things, that's the point! Have we been trained for scientific thinking? Absolutely, totally not. Test your mother on basic statistics and scientific reasoning, and you will quickly see the limits. This has absolutely nothing to do with being a decent doctor or not. Now, you are going to tell me she can read the scientific literature and make sense of it? Medicine is caring for patients, meaning applying knowledge. Knowledge applied in the clinics is mostly passed through practical training from generation to generation. Arguably, the decisive factor of change in medicine has up to now always been technology: a stent in an artery, ether anesthesia, organ transplant, etc. A certain base of irrefutable knowledge is present, of course, but it is proportionally small compared to the amount of downright false information we get from clinical trials.

We could debate for hours on that subject, but I dare say that someone who believes that most M.Ds can do real science is very wrong. Statistics done by M.Ds is akin to a junkie cooking meth: he knows how to make it, but has no idea why it sometimes goes wrong. Moreover, you drastically underestimate the dishonesty of the "experts" you are citing.


She doesn't do statistical studies, no. But I think she has a good sense of the relative confidence she can have in "new discoveries" and older institutional knowledge (which also isn't 100%). And sometimes she puts in more time and attention than a few previous doctors did, considering all the symptoms and previously attempted treatments, to come up with an accurate diagnosis and effective treatment. By not being dogmatic, and spending a bit more time, she can fix some things.

I don't usually "do science" the statistical way either, I usually come up with a plausible theory for a bug, fix it, and use the results to confirm or refute my theory of that one bug. Usually it's something dumb, simple, and obvious in hindsight, not a whole new algorithm or technique.


MDs are not the only ones doing medical research.


That is indeed a valid point. Unfortunately, most of biology can also be qualified as "soft science". I however expect that a new age is coming for medicine: the age of science (the real one, this time). We are already seeing the birth of sizable databases exploited by professional scientists, but this is still a very minor part of the research output, and the signal-to-noise ratio is currently dismal. Additionally, the progress of medicine is somewhat parallel to the progress of hard science. What is lacking most is solid data and the methods to collect it.


Ironically enough though, "hard scientists" wandering into biology and medicine and going "Right, time to show you lot how real science is done..." are notorious for producing really awful research.


> medicine is often like debugging a large complicated messy system...

A system that you didn't write and whose source you don't have access to. Now, try fixing a large and complex software system by observing only its inputs and outputs.

This is not even going into the fact that everyone will be running a slightly different version, which might also be influenced by a million factors such as environment, food habits, and past and current medications...

It is virtually impossible, even for moderately large software, even without considering the latter.


Problem is that if she screws up, someone might die. I would be terrified if someone approached the medical profession the way people do software development.


Doctors are human too. Medical error is pretty common. It's not just doctors, either; every healthcare professional is affected.

Recognizing this fact led to studies in patient safety.

https://www.ncbi.nlm.nih.gov/books/NBK2673/

It's a really interesting read, especially the chapter "Basic Concepts in Patient Safety".

I find it notable that greater safety is achieved by avoiding reliance on memory. The amount of memorization in medicine is astounding. When I tell my friends there's no shame in looking something up, they look at me like I'm some kind of madman. "What will the patient think? If you can just look something up, why have doctors at all?" Yet this article clearly advocates the use of checklists to make sure nothing is forgotten.


You're right, there are differences - medicine is more rigorous (in a sense), processes are more regulated, there's more "standard procedure" which is more specifically taught and widely followed.

But doctors screw up all the time. Sometimes it's dumb mistakes like instruments left in the body after surgery, or the wrong limb operated on. (Like failing to check length when copying a buffer, perhaps? Sometimes it's just a bit embarrassing, sometimes catastrophic.) More often, it's just not finding the real cause of the "bug" and applying a "fix" or "cleanup" that doesn't fix it.


But applying the same logic to other professions does not work in my opinion.

For example, if a pilot screws up, people die, lots of people die.

Unfortunately, there are a whole bunch of professions where it's not really ok to be "Oops, my bad". And those are the professions that I would say need the most rigor (yet with enough flexibility to catch unknown corner cases) in general.

So it's not really debug but more likely post-mortem. =(


I'm not an MD, but I've read about how the USDA's food pyramid came to be. It shows how the power of politics and the power of money dictated grains as the base rather than vegetables. Or how eggs became enemy number one against cholesterol, not because of research but because someone thought that high-cholesterol food equals high cholesterol in people. While it may seem to make sense, that should not be the way to recommend food choices for millions. Where's the science? Yet it was sold as advice based on science.

Anyhow I agree with you 100%.


And the assumption that cholesterol even causes heart disease.


I do not want to bash medical science. Every field of human endeavor has problems, and the longer it has been around and the more established it is, the more the cruft and the bigger the problems.

I can't help but notice, however, that we're killing a ton of people because of the inefficiency of our research system. Fifty or sixty years ago, if we had simply taken people who were going to die soon and had them voluntarily submit to A/B testing? How many millions would be saved by now? How much better would it have been to have died knowing you were directly helping in a simple experiment that one day would save all of those lives?

Instead we spend tens of billions, drugs take decades to get approval, and we have people dying of infections for which we have no drugs to address.

We may have reached an inflection point here, folks. Instead of perfect safety, a better metric might be the most medical progress over the shortest amount of time -- in an ethical fashion, of course.


The goal of the medical system, as it currently stands, is explicitly and literally geared toward making people feel cared for. Which is quite antagonistic to your above statement, which stems from utilitarian philosophy. The consequences are simple: in recent years, we have seen a huge push for "medical humanities" in medical education (aimed at perfecting the doctor-patient relationship). But for scientific medical education? Niet. Nada. But people seem to like it that way...


I hate making moral arguments, but dang. I'm not sure it's entirely utilitarian.

People like to feel there is meaning to their life, some sort of story. Given a chance to directly contribute to science in an understandable way in their last days? I think for many it is the most humane thing to do. (But not all, of course) [1] Note that the key here is understandable, which would eliminate double-blind studies.

1. Related -- https://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning


Quite apart from the ethical concerns, I can't really imagine a shared characteristic in a set of test subjects for an experimental procedure/treatment that could undermine the results more than "expected to die soon for unrelated reasons".


That's a sample size and actuarial question.


No. Confounding variables are NOT fixed by using larger sample sizes.
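
A quick toy simulation makes the point concrete (just a sketch in Python; the "frailty" confounder, the coefficients, and the sample sizes are invented for illustration): when a hidden variable drives both treatment assignment and outcome, the naive estimate converges to a biased value, not to the truth, no matter how large n gets.

  import numpy as np

  rng = np.random.default_rng(0)

  def naive_effect(n):
      # hidden "frailty" drives both who gets treated and the outcome
      frailty = rng.normal(size=n)
      treated = frailty + rng.normal(size=n) > 0   # sicker patients get treated more often
      outcome = frailty + rng.normal(size=n)       # true treatment effect is exactly zero
      return outcome[treated].mean() - outcome[~treated].mean()

  for n in (100, 10_000, 1_000_000):
      print(n, round(naive_effect(n), 3))
  # the apparent "effect" hovers around the same nonzero value at every n;
  # only the sampling noise shrinks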


As a trained epidemiologist, I regret that I have but one upvote to give.


Even if an abundance of patients with sufficiently similar terminal illnesses at sufficiently similar stages willing to forego their entitlement to other treatment or pain relief in the interests of medical science exists, I can't see "although several patients died during the trial the increase in death rate wasn't statistically significant and whilst possible side effects such as nausea and extreme fatigue were reported, these were also near-universal in the control group" as an advance in testing procedures...


This is a fairly inflammatory comment. I would agree that as sciences, biology and medicine still have a long way to go. However, I definitely disagree that well conducted trials constitute 'nonexistent' evidence. Trials have their problems, but they still provide credible evidence.

MD also.


Inflammatory, yes. But not without background. Like you, I will of course be more likely to be influenced by what I see as good research on paper. However, my own experience tells me that what is written is seldom an exact reflection of how things actually went in practice. From that point of view, it becomes difficult to trust what people write.


> M.D. with 7y experience in the clinics, have done my share of research, and i can assure you that "scientific evidence" in medicine is a humongous s*load of tampered data written by people who have absolutely zero idea of what science is.

Are they fellow MD's? What gives you the authority to determine whether their work constitutes legitimate science or not? Can you be more specific on their shortcomings?


They mostly were my superiors. My fellows were more often employed as scientific slaves, understanding nothing they were doing while collecting huge amounts of unusable data.

When I finished my studies, I was extremely motivated, and wanted to do research (clinical and translational). My first few projects were quite horrible: very little supervision, huge amount of time and paperwork, very little result (2 peer-reviewed papers). I thought I was the problem, although my fellow junior M.Ds did not seem to fare much better. So I began studying science (a lot): informatics, statistics, physics. I did not become good at it, but I learned a huge amount.

After a few years of that, I found myself unable to collaborate on new projects, and that is actually a sad result. You see, M.Ds, having no science background, view statistics and physiology as they view medicine: a bunch of facts that you must learn off by heart. I cannot begin to describe the statistical heresy I witnessed in clinical trials. You should also know that professional statisticians are rarely involved in medical research, because they are expensive. Add to that a good amount of dishonesty, motivated by the refusal to admit that nothing positive comes out of a dataset that took so much hard work to collect in the first place, and a huge amount of pressure to publish, and you have a recipe for disaster.

TL;DR: MDs with no specific scientific background will not magically be able to do valid science without additional education, even if they are full professors.


I'm interested to hear what you have to say about MD/PhD programs. How useful do you think they are? The ones I interact with mostly just used it as a way to pay for med school. Almost none of them are professors 5+ years on, and most just practice. I have heard that the NIH is going to discontinue the program because of this.


MD-PhD is a good idea and, by all means, should be maintained. I think it could, however, use a big facelift. The fact is that even people who truly like science are drawn by familial and financial matters to clinical activities, and particularly to private practice (because that is where the money and the good life are). The shape of the reform to be introduced is a complex one, though; I do not have a definite idea about it. What I would change, however, is the curriculum. Currently, the PhD part consists of dabbling in statistics and sometimes also in lab work, with almost no training in basic sciences. I hope that in the future MD-PhDs will be taught much more about statistics in a fundamental way, more about physics, and more about computer science. In essence, the role of MD-PhDs should be to bridge the gap in communication that exists between pure scientists and clinical practitioners, who are currently so far apart that they are unable to understand each other. I am sure the scientists here who have had the occasion to lead a project with doctors know all about it.


Thanks for the reply! An interesting person to talk to is Dr. Emery Brown at Mass Gen. [0] He is triply elected to the NAS in engineering, medicine, and biology. I recently saw a talk of his about his new auto-anesthesia machine. The data presented was very compelling. It seems his machine, for a 'normal' surgery, performs much better than an anesthesiologist on many critical factors. The most interesting part was the Q&A afterwards, in which the assembled neuroscientists and anesthesiologists tried their best not to understand a thing he presented and to tear him down; their jobs were on the line, after all. He is a good example of what the MD/PhD should be producing, I think.

To your comment, though: in the PhD section, you have to perform as a normal PhD student, i.e., you have to publish papers for your PI. The coursework, already two years of very intense study, leaves no time to continue into comp sci or physics. Nor do the students have the training in the math. To take the comp-sci and physics classes with a modicum of understanding, you must come in with at least multivariable calculus, matrix algebra, and differential equations: a total of 6 extra semesters of classes. Most MD/PhD people I have interacted with never took calculus to begin with. The hill to climb is very, very long and steep, and unless there is a much larger prize than a possible faculty position at Wherever State Univ., where you still have to publish or perish, you are going to get few people going after it instead of just opting into private practice.

Also, I have worked with a number of MDs, and yes, there is a Grand Canyon of misunderstanding between bio-peepz and the docs. Neither party really has the time to cross it, and so we just end up trying to use each other. Bio-peepz try to use the doc's name as leverage for increased grant funding from the NIH, and MDs are trying to get the bio people to patent something with their names on it to make more money. In the end, it's all about the money, or lack thereof.

[0] https://sleep.med.harvard.edu/people/faculty/150/Emery+N+Bro...


Thanks, if I get the occasion, I will be trying to get in touch with Dr. Emery Brown. I actually happen to currently work at a Harvard-affiliated hospital in Boston. On the subject of MD-PhDs, I agree with you on many points. I would still be happy to see the level of basic science capabilities in candidates increase. For example, in my country of origin, a lot of the program is devoted to fundamental biology. While this is indeed useful, I think a fully-fledged MD-PhD should absolutely have basics in fluid mechanics and computer programming/scripting.

Being an anesthesiologist myself, I must however admit that I am quite skeptical regarding current automated systems. In time, anesthesiology, surgery (which I think will be the first to go completely), and most of medicine will be performed by machines. But the current rate of major complications related to anesthetic care is very, very low, and until they can demonstrate a benefit of automation on those grounds, I would be wary of such machines. This also means that the population sample required to demonstrate said benefit will be very, very big. Now, if you are evaluating that on grounds of cost-effectiveness, I am quite sure we could already replace most anesthesiologists and surgeons, at the price of many, many more deaths.


I would so love to read your book, hint hint!


That sounds like an extreme statement. Surely the evidence against smoking is credible, as is the evidence against many things that are so uncontroversial that no one brings them up.


I do not put into question that we are seeing some real effects. I am questioning the quality of individual studies, which, even when viewed collectively, almost always fail to yield a clear-cut answer. In summary, if I see a thousand studies of dismal quality all saying the same thing, I start to suspect there might be something. But that is not due to the scientific prowess of the authors, and the evidence, although abundant, is still of dismal quality. Every day, you will hear people say: "the evidence about that is now extremely solid". Most often, things have flipped five years later, with the latest fad.


I'm afraid I see literally no way to evaluate this sort of comment based on its merits. Given you're anonymous here, any chance you could link to well-known/credible professionals who agree with you?


I am sure I am very easily identifiable to someone lurking on Hacker News.

What do you consider a well-known, credible medical professional? A Harvard professor? I happen to have worked with world-renowned clinical researchers, and I have nothing but disrespect for their scientific abilities. I do, however, value their clinical teachings very much, and I am thankful for the training they provided in the clinics.

I am frankly not motivated enough to look for papers about it, and I clearly speak from my own experience. I do know that the proportion of research considered valid is very difficult to estimate, though. If you are interested in "evidence-based medicine", I am sure you know PubMed and related websites. At this point, however, I find it extremely difficult to believe any medical paper containing statistics.


Forget "medical professional". Forget my criteria. Just cite someone, anyone, whom YOU believe is well-respected in medicine and whose view on this agrees with you. If you can't find anyone well-respected, then cite the best source you can. My point is, just cite someone. For someone who cares about science it's sure ironic that you're expecting us to trust your judgment with zero backing.


Yeah, so... there is abundant literature on quality assessment. A few samples:

- deficiencies in the reporting methodology https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3554513/

- a bit of incorrect retraction https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3899113/

- a fun one on systematic reviews https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4785311/

- regarding missing data https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4748550/

- a paywalled abstract about power https://www.ncbi.nlm.nih.gov/pubmed/26677241

- on sample size calculations https://www.ncbi.nlm.nih.gov/pubmed/25523375

- aaand a fun one on selection bias https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4566301/

All found within 10 minutes...

We could go on like that for hours with this sizing contest. I do not expect to convince you. You will, if you put the time and effort into it, find other studies saying the opposite (although, being less sexy for publishers, they will be harder to find). If you are somewhat knowledgeable in the field of statistics, please take a look at the numbers, as I am quite sure you will find them appalling (19% of study populations missing outcome data and 27.9% of studies underpowered, anyone?)
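
For readers without a statistics background, "underpowered" has a concrete meaning: the trial enrolled too few patients to reliably detect the effect it set out to find. A rough normal-approximation sample-size calculation (a sketch only; the effect size, alpha, and power below are conventional choices I am assuming, not numbers from any particular trial):

  import math
  from scipy.stats import norm

  def n_per_group(effect_size, alpha=0.05, power=0.80):
      # two-sided, two-sample comparison of means, normal approximation
      z_alpha = norm.ppf(1 - alpha / 2)
      z_beta = norm.ppf(power)
      return 2 * ((z_alpha + z_beta) / effect_size) ** 2

  # detecting a modest standardized effect (d = 0.3) at 80% power needs
  # roughly 175 patients per arm; a trial enrolling far fewer than that
  # simply cannot give a clear-cut answer either way
  print(math.ceil(n_per_group(0.3)))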


I offer this in the interests of maintaining what I see as a useful, if difficult, discussion, not to be antagonistic:

http://www.jclinepi.com/article/S0895-4356(16)00147-5/abstra...

What the OP is referring to isn't unique to MDs, it's endemic to much biomedical research today. I blame it on lack of tenure protections and science-as-university-income, which in many cases ultimately stems from indirect costs charged to federal grants, or the current grant system.

For me, the concerns mentioned in this thread about scientific research, and MD training in particular, bring up bigger issues pertaining to the culture of hierarchy in medicine and its implications for quality of care and for competition in service provision and training models.


Interesting, thank you for the link!



This will get you started: https://en.wikipedia.org/wiki/Evidence-based_medicine

Evidence-based medicine: an idea that is really less than 50 years old, and still struggling to gain widespread acceptance.


I mentioned once that I was a researcher (not medicine) to someone at a hospital while making small talk during the time my wife was in hospital giving birth. She got all excited and started explaining her issues with a research supervisor or something whom she didn't agree with on methodology. She said something to the effect of 'What does this old guy know of modern research! Nowadays it's all about evidence-based research! I found it right here in this book; how much more evidence does he want!'

I pretty much literally facepalmed; luckily she interpreted that as agreeing with her...


I won't pretend to know about or have any background in medicine, but based on a cursory reading of the article, it seems to be a term for applying the scientific method to medicine. The idea that we only started doing this 50 years ago is terrifying, to say the least.

It seems to me that the problem here is that our industry, built around throwing expensive drugs at problems, paying for results and lobbying governments and insurance companies is ripe for abuse of "science".

That said, "pure" science is about more than published papers: it's about taking the data and observations you have and constructing the most likely theories and explanations around the observed evidence. If we had a capability to separate funding (and emotions) from research we might be able to produce good results given enough open data (which is itself a challenge).

As an uninformed software developer, I think medicine is a field where machine-learning-based tools will shine: ethical ("HIPAA") issues aside, we eventually might be able to feed all observed data, from diagnosis to results years after treatment, into computer systems that might be able to make sense of the data and allow us to draw conclusions from it unbiased by personal and business incentives.
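
Purely as a sketch of that idea (synthetic data standing in for de-identified records; the features and outcome here are invented), the mechanical part is already routine:

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # synthetic stand-in for de-identified patient records: three numeric features
  rng = np.random.default_rng(1)
  X = rng.normal(size=(5000, 3))
  y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=5000) > 0).astype(int)  # toy outcome label

  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  model = LogisticRegression().fit(X_train, y_train)
  print("held-out accuracy:", model.score(X_test, y_test))

The hard parts are the ones you point at: getting the data shared at all, and keeping personal and business incentives out of how the cohorts and outcomes are defined.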

Obviously our current eco-political climate is strewn with roadblocks, but I just wanted to put it out there that science doesn't have to lead us down this path if done right.


> The idea that we only started doing this 50 years ago is terrifying to say the least.

It's also a flawed notion. There has been a concerted push in the last 50 years, but applying the scientific method to medicine is considerably older. Koch's Postulates, for example, were published in 1890.


You're talking about applying the scientific method to understanding disease, which is not Evidence-based Medicine.

Evidence-based Medicine is applying the scientific method to the practice of medicine.

Please read the link before spreading misinformation.


I picked one example, but there are others that well predate the term "Evidence Based Medicine". Semmelweis comes to mind. But I've also seen Koch's Postulates used in evaluating the practice of medicine, in addition to the study of disease.

You could ask for clarification before simply assuming I don't know what I'm talking about.

Furthermore, I'd suggest that understanding disease and understanding the practice of medicine are far more entangled than your division suggests.



