Hacker News
Why Medical Advice Seems to Change So Frequently (nytimes.com)
135 points by WheelsAtLarge on Jan 18, 2017 | 130 comments

In view of other comments, it seems useful to point out that credible evidence is nonexistent in the medical field. I am an M.D. with 7 years of experience in the clinics, have done my share of research, and I can assure you that "scientific evidence" in medicine is a humongous s*load of tampered data written by people who have absolutely zero idea of what science is.

I'm also an MD. I agree with many of your points. Always interested to get in touch with colleagues with similar ideas. Would be interested to hear more about your opinions on the matter. You can email me at albin.stigo@gmail.com.

Coincidentally, I happen to also be an anesthesiologist :-)

Come on now, there is a good amount of scientific evidence, doctors do know some things ... it's just that, if you're not a decent doctor, you can't tell the bullshit from the useful information.

My mother is a GP, but I'm a software engineer, so I'd like to make a software metaphor.

The experts have come up with best practices and agile methodologies. If you follow the Agile processes rigorously and use industry-standard tools ... often the results are total crap, like the military or the State of Virginia spending $300M on an accounting software project over 5 years and then just throwing it away. I guess we don't know anything about software, do we?

Well, some of us do. Some of us are reasonable about how much various techniques help, what the trade-offs are, the inherent uncertainties. Some get good results fairly consistently. And it isn't by rigorously following Agile Methodologies; it's more than that. You have to be thoughtful about the code itself.

And medicine is often like debugging a large, complicated, messy system. It takes time, and many practitioners are a bit lazy. They have a lot on their plate; they don't have the time to really dig in and figure out each case. They guess, patch in a work-around, and move on.

But, frankly, western medicine has been massively useful, and I think we all know that.

EDIT: and of course there's the hype cycle: everyone, especially managers and customers rather than practitioners, is looking for the secret, the trick to getting good results. Before agile, it was object-oriented, etc...

M.Ds know things, that's the point! Have we been trained for scientific thinking? Absolutely, totally not. Test your mother on basic statistics and scientific reasoning, and you will quickly see the limits. This has absolutely nothing to do with being a decent doctor or not. Now, you are going to tell me she can read the scientific literature and make sense of it? Medicine is caring for patients, meaning applying knowledge. Knowledge applied in the clinics is mostly passed through practical training from generation to generation. Arguably, the decisive factor of change in medicine has up to now always been technology: a stent in an artery, ether anesthesia, organ transplant, etc. A certain base of irrefutable knowledge is present, of course, but it is proportionally small compared to the amount of downright false information we get from clinical trials.

We could debate for hours on that subject, but I dare say that someone who believes that most M.Ds can do real science is very wrong. Statistics done by M.Ds is akin to a junkie cooking meth. He knows how to make it, but has no idea why sometimes it goes wrong. Moreover, you drastically underestimate the dishonesty of the "experts" you are citing.

She doesn't do statistical studies, no. But I think she has good sense of the relative confidence she can have in "new discoveries" and older institutional knowledge (which also isn't 100%). And sometimes she puts in more time and attention than a few previous doctors did, considering all the symptoms and previous attempted treatments, to come up with an accurate diagnosis and effective treatment. By not being dogmatic, and spending a bit more time, she can fix some things.

I don't usually "do science" the statistical way either, I usually come up with a plausible theory for a bug, fix it, and use the results to confirm or refute my theory of that one bug. Usually it's something dumb, simple, and obvious in hindsight, not a whole new algorithm or technique.

MDs are not the only ones doing medical research.

That is indeed a valid point. Unfortunately, most of biology can also be described as "soft science". I however expect that a new age is coming for medicine: the age of science (the real one, this time). We are already seeing the birth of sizable databases exploited by professional scientists, but this is still a very minor part of the research output, and the signal-to-noise ratio is currently dismal. Additionally, the progress of medicine is somewhat parallel to the progress of hard science. What is lacking most is solid data and the methods to collect it.

Ironically enough though, "hard scientists" wandering into biology and medicine and going "Right, time to show you lot how real science is done..." are notorious for producing really awful research.

>medicine is often like debugging a large complicated messy system...

A system that you didn't write and whose source you can't access. Now try fixing a large and complex software system by observing inputs and outputs alone.

This is not even going into the fact that everyone will be running a slightly different version, which might also be influenced by a million factors such as environment, food habits, and past and current medications...

It is virtually impossible, even for moderately large software, and even without considering the latter.

Problem is that if she screws up, someone might die. I would be terrified if someone approached the medical profession the way people do software development.

Doctors are human too. Medical error is pretty common. It's not just doctors, either; every healthcare professional is affected.

Recognizing this fact led to studies in patient safety.


It's a really interesting read, especially the chapter "Basic Concepts in Patient Safety".

I find it notable that greater safety is achieved by avoiding reliance on memory. The amount of memorization in medicine is astounding. When I tell my friends there's no shame in looking something up, they look at me like I'm some kind of madman. "What will the patient think? If you can just look something up, why have doctors at all?" Yet this article clearly advocates the use of checklists to make sure nothing is forgotten.
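The checklist idea is trivially mechanizable, which is exactly the point: nothing rests on memory, because an item is either explicitly confirmed or flagged. A toy sketch in Python (the items and names here are invented for illustration, not taken from any real surgical protocol):

```python
# Hypothetical pre-procedure checklist; items are invented, purely illustrative.
CHECKLIST = [
    "patient identity confirmed",
    "procedure site marked",
    "allergies reviewed",
    "instrument count recorded",
]

def unconfirmed(confirmed):
    """Return checklist items not explicitly confirmed; empty means good to go."""
    return [item for item in CHECKLIST if item not in confirmed]

missing = unconfirmed({"patient identity confirmed", "allergies reviewed"})
print(missing)  # the forgotten items are flagged instead of trusted to memory
```

The value is not in the code, obviously, but in the discipline it encodes: the system, not the practitioner's memory, tracks what has been done.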

You're right, there are differences - medicine is more rigorous (in a sense), processes are more regulated, there's more "standard procedure" which is more specifically taught and widely followed.

But doctors screw up all the time. Sometimes it's dumb mistakes, like instruments left in the body after surgery, or the wrong limb operated on. (Like failing to check length when copying a buffer, perhaps? Sometimes it's just a bit embarrassing, sometimes catastrophic.) More often, it's just not finding the real cause of the "bug", applying a "fix" or "cleanup" that doesn't fix it.

But applying the same logic to other professions does not work in my opinion.

For example, if a pilot screws up, people die, lots of people die.

Unfortunately, there are a whole bunch of professions where it's not really ok to be "Oops, my bad". And those are the professions that I would say need the most rigor (yet with enough flexibility to catch unknown corner cases) in general.

So it's not really debug but more likely post-mortem. =(

I'm not an MD, but I've read about how the USDA's food pyramid came to be. It shows how the power of politics and money dictated grains as the base rather than vegetables. Or how eggs became enemy number one against cholesterol, not because of research but because someone thought that high-cholesterol food equals high cholesterol in people. While that may seem to make sense, it should not be the way to recommend food choices for millions. Where's the science? Yet it was sold as advice based on science.

Anyhow I agree with you 100%.

And the assumption that cholesterol even causes heart disease.

I do not want to bash medical science. Every field of human endeavor has problems, and the longer it has been around and the more established it is, the more the cruft and the bigger the problems.

I can't help but notice, however, that we're killing a ton of people because of the inefficiency of our research system. Fifty or sixty years ago, if we had simply taken people who were going to die soon and had them voluntarily submit to A/B testing? How many millions would be saved by now? How much better would it have been to have died knowing you were directly helping in a simple experiment that one day would save all of those lives?

Instead we spend tens of billions, drugs take decades to get approval, and we have people dying of infections for which we have no drugs to address.

We may have reached an inflection point here, folks. Instead of perfect safety, a better metric might be the most medical progress over the shortest amount of time -- in an ethical fashion, of course.

The goal of the medical system, as it currently stands, is explicitly and literally geared toward making people feel cared for. That is quite antagonistic to your statement above, which stems from utilitarian philosophy. The consequences are simple: in recent years, we have seen a huge push for "medical humanities" in medical education (aimed at perfecting the doctor-patient relationship). But for scientific medical education? Nyet. Nada. But people seem to like it that way...

I hate making moral arguments, but dang. I'm not sure it's entirely utilitarian.

People like to feel there is meaning to their life, some sort of story.[1] Given a chance to directly contribute to science in an understandable way in their last days? I think for many it would be the most humane thing to do (but not all, of course). Note that the key here is understandable, which would eliminate double-blind studies.

1. Related -- https://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning

Quite apart from the ethical concerns, I can't really imagine a shared characteristic in a set of test subjects for an experimental procedure/treatment that could undermine the results more than "expected to die soon for an unrelated reason".

That's a sample size and actuarial question.

No. Confounding variables are NOT fixed by using larger sample sizes.
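A minimal simulation makes this concrete. In this toy model (all numbers invented), the treatment has zero true effect, but a confounder drives both who gets treated and the outcome. The naive difference in means stays biased no matter how large the sample gets:

```python
import random

def naive_effect_estimate(n, seed=0):
    """Toy observational study: the 'treatment' has zero true effect,
    but a confounder drives both treatment choice and outcome."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        c = rng.gauss(0, 1)                    # confounder (e.g. baseline health)
        is_treated = c + rng.gauss(0, 1) > 0   # treatment choice depends on c
        y = 2 * c + rng.gauss(0, 1)            # outcome depends only on c
        (treated if is_treated else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

# The naive estimate converges to a biased value (about 2.26 in this setup),
# not to the true effect of 0; more data only makes the wrong answer precise.
for n in (100, 10_000, 1_000_000):
    print(n, round(naive_effect_estimate(n), 2))
```

Larger n shrinks the noise around the estimate, but the bias term is untouched; only adjusting for the confounder (or randomizing treatment) removes it.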

As a trained epidemiologist, I regret that I have but one upvote to give.

Even if an abundance of patients with sufficiently similar terminal illnesses at sufficiently similar stages willing to forego their entitlement to other treatment or pain relief in the interests of medical science exists, I can't see "although several patients died during the trial the increase in death rate wasn't statistically significant and whilst possible side effects such as nausea and extreme fatigue were reported, these were also near-universal in the control group" as an advance in testing procedures...

This is a fairly inflammatory comment. I would agree that as sciences, biology and medicine still have a long way to go. However, I definitely disagree that well conducted trials constitute 'nonexistent' evidence. Trials have their problems, but they still provide credible evidence.

MD also.

Inflammatory, yes. But not without background. Like you, I will of course be more likely to be influenced by what I see as good research on the paper. However, my own experience tells me that what is written is seldom an exact reflection of how things did in fact go in practice. From that point of view, it becomes difficult to trust what people write.

> M.D. with 7y experience in the clinics, have done my share of research, and i can assure you that "scientific evidence" in medicine is a humongous s*load of tampered data written by people who have absolutely zero idea of what science is.

Are they fellow MDs? What gives you the authority to determine whether their work constitutes legitimate science or not? Can you be more specific about their shortcomings?

They mostly were my superiors. My fellows were more often employed as scientific slaves, understanding nothing they were doing while collecting huge amounts of unusable data.

When I finished my studies, I was extremely motivated, and wanted to do research (clinical and translational). My first few projects were quite horrible: very little supervision, huge amount of time and paperwork, very little result (2 peer-reviewed papers). I thought I was the problem, although my fellow junior M.Ds did not seem to fare much better. So I began studying science (a lot): informatics, statistics, physics. I did not become good at it, but I learned a huge amount.

After a few years of that, I found myself unable to collaborate on new projects, and that is actually a sad result. You see, M.Ds, having no science background, view statistics and physiology as they view medicine: a bunch of facts that you must learn off by heart. I cannot begin to describe the statistical heresy I witnessed in clinical trials. You should also know that professional statisticians are rarely involved in medical research, because they are expensive. Add to that a good amount of dishonesty motivated by the refusal to admit that nothing positive comes out of the dataset (after all the hard work done to collect the data in the first place), and a huge amount of pressure to publish, and you have a recipe for disaster.

TL;DR: MDs with no specific scientific background will not magically be able to do valid science without additional education, even if they are full professors.

I'm interested to hear what you have to say about MD-PhD programs. How useful do you think they are? The ones I interact with mostly just used the program as a way to pay for med school. Almost none of them are professors 5+ years on, and most just practice. I have heard that the NIH is going to discontinue the program because of this.

MD-PhD is a good idea and, by all means, should be maintained. I think it could, however, use a big facelift. The fact is that even people who truly like science are drawn by familial and financial matters to clinical activities, and particularly to private practice (because that is where the money and the good life are). The shape of the reform to be introduced is a complex one, though; I do not have a definite idea about it. What I would change, however, is the curriculum. Currently, the PhD part consists of dabbling in statistics and sometimes also in lab work, with almost no training in basic sciences. I hope that in the future, MD-PhDs are taught much more about statistics in a fundamental way, more about physics, and more about computer science. Because in essence, the role of MD-PhDs should be to bridge the gap in communication that exists between pure scientists and clinical practitioners. Those two are so far apart that they are currently unable to understand each other. I am sure the scientists here who have had the occasion to lead a project with doctors know all about it.

Thanks for the reply! An interesting person to talk to is Dr. Emery Brown at Mass Gen. [0] He is triply elected to the NAS in engineering, medicine, and biology. I recently saw a talk of his about his new auto-anesthesia machine. The data presented was very compelling. It seems his machine, for a 'normal' surgery, performs much better than an anesthesiologist on many critical factors. The most interesting part was the Q&A afterwards, in which the assembled neuroscientists and anesthesiologists tried their best to not understand a thing he presented and to tear him down; their jobs were on the line, after all. He is a good example of what the MD-PhD track should be producing, I think.

To your comment, though. In the PhD section, you have to perform as a normal PhD student, i.e., you have to publish papers for your PI. The classes, already 2 years of very intense study, leave no time to continue into comp-sci or physics. Nor do the students have the training in the math. To take the comp-sci and physics classes with a modicum of understanding, you must come in with at least multivariable calculus, matrix algebra, and differential equations, a total of 6 extra semesters of classes. Most MD-PhD people I have interacted with never took calculus to begin with. The hill to climb is very, very long and steep, and unless there is a much larger prize than a possible faculty position at Wherever State Univ., where you still have to publish or perish, you are going to get few people going after it rather than just opting into private practice.

Also, I have worked with a number of MDs, and yes, there is a Grand Canyon of misunderstanding between bio-peepz and the docs. Neither party really has the time to cross it, and so we just end up trying to use each other. Bio-peepz try to use the doc's name as leverage for increased grant funding from the NIH, and MDs are trying to get the bio people to patent something with their names on it to make more money. In the end, it's all about the money, or lack thereof.


Thanks, if I get the chance, I will try to get in touch with Dr. Emery Brown. I actually happen to currently work at a Harvard-affiliated hospital in Boston. On the subject of MD-PhDs, I agree with you on many points. I would still be happy to see the level of basic science capability in candidates increase. For example, in my country of origin, a lot of the program is devoted to fundamental biology. While this is indeed useful, I think a fully-fledged MD-PhD should absolutely have the basics of fluid mechanics and computer programming/scripting.

Being an anesthesiologist myself, I must however admit that I am quite skeptical regarding current automated systems. In time, anesthesiology, surgery (which I think will be the first to go completely), and most of medicine will be performed by machines. But the current rate of major complications related to anesthetic care is very, very low, and until these machines can demonstrate a benefit on those grounds, I would be wary of them. This also means that the population sample required to demonstrate said benefit will be very, very big. Now, if you are evaluating them on grounds of cost-effectiveness, I am quite sure we could already replace most anesthesiologists and surgeons, at the price of many, many more deaths.

I would so love to read your book, hint hint!

That sounds like an extreme statement. Surely the evidence against smoking is credible, as is the evidence against many things that are so uncontroversial that no one brings them up.

I do not put into question that we are seeing some real effects. I am questioning the quality of individual studies, which, even when viewed collectively, almost always fail to yield a clear-cut answer. In summary, if I see a thousand studies of dismal quality all saying the same thing, I start to suspect there might be something. But that is not due to the scientific prowess of the authors, and the evidence, although abundant, is still of dismal quality. Every day, you will hear people say: "the evidence about that is now extremely solid". Most often, things have flipped within 5 years, with the latest fad.

I'm afraid I see literally no way to evaluate this sort of comment based on its merits. Given you're anonymous here, any chance you could link to well-known/credible professionals who agree with you?

I am sure I am very easily identifiable to someone lurking on Hacker News.

What do you consider a well-known, credible medical professional? A Harvard professor? I happen to have worked with world-renowned clinical researchers, and I have nothing but disrespect for their scientific abilities. I do, however, value their clinical teachings very much, and I am thankful for the training they provided in the clinics.

I am frankly not motivated enough to look for papers about it, and I clearly speak from my own experience. I do know that the proportion of research considered as valid is very difficult to estimate, though. If you are interested in "evidence-based medicine", I am sure you know PubMed and related websites. At this point however, I find it extremely difficult to believe any medical paper containing statistics.

Forget "medical professional". Forget my criteria. Just cite someone, anyone, whom YOU believe is well-respected in medicine and whose view on this agrees with you. If you can't find anyone well-respected, then cite the best source you can. My point is, just cite someone. For someone who cares about science it's sure ironic that you're expecting us to trust your judgment with zero backing.

Yeah, so... there is abundant literature on quality assessment. A few samples:

- deficiencies in the reporting methodology https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3554513/

- a bit of incorrect retraction https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3899113/

- a fun one on systematic reviews https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4785311/

- regarding missing data https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4748550/

- a paywalled abstract about power https://www.ncbi.nlm.nih.gov/pubmed/26677241

- on sample size calculations https://www.ncbi.nlm.nih.gov/pubmed/25523375

- aaand a fun one on selection bias https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4566301/

All found within 10 minutes...

We could go on for hours with this sizing contest. I do not expect to convince you. You will, if you put the time and effort into it, find other studies saying the opposite (although, being less sexy for publishers, they will be harder to find). If you are somewhat knowledgeable in the field of statistics, please take a look at the numbers, as I am quite sure you will find them appalling (19% of the study population missing outcome data and 27.9% of studies underpowered, anyone?)
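On the underpowered-studies point specifically, a quick simulation sketch (toy numbers, two-arm z-test with known unit variance, not modeled on any particular trial) shows how often a small study simply misses a real effect:

```python
import math
import random

def simulated_power(effect, n_per_arm, trials=2000, seed=1):
    """Fraction of simulated two-arm trials whose z-test reaches p < 0.05,
    given a true standardized effect size and unit-variance outcomes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, 1) for _ in range(n_per_arm)]
        diff = sum(treated) / n_per_arm - sum(control) / n_per_arm
        se = math.sqrt(2 / n_per_arm)  # known variance, for simplicity
        if abs(diff) / se > 1.96:
            hits += 1
    return hits / trials

# A modest but real effect (0.3 SD) with 20 patients per arm is usually missed;
# detecting it reliably (~80% power) takes closer to 175 per arm.
print(simulated_power(0.3, 20))
print(simulated_power(0.3, 175))
```

An underpowered study that "finds nothing" is therefore mostly uninformative, and when it does cross the significance threshold, the estimated effect tends to be inflated.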

I offer this in the interests of maintaining what I see as a useful, if difficult, discussion, not to be antagonistic:


What the OP is referring to isn't unique to MDs, it's endemic to much biomedical research today. I blame it on lack of tenure protections and science-as-university-income, which in many cases ultimately stems from indirect costs charged to federal grants, or the current grant system.

For me, the concerns mentioned in this thread about scientific research and MD training in particular, bring up bigger issues pertaining to the culture of hierarchy in medicine and its implications for quality of care and competition in service provision and training models.

Interesting, thank you for the link!

This will get you started https://en.wikipedia.org/wiki/Evidence-based_medicine

Evidence based medicine. An idea that is really less than 50 years old, and still struggling to gain widespread acceptance.

I mentioned once that I was a researcher (not medicine) to someone at a hospital when making smalltalk over the time my wife was in hospital giving birth. She got all excited and started explaining her issues with a research supervisor or something who she didn't agree with on methodology. She said something to the effect of 'what does this old guy know of modern research! Nowadays it's all about evidence based research! I found it right here in this book, how much more evidence does he want!'

I pretty much literally facepalmed; luckily she interpreted that as agreeing with her...

I won't pretend to know about or have any background in medicine, but based on a cursory reading of the article, it seems to be a term for applying the Scientific method to medicine. The idea that we only started doing this 50 years ago is terrifying to say the least.

It seems to me that the problem here is that our industry, built around throwing expensive drugs at problems, paying for results and lobbying governments and insurance companies is ripe for abuse of "science".

That said, "pure" science is about more than published papers: it's about taking the data and observations you have and constructing the most likely theories and explanations around the observed evidence. If we had a capability to separate funding (and emotions) from research we might be able to produce good results given enough open data (which is itself a challenge).

As an uninformed software developer, I think medicine is a field where machine learning based tools will shine: ethical ("HIPAA") issues aside, we eventually might be able to feed all observed data, from diagnosis to results years after treatment, into computer systems which might be able to make sense of the data and allow us to construct conclusions unbiased by personal and business incentives.

Obviously our current eco-political climate is strewn with roadblocks, but I just wanted to put it out there that science doesn't have to lead us down this path if done right.

> The idea that we only started doing this 50 years ago is terrifying to say the least.

It's also a flawed notion. There has been a concerted push in the last 50 years, but applying the scientific method to medicine is considerably older. Koch's Postulates, for example, were published in 1890.

You're talking about applying the scientific method to understanding disease, which is not Evidence-based Medicine.

Evidence-based Medicine is applying the scientific method to the practice of medicine.

Please read the link before spreading misinformation.

I picked one example, but there are others that well predate the term "Evidence Based Medicine". Semmelweis comes to mind. But I've also seen Koch's Postulates used in evaluating the practice of medicine, in addition to the study of disease.

You could ask for clarification before simply assuming I don't know what I'm talking about.

Furthermore, I'd suggest that understanding disease and understanding the practice of medicine are far more entangled than your division suggests.

Science: Observe. Generate hypothesis. Design experiment(s) that would invalidate hypothesis. Test. Evaluate. Repeat as needed. Apply, and measure effectiveness. Continue to evaluate as new data is gained.

Scientism: observe. Generate hypothesis. Apply.

It's not just medicine, but basically anything that tries to assert credibility by being "science-based", rather than "proven".

The belief that proven exists is the bigger problem.

Things are proven mathematically within a single model, nothing in reality is proven. There is only a scale from models that are more useful and likely to give us what we want, and models that are less likely.

But, things can be disproved. So if we have a single case against a model, we know it does not hold 100%. The model is then insufficient.

In that sense, we need to know: are the medical models presented to us known not to be the most useful and likely to help us? If not, then they're still valuable, and we should be happy that we're improving them at a high pace. That improvement gives the impression that previous models were wrong, but models are not wrong, just less useful; and if a model was still the best we had, your chances were still better following it.

"But it's been proved by research on neurosciences" claimed by psychologists/sociologists/economists is one of my favourites of the moment.

> Things are proven mathematically within a single model, nothing in reality is proven.

Preach it didibus! So few people understand this!

Design experiment(s) that would invalidate hypothesis.

This is the sticking point with nutrition and medical studies, though. With the existence of IRBs and medical ethics practices, it's challenging at best and impossible at worst to run medical experiments up to scientific standards. Maybe rightly so, but it does put handcuffs on the ability to do hypothesis-based testing for nutrition.

I really don't think that IRBs/ethics committees are the major problem in biomedical research. Interacting with them can be frustrating and annoying, but the issues were largely bureaucratic ones (so many forms and approvals). I've never felt like their requests were unreasonable or that they compromised the research.

On the other hand, the continual pressure to publish "complete" stories, early and often, absolutely affects the quality of research and I think finding ways to realign researchers' incentives would be awesome.

Same reason economists have such a hard time making robust theories.

Same reason paleontology, geology, anthropology, climate science, astronomy, and so on have a difficult time... Any theory that can't be falsified by running an experiment shouldn't be called "science". At best, people pick the theories they believe because they "fit" existing observations, not because it's good science. (No computer models really aren't good enough - you have to ask whether the model was correct and included all the important components)

I despised medicine for the feeling of butchery I get when watching surgery. That said, humans are "complex", and sadly, ethics forbid the experiments that would validate anything most of the time. Slow it will be.

Things which assert credibility by claiming that they're "proven" are some of the worst. Sadly there's no real way to know the difference other than having a scientific background and examining the actual research.

Great entertainment for anyone with basic knowledge of statistics: read medical papers. Lots of laughs to be had. Most experiments are bogus. Most conclusions are drawn with little in the way of supporting evidence, often without controlling for obvious confounding factors. Correlation is implied to mean causation. Extrapolations abound, both for concentrations (i.e. if I give this mouse a megadose of X it'll die, hence X is harmful) and between species (mice grow tumors if I give them a ton of X, so humans will too if they consume a tiny fraction of the amount per unit of body weight). Most of it is basically bro science, except expressed in much longer words.

I disagree. This is a bit like browsing open source repos on github and laughing because of bad algorithms, issues or quick fixes.

There is indeed bad medical research. Typically, animal research that focuses on just one marker and then extrapolates to what it means for humans. You should not rely on these. At best, they give you cues on what might work for humans. But there are multiple studies that review all medical research. They study 5 million people over 20 years, following their health as well as their habits. Variables are adjusted. In such studies, if you discover that green tea decreases the risk of type 2 diabetes, then evidence is high. Much, much higher than the simple animal research where mice were given green tea.

The biggest problem is not medical research. It's the army of bloggers, writers from the NYTimes, and overall a general population hungry for the latest superfood or cure for cancer. They think that throwing billions of dollars around will find a revolutionary food, herb or medicine for cancer. So medical researchers present their results in a way that's pleasing. And then most bloggers are just too gullible. That's amplified by social media, or even boards like Hacker News (see the upvotes for the effect of coffee on mortality yesterday. That study is non-conclusive, but the story got hundreds of upvotes. People are just too happy to discover that their daily drink will extend their lifespan, even though the evidence is limited).

> and between species (mice grow tumors if I give them a ton of X, so humans will too if they consume a tiny fraction of the amount per unit of body weight).

Not only that, but to add insult to injury, they use mice or rats which are known to be sensitive to growing tumors due to factor X (which is henceforth completely ignored and fails to be mentioned).

Sad, very sad. I wasted a lot of energy & focus in my life on this.

Sadly, this applies beyond just medical research. Most of the biology papers we discussed in Journal Club (a weekly meeting of grad students who go over a single paper in detail) turned out to have at least one invalidating error (one that makes the conclusion unsupported).

As for dose relationships: a lot of drug dose responses are linear over a range, but they tend not to be linear outside that range. The error is in not correcting for low-dose effects.

Just to put your rant on more sound statistical footing, there is the whole Replication Crisis issue:


Holy cow! I've been looking for a 'clearinghouse' site that addresses all the issues swiftly and clearly. It's obvious in retrospect, but I never thought Wikipedia would be that place. Thank you!

Actually sourcing the data via PubMed has been eye-opening for me, and I saw all those things. I am amazed by the number of news articles that cite conclusions as factual for humans when, on reading the source, they have only been shown in mouse studies.

And quite often they're obviously bogus even in mice, especially if the author seems to have a political axe to grind (e.g. anti-GMO, anti-sweetener, anti-herbicide, etc).


Technically, going from "megadose is lethal" to "smaller doses are harmful" is an interpolation, not an extrapolation.

Only if an even smaller dose was tested too.

Well, technically anyone with a pet mouse is doing the experiment "Does a dose of ~0ppm of cyanide/arsenic/etc. kill mice?" :)

That is called a control group. You need to actually have a formal one and big enough too.

It has to be kept in as similar conditions as the group under treatment and have genetically similar makeup.

Most importantly, the composition of both groups and the conditions the experiment is ran under should be rigorously described.
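The requirements above can be sketched in a toy simulation (hypothetical numbers, stdlib Python only; `welch_t` is my own helper, not taken from any study): draw a control group and a treated group under identical conditions, then ask whether the difference in outcomes is distinguishable from noise.

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (mb - ma) / (va / len(a) + vb / len(b)) ** 0.5

random.seed(42)

# Both groups are generated under identical conditions (same spread, same
# size); only the assumed treatment effect of +1.0 differs.
control = [random.gauss(10.0, 2.0) for _ in range(50)]
treated = [random.gauss(11.0, 2.0) for _ in range(50)]

print(f"t = {welch_t(control, treated):.2f}")
```

Shrink the groups to a handful of animals each and the same true effect routinely disappears into the noise, which is the point about the control group needing to be "big enough too".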

It does, but it seems to be very slow acting.

The dose is the poison.

There are infinite interpolation possibilities, one decides it is linear by extrapolation.

Don't forget all the BS drawn from epidemiological "research". We ask 10,000 people how many cheeseburgers, eggs, tomatoes, etc. they eat in a week, at the start of the study, extrapolate that out for 10 years, see how many died, break the participants into arbitrary quartiles, and report everything as relative risk to pump the headlines: "Eating 2 cheeseburgers a week increases your risk of death by 75%"

None of this research passes even the most basic statistical sniff-test, and even if it did, the hazard ratios are so small as to be easily influenced by noise or confounders.
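The headline arithmetic above is easy to reproduce. With hypothetical numbers (not from any real study), a difference of three deaths per thousand becomes a "75% increased risk" headline:

```python
n = 1000                      # people per quartile
deaths_top_quartile = 7       # deaths among heavy cheeseburger eaters (assumed)
deaths_bottom_quartile = 4    # deaths in the reference quartile (assumed)

risk_top = deaths_top_quartile / n
risk_bottom = deaths_bottom_quartile / n

relative_risk = risk_top / risk_bottom        # 1.75 -> "75% higher risk!"
absolute_increase = risk_top - risk_bottom    # 3 extra deaths per 1,000 people

print(f"relative risk: {relative_risk:.2f}")
print(f"absolute increase: {absolute_increase * 1000:.0f} per 1,000")
```

The relative figure is what gets reported; the absolute figure, three in a thousand, is small enough to vanish under the noise and confounders mentioned above.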

My favorite, especially because this ends up being reported in the media as infallible truths.

I mean, medicine is not a science; it is an applied science, applying biology, chemistry, and physics to the attempted betterment of health. Akin to the difference between physics and engineering.

It would be nice to see a meta-study on this.

I agree. But peeing in other people's cereal is a perilous endeavor.

Cochrane reviews frequently say "the available evidence is poor, more research is needed", and "we found X relevant papers, but had to exclude Y which were low quality."

I have no serious issue with fad-diet pundits and talk show hosts giving bad advice based on extremely weak science. My issue is when the government does this, and sets regulations, dietary guidelines, and school lunch rosters based on bunk.

Changing the diets of millions of children at once should require airplane-construction levels of confidence, at the very least. Instead the FDA and FNS are running off low-confidence garbage and acting as if they have an authoritative standing, playing with potentially trillions of dollars of future utility differences spread across tens of millions of schoolchildren.

What makes it worse is that they spend precious credibility on this nonsense so when something that should be as simple and important as "Polio will $%*& up your whole family, you should get this vaccine" comes along, they've got so little left that otherwise rational people doubt them and are suckered by quacks.

> Changing the diets of millions of children at once should require airplane-construction levels of confidence

The safe default for airplane construction is not to build it (or not fly it).

There is no "safe default" for a diet...it's not like you can stop eating. So they'll serve their best guess of a good school lunch.

A better (i.e. less risky) approach is to have smaller administrative units (like states) deciding what to feed kids (assuming we can't find some way to not have the government, with its inherent ulterior pork-barrel motives, feeding children). Even if the odds of any given administrative unit making a bad choice are the same, we're at least very unlikely to malnourish every child in every administrative region.

We have thousands of years of experience in feeding children. Changing a dietary recommendation should require iron-clad evidence.

For most of history, children have been fed badly. It's a hyperbolic exaggeration of the first order to claim millennia of experience.

I feed my children only sunshine and they usually stop complaining after 2-4 weeks.

> Changing a dietary recommendation

What if the original dietary recommendation lacked iron-clad evidence?

Then goal one is to get some iron-clad evidence so your next recommendation doesn't make it worse. Just because a mistake was made the first time isn't an excuse to make the same mistake a second time.

Once you have iron-clad evidence of a diet better than all other diets, you've reached the end. Congratulations.

This is a red herring argument. You don't need iron-clad evidence of a diet better than all other diets. You only need iron-clad evidence for smaller, specific recommendations. This doesn't have to be an all-or-nothing diet.

In fact, with nutrition it probably can't be, which means advice should be both well researched and more nuanced than it currently is. Neither of which changes the point: making recommendations with little to no quality research behind them is more likely to make things worse than better, and you'd be better off staying where you are until you understand the problem.

You don't need to be better than all other diets. You just need to be better than the previous recommendation.

Humans have survived and thrived on a wide variety of diets both now and throughout history.

They're dispensing medical advice[1], so I do have serious issues with fad-diet pundits and talk show hosts giving advice based on weak science.

[1] It's medical advice, quack miranda warning[2] be damned

[2] http://scienceblogs.com/whitecoatunderground/2008/01/14/quac...

Just to play devil's advocate here, if the old dietary guidelines were bunk and the new replacement guidelines are also bunk then are we any worse off?

Quite possibly. There are levels of mis-advice. You may be moving from a fallacy that does limited damage to a seriously damaging one.

Not necessarily, but the point is that the government shouldn't be giving this advice in the first place. We'd be better off if we recognized that there is no objective answer. That will encourage us to follow actually good risk-mitigation strategies like per-state food programs.

The news only reports change. If the advice isn't changing then it won't be in the news. That makes it feel like everything is changing, because we conveniently ignore all the advice that hasn't changed. If you look at medical advice as a whole it hardly changes at all, so most of it never gets in the news.

Also, most people follow news organisations that are fastest instead of following the organisations that are slower but rigorous. It doesn't matter if the advice is wrong; it still gets more eyeballs.

I grew up in India surrounded by a whole bunch of general health and nutrition advice that seems to be an aggregation of centuries of culture, specifically suited to my culture, location, and genetic makeup. There's no evidence for any of it, but it is what seems to have been learnt over the years. I have somehow always trusted it more than the ever-changing fads that the newspapers were too eager to push at me. I know that by the scientific yardstick these traditions don't stand up to scrutiny, but I studied enough science to understand that those general advisory guidelines were as likely to be hocus pocus as my grandmother's so-called wisdom. Mind you, this is not about Western medicine, just about food and lifestyle recommendations. #justsaying.

Is applying leeches still recommended for most health conditions there?

What makes you trust one ancient knowledge over other ancient knowledge?

For many assertions, my quick filter is: what would the _direct_ experimental design look like to draw such conclusions? Is it plausible that it has been done somewhere?

Biology is so complex, and we know so little about it, that normal logic often fails: even if A -> B and B -> C, you still need experiments to show A -> C. Errors accumulate, and other unknowns come in.

If you have to draw conclusions by logic in biology, look at the error bars and the sample size. Then stretch the error bars by the square root of the sample size. Then stretch them by a further factor of 3. If the conclusion still looks obvious to you, proceed with caution.
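That rule of thumb can be written down directly (this is the commenter's heuristic, not a standard statistical procedure, and the numbers below are hypothetical):

```python
import math

def skeptical_interval(effect, errorbar, n, extra_factor=3.0):
    """Widen a reported error bar by sqrt(n), then by a further factor,
    per the skeptic's heuristic above."""
    widened = errorbar * math.sqrt(n) * extra_factor
    return effect - widened, effect + widened

# Hypothetical reported result: effect 0.5 +/- 0.02 with n = 100 subjects.
lo, hi = skeptical_interval(0.5, 0.02, 100)
print(lo, hi)  # the widened interval now spans zero: no longer "obvious"
```

For this example the 0.02 error bar balloons to 0.6, so an effect of 0.5 that looked like a sure thing is swallowed by the widened interval.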

I know zilch about biology or medicine but my impression is that in biology even A->B doesn't imply A->B half the time. Like you might do a large study and find that people get obese after consuming fat, only to later find that if you do the same with a different group of people, that doesn't happen. (Exaggerating with an example here, not actually claiming this particular example is true. Just trying to illustrate my impression of what a lot of things in the field are like.)

that's essentially what he said.

Part of the problem is that properly designed medical experiments on humans are an ethical minefield so there are compromises all over the place.

I would agree that the medical field is plagued with empiricism which consists of asking a naive question (rather than working from a deeper theory) and then doing a study to 'measure' the thing in question. Third parties then note the correlation and issue 'advice' in the form of suggestive reporting. I would guess this puts the public off science altogether!

Perhaps the best way to look after the body will turn out to be as Aubrey de Grey suggests: regular servicing to repair accumulated damage. But we aren't there yet. Plus we urgently need new antibiotics.

The medical field is very much based on one-off effect estimates, but there's starting to be some pushback on this. I've seen relatively prominent people in epidemiology argue, several times, that there's value to be had in theory.

It's a failing of the industry to treat nutrition as cause and effect when the body is clearly a complex system.

Then again, it's hard to pick a complex system that we do model properly.. the economy, the climate, the body, etc..

Perhaps I need to clarify, these systems are chaotic hence the difficulty in modelling them, I didn't mean to imply anything political.


If humans were removed from the equation, we could accurately model the economy and climate (to a reasonable degree). Human randomness adds too much entropy to the system to accurately model it.

The climate is perfectly chaotic without human involvement.

Much the same way that the three body problem is chaotic whether there are people involved or not.

The climate is a lot less chaotic than the weather, and aspects of it are not so hard to model correctly. For instance, the IPCC's 1990 projections for global average temperature have been very good.

Apologies. You're correct.

I should have said that weather is chaotic.

That is a claim I'm not sure I believe. Why do you think removing humans would reduce the complexity sufficiently to model the weather with any usable accuracy?

Idlewords is always a good read; the history of scurvy and vitamin C has some great content on science and diet, how difficult it is to get conclusive evidence, and how easy it is to jump to the wrong conclusions over and over:


including quotes / sections:

- It has been known since antiquity that fresh foods in general, and lemons and oranges in particular, will cure scurvy. Starting with Vasco da Gama’s crew in 1497, sailors have repeatedly discovered the curative power of citrus fruits, and the cure has just as frequently been forgotten or ignored by subsequent explorers.

- 1747, James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. [..] the experiment involved two sailors eating oranges for six days. Lind went on to propound a completely ineffective method of preserving lemon juice (by boiling it down), which he never thought to test.

- Knowing that citrus fruits deferred scurvy, but not knowing why, assuming it was to do with acidity, and switching to a cheaper British source of 'limes' instead of a foreign source of 'limes' - and accidentally switching from effective fruit to ineffective fruit, without noticing.

- Knowing that citrus fruits avoided scurvy, but not knowing that copper breaks down Vitamin C, and keeping the juice in copper containers on ship.

- the time Pasteurized milk was found to be better for preventing bacterial infection in infants, so rich parents switched to it. And the heating denatured Vitamin C so their children developed scurvy. Poor children, being breast fed, didn't get scurvy, only richer children.

- The sickness could be fitted to so many theories of disease—imbalance in vital humors, bad air, acidification of the blood, bacterial infection—that despite the existence of an unambiguous cure, there was always a raft of alternative, ineffective treatments. At no point did physicians express doubt about their theories, however ineffective.

- Finally, that one of the simplest of diseases managed to utterly confound us for so long, at the cost of millions of lives, even after we had stumbled across an unequivocal cure. It makes you wonder how many incurable ailments of the modern world—depression, autism, hypertension, obesity—will turn out to have equally simple solutions, once we are able to see them in the correct light. What will we be slapping our foreheads about sixty years from now, wondering how we missed something so obvious?

And plenty of other things they thought about ptomaines, contaminated tinned food, stuffy air, lack of light, poisons to avoid, and all without good experiments and with plenty of grabbing a ray of hope and committing everything based on it.

If you haven't already subscribed the author (Dr Aaron Carroll) runs his own YouTube channel - Healthcare Triage (https://www.youtube.com/user/thehealthcaretriage)

Highly recommended if you're interested in the space

I'm still confused about saturated fats. Can I eat butter with abandon or not?

Here is a post about fats from the same author: http://theincidentaleconomist.com/wordpress/a-study-on-fats-...


"Should we be eating more polyunsaturated fats? Should we be avoiding saturated fats? The honest answer is: I don’t know. Given my review of the evidence, I stand by my previous recommendations [1], which essentially focus more on foods and less on nutrients. I think the state of nutrition research in general is shockingly flawed."

[1] http://www.nytimes.com/2015/04/21/upshot/simple-rules-for-he...

"Eat food. Not too much. Mostly plants."

Given what the author says in the article regarding processed foods (and I'd disclose that I agree with his summation) I have to wonder about the whole fad for Soylent. It seems to me a technofix beloved by food haters, but some part of me suspects that, in the fullness of time, we'll find out that it is an incredibly bad idea.

The trick is, Soylent is not new: similar things are already in use as enteral feeding in hospitals. They're much more expensive, but rigorously designed and validated.

"Moderation in all things".

- Roman playwright Terence, c. 200 B.C.

Still good wisdom, twenty-two hundred years later.

Obvious counter examples flood my mind.

Obvious counter examples are fine, in moderation.

The best advice I've received (which is a combination of journal science and self-experiment) is to moderate it when pairing with carbs: perhaps lay off the butter cookies or cheeseburgers and fries, but don't be too worried about having a frankfurter with some condiments or a fried egg by itself. The saturated fat acts as an accelerator for the carbs, making them hit your system harder. When consumed apart, they don't have the same interaction.

(of course, once you start looking at how restaurants use saturated fat, it's easy to see that this advice is violated constantly.)

absolutely, as long as you still get enough protein, vitamins, and fiber and remain within your target calorie intake. (hard to do that if you're eating 1,500 calories of butter, though, of course) not a nutritionist but i did stay at a Holiday Inn Express last night.

I find Authority Nutrition a useful guide:


So, Doctor - you're telling me that vaccines could well cause autism?

If you're not saying that, why not? Given that "credible evidence is nonexistent in the medical field."

This crosses the line into incivility, which is the last thing we want on HN, especially when talking with domain experts whom we're lucky to have in this community. Please conduct yourself better than this in the future.

We detached this subthread from https://news.ycombinator.com/item?id=13423680 and marked it off-topic.

I'm sorry, but when an individual descends into hyperbole by dismissing an entire branch of research endeavour with 'credible evidence is nonexistent in the medical field' - it is worth attempting to find out whether they actually believe that there are simply no good studies ever carried out.

The 'So Doctor' was unfortunate - I was framing a response from a fictitious patient walking into his/her surgery, but the wording doesn't work and I can see how it appears aggressive.

Nonetheless, yes when there are organisations such as https://www.nice.org.uk and http://www.cochrane.org trying their hardest to pull together the best evidence out there, this broad dismissal is irritating.

Ok. I never said there was no effort towards improvement. However: https://www.ncbi.nlm.nih.gov/pubmed/17205029

So, anonymous sarcastic individual - as I am sure you are aware, this specific controversy began with a big fraudulent study. You still see controversial data pop up now and then, but the topic is so much a political and public health matter that you would be very naive to take anything anyone has to say about the medical evidence at face value.

What I can say is that, knowing nothing better, I rely on the very likely abysmal quality of medical evidence on the subject to say that vaccines do not cause autism. You can quote that as my official opinion.
