Hacker News

The problem with modern medicine is that much of it is based on flawed and biased statistical studies. Whether this is because medical personnel lack training in statistics, or because such studies generate funding, I don't know, but something is definitely rotten.

Let's take anything involving nutrition. Some challenges are: (1) people lie, (2) such studies can't be double-blind, so the placebo effect kicks in, (3) the statistical significance of short-term studies is zero, (4) you can't control all the variables unless you lock those people in a cage, and (5) most conclusions of such studies have the potential to confuse the cause and the effect.

But not all of science is like that. Just medicine.




When you cannot control all the variables, it's important to have a large enough sample that the randomness in each direction for the different variables essentially cancels itself out.
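As an illustration of that cancelling-out, here is a stdlib-only Python sketch (the "unmeasured trait" and the sample sizes are invented for the example): randomly split people into two arms and watch the arms' average value of a hidden variable converge as the sample grows.

```python
import random
import statistics

random.seed(1)

def group_imbalance(n):
    """Randomly split n people into two equal arms; return the difference
    between the arms' mean value of an unmeasured trait (a 0-1 score)."""
    trait = [random.random() for _ in range(n)]
    random.shuffle(trait)                      # random assignment to arms
    a, b = trait[: n // 2], trait[n // 2:]
    return abs(statistics.mean(a) - statistics.mean(b))

for n in (20, 200, 2000, 20000):
    print(n, round(group_imbalance(n), 4))
```

The imbalance shrinks roughly as 1/sqrt(n), which is exactly the "randomness cancels itself out" argument; it only works for variables that are randomized, not for systematic biases in how subjects are recruited or measured.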

Also what does "the statistical significance of short-term studies is zero" mean? I don't think it means what you think it means.

I would argue that short-term nutrition studies have little clinical significance, despite their statistical significance. I'm in medicine, and I read papers all the time that detect a statistical difference between control and experimental groups, but the difference is so tiny that it's clinically meaningless. This is the balance you have to strike with large sample sizes: with a large enough sample, even small differences are likely to be statistically significant, so the key is determining whether the difference is worthwhile.
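That trade-off is easy to see in a quick simulation (a stdlib-only Python sketch; the "biomarker", its 0.5-unit shift, and the sample size are all invented for illustration): with 50,000 subjects per arm, a shift of one twentieth of a standard deviation is decisively "significant" while the effect size stays negligible.

```python
import math
import random

random.seed(0)
n = 50_000
# Hypothetical treatment shifts a biomarker by 0.5 units, against a
# person-to-person standard deviation of 10 units (Cohen's d = 0.05).
control = [random.gauss(100.0, 10.0) for _ in range(n)]
treated = [random.gauss(100.5, 10.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

diff = mean(treated) - mean(control)
se = math.sqrt(var(control) / n + var(treated) / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p, normal approximation
cohens_d = diff / math.sqrt((var(control) + var(treated)) / 2)

print(f"difference = {diff:.3f}, p = {p:.2g}, Cohen's d = {cohens_d:.3f}")
```

The p-value comes out vanishingly small while Cohen's d stays around 0.05, far below the conventional 0.2 threshold for even a "small" effect: statistically significant, clinically irrelevant.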

I blame bad science reporting for a lot of the anger you are feeling. Reporters don't seem to understand what they are reporting, and often the scientists themselves are (accidentally or on purpose) making it worse.


> When you cannot control all the variables, it's important to have a large enough sample that the randomness in each direction for the different variables essentially cancels itself out.

That's nice in theory, but does not happen for most published research.

> I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless.

I'm trained in statistics. My ex was an MD, and for a couple of years I read the NEJM for fun. Most of the results published are barely statistically significant for the small group tested ("our sample included 40 Caucasian females between the ages of 37 and 48, and we have a p-value of 0.03", with no mention of the context that might make that p-value meaningless; but let's assume they got that part right). Then, a couple of years later, some other study takes that result as absolute truth but assumes it applies to any woman aged over 30. And a couple of years after that, it is assumed to be universal and speculated to apply to males as well.

Is your experience different?

> I blame bad science reporting for a lot of the anger you are feeling.

I blame tenure publishing requirements. While bad reporting certainly deserves its share of contempt, people these days do everything they can to meet the publishing requirements for tenure. Most stay away from outright fabrication, but every other manipulation of the data that would make it fit for a higher-caliber publication gets done, as long as it is not outright fraudulent, including dropping the background context so nicely exemplified by this xkcd comic: http://xkcd.com/882/. It is often the researchers themselves doing the bad reporting, with no outside help.
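The xkcd scenario is easy to reproduce (a stdlib-only Python sketch; the subgroup count and sample sizes are made up): test 20 subgroups of pure noise, and most "experiments" still turn up at least one p < 0.05.

```python
import math
import random

random.seed(42)

def two_sided_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    z = (ma - mb) / math.sqrt(va / n + vb / n)
    return math.erfc(abs(z) / math.sqrt(2))

# 20 "subgroups" (jelly bean colors) drawn from the SAME null
# distribution: there is no real effect anywhere.
trials = 1000
hit = 0
for _ in range(trials):
    ps = [two_sided_p([random.gauss(0, 1) for _ in range(30)],
                      [random.gauss(0, 1) for _ in range(30)])
          for _ in range(20)]
    if min(ps) < 0.05:   # at least one "significant" subgroup
        hit += 1

print(f"experiments with a 'significant' subgroup: {hit / trials:.0%}")
# Expected roughly 1 - 0.95**20, about 64%, purely from multiple testing.
```

Only the subgroup that "worked" makes it into the abstract and the headline; the nineteen that didn't are the dropped background context.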


Speaking only to the middle part, modest results creeping up in significance and scope: I see that sometimes, but not regularly. Given, as you say, the tenure publishing requirements, I find I often see a flood of similar studies after a "proof of concept", which actually helps to flesh out the issue.


Not even a large sample will help against systematic errors.


Good point; upvoted. However I was just trying to address the idea that there are ways of minimizing problems in study design. No study is ever perfect, but many of them are sufficient.



