
This is why I actually don't trust newspapers or news sites for medical advice, especially statements like "a glass of wine a day will help you live longer", or some nonsense like that.

In fact, the best place to find this information is the primary literature. You can use Google Scholar and Sci-hub to do your own research. For most issues, it's actually not too hard to understand the primary literature, and it will be much more accurate than the crap you read on popular websites.




The primary literature is a huge waste of time in so many cases though. It can be really frustrating. If there's not a problem in one of hundreds of representational practices, you can find out later there was some other compromise (and some are HUGE) in the data or methodology.

And then if you point out a problem with a research undertaking or study when it's brought up, you can get a set of true-science-believers looking at you like it can't possibly be so. It's unreal sometimes.

It's really no wonder people spread their subjective experiences as if they are science. Institutional "science" is great as a set of models and perspectives, but when you dip into the product you can start to wonder if you are effectively doing the equivalent of product research by watching the home shopping network.


The only people who trust science blindly are those who have never seen how science gets done.

It's scientism pure and simple.

Or to quote a bunch of people who had a great idea: Nullius In Verba.


In so much of the world today, "science" is a brand, and everyone is told to listen to "scientists" because they "follow the science" and know the best way. The genpop couldn't possibly even begin to wrap their heads around text written above a 5th-grade level, so they are told by "authorities" that they need to listen to the "scientists".


That is due to schools prioritizing memorization over critical thinking skills.


Seems quite difficult. For example, in his study the sample size was a problem:

"Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result."

"A woman’s weight can fluctuate as much as 5 pounds over the course of her menstrual cycle"

As a regular person, how would I know what the correct sample size for this type of study should be? Remember, that number isn't some fixed value but more of a range, where a bigger sample increases confidence (sorry if I'm using that term wrong), and the choice is somewhat subjective.

I would also have to think about uncontrolled factors, like menstruation, as having an effect, since he didn't mention it in the original study.

This is far too complicated for the majority of people.
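
For what it's worth, the quoted "dirty little science secret" is easy to demonstrate yourself. A minimal sketch (Python with numpy and scipy; the 15-person groups and 18 measures are my own illustrative numbers): compare two groups that differ by nothing but noise, and a "statistically significant" result still turns up more often than not.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_people, n_measures = 15, 18   # small sample, many outcome variables

    # Two groups with NO real difference: every column is pure noise.
    group_a = rng.normal(size=(n_people, n_measures))
    group_b = rng.normal(size=(n_people, n_measures))

    p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
                for i in range(n_measures)]

    print(min(p_values))                    # frequently < 0.05 by luck alone
    print(sum(p < 0.05 for p in p_values))  # ~1 spurious "finding" expected

With 18 independent noise measures, the chance that at least one comes in under p = 0.05 is about 1 - 0.95^18 ≈ 60%, which is the whole trick.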


Plus, a single study, even a well-done one, still doesn't necessarily prove anything; it generally requires replication to confirm. People have this idea that if they find a study that says something and it looks very legit, it must be true. But studies come out and are later found to be flawed or straight-up wrong all the time. Primary sources are not a great source of information for most people.


"But studies come out and are later found to be flawed or straight up wrong all the time."

I didn't even think of this. You could be looking at a study that turned out to be disproven or even fraudulent (though maybe those get retracted from journals). This is why there are experts in particular fields who keep up to date and should be the source.

The problem is the current wave of anti-intellectualism, which turns everything into a black-and-white, easy-to-digest situation. A person lies once? Don't trust them. A newspaper printed one or more articles that turned out to be false? Don't trust them. It doesn't matter how long ago, or how many other truthful articles; they are done. However, as the GP tried, they don't offer any reasonable solution for how you are supposed to get reliable information.

I wonder if the true goal of some people who push equal mistrust of historically standard sources of information is to make it easier for people to lie.


The current problem is scientism, not anti-intellectualism.

The number of times I was told that I should shut up about covid because I'm not an epidemiologist was astonishing. What's more, I was criticizing their _computer models_, something I do have a degree in.


What is scientism?


In his book The Atheist’s Guide to Reality, philosopher Alex Rosenberg defends his conviction that “the methods of science are the only reliable ways to secure knowledge of anything.” His philosophy is called scientism and is held by many of the world’s skeptics. In the spirit of his anti-supernaturalist leanings, Rosenberg asserts that “science provides all the significant truths about reality, and knowing such truths is what real understanding is all about.” In other words: if science can’t prove it, it’s not worth believing.

https://www.catholic.com/magazine/online-edition/science-is-...


Blind support of what people think science is without the understanding of what science actually is.

If anyone asks you if you "believe in climate change" then you're talking to someone who does not understand science and instead has fallen for scientism. The actual question is "do you understand climate change" since belief is not required.


The definition of belief is "an acceptance that a statement is true or that something exists."

I blindly believe in almost everything in science. For example, gravity: I don't understand gravity beyond some of its effects.

You think that's strange? Should I say I don't believe in gravity? Belief and understanding are two different things.

Let me pivot here and say "in most situations you should just believe whatever the majority of experts in a particular field say is true but that has the highest probability of being true"


Fixing the last line:

Because it has the highest probability of being true.


Being able to smell p-hacking is an art that people have to be trained in and practice...

Or they could just ignore any study with p > 0.005 (the extra zero avoids a lot of p-hacking) and suffer no adverse effects.

The latter is probably a better rule of thumb to give the masses.
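
A rough back-of-the-envelope sketch of why the extra zero helps (Python; the 18-variable fishing trip is my own illustrative assumption): if a study quietly tests 18 pure-noise hypotheses and only reports the best one, the two thresholds behave very differently.

    # Chance a "measure 18 things, report the winner" study lands
    # at least one hit when nothing is real:
    for alpha in (0.05, 0.005):
        print(alpha, 1 - (1 - alpha) ** 18)  # ~0.60 vs ~0.086

Dropping the threshold tenfold cuts that fishing-trip success rate from roughly 60% to under 9%.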


I totally agree with the extra zero for p-values.

I largely disagree with the term p-hacking, as it's not specific enough about the nature of the violation (omitting experiments, straight-up [partial] fabrication of data, ...). A problem is that it is often odorless. What is the odor of omitted data, omitted experimental runs, ...?

Regarding "data dredging" i.e. testing many candidate correlations, I have mixed feelings. As long as all tests are mentioned or at least the number of tests provided I don't see a problem really.

A stream of 100 papers stemming from 1000 different studies, each truthfully based on a single test excluding the null hypothesis over its dataset at p=0.05, and a single study testing 1000 properties and finding 100 at p=0.05, have the same expected number of spurious and real findings. A scientist with more programming skills or support, more compute, ... will simply find similar-quality results faster.

Scientist B may intuitively feel jealous about the quantity of similar-quality results that scientist A picks up with a dragnet approach, and the jealousy may or may not be justified (not having the same amount of compute, support, training, etc.), but that does not justify accusing scientist A of fraud in those cases where there is none. Just like one factory worker might work faster than another (after part of his job on the factory line became automated), that doesn't mean the job is being done less well.
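
If it helps, that equivalence is easy to check numerically. A minimal simulation sketch (Python with numpy and scipy assumed; the per-test sample size and seed are arbitrary): run 1000 tests in which the null hypothesis is true every time, and you get roughly 1000 * 0.05 = 50 false positives, however they are packaged into papers.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_tests, n, alpha = 1000, 30, 0.05

    # 1000 tests against a true null: each sample is pure noise with mean 0.
    spurious = sum(
        stats.ttest_1samp(rng.normal(size=n), 0.0).pvalue < alpha
        for _ in range(n_tests)
    )
    print(spurious)  # ~50, whether from one dragnet study or 1000 papers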

What disturbs me the most about the article is the following:

>Luckily, scientists are getting wise to these problems. Some journals are trying to phase out p value significance testing altogether to nudge scientists into better habits.

This worries me enormously. Plenty of non-scholarly articles, like news articles, mess up units, especially when time is involved. Product information also often contains rather intentional unit confusion. I hope the solution wouldn't be to simply ban units!!!

If some people use erroneous arithmetic, should we collectively ban numbers too?!

In the face of malpractice we shouldn't do away with the theory, we should think of ways to force practice to adhere to theory.

Again, I fully agree with your suggestion to set tighter significance levels. When we observe readers complaining about the huge number of results and their low trustworthiness, the most obvious solution is to set stricter significance levels. At some point the levels will become tight enough that explaining away outright fraud becomes very hard, because the probability that an effect later discovered to be nonexistent produced a spurious false positive at the stated p-value becomes embarrassingly low.

If the dataset sizes are chosen large enough, having separate test and validation sets can help tremendously.


Doing a legit study is hard, and expensive, because there are a million things to control for. Finding even 30 people who live basically identical lives, and are physically identical, and who are prepared to actually do something specific to themselves is a major barrier.

But 30 is a bare minimum. 60 is better (i.e. 30 active, 30 control), and the ideal is in the thousands.

Even the fact of taking money to be in a study is already non-normative, and has to be controlled for.

So, take any study on human behaviour with a huge pinch of salt.

That said, some studies can be replicated, and correlate well with what we experience. Dunning-Kruger and loss aversion are easily (and often) replicated, but also correlate well with everyday experience.

When it comes to diet and weight loss though, well, consider it all rubbish IMO: 99% is rubbish, and figuring out the 1% is hard.
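
For a sense of where numbers like 30, 60, and "thousands" come from, here is a hedged power-analysis sketch (Python with statsmodels assumed; the effect sizes are my own illustrative picks): the group size you need grows rapidly as the effect you are hunting gets smaller, which is exactly the problem with diet effects.

    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    for d in (0.8, 0.5, 0.1):  # large, medium, small effect (Cohen's d)
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(d, round(n))  # roughly 26, 64, and 1571 people PER group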


Sample size needed depends on effect size. If there's a disease that has resulted in death within a year in every recorded case in history (assume it's a common disease for the sake of argument), and your pill cures 2 out of the 7 people in your study? Damn, you've got a fantastic pill there!

My favourite example is if you have a pill you think allows people to fly and you administer it to 1 random person and they shoot off into the sky? Despite sample size 1 that's actually pretty strong evidence :D
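
A hedged sketch of that intuition as a binomial test (Python with scipy; the 1% baseline survival rate is my own stand-in, since a literal "every recorded case died" would make any survivor infinitely surprising):

    from scipy.stats import binomtest

    # Null: survival rate is at most 1% (generous, given universal fatality).
    # Observed: 2 of the 7 treated patients survive.
    result = binomtest(k=2, n=7, p=0.01, alternative="greater")
    print(result.pvalue)  # ~0.002: seven patients is already decisive

The flying-pill case is the same logic taken to the limit: when the effect size is enormous, n = 1 can be overwhelming evidence.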


How did you come up with "99% is rubbish"?


That being said (and I have some level of agreement with it), I have adopted chocolate in this way.

I started drinking coffee because I had heard that it was the primary source of antioxidants in the U.S., and I decided to start adding unsweetened cocoa powder to every cup.

I get all the health benefits of chocolate this way, but none of the fat or the sugar.

The taste is not bad.


Is it also easy to tell if the author is a fraud? Or if the study was designed specifically to get a specific outcome? How do you tell from Sci-hub whether an article was funded by a party with affiliated interests?


What you do is not trust a single article. You download about a dozen in the field, including review articles if you can. You look at journal titles and journal rankings, and check whether the authors come from well-known research institutions. Click on the "cited by" link and check the papers that cited them for any refutations. Looking through 30-40 papers sounds like a lot, but it only takes a couple of weeks at most, and it's not a lot of work at all if the issue is really important to you.



