
So run a meta-study on the results published by a set of authors and double-check that their results are normally distributed across the p-values associated with their studies.

These problems are solved problems in the scientific community. Just announce that regular meta-studies will be done, publish the expectation that authors' results be normally distributed, and publicly show off the meta-study results.

-------------

In any case, the discussion point you're making is well beyond the high-school level needed for a general education. If someone needs to run their own experiment (say, A/B testing on their website) and cannot afford a proper battery of tests/statistics, they should instead rely on high-school-level heuristics to design their personal studies.

This isn't about analyzing other people's results and finding flaws in other people's (possibly maliciously seeded) work. This is a heuristic for running your own experiments and proving something to yourself at a 95% confidence level. If you want to get published in the scientific community, the level of rigor is much higher of course, but no one tries to publish a scientific paper on just a high school education (which is the level my original comment was aimed at).
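
To make that concrete, the kind of heuristic I have in mind is nothing fancier than a two-proportion z-test at 95% confidence. A rough Python sketch (the traffic numbers are made up and the scipy dependency is just for illustration):

    # Self-run A/B test: did variant B convert better than variant A?
    from math import sqrt
    from scipy import stats

    conv_a, n_a = 120, 2400   # conversions / visitors on variant A
    conv_b, n_b = 156, 2400   # conversions / visitors on variant B

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * stats.norm.sf(abs(z))   # two-sided

    print(z, p_value)   # p < 0.05 -> "significant" at the 95% level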




> and double-check that their results are normally distributed across the p-values associated with their studies

What is the distribution of a set of results over a set of p-values?

If you mean that you should check to make sure that the p-values themselves are normally distributed... wouldn't that be wrong? Assuming all hypotheses are false, p-values should be uniformly distributed. Assuming some hypotheses can sometimes be true, there's not a lot you can say about the appropriate distribution of p-values - it would depend on how often hypotheses are correct, and how strong the effects are.
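
A quick simulation makes the uniform-under-the-null point concrete. This is just an illustrative Python sketch (assumes numpy and scipy; nothing here comes from the parent comment):

    # p-values from t-tests on pure-noise data are uniformly distributed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 10,000 experiments where the null hypothesis is true by construction:
    # both groups are drawn from the same normal distribution.
    pvals = np.array([
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(10_000)
    ])

    # Under the null, P(p <= x) = x, so every decile holds ~10% of p-values.
    hist, _ = np.histogram(pvals, bins=10, range=(0.0, 1.0))
    print(hist / pvals.size)        # each entry close to 0.10
    print((pvals < 0.05).mean())    # close to 0.05, the false-positive rate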


First, I was specifically responding to this:

> I remember my high school AP Psychology teacher mocking p=0.05 as practically meaningless.

and trying to explain why the OP's teacher was probably right.

Second:

> So run a meta-study on the results published by a set of authors and double-check that their results are normally distributed across the p-values associated with their studies.

That won't work, especially if you only run the meta-study on published results, because it is all but impossible to get negative results published. Authors don't need to cherry-pick; the peer-review system does it for them.
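
You can see the mechanism in a toy simulation (an illustrative Python sketch assuming numpy and scipy, not anything from the actual literature): hand journals 10,000 null studies and let them accept only p < 0.05.

    # Publication bias: with zero true effect, selecting "significant"
    # results yields a literature full of exaggerated effect sizes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    published = []
    for _ in range(10_000):
        treatment = rng.normal(0.0, 1.0, size=30)   # true effect is zero
        control = rng.normal(0.0, 1.0, size=30)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            published.append(treatment.mean() - control.mean())

    print(len(published))               # ~500: the 5% that get "published"
    print(np.mean(np.abs(published)))   # far from the true effect of 0

A meta-study run over that published list would confidently find an effect that does not exist.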

> These problems are solved problems in the scientific community.

No, they aren't. These are social and political problems, not mathematical ones. And the scientific community is pretty bad at solving those.

> the discussion point you're making is well beyond the high-school level needed for a general education

I strongly disagree. I think everyone needs to understand this so they can approach scientific claims with an appropriate level of skepticism. Understanding how the sausage is made is essential to understanding science.

And BTW, I am not some crazy anti-vaxxer climate-change denialist flat-earther. I was an academic researcher for 15 years -- in a STEM field, not psychology, and even that was sufficiently screwed up to make me change my career. I have advocated for science and the scientific method for decades. It's not science that's broken, it's the academic peer-review system, which is essentially unchanged since it was invented in the 19th century. That is what needs to change. And that has nothing to do with math and everything to do with politics and economics.


> It's not science that's broken, it's the academic peer-review system, which is essentially unchanged since it was invented in the 19th century.

In my experience, it's not even this. Rather, it is that outside of STEM, very, very few people truly understand hypothesis testing.

At least in my experience, even basic concepts such as "falsify the null hypothesis" are surprisingly hard, even for presumably intelligent people, such as MDs in PhD programmes.

They will still tend to believe that a "significant" result is proof of an effect, and often even believe it proves that the effect is causal in the direction they prefer.
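
The base-rate arithmetic shows why that belief is wrong. With some hypothetical numbers (these are assumptions for illustration, not measurements):

    # Why p < 0.05 is not proof of an effect: Bayes on assumed base rates.
    prior_true = 0.10   # assume 10% of tested hypotheses are actually true
    power = 0.80        # P(significant | effect is real)
    alpha = 0.05        # P(significant | no effect)

    p_significant = power * prior_true + alpha * (1 - prior_true)
    p_real_given_significant = power * prior_true / p_significant
    print(p_real_given_significant)   # ~0.64: roughly 1 in 3 "findings" is false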

At some point, stats just becomes a set of arcane conjurations for an entire field. At that point, the field as a whole tends to lose its ability to follow the scientific method and turns into something resembling a cult or clergy.


FWIW, I got through a Ph.D. program in CS without ever having to take a stats course. I took probability theory, which is related, but not the same thing. I had to figure out stats on my own. So yes, I think you're absolutely right, but it's not just "outside of STEM" -- sometimes it's inside of STEM too.


Yes. I was, however, not arguing that every student in the field has to understand the scientific method well. It's enough that there is a critical mass of leaders with such an understanding, to ensure that students (including PhD students) work in ways that support it.

What I was arguing was that there is almost nobody with this understanding in many fields outside STEM.

As for your case, I don't know exactly what "probability theory" meant at your college. But if it taught probability density functions and how to do integration on them to calculate various probabilities, you're a long way towards a basic understanding of stats that surpasses many "stats" courses taught to social science students.
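
The connection is direct: a p-value is literally an integral over a density's tail. A throwaway sketch (my example, assuming scipy, not something from your course):

    # The two-sided p-value for z = 1.96 is an integral of the normal PDF.
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    z = 1.96
    tail, _err = quad(stats.norm.pdf, z, np.inf)  # integrate the upper tail
    print(2 * tail)               # ~0.05, the familiar threshold
    print(2 * stats.norm.sf(z))   # same number via the survival function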

I myself only took a single "stats" course before graduating, which was mostly calculus applied to probability theory, without applications such as hypothesis testing baked in. Then I went on to do a lot of physics that was essentially applied probability theory (statistical mechanics and quantum mechanics).

Around that time, my GF (who was a bit older than me) was teaching a course in scientific methodology to a class of MD students who wanted to become "real" doctors (a PhD programme for Medical Doctors), and the math and logic part was kind of hard for her (physicians may not learn a lot of stats until this level, but most of the MD PhD students are quite smart). Anyway, for me, with a proper STEM background, picking up these applications was really easy.

Since then, I've had many encounters with people from various backgrounds who try to grapple with stats or adjacent spaces (data mining, machine learning, etc.), and it seems that those who do not have a Math or Physics background, or at least a quite theoretical/mathematical Computer Science or Economics background, struggle quite hard.

Especially if they have to deal with a problem that is not covered by the set of conjurations they've been taught in their basic stats classes, since they only learned the "how" but not the "why".


There’s a professor of Human Evolutionary Biology at Harvard who only has a high school diploma[1]. Needless to say he’s been published and cited many times over.

[1] https://theconversation.com/profiles/louis-liebenberg-122680...


I don't know whether you're mocking them or being supportive of them or just stating a fact. Either way, education level has no bearing on subject knowledge. I know more about how computers, compilers, and software algorithms work than most post-docs and professors that I've run into in those subjects.

Am I smarter than them? Nope. Do I know as many fancy big words as them? Nope. Do I care about results and communicating complex topics to normal people? Yep. Do I care more about making the company money than chasing some bug-bear to go on my resume? Yep.

I fucking hate school and have no desire to ever go back. I can't put up with the bullshit, so I dropped out; I just never stopped studying and I don't need a piece of paper to affirm that fact.


To the people downvoting: at least offer a rebuttal.



