Effect size is significantly more important than statistical significance (argmin.net)
376 points by stochastician 7 days ago | 159 comments





Not speaking to this study in particular, I strongly agree with the general point. Science has really been held back by an over-focus on "significance". I'm not really interested in a pile of hundreds of thousands of studies that each establish a tiny effect with suspiciously-just-barely-significant results. I'm interested in studies that reveal results robust and reliable enough to be built on to produce other results. 3% variations with p=0.046 aren't. They're dead ends, because you can't put very many of those into the foundations of future papers before the probability of one of your foundations being incorrect is too large.

To the extent that those are hard to come by... Yeah! They are! Science is hard. Nobody promised this would be easy. Science shouldn't be something where labs are cranking out easy 3%/p=0.046 papers all the time just to keep funding. It's just a waste of money and time of our smartest people. It should be harder than it is now.

Too many proposals are obviously only going to be capable of turning up that result (insufficient statistical power is often obvious right in the proposal, if you take the time to work the math). I'd rather see more wood behind fewer arrows, and see fewer proposals chasing much more statistical power, than the chaff of garbage we get now.

If I were King of Science, or at least editor of a prestigious journal, I'd want to put the word out that I'm looking for papers with at least one of: a substantial effect size, or a p value of something like p = 0.0001. Yeah. That's a high bar. I know. That's the point.

"But jerf, isn't it still valuable to map out all the little things like that?" No, it really isn't. We already have every reason in the world to believe the world is drenched in 1%/p=0.05 effects. "Everything's correlated to everything", so that's not some sort of amazing find, it's the totally expected output of living in our reality. Really, this sort of stuff is still just below the noise floor. Plus, the idea that we can remove such small, noisy confounding factors is just silly. We need to look for the things that stand out from that noise floor, not spending billions of dollars doing the equivalent of listening to our spirit guides communicate to us over white noise from the radio.


> If I were King of Science, or at least editor of a prestigious journal, I'd want to put the word out that I'm looking for papers with at least one of: a substantial effect size, or a p value of something like p = 0.0001. Yeah. That's a high bar. I know. That's the point.

And study preregistration to avoid p-hacking and incentivize publishing negative results. And full availability of data, aka "open science".


Preregistration, a requirement to publish negative or null results, and full data availability are, arguably, the three legs of modern science. If we collectively don't enforce this, nobody is doing science; they're just fucking around and writing it down.

I like rules like these. One context where preregistration, null results, and full data are all required is clinical trials overseen by the FDA. It's no surprise that those studies carry a lot of weight.

Also replication studies for negative or null results in addition to positive ones (we don't have either).

You do realize there are a million negative results for every positive result? This is equally easy to game, maybe easier.

Yes, and knowing what's been tried and what has failed is important.

I think what's being pointed out is that "researchers" could pump out hundreds of easy to test negatives every day if a negative result was just as incentivised.

I do agree though, negatives are just as important when the intent is to prove/disprove a meaningful hypothesis.


A negative result won't make a career. If negative results just have to go into a repository, I don't think there's much danger of over-incentivising them. You can't mandate that Nature or Cell publish negative results.

we tried using 0.1 mL, it didn't work

we tried using 0.11 mL, it didn't work

we tried using 0.12 mL, it didn't work

we tried using 0.13 mL, it didn't work


    we tried using 0.10 mL, it didn't work
    we tried using 0.11 mL, it didn't work
    we tried using 0.13 mL, it didn't work
    we tried using 0.15 mL, it didn't work
    we tried using 0.17 mL, it didn't work
    we tried using 0.16 mL, it didn't work
    we tried using 0.18 mL, it didn't work
    we tried using 0.20 mL, it didn't work
    we tried using 0.14 mL, it didn't work
    we tried using 0.12 mL, it worked so we published
Do you want to know that the ones that "didn't work" existed? Or are you happy with just the one that "worked" being written up in isolation?

Especially with small effect sizes, suppressing what didn't work is one obvious way (of many) to p-hack your way to publication acceptance.

https://en.wikipedia.org/wiki/Replication_crisis
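To make the mechanism concrete, here is a minimal simulation sketch (purely hypothetical numbers, not modeled on any real study): every lab tries ten doses that all do nothing, and publishes the first one that clears p < 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_labs = 10_000     # hypothetical labs, each trying 10 doses
    n_doses = 10        # "0.10 mL, 0.11 mL, ..." - none of them actually does anything
    n_per_group = 20    # small samples per arm, typical of cheap studies

    labs_that_published = 0
    for _ in range(n_labs):
        for _ in range(n_doses):
            control = rng.normal(0.0, 1.0, n_per_group)  # true effect is exactly zero
            treated = rng.normal(0.0, 1.0, n_per_group)  # identical distribution
            _, p = stats.ttest_ind(treated, control)
            if p < 0.05:
                labs_that_published += 1  # "0.12 mL worked, so we published"
                break

    # With 10 tries at alpha = 0.05, roughly 1 - 0.95**10 ~ 40% of labs end up
    # with a "publishable" dose even though no dose does anything at all.
    print(f"Labs that published a null result as positive: {labs_that_published / n_labs:.1%}")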


i don’t want to know about each test that didn’t work as a separate publication, that’s for sure!

That's true, we would need a way to collect this data so it's easily seen as part of a whole.

E.g. if you search for eggs and cholesterol you should find all studies with their summarized results on whether eggs are ok or not for your cholesterol, grouped by researcher, so if somebody does 200 studies to find the one positive it's instantly visible.


You would read a meta-study that summarizes those tests - especially because they might have been run by different labs, and the fact that one of them worked might actually be a real effect caused by some other difference in the experiment.

If someone really tested those hypotheses, let them publish. I doubt they'll get funding so it'll be on their own dime. In practice people do run experiments like that, but they only publish the one trial in four that is successful.

Look to physics for how negative results should be published. There typically has to be reason to suspect some dosage range should work, in which case that sequence of studies you describe would be perfectly valid if it's within that range.

Why would someone want to game a negative result? Nobody ever becomes famous for saying their approach doesn't work. (As long as science is open, so we can make sure researchers actually did good work before reaching the negative result.)

To have their name on a publication, which is a currency in the academic world.

I've thought about the idea of allowing people to separately publish data and analysis. Right now, data are only published if the analysis shows something interesting.

Improving the quality of measurements and data could be a rewarding pursuit, and could encourage the development of better experimental technique. And a good data set, even if it doesn't lead to an immediate result, might be useful in the future when combined with data that looks at a problem from another angle.

Granted, this is a little bit self serving: I opted out of an academic career, partially because I had no good research ideas. But I love creating experiments and generating data! Fortunately I found a niche at a company that makes measurement equipment. I deal with the quality of data, and the problem of replication, all day every day.


It would be interesting to consider how much knowledge would never have been uncovered if you were King of Science. All those subtle, barely seen interactions in nature that on further investigation turned out to be something rather special.

Such as? It would also be interesting to explore how many dead ends we wouldn't have wasted time on, and so what other things might have been discovered sooner.

Scientists aren't stupid. No one saw a paper where a predictor explained 1% of the variance in an outcome and based solely on a significant p value decided that was a great road to base an entire career on. The problem, as described by the parent comment, doesn't really exist in funding structures and the scientific literature. It does occur to some degree in media coverage of science.

One could make the case that in GWAS studies it has occurred, but not because small effect sizes are inconsequential; the statistical methods just weren't able to separate grain from chaff for a while.

An allele that is responsible for 2% of the variation in disease risk might seem inconsequential, but 25 of those together can serve as a polygenic risk score that can predict disease and target treatment.
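A rough numerical sketch of that point (illustrative numbers only): 25 independent variants that each explain about 2% of the variance of an underlying liability jointly explain roughly half of it.

    import numpy as np

    rng = np.random.default_rng(1)
    n_people, n_alleles, var_each = 100_000, 25, 0.02  # 25 alleles, ~2% of variance each

    # Each standardized allele contributes sqrt(0.02) to an underlying liability;
    # the leftover variance (1 - 25 * 0.02 = 0.5) is residual noise.
    genotypes = rng.normal(size=(n_people, n_alleles))
    liability = genotypes @ np.full(n_alleles, np.sqrt(var_each))
    liability += rng.normal(scale=np.sqrt(1 - n_alleles * var_each), size=n_people)

    single = genotypes[:, 0] * np.sqrt(var_each)        # one allele on its own
    score = genotypes.sum(axis=1) * np.sqrt(var_each)   # naive polygenic score

    r2_single = np.corrcoef(single, liability)[0, 1] ** 2
    r2_score = np.corrcoef(score, liability)[0, 1] ** 2
    print(f"Variance explained by one allele:         ~{r2_single:.1%}")  # ~2%
    print(f"Variance explained by the combined score: ~{r2_score:.1%}")   # ~50%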


> Scientists aren't stupid. No one saw a paper where a predictor explained 1% of the variance in an outcome and based solely on a significant p value decided that was a great road to base an entire career on. The problem, as described by the parent comment, doesn't really exist in funding structures and the scientific literature.

Of course they're stupid. Everyone is stupid. That's why we have a "scientific method" and a formal discipline of logic to overcome fallacious reasoning and cognitive biases. If people weren't stupid we wouldn't need any of these disciplines to check our mistakes.

And yes, what you describe does happen all of the time. We literally just had a thread on HN about the failure of the amyloid hypothesis in Alzheimer's and the decades of work wasted on it. Many researchers are still trying to push it as a legitimate therapeutic target despite every clinical trial to date failing spectacularly. As Planck said, science advances one funeral at a time.

Which isn't to say that small effect sizes aren't legitimate research targets either, but if you're after a small effect size, the rigour should be scaled proportionally.


So your example of decades being wasted chasing an initial tiny effect size, all the time, was... An example of a failed mechanistic hypothesis that wasn't based on a tiny effect size.

I wasn't trying to post about the effect size specifically, but about general incentives and dead ends, but if you want a specific example look no further than aspirin for myocardial infarction:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/

Quote:

> A commonly cited example of this problem is the Physicians Health Study of aspirin to prevent myocardial infarction (MI).4 In more than 22 000 subjects over an average of 5 years, aspirin was associated with a reduction in MI (although not in overall cardiovascular mortality) that was highly statistically significant: P < .00001. The study was terminated early due to the conclusive evidence, and aspirin was recommended for general prevention. However, the effect size was very small: a risk difference of 0.77% with r2 = .001—an extremely small effect size. As a result of that study, many people were advised to take aspirin who would not experience benefit yet were also at risk for adverse effects. Further studies found even smaller effects, and the recommendation to use aspirin has since been modified.

Long-term aspirin use has its own risks, like GI bleeds, and the MI benefits are clearly not warranted given those risks.


It's hard to parse that example, because the citation it contains is to a meta-analysis that provides an effect size of aspirin for MI in the PHS in the form of an odds ratio of much greater magnitude. Digging a bit more, here's the actual result - the relative risk reduction was 44%, not 0.77%. https://www.nejm.org/doi/full/10.1056/NEJM198907203210301

> There was a 44 percent reduction in the risk of myocardial infarction (relative risk, 0.56; 95 percent confidence interval, 0.45 to 0.70; P<0.00001) in the aspirin group (254.8 per 100,000 per year as compared with 439.7 in the placebo group).

I agree if you said from the start you meant general incentives, especially in pharma development, but that is by and large a different conversation.
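For what it's worth, the two figures aren't contradictory so much as different summaries of the same rates. A quick back-of-the-envelope check using only the numbers quoted above (the published absolute figure of 0.77% differs slightly, presumably due to adjustment and exact follow-up time):

    # Rates quoted in the PHS result above, per 100,000 person-years.
    aspirin_rate = 254.8 / 100_000
    placebo_rate = 439.7 / 100_000
    years_of_followup = 5  # average follow-up quoted upthread

    relative_risk = aspirin_rate / placebo_rate          # ~0.58 (0.56 after adjustment)
    relative_reduction = 1 - relative_risk               # ~42-44%
    abs_diff_per_year = placebo_rate - aspirin_rate      # ~0.18 percentage points per year
    abs_diff_over_study = abs_diff_per_year * years_of_followup  # roughly 1 point over ~5 years

    print(f"Relative risk reduction:        {relative_reduction:.0%}")
    print(f"Absolute risk difference/year:  {abs_diff_per_year:.2%}")
    print(f"Absolute difference over study: {abs_diff_over_study:.2%}")
    # A "44% relative reduction" and an "absolute risk difference under 1%" are
    # both fair descriptions of the same data; which one matters depends on the
    # baseline risk and on the harms (e.g. GI bleeds) you trade it against.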


This paper was pretty clearly pre-specified here: https://files.givewell.org/files/DWDA%202009/IPA/Masks_RCT_P...

And it was actually preregistered as well: https://osf.io/vzdh6/

The problem is that when you’re on the cusp of a new thing, unless you’re super lucky, the result will necessarily be near the noise floor. Real science is like that.

But I definitely agree it’d be nice to go back and show something is true to p=.0001 or whatever. Overwhelmingly solid evidence is truly a wonderful thing, and as you say, it’s really the only way to build a solid foundation.

When you engineer stuff, it needs to work 99.99-99.999% of the time or more. Otherwise you're severely limited in how far your machine can go (in terms of complexity, levels of abstraction and organization) before it spends most of its time in a broken state.

I’ve been thinking about this while playing Factorio: so much of our discussion and mental modeling of automation works under the assumption of perfect reliability. If you had SLIGHTLY below 100% reliability in Factorio, the game would be a terrible grind limited to small factories. Likewise with mathematical proofs or computer transistors or self driving cars or any other kind of automation. The reliability needs to be insanely good. You need to add a bunch of nines to whatever you’re making.

A counterpoint to this is when you’re in an emergency and inaction means people die. In that case, you need to accept some uncertainty early on.


> If you had SLIGHTLY below 100% reliability in Factorio, the game would be a terrible grind limited to small factories.

I'd argue you do have <100% reliability in Factorio, and much of the game is in increasing the 9s.

Biters can wreak havoc on your base. Miners contaminate your belts with the wrong types of ore, if you weren't paying enough attention near overlapping fields. Misplaced inserters may mis-feed your assemblers, reducing efficiency or leaving outright nonfunctional buildings. Misclicks can cripple large swaths of your previously working factory, ruining plenty of speedruns if they go uncaught. For later game megabase situations, you must deal with limited lifetimes as mining locations dry up, requiring you to overhaul existing systems with new routes of resources into them. As inputs are split and redirected, existing manufacturing can choke and sputter when it ends up starved of resources. Letting your power plants starve of fuel can result in a small crisis! Electric miners mining coal, refineries turning oil into solid fuel, electric inserters fueling the boilers, water pumps providing the water to said boilers - these things all take power, and jump starting these after a power outage takes time you might not have if you're under active attack and your laser turrets are all offline as well.

But you have means of remediating much of this unreliability. Emergency fuel and water stockpiles, configuring priorities such that fuel for power is prioritized ahead of your fancy new iron smelting setup, programmable alerts for when input stockpiles run low, ammo-turrets that work without power, burner inserters for your power production's critical path will bootstrap themselves after an outage, roboports that replace biter-attacked defenses.

Your first smelting setup in Factorio will likely be a hand-fed burner miner and furnace, taking at most 50 coal. This will run out of power in minutes. Then you might use inserters to add a coal buffer. Then a belt of coal, so you don't need to constantly refill the coal buffer. Then a rail station, so you don't need to constantly hand-route entirely new coal and ore mining patches. Then you'll use blueprints and bots to automate much of constructing your new inputs. If you're really crazy, you'll experiment with automating the usage of those blueprints to build self-expanding bases...


I really considered getting into Factorio but your comment is exactly why I can’t touch it. I have certain demands upon my time that would inevitably go unmet as I fuss with factory.

Holy shit I was about to compose exactly this answer! Parent was marketing straight to my lizard brain

Yes be warned. Factorio is the most addictive game I have ever played.

Now imagine if machines got clogged 1% of the time and you had to fix them, or if items occasionally fell off conveyor belts onto other conveyor belts. The amount of redundancy and work that would create would be paralyzing, but that's the bare minimum of recreating what goes wrong in the real world. I love Factorio, but what always strikes me as most interesting is thinking about what it is you get to take for granted in one of the most complex games around.

That's a nice post and all, but none of that had anything to do with reliability. In all of those cases, those components worked exactly as designed when operating within their specification ranges (ie inserters insert when they have power).

The point is, it would be significantly more complex if things frequently failed even when "operating properly", and if this happened at all levels of abstraction in a factory.


You're drawing what appear to be arbitrary distinctions between failure modes without making a good argument as to why one is a reliability issue and another is not.

My printer might jam if I feed paper crooked or poorly. My assemblers might jam if I feed incorrect components through misclicks, misplaced miners, or filled outputs.

My printer might fail from the entropy of wear and tear. My assemblers might fail from the entropy of biters attracted by generated pollution.

My printer might stall from running out of paper or a filled output tray. My assemblers might stall from running out of inputs or a filled output belt or chest.

Why is the printer arguably unreliable, but the assembler "100% reliable"?

Failures of my printer are not caused by magic fairies sprinkling dice-rolling pixie dust on my toner cartridge. Failures have physical causes. That Factorio's assembler failures have modeled causes as well, instead of an arbitrary and magic dice roll, does not detract from those failure modes being reliability issues.

That my printer fails far less frequently than my Factorio assemblers points to my printer being more reliable than my Factorio assemblers. Your point that reliability could be even worse misses my point, which is merely that not only does Factorio already avoid the fiction of "100%" or "perfect reliability" - but that perhaps Factorio already models reliability worse than "real-life" in some aspects already.


It's still reliability, just of the whole system rather than the individual parts. The aliens breaking stuff is part of the whole system "operating properly".

I don't think it would be particularly bad if inserters inserted at slightly different speeds from each other, or occasionally destroyed the item they were supposed to insert. Same with components occasionally breaking on their own.


Fine. Do it like the experimental physicists do: if you think you're on to something, refine and repeat the experiment in order to get a more robust, repeatable result.

The original sin of the medical and social sciences is failing to recognize a distinction between exploratory research and confirmatory research and behave accordingly.


The problem is that it’s really hard to get good data, ethically, in medical sciences. Something that improves outcomes by 5-10% can be really important, but trying to get a study big enough to prove it can be super expensive already.

Nobody likes being in the control group of the first working anti-aging serum...

> Nobody likes being in the control group of the first working anti-aging serum...

You only know whether it works when the study has been completed. You also only know whether the drug has (potentially) disastrous consequences when the study has been completed. Thus, I am not completely sure whether your claim holds.


You missed the "working" part. Success was a prerequisite to their after-the-fact feelings. At least some of the control group will be in old age but still alive when we know it works. They might not know if it means indefinite life (and side effects may turn it into "die at 85", so some of the control group may outlive the intervention group after the study), but they will know that on average they did worse.

> At least some of the control group will be in old age but still alive when we know it works.

The anti-aging serum could work (i.e. make you older), but have strong negative side effects.


So then it wouldn't "work", where "work" as used here is defined, "to function or operate according to plan or design".

And no, it's not reasonable to assume I meant "work" as in, "have an anti-aging serum that has strong negative side effects."


An anti-aging serum works if it reduces or removes specific (or even most/all?) effects that are related to aging.

Relative to other side effects. Remember, we are in a scenario where the serum already exists, so we would have knowledge of those side effects as part of the discussion of whether it works.

People opt into the study in the first place. I'm willing to bet that no one opts into the study hoping to be in the control group.

Well, some medical studies pay good money, so it's perfectly rational to sign up for these and hope to get the placebo.

Fair enough. I tend to think of most of them as paying just a small amount where the main goal is to get treatment, but I'm sure some pay better than others.

> I’ve been thinking about this while playing Factorio: so much of our discussion and mental modeling of automation works under the assumption of perfect reliability. If you had SLIGHTLY below 100% reliability in Factorio, the game would be a terrible grind limited to small factories.

So I'm making a guess here that you play with few monsters or non-aggressive monsters?


> So I'm making a guess here that you play with few monsters or non-aggressive monsters?

Aggressively building turret walls, defensive train lines, and so on very quickly pays dividends here. Particularly if you claim as much territory as you can each time you expand instead of simply defending what you've built out.

If done this way building/improving defenses and managing enemies becomes a task you maintain every so often and doesn't spill over into the reliability of your base.


Currently playing a game to minimize pollution to try to totally avoid biter attention. Surrounded by trees, now almost entirely solar with efficiency modules.

> when you’re on the cusp of a new thing, unless you’re super lucky, the result will necessarily be near the noise floor. Real science is like that.

That's not necessarily true in social sciences. When you're working with large survey datasets, many variables are significantly related. That doesn't mean these relationships are meaningful or causal, they could be due to underlying common causes, etc. (Maybe social sciences weren't included in "real science" - but there's where a lot of stats discussions focus)


Come into Bayesian land, the water is fine. The whole NHST edifice starts to seem really shaky once you stop and wonder if "True" and "False" are really the only two possible states of a scientific hypothesis. Andrew Gelman has written about this in many places, e.g. http://www.stat.columbia.edu/~gelman/research/published/aban....

> The whole NHST edifice starts to seem really shaky once you stop and wonder if "True" and "False" are really the only two possible states of a scientific hypothesis.

The root problem here is that people tend to dichotomise what are fundamentally continuous hypothesis spaces. The correct question is not "is drug A better than drug B?", it's "how much better or worse is drug A compared to drug B?". And this is an error you can make in both Bayesian and frequentist lands, though culturally the Bayesians have a tendency to work directly with the underlying, continuous hypothesis space.

That said, there are sometimes external reasons why you have to dichotomise your hypothesis space. E.g. ethical reasons in medicine, since otherwise you can easily end up concluding that you should give half your patients drug A and the other half drug B, to minimise volatility of outcomes (this situation would occur when you're very uncertain which drug is better).


Gelman et al.'s BDA3 has a fun exercise estimating heart-disease rates in one of the early chapters that demonstrates this issue with effect sizes. BDA3 uses a simple frequentist model to determine heart-disease rates and shows that areas with small population sizes have heavily exaggerated rate estimates because of the small base population. A Bayesian model does not have the same issue, because the prior on population prevalence tempers the estimates from the small base populations.
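A minimal sketch of that effect (synthetic data, not the BDA3 dataset): raw rates from tiny populations swing wildly, while posterior means under a shared Beta prior get pulled toward the overall prevalence.

    import numpy as np

    rng = np.random.default_rng(2)
    true_rate = 0.01                                   # same underlying rate everywhere
    populations = np.array([50, 200, 1_000, 10_000, 100_000])
    cases = rng.binomial(populations, true_rate)

    raw_rates = cases / populations                    # plain per-area point estimates

    # Shared Beta(a, b) prior centered near the overall prevalence (in BDA3 the
    # prior strength is estimated hierarchically; fixed here for simplicity).
    a, b = 2.0, 198.0                                  # prior mean 0.01
    posterior_means = (cases + a) / (populations + a + b)

    for n, k, raw, post in zip(populations, cases, raw_rates, posterior_means):
        print(f"pop={int(n):>6}  cases={int(k):>4}  raw={raw:.4f}  posterior mean={post:.4f}")
    # The smallest areas produce the most extreme raw rates (often 0, or several
    # times the truth); their posterior means stay near 0.01 until the data are
    # numerous enough to overwhelm the prior.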

It's interesting that high p-values actually seem to more conclusively state something than low p values (like p < 0.05) do.

With a high p value, you can say with some degree of certainty that your test was unable to detect any effect - whether that was due to the lack of an effect or because your test wasn't capable of measuring it.

With a low p value, you don't actually really know if you detected something interesting. It could be due to a flawed test, biases, non-causal correlations, faulty p-hacky stats, etc.

So why do we consider the latter more worthwhile when it seems to say less?


Bayesianism makes the problem much worse. Prior-hacking is easier and harder to detect than p-hacking, and Bayesianism has no way to exclude noise results at all. I'm constantly baffled when people suggest it as a solution to these problems.

> Prior-hacking is easier and harder to detect than p-hacking

But that's comparing apples to oranges. Setting a reasonable prior is akin to frequentists interpreting the effect size (including its confidence interval) in light of deep domain knowledge. To produce a good analysis using either Bayesian or frequentist methodology (or to criticise such an analysis), you have to have deep domain knowledge. There's no getting around that, and arguably the use of p-values often lets you get away with shoddy domain knowledge.

> and Bayesianism has no way to exclude noise results at all.

This statement doesn't make any sense. Bayesian methodology has plenty of mechanisms for working with and controlling noisy data (obviously, since it's one of the two key paradigms in statistics, which as a field fundamentally deals with noisy data). The precise error rates and uncertainties that are calculated are usually different from what you would use in a frequentist analysis, but most people consider this a benefit of Bayesian analysis.


> To produce a good analysis using either Bayesian or frequentist methodology (or to criticise such an analysis), you have to have deep domain knowledge. There's no getting around that, and arguably the use of p-values often lets you get away with shoddy domain knowledge.

The whole problem we're facing is that it requires too much domain knowledge and detailed analysis to dismiss results that are actually just noise. The whole point of p-values is that they give you a way to do that without needing that complex analysis with deep domain knowledge - they're not a replacement for doing in-depth analysis, they're a way to cull the worst of the chaff before you do, the statistical-analysis equivalent of FizzBuzz. Bayesianism has no substitute for that (you can't say anything until you've defined your prior, which requires deep domain knowledge), and as such makes the problem much worse.


> (you can't say anything until you've defined your prior, which requires deep domain knowledge)

Well, you can use a non-informative prior. And that's the correct choice when you genuinely don't have a better option. But you should always be able to justify that, and that in turn requires deep domain knowledge....which leads me to....

> The whole problem we're facing is that it requires too much domain knowledge and detailed analysis to dismiss results that are actually just noise.

....this is in no way a "problem" that needs fixing, by allowing shortcuts that can easily be hacked. Rather, it's a factual statement about the difficulty of drawing correct conclusions, in low Signal-to-Noise-Ratio domains. Whether you use p-values or not, and whether you use Bayesian methodology or not, you cannot get around the need to understand the data you're working with. Bad p-values are worse than none, since you have no knowledge of what error rate they actually achieve in the long-run.

> Bayesianism has no substitute for that

Yes it does. It's called Bayes factors. But as I said above, I completely disagree with your view of what a p-value is for.


> Well, you can use a non-informative prior. And that's the correct choice when you genuinely don't have a better option.

At which point you've just found a more cumbersome way to do frequentist statistics. Frequentist tools aren't inconsistent with Bayes' law (they can't be, since both are valid theorems) - indeed one could say that the whole project of frequentist statistics consists of building a well-understood suite of pre-baked priors and computations that are appropriate to situations that are commonly encountered.

> ....this is in no way a "problem" that needs fixing, by allowing shortcuts that can easily be hacked. Rather, it's a factual statement about the difficulty of drawing correct conclusions, in low Signal-to-Noise-Ratio domains. Whether you use p-values or not, and whether you use Bayesian methodology or not, you cannot get around the need to understand the data you're working with.

Well, the fact is there are too many small-sample studies being produced for all or even most of them to be critically analysed by people with deep understanding. And maybe the right fix for the problem is to give the right incentives for that kind of critical analysis (e.g. by allowing that kind of analysis to count as research for the purposes of journal publications and PhD theses just as much as "the original study" does, given that a study without that kind of critical analysis cannot truly be said to represent advancing human knowledge). But if you just tell people to do Bayesian analysis instead of frequentist analysis then that's not going to magically create deep understanding - rather people will try to replace shallow frequentist analysis with shallow Bayesian analysis, and shallow Bayesian analysis is a lot less effective and more hackable.

> Yes it does. It's called Bayes factors.

But you still need a prior to compute a Bayes factor.


> At which point you've just found a more cumbersome way to do frequentist statistics.

Hmm, in one way, yes...but on the other hand, Bayesian posteriors are a lot more intuitive to interpret, for most people. So I think you trade one form of convenience for another. But as you sort of hint at, the results should usually be fairly similar, whether you're doing frequentist or Bayesian analysis. So in most cases, I doubt it matters that much. Where it does matter, is when you have grounds for strong priors, that you want to take advantage of. In such cases you can improve your chances of being correct in the "here and now", if you do a Bayesian analysis. Whereas a frequentist analysis is only concerned with the asymptotic error rates. (but of course frequentist vs Bayesian is also a ladder, rather than a black and white distinction)

> Well, the fact is there are too many small-sample studies being produced for all or even most of them to be critically analysed by people with deep understanding.

And this I totally agree with. If there's one thing I dislike about academia, it's the tendency to fund low-powered studies that get nowhere. Better to go all in, with sufficient support from experienced people, in fewer and bigger studies.


> So in most cases, I doubt it matters that much. Where it does matter, is when you have grounds for strong priors, that you want to take advantage of. In such cases you can improve your chances of being correct in the "here and now", if you do a Bayesian analysis.

I completely agree with this - but it's exactly this dynamic that I think, at least in the current academic environment, does more harm than good. Effectively it normalizes publishing a result that's not strong enough to swamp the prior, but where you have some detailed situational argument for why a different prior should be used here. We already get every social science paper arguing that they should be allowed to use a 1-tailed t-test rather than 2-tailed because surely there's no possibility that their intervention would do more harm than good, and you need to get into the details of the paper to see why that's nonsense; letting them pick their own prior multiplies that kind of thing many times over.


> letting them pick their own prior multiplies that kind of thing many times over.

I'm a big fan of sensitivity analysis in this context. Don't just pick one prior and call it a day, but show the effect of having liberal vs conservative priors, and discuss that in light of the domain knowledge. That gives the next researcher a much better foundation than a single prior, or a p-value, ever could.

Unfortunately, if it was a non-trivial paper to begin with, it now just turned into a whole book.


Bayesian reasoning has even worse underpinnings. You don't actually know any of the things the equations want. For example, suppose a robot is counting Red and Blue balls from a bin; the count is 400 Red and 637 Blue, and it just classified a Red ball.

Now what’s the count, wait what’s the likelihood it misclassified a ball? How accurate are those estimates, and those estimates of those ...

For a real world example someone using Bayesian reasoning when counting cards should consider the possibility that the deck doesn’t have the correct cards. And the possibility that the decks cards have been changed over the course of the game.


Huh? You can derive all of those from Bayesian models. If you're counting balls from a bin with replacement, and your bot has counted 400 Red with 637 Blue, you have a Beta/Binomial model. That means the posterior is p_blue | data ~ Beta(638, 401), assuming a Uniform prior. The probability of observing a red ball given the above p_blue | data is P(red_obs | p_blue) = 1 - P(blue_obs | p_blue), which is calculable from p_blue | data. In fact, in this simple example you can derive all of these values analytically, so you don't even need a simulation!

And if misclassification is a concern (as the parent mentioned) you can put a prior on that rate too!
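For concreteness, a short sketch of the conjugate update described above (assuming a uniform prior; a misclassification rate would add another parameter with its own prior, which is omitted here):

    from scipy import stats

    red, blue = 400, 637                              # counts observed so far

    # Uniform Beta(1, 1) prior + Binomial likelihood => conjugate Beta posterior.
    posterior_blue = stats.beta(1 + blue, 1 + red)    # p_blue | data ~ Beta(638, 401)

    # For a single next draw, the posterior-predictive probability of red is
    # just 1 - E[p_blue].
    p_next_red = 1 - posterior_blue.mean()

    lo, hi = posterior_blue.interval(0.95)
    print(f"Posterior mean of p_blue:          {posterior_blue.mean():.3f}")
    print(f"95% credible interval for p_blue:  ({lo:.3f}, {hi:.3f})")
    print(f"Predictive P(next ball is red):    {p_next_red:.3f}")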

Which rate? The rate you failed to mix the balls? The rate you failed to count a ball? The rate you misclassified the ball? The rate you repeatedly counted the same ball? The rate you started with an incorrect count? The rate you did the math wrong? etc

"Here's the experiment and here's the data" is concrete; it may be bogus, but it's information. Updating probabilities based on recursive estimates of probabilities is largely restating your assumptions. Black swans can really throw a wrench into things.

Plenty of downvotes and comments, but nothing addressing the point of the argument might suggest something.


> Which rate? The rate you failed to mix the balls? The rate you failed to count a ball? The rate you misclassified the ball? The rate you repeatedly counted the same ball? The rate you started with an incorrect count? The rate you did the math wrong? etc

This is called modelling error. Both Bayesian and frequentist approaches suffer from modelling error. That's what TFA talks about when mentioning the normality assumptions behind the paper's GLM. Moreover, if errors are additive, certain distributions combine together easily algebraically meaning it's easy to "marginalize" over them as a single error term. In most GLMs, there's a normally distributed error term meant to marginalize over multiple i.i.d normally distributed error terms.

> Plenty of downvotes and comments, but nothing addressing the point of the argument might suggest something.

I don't understand the point of your argument. Please clarify it.

> "Here's the experiment and here's the data" is concrete; it may be bogus, but it's information. Updating probabilities based on recursive estimates of probabilities is largely restating your assumptions.

What does this mean, concretely? Run me through an example of the problem you're bringing up. Are you saying that posterior-predictive distributions are "bogus" because they're based on prior distributions? Why? They're just based on the application of Bayes Law.

> Black swans can really throw a wrench into things

A "black swan" as Taleb states is a tail event, and this sort of analysis is definitely performed (see: https://en.wikipedia.org/wiki/Extreme_value_theory). In the case of Bayesian stats, you're specifically calculating the entire posterior distribution of the data. Tail events are visible in the tails of the posterior predictive distribution (and thus calculable) and should be able to tell you what the consequences are for a misprediction.


You don't find black swans from the data; you find them by building better models. You can look at 100 years of local flood and weather data to build up a flood assessment, but that's not going to include mudslides or earthquakes etc. The same applies to studies.

My point is this: you can't combine them using Bayesian statistics while adjusting for the possibility of research fraud; it's simply not in the data.

They're great for well-understood domains, less so for research. Frequentist models don't work, but they also don't even try.

PS: Math errors don’t really fall into modeling error.


I wouldn't think of Black Swan events as tail events, so much as model failures or regime-changes. As in, 'we modeled this as a time-invariant gaussian distribution, but it's actually a mixture model where the second hidden mode was triggered in the aftermath of an asteroid strike that we didn't model for, because of course we didn't.'

In re, the arguey-person you were responding to, frequentist modeling is just as bad or worse for these sorts of situations.


Frequentist modeling isn't useful, but that's not how studies are evaluated. Let's suppose you're looking at a bunch of COVID studies and you ask yourself: what if one or more of them was fraudulent?

Your investigation isn't limited to the data they provide; it's going to look for more information beyond the paper. This isn't a failure of frequentist models, because they evaluate the study and its output separately.


Ah yeah fair enough, I see what you mean. This is a general problem with all models though. Fundamental modeling issues will tank your conclusion.

Suppose the likelihood it misclassified a ball is significantly different from zero, but not yet known precisely.

If you use a model that doesn't ask you to think about this likelihood at all, you will get the same result as if you had used Bayes and consciously chose to approximate the likelihood of misclassification as zero.

You may get slightly better results if you have a reasonable estimate of that probability, but you will get no worse if you just tell Bayes zero.

It feels like you're criticizing the model for asking hard questions.

I feel like explicitly not knowing an answer is always a small step ahead of not considering the question.


The criticism is important because of how Bayes keeps using the probability between experiments. Garbage in Garbage out.

As much as people complain about frequentist approaches, examining the experiment independently from the output of the experiment effectively limits contamination.


Can't you just add that to your equation? Seems like for anything real, this will not go many levels deep at all before it's irrelevant.

Don't get distracted by the clickbait title. Effect size should be captured by statistical significance (larger effects are less likely to happen by chance). The author is really complaining that the original study didn't report enough data to check their analysis or run alternative analysis methods. A better title for the article would be "Hard to peer review when you don't share the data".

Note the point in the essay that statistical significance is meaningless if the model does not correspond to reality — which, in this case as in many, it very much does not.

A few years ago, HN comments complained about the censorship that only leaves successful studies. We need to report on everything we've tried, so we don't walk around on donuts.

What's missing, in my mind, is admitting that results were negative. I'm reading up on financial literacy, and many studies end with some metrics being "great" at p < 5%, but then some other metrics are also "great" at p < 10%, without the author ever explaining what they would have classified as bad. They're just reported without explanation of what significance level they would expect (in their field).


> ...so we don't walk around on donuts

I agree with what you're saying, but I don't understand this phrase.


I don't know where that turn of phrase comes from, but I imagine it's synonymous with 'walking around in circles'.

The phrase "walk around on donuts" has one Google result and it's this thread.

You know how sometimes you'll accidentally step on a donut and you'll have to call your dog over to lick all the jelly off your toes? That.

Not only is it not valuable to publish tons of studies with p=.04999 and small effect size, in fact it's harmful. With so many questionable results published in supposedly reputable places it becomes possible to "prove" all sorts of crackpot theories by selectively citing real research. And if you try to dispute the studies you can get accused of being anti-science.

Only a problem for people who are trying hard not to think. You can just ignore those people. They're not doing any harm believing their beliefs.

The USDA food pyramid and nutrition education would suggest that there's an inherent danger in just letting people believe irrational things after a correction is known. It depends on the belief - flat earth people aren't likely to cause any harm. Bad nutrition information can wreak havoc at scale.

Flat earth beliefs don't cause harm, but flat earth believers have largely upgraded to believing more dangerous nonsense.

Data or it didn't happen. This really sounds like you're inventing a caricature of your enemy and assigning them "dangerous" qualities so you can hate them more.

Nobody needs to caricature the insane beliefs surrounding COVID (or flat earth), people holding them are doing a good enough job of that themselves.

I do have a few favorites. "COVID tests give you COVID, so I won't go get tested" is certainly up there. I can't say I give two figs about your opinion on the Earth's topology, but this one is a public health problem, that's crippling hospitals around the country.


So it didn't happen?

Exactly - like the "Oklahoma horse paste overdoses overwhelming emergency rooms so gunshot victims can't get treatment" story: it validated all the caricatured biases against middle America as a bunch of ignorant redneck yokels. None of it was true, but people leapt on it because it resonated in that echo chamber.

We are literally in the middle of a global crisis that is founded on people misunderstanding science.

What on earth are you talking about? I guess climate change but that's certainly not founded on people misunderstanding science, it's caused by people understanding science which led to industrialization. Or maybe you mean covid-19? Neither that. You're just trying to make it seem like it's somehow very serious and bad if everyone doesn't agree with you. It's not.

I’ll presume you’re referring to everyone involved in the gain of function research that led to the virus.

I blame most of this on pop science. It's absolutely ruined the average public's respect for the behind the scenes work doing interesting stuff in every field. What's worse is the attitude it breeds. Anti-intellectualism runs rampant amongst even well educated members of my social circle. It's frustrating to say the least.

Some say that it is not anti-intellectualism to realize the emperor has no clothes but enlightenment.

Either way it’s dangerous.


It can be both, but you're absolutely right.

"Believe the science" vs. "understand the process". The former merely uses the language of science to gain legitimacy.

> Plus, the idea that we can remove such small, noisy confounding factors is just silly. We need to look for the things that stand out from that noise floor

We have found most of them, and all the easy ones. Today the interesting things are near the noise floor. 3000 years ago atoms were well below the noise floor, now we know a lot about them - most of it seems useless in daily life yet a large part of the things we use daily depend on our knowledge of the atom.

Science needs to keep separating things from the noise floor. Some of them become important once we understand it.


I don't think we have found most of them. I think we make it look like we've found most of them because we keep throwing money at these crap studies.

Bear in mind that my criteria are two-dimensional, and I'll accept either. By all means, go back and establish your 3% effect to a p-value of 0.0001. Or 0.000000001. That makes that 3% much more interesting and useful.

It'll especially be interesting and valuable when you fail to do so.

But we do not, generally, do that. We just keep piling up small effects with small p-values and thinking we're getting somewhere.

Further, if there is a branch of some "science" that we've exhausted so thoroughly that we can't find anything that isn't a 3%/p=0.047 effect anymore... pack it in, we're done here. Move on.

However, part of the reason I so blithely say that is that I suspect if we did in fact raise the standards as I propose here, it would realign incentives such that more sciences would start finding more useful results. I suspect, for instance, that a great deal of the soft sciences probably could find some much more significant results if they studied larger groups of people. Or spent more time creating theories that aren't about whether priming people with some sensitive word makes them 3% more racist for the next twelve minutes, or some other thing that even if true really isn't that interesting or useful as a building block for future work.


So 3% is not interesting but the difference between 10^-7 and 10^-8 probability that there is no effect is interesting somehow?

Meta analysis after enough small studies show the effect exists.

Individual atoms, or small numbers of them, may be beneath some noise floor, but not combined atoms.

A salt crystal (Lattice of NaCl atoms) is nothing like a pure gold nugget (clump of Au atoms).

That difference is a massive effect.

So to begin with, we have this sort of massive effect which requires an explanation, which is where atoms then come in.

Maybe the right language here is not that we need an effect rather than statistical significance, but that we need a clear, unmistakable phenomenon. There has to be a phenomenon, which is then explained by research. Research cannot be inventing the phenomenon by whiffing at the faint fumes of statistical significance.


> We have found most of them, and all the easy ones. Today the interesting things are near the noise floor.

The noise floor is not static. A major theoretical advance spurs an advance in instrumentation, which then supports more science. The hypothesis space is usually much larger than the data space, making the bottleneck theory, not data. The "end of progress" has been lamented again and again since before Galileo, only to be upended by a paradigm shifting theory that paved the way for lots of new science. Many of these theories were developed long after the data and instruments were available, and were produced with relatively simple data: Young's double slit experiment, Mendelian genetics, the photoelectric effect, Brownian motion, most of classical mechanics, quantum teleportation, BOLD MRI, etc.


Doesn't it make a difference if it's near the noise floor because it's hard to measure (atoms) or if it's near the noise floor because it's hardly there (masks)? Maybe if these "hardly there" results led to further research that isolated some underlying "very there" phenomena, they would be important, but until that happens, who cares if thinking about money makes you slightly less generous than thinking about flowers? If they're not building on previous research to discover more and more important things, then it doesn't seem like useful progress.

> or a p value of something like p = 0.0001

This has been proposed [0], albeit for a threshold of p < 0.005.

Here's Andy Gelman and others arguing otherwise [1]. They also got like 800 scientists to sign on to the general idea of no longer using statistical significance at all [2].

[0] https://www.nature.com/articles/s41562-017-0189-z

[1] http://www.stat.columbia.edu/~gelman/research/unpublished/ab...

[2] https://www.nature.com/articles/d41586-019-00857-9


Given the (estimated) number of scientists in the world and their general propensity to sign on to something… is 800 scientists a significant amount?

I don't know, and it's a fair point; I think I should have just summarised the follow-up article as "here's some follow-up by the authors for context"

This is clearly a cost/benefit tradeoff, and the sweet spot will depend entirely on the field. If you are studying the behavior of heads of state, getting an additional N is extremely costly, and having a p=0.05 study is maybe more valuable than having no published study at all, because the stakes are very high and even a 1% chance of (for example) preventing nuclear war is worth a lot. On the other hand, if you are studying fruit flies, an additional N may be much cheaper, and the benefit of yet another low effect size study may be small, so I could see a good argument being made for more stringent standards. In fact I know that in particle physics the bar for discovery is much higher than p=0.05.

What if it's the other way round and a p<0.05 study says that the best way to make sure a rival country does not do a nuclear strike on you first is to do a massive nuclear strike on them first?

Nothing is wrong with publishing small effect size results. Setting a lower p threshold or a higher bar for effect sizes for journal acceptance will just increase the positivity bias and also encourage more dodgy practices. Null results are important.

Understanding that effect size is as important as significance can manifest by requiring effect size or variance explained to be reported every time the result of a statistical test is presented - e.g. rather than simply "a significant increase was observed (p = 0.01)" - and also by making that kind of reporting the standard in scientific journalism.
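One way that kind of report could look, sketched in code (synthetic data; Cohen's d and a normal-approximation confidence interval are just one reasonable choice of effect-size summary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    control = rng.normal(10.0, 2.0, 500)
    treated = rng.normal(10.3, 2.0, 500)   # modest true difference

    t, p = stats.ttest_ind(treated, control)
    diff = treated.mean() - control.mean()
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    cohens_d = diff / pooled_sd
    se = pooled_sd * np.sqrt(1 / len(treated) + 1 / len(control))
    ci_lo, ci_hi = diff - 1.96 * se, diff + 1.96 * se

    # Report the magnitude and its uncertainty, not just the verdict.
    print(f"mean difference = {diff:.2f} (95% CI {ci_lo:.2f} to {ci_hi:.2f}), "
          f"Cohen's d = {cohens_d:.2f}, p = {p:.4f}")
    # ...instead of the uninformative "a significant increase was observed (p = 0.01)".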


If you were the king of science, I'd kindly ask you to think about replacing grant financing and all other financial incentives that go along with publishing. Now that would be efficient. 'Cause I currently make .05-barely-significant-results but if you force me to up my game I will provide .0001-barely-significant-results no problem, even with 'preregistration' or whatever hoop you hold in front of me.

As an aside, could you also please make medicine a real science, so I can finally scientifically demonstrate that my boss is wrong?


What do you (or anyone else) think about the statistical conclusions in this paper? Particularly the adjusted r-squared values reported.

https://www.cambridge.org/core/journals/american-political-s...


The current science economy around publishing is partially responsible, although it should also be said that finding no correlation is still a gain of knowledge that is valuable to build upon for people in the same field, even if it might not generate the most exciting read for others.

Maybe all studies should be preregistered, including their methods... like this one was?

https://osf.io/vzdh6/


I agree we shouldn't listen to noise, but small effect size is not necessarily noise. (I agree it is highly correlated.) I mean, QED's effect size on g factor is 1.001. QED was very much worth finding out.

p = 0.0001 doesn't help much. You can get to an arbitrarily small p by just using more data. The problem is trying to reject a zero width null hypothesis. Scientists should always reject something bigger than infinitesimally small so that they are not catching tiny systematic biases in their experiments. There are always small biases.

Gwern's page "Everything Is Correlated" is worth reading: https://www.gwern.net/Everything
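A quick illustration of the "arbitrarily small p with more data" point (synthetic data): a practically negligible difference eventually beats any p threshold, while the effect size stays negligible.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    tiny_effect = 0.01   # 1% of a standard deviation - practically negligible

    for n in [100, 10_000, 1_000_000, 10_000_000]:
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(tiny_effect, 1.0, n)
        _, p = stats.ttest_ind(treated, control)
        d = treated.mean() - control.mean()   # effect size in sd units (sd = 1)
        print(f"n per group = {n:>10,}   effect ~ {d:+.3f} sd   p = {p:.1e}")
    # The estimated effect hovers around 0.01 sd the whole time; only the p-value
    # changes, eventually blowing past p = 0.0001 even though the effect is
    # practically zero - which is why the systematic-bias point above matters.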


It would at least filter out the social science experiments where results on 30 college students is "significant" at p=.04 (and it's too expensive to recruit 3000 of them to force significance).

From the article:

Ernest Rutherford is famously quoted proclaiming “If your experiment needs statistics, you ought to have done a better experiment.”

“Of course, there is an existential problem arguing for large effect sizes. If most effect sizes are small or zero, then most interventions are useless. And this forces us scientists to confront our cosmic impotence, which remains a humbling and frustrating experience.”


Must be nice. Not everyone has the luxury of being able to carry out whatever experimentation they feel like. Sometimes we’re limited by what is affordable, practical, or ethical.

To take this further. Most science is a slave to grant funding. Grant funders like certain things and most of them are not biostatisticians.

That is not to say that hypercapitalism is the problem here. I think any competitive system even under socialism would have the exact same problem. Basically there are too many voices, and the ones winning are often cheating with bad statistics.


I think the weird thing is that a bunch of people in tech understand this well _with respect to tech_, but often fall into the same p-value trap when reading about science.

If you're working with very large datasets generated from e.g. a huge number of interactions between users and your system, whether as a correlation after the fact, or as an A/B experiment, getting a statistically significant result is easy. Getting a meaningful improvement is rarer, and gets harder after a system has received a fair amount of work.

But then people who work in these big-data contexts can read about a result outside their field (e.g. nutrition, psychology, whatever), where n=200 undergrads or something, and p=0.03 (yay!) and there's some pretty modest effect, and be taken in by whatever claim is being made.


I wonder if we should separate the roles of scientist and researcher. Universities would have generalist "scientists" whose job would be to consult for domain-specialized researchers to ensure they're doing the science and statistics correctly. That way, we don't need every researcher in every field to have a deep understanding of statistics, which they often don't.

Either that or stop rewarding such bad behavior. Science jobs are highly competitive, so why not exclude people with weak statistics? Maybe because weak statistics leads to more spurious exciting publications which makes the researcher and institution look better?


The scientific establishment will never be convinced to stop doing bad statistics, so "the solution to bad speech is more speech". Statisticians should be rewarded for effective review and criticism of flawed studies, and critical statistical reviews of any article should be easy to find when they exist.

This is sounding like a great startup idea for a new scientific journal, actually.


I do enjoy the idea of a journal focused entirely on the review of statistical methods and underlying methodologies applied in modern day research. Could act as a helpful signal for relevant and applicable research.

Just adding an Arxiv filter that allows me to set a minimum p-value or variation % would do it!

Every medical researcher I've worked with had a biostatistician on hand to handle the stats. As an aerospace engineer, I always had interesting discussions with them on the meaningfulness of a clinical study with 15 people, but I have come to appreciate the massive difficulty in progressing medical research if everybody were to wait for a clinical trial with 1000 patients.

There's no problem with an n=15 study; the problem is that there isn't a proper, effective process that aggregates these small studies and then designs and conducts n=1000 ones. What we have instead is academic peacocking. (Grant applications judged by other scientists who are themselves in the grant game.)

Of course this is somewhat a necessary consequence of having academic freedom.


Such staff scientist roles for people with particular methodological skills do exist. They are not particularly common, because there are a few issues:

1. Who will pay for them?

2. How do we make staff scientist roles attractive to people who could also get tenure-track faculty positions or do ML/data science in the industry?

3. How do we ensure that a staff scientist position is not a career dead end if the funding dries up after a decade or two?

The standard academic incentives (long-term stability provided by tenure, freedom to work on whatever you find interesting, recognition among other experts in the field) don't really apply to support roles.


We exclude people who don’t publish, and journals tend not to publish anything that isn’t a positive result.

These discussions are fun but rather pointless: e.g., sometimes a small effect is really interesting, but it needs to be pretty strongly supported (for instance, a claimed 1% higher electron mass, or a 2% survival rate for rabies).

Also, most published research is inconsequential, so it really does not matter other than the money spent (and that is not only related to findings but also to keeping people employed, etc.). If confidence in results is truly an objective, we might need to link it directly to personal income or loss of income, i.e., force people to bet on it.


If you have a tiny effect size on X, you probably haven't discovered a significant cause of X, but just something incidental.

For example, smoking was finally proved to cause lung cancer because the effect size was so large that the argument that 'correlation does not imply causation' became absurd: it would have required the existence of a genetic or other common cause Z that both causes people to smoke and causes them to develop cancer with correlations at least as large as between smoking and lung cancer, but there just isn't anything correlated that strongly. It would imply that almost everyone who smokes heavily does so because of Z.
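
For anyone who wants the quantitative version of that argument, the classic Cornfield bound says that a confounder Z which fully explains an observed relative risk must be at least that many times more prevalent among the exposed than the unexposed. A rough sketch with ballpark numbers (the ~10x relative risk for heavy smokers is an order-of-magnitude figure, and the prevalence of the hypothetical gene Z is made up):

    # Cornfield-style sanity check with illustrative numbers only.
    # If the association were entirely due to a confounder Z, then
    # P(Z | smoker) >= RR_observed * P(Z | non-smoker).
    rr_observed = 10.0            # ballpark relative risk, heavy smokers
    p_z_nonsmokers = 0.10         # hypothetical prevalence of Z in non-smokers
    p_z_smokers_needed = rr_observed * p_z_nonsmokers
    print(p_z_smokers_needed)     # 1.0: every single heavy smoker would need Z

which is exactly the "almost everyone who smokes heavily does so because of Z" absurdity.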


>Effect Size Is Significantly More Important Than Statistical Significance

Ok, but by how much?


You forgot "...and how often?"

Agree with the title, but not the contents. The study in question is actually an example of a huge effect size (10% reduction in cases just from instructing villages they should wear masks is amazing) possibly hampered by poor statistical significance (as the blog post outlines).

Without knowing how many people were wearing masks, you can’t say much about the 10% figure.

You get approximately[1] the same outcome if:

(a) masks are 100% effective but only 10% wear them, and

(b) masks are 10% effective and 100% wear them.

Is this study showing (a) or (b)?

Let us assume (b): masks only help by 10% and R0 is 2 without masks. If exponential transmission is occurring, then in ~11.5 days you have the same number infected with masks as in 10 days without masks.
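
For anyone wanting to check that arithmetic, here is a minimal sketch under the simplest possible assumptions (pure exponential growth with a fixed generation interval; the interval itself cancels out). It lands in the same ballpark as the figure above:

    # Cases after t days grow like R^(t/T) for a generation interval T.
    # Setting 2^(10/T) = 1.8^(t/T) gives t = 10 * ln(2) / ln(1.8); T cancels.
    import math

    days_without_masks = 10
    t = days_without_masks * math.log(2.0) / math.log(1.8)
    print(t)  # ~11.8 days to reach the same case count with masks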

Either way the study has ended up with a 10% figure, and that figure gets misunderstood or intentionally misrepresented. If you want to argue for the effectiveness of masks against those that don’t wish to wear them, then personally I think it is a terrible study to argue with because 10% sounds shitty.

[1] Actual numbers depend on a heap of other things, but just assume those figures are right for the sake of making things easy to understand.

Disclaimer: I wear a mask during Level 2 lockdown in the South Island of New Zealand, and mask wearing has no partisan meaning here AFAIK.


Also, the study, IIRC, found greater social distancing in the mask conditions, which leads to other possible explanations.

I wear a mask all the time and am happy to but I agree this study, while solid in some respects, is not exactly overwhelming in making a compelling argument for masks.


It should also be noted that the positive effect rose to 35% for people over the age of 60, who make up the overwhelming majority of serious Covid-19 cases. The omission of this important fact from the article leads me to question the motivation of the author.

HN comments are usually the time for spicy contrarian takes to OP, but this post is dead on.

Low effect sizes are often a code smell for scientific incrementalism/stagnation.


An investigator needs to rule out all conceivable ways their modeling can go wrong, among them the possibility of a statistical fluke, which statistical significance is supposed to take care of. So statistical significance may best be thought of as a necessary condition, but it is typically taken to be a sufficient condition for publication. If I see a strange result (p-value < 0.05), could it be because my functional form is incorrect? Or because I added/removed some data? Or because I failed to include an important variable? These are hard questions, not amenable to algorithmic application and mass production. Typically these questions are ignored, and only the possibility of a statistical fluke is ruled out (which itself depends on the other assumptions being valid).
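
As a toy example of the "failed to include an important variable" failure mode (made-up data, not any real study): regress an outcome on a variable that is merely correlated with the true cause, and you can easily get a tiny p-value for something with no causal role at all.

    # Omitted-variable toy: y is driven entirely by z, but x is correlated
    # with z, so regressing y on x alone yields a "significant" slope.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 5_000
    z = rng.normal(size=n)            # the real cause, left out of the model
    x = z + rng.normal(size=n)        # correlated with z, no effect of its own
    y = 2.0 * z + rng.normal(size=n)  # outcome depends only on z

    fit = stats.linregress(x, y)
    print(fit.slope, fit.pvalue)      # slope ~1, astronomically small p-value

The p-value happily rules out a fluke, but it says nothing about whether x actually does anything.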

David Freedman's Statistical Models and Shoe Leather is a good read on why such formulaic application of statistical modeling is bound to fail. [0]

[0]: https://psychology.okstate.edu/faculty/jgrice/psyc5314/Freed...


The studies are in villages, but the real concern is dense urban environments like New York (or Dhaka), where people are tightly packed together and at risk of contagion. I'm pretty sure masks make little difference in Wyoming either, where the population density is 5 people per square mile.

What's more important than population density is activity. A New Yorker who is mostly keeping to themselves and wearing a mask is unlikely to get the virus. A Wyoming native attending a church service maskless and singing indoors for an hour is more likely to get the virus.

> If most effect sizes are small or zero, then most interventions are useless.

But this doesn't necessarily follow, does it? If there really were a 1.1-fold reduction in risk due to mask-wearing it could still be beneficial to encourage it. The salient issue (taking up most of the piece) seems to be not the size of the effect but rather the statistical methodology the authors employed to measure that size. The p-value isn't meaningful in the face of an incorrect model -- why isn't the answer a better model rather than just giving up?

Small effects are everywhere. Sure, it's harder to disentangle them, but they're still often worth knowing.


> If there really were a 1.1-fold reduction in risk due to mask-wearing it could still be beneficial to encourage it.

That's understating it. The study doesn't measure the reduction in risk due to mask-wearing, but rather the reduction simply from encouraging mask-wearing (which only increases actual mask-wearing by a limited amount). If the study's results hold up statistically, then they're really impressive. With the caveat, of course, that they apply to older variants with lower viral loads than Delta; it's likely that masks are less effective against Delta simply due to its higher viral load.
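
A crude back-of-envelope, using figures quoted elsewhere in this thread and ignoring clustering, spillovers, and the change in distancing: divide the ~10% reduction in cases by the increase in actual mask-wearing (13.3% to 42.3%) to get a rough sense of the implied per-wearer effect.

    # Back-of-envelope only: scale the intention-to-treat effect by the
    # change in actual mask-wearing. Not the study's own analysis.
    itt_reduction = 0.10          # ~10% fewer cases from the encouragement
    wearing_control = 0.133       # proper mask-wearing, control villages
    wearing_treatment = 0.423     # proper mask-wearing, treatment villages

    implied = itt_reduction / (wearing_treatment - wearing_control)
    print(implied)                # ~0.34, i.e. roughly a 34% implied per-wearer effect

Take that number with a large grain of salt; it is only meant to show why the headline 10% understates the effect of masks themselves.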

> The salient issue (taking up most of the piece) seems to be not the size of the effect but rather the statistical methodology the authors employed to measure that size. The p-value isn't meaningful in the face of an incorrect model -- why isn't the answer a better model rather than just giving up?

Exactly. The irony of this article is that this is an example where effect size is actually not the issue. The issues are with statistical significance, due to imperfect modeling, and with other researchers being unable to rerun the significance analysis because the raw data weren't published.


I agree the problem here is an incorrect model. Masks do not act on seroprevalence. Measuring masks' effect on seroprevalence is just the wrong study design, although it may be easier to do.

Who cares if each effect is only a factor of 2^(1/100) improvement? Just give me 100 such interventions and I'll double the value being measured.

There’s a whole book about this idea, Antifragile by Nassim Taleb, highly recommended

The title is misinformation: effect size ISN'T more important than statistical significance.

The article itself makes some better points, e.g.

> I worry that because of statistical ambiguity, there’s not much that can be deduced at all.

, which would seem like a reasonable interpretation of the study that the article discusses.

However, the title alone seems to assert a general claim about statistical interpretation that'd seem potentially harmful to the community. Specifically, it'd seem pretty bad for someone to see the title and internalize a notion of effect-size being more important than statistical significance.


Not so fast. If you win your first jackpot on the first ticket, you'll require 500,000 failures (at $1 per ticket) before you fail to reject the null hypothesis at p < 0.05, assuming you're just doing a t-test (which isn't really appropriate, tbh).

If you bought just ten tickets you would have a p value below 0.0000001

And that makes sense, because a p value that small says the probability of getting a sample this far from the null hypothesis by random chance is less than 1 in a million... which is exactly what happened when you got the extremely unlikely but highly profitable answer.

edit: post was edited making this seem out of context...


Masks' effect size on seroprevalence is probably zero, so no effect is the expected result.

That's because masks act on R0, not on seroprevalence. After acting on R0: if R0 is >1, you get exponential growth; if <1, exponential decay. So there is no effect, unless masks are the thing that pushes R0 from above 1 to below 1.


Also, they aren't testing masking's effect on seroprevalence (or R0); they are testing the effect of sending out free masks and encouraging masking. That is only going to move the percentage of people masking by a few points at best.

The study says:

> The intervention increased proper mask-wearing from 13.3% in control villages (N=806,547 observations) to 42.3% in treatment villages (N=797,715 observations)

https://www.poverty-action.org/sites/default/files/publicati...


The better medical journals do stress the hazard ratio, the efficacy, and the confidence interval.

See the extract below from the NEJM: Seasonal Malaria Vaccination with or without Seasonal Malaria Chemoprevention

"The hazard ratio for the protective efficacy of RTS,S/AS01E as compared with chemoprevention was 0.92 (95% confidence interval [CI], 0.84 to 1.01), which excluded the prespecified noninferiority margin of 1.20.

The protective efficacy of the combination as compared with chemoprevention alone was 62.8% (95% CI, 58.4 to 66.8) against clinical malaria, 70.5% (95% CI, 41.9 to 85.0) against hospital admission with severe malaria according to the World Health Organization definition, and 72.9% (95% CI, 2.9 to 92.4) against death from malaria.

The protective efficacy of the combination as compared with the vaccine alone against these outcomes was 59.6% (95% CI, 54.7 to 64.0), 70.6% (95% CI, 42.3 to 85.0), and 75.3% (95% CI, 12.5 to 93.0), respectively."

https://www.nejm.org/doi/full/10.1056/NEJMoa2026330?query=fe...
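
As a reading aid for numbers like these (a generic conversion, not something taken from the paper, assuming the usual definition of protective efficacy as 1 minus the hazard ratio): the confidence interval converts the same way, with the bounds swapping because the transform is decreasing.

    # Generic hazard-ratio <-> protective-efficacy conversion.
    def efficacy_from_hr(hr, ci_low, ci_high):
        # efficacy = (1 - HR) * 100%; CI bounds swap under the transform
        return (1 - hr) * 100, (1 - ci_high) * 100, (1 - ci_low) * 100

    # A hazard ratio of about 0.372 (95% CI 0.332 to 0.416) corresponds to
    # the 62.8% (58.4 to 66.8) efficacy quoted above:
    print(efficacy_from_hr(0.372, 0.332, 0.416))  # ~ (62.8, 58.4, 66.8)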


But how much more important? :) Sorry, could not help myself.

If you have one BALB/c lab mouse, you give it something, and it glows in the dark a few months later, the effect size alone makes it significant.

There is no effect size if you have no control group, so you'd need two mice at the least.

Only if you have never seen a lab mouse before.


