I'm really sick of guys like Eric Topol who are basically covid influencers, because they love to post studies but seem completely unable to critically examine any of them. Pre-covid I thought that kind of scrutiny was what scientists did; if they still do it, they're not doing it in the public eye at all.
The last chart in the first study section is really quite special. The authors seem to have normalized a bunch of outcome measurements in units of standard deviations and then sorted them by something resembling the average differences, but they couldn’t be bothered to normalize the sign of the measurements? If you’re trying to tell me that 30-ish scores are all Gaussian enough to be worth expressing in standard deviation units but that they don’t even all agree as to whether large or small numbers are good, I’m suspicious of the normalization procedure. (If “accuracy” means what I think it does, it’s obviously not even close to Gaussian: 90% might be good, 100% is perfect, and 101% is not “just z standard deviations better.”) There are decent techniques to deal with this (e.g. nonparametric models), but blindly normalizing, sorting, and adding a little caption to indicate that you did in fact notice that a bunch of the tests are backwards is not one of them.
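For concreteness, here’s a minimal sketch of what I mean (Python, with made-up function names and fabricated toy numbers, not anything from the paper): at minimum you flip the sign of the “lower is better” measures so every score points the same way before standardizing, and for bounded scores like accuracy a rank-based effect size such as Cliff’s delta avoids the Gaussian assumption entirely.

```python
# Hedged sketch, not the authors' pipeline. All data below are fabricated.
import numpy as np

def standardized_diff(covid, control, higher_is_better=True):
    """Cohen's-d-style difference in pooled-SD units, sign-aligned."""
    covid, control = np.asarray(covid, float), np.asarray(control, float)
    pooled_sd = np.sqrt((covid.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (covid.mean() - control.mean()) / pooled_sd
    return d if higher_is_better else -d   # flip so negative always means "worse"

def cliffs_delta(covid, control, higher_is_better=True):
    """Nonparametric alternative: P(covid > control) - P(covid < control).
    No Gaussian assumption, so it is safer for bounded scores like accuracy."""
    covid, control = np.asarray(covid, float), np.asarray(control, float)
    diffs = covid[:, None] - control[None, :]
    delta = np.sign(diffs).mean()
    return delta if higher_is_better else -delta

# Toy usage: an accuracy score (higher is better) and a reaction time (lower is better).
rng = np.random.default_rng(0)
acc_covid, acc_ctrl = rng.uniform(0.80, 1.0, 200), rng.uniform(0.85, 1.0, 200)
rt_covid, rt_ctrl = rng.normal(620, 80, 200), rng.normal(600, 80, 200)

print(standardized_diff(acc_covid, acc_ctrl, higher_is_better=True))
print(standardized_diff(rt_covid, rt_ctrl, higher_is_better=False))
print(cliffs_delta(acc_covid, acc_ctrl, higher_is_better=True))
```

The point isn’t this particular code; it’s that “which direction is good” has to be baked in before you sort and chart anything, and that SD units are only meaningful for scores that are roughly Gaussian in the first place.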
I hoped the article would make some mention of how believable the results were, but no.
(Also, maybe mention that the pre-Omicron strains are not much of an ongoing risk and that those results, while potentially interesting, are probably unhelpful for informing future policy decisions?)
edit: It seems worse than this. Quoting the first paper:
> This invited subsample comprised participants who reported positive results on a SARS-CoV-2 test or who suspected that they had had Covid-19 and whose symptoms persisted for at least 12 weeks; participants who, as part of the REACT study, either had a positive result on a polymerase-chain-reaction (PCR) test for SARS-CoV-2 or were unvaccinated and had a positive test for SARS-CoV-2 IgG antibodies on an at-home lateral flow immunoassay device; and participants who were randomly selected from the remaining REACT study population.
Eric Topol calls this a “prospective” study, although the study, fortunately, does not advertise itself as prospective. This is a retrospective study with an obviously biased study population. And the >12-week-symptoms group contains self-selected participants who may never have even had COVID!
Getting sick for 12 weeks sucks. Finding a detectable effect on an intelligence test in that group should not be remotely surprising. Going from that to anything that should inform COVID policy seems like quite a leap.