Hacker News
[flagged] Chronic treatment with hydroxychloroquine and SARS-CoV-2 infection (medrxiv.org)
38 points by drocer88 3 days ago | 23 comments

"After adjustment for age, sex, and chronic treatment with corticosteroids and/or immunosuppressants, the odds ratio of SARS-CoV-2 infection for chronic treatment with HCQ has been 0.51 (0.37-0.70). Conclusions: Our data suggest that chronic treatment with HCQ confer protection against SARS-CoV-2 infection."

Assuming this really does help protect against one virus, I wondered: does that mean it would help protect against any other virus as well? Quite a few people have already been taking this drug long-term - do they tend to catch colds, the flu, or other virus-caused illnesses less often than other folks?


> After adjustment for age, sex, and chronic treatment with corticosteroids and/or immunosuppressants [...]

Interestingly, the summary doesn't mention controlling for the conditions for which individuals were being treated with HCQ, or for the impact those conditions had on their lives.

If people with Lupus are treated with HCQ (they are) and people with Lupus are often forced to make behavioral changes like staying indoors and out of the sun and avoiding physically demanding activities - which they are - doesn't it seem reasonable that those lifestyle changes would result in fewer SARS-CoV-2 infections?

FWIW, I'm very much open to the idea that HCQ may be a viable treatment for COVID-19. I've not kept up with the clinical results since it became a political issue, but it originally seemed to be a promising avenue of research. Once Trump started talking about it, it became all but impossible to find data about its effectiveness that wasn't obviously politically motivated.

This paper doesn't appear to be politically motivated... but it also doesn't appear to provide strong evidence one way or another.


I was also wondering about this. There's no discussion of how likely someone who knows about their condition, and is being treated for it, was to be exposed in the first place. If such patients had a lower likelihood of exposure to begin with, it would also look as if HCQ had the desired effect.

I am interested by a couple of their references, specifically regarding the possibility of HCQ only working as a prophylactic after a sufficiently long period of exposure. That could be a contributor to difficulty in replication.


A 2005 article (on an in-vitro study) is also relevant:

"Chloroquine is a potent inhibitor of SARS coronavirus infection and spread"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1232869/


Consider https://news.ycombinator.com/from?site=medrxiv.org

Of the nearly 160 posts from this domain, only three are flagged - three that, for various sociological and political reasons, have become toxic or divisive to a large degree. The content and conclusions themselves shouldn't be political, but that's where we are.


Articles like this don't need the "flagged" treatment.

HCQ studies need serious review and discussion.

The theory has always been that HCQ increases zinc levels in the cell, which blocks viral replication; it might have a small effect and might help with early infection.

Research like this does not need the heated politics.

We need good clinical trials and reproduced studies.


Agreed - and what does "flagged" on HN mean? :\

The FAQ and Guidelines don't explain it.

I was following this post to see if more knowledgeable people would comment on whether this study seems good or not (since I have no idea).


It means that enough people flagged it, so it got flagged.

The motivation for flagging is probably something along the lines of "this surely will promote poor quality discussion".

Less visibility means less discussion.


This hasn't even been peer reviewed yet. Don't expect that you, the average HN reader, knows better than reviewers with relevant backgrounds.

Peer review, while extremely useful in exposing some - or many - warts of a work, is not a substitute for quality control, which depends only on the authors.

> Don't expect that you, the average HN reader, knows better than reviewers with relevant backgrounds.

After what happened with the Surgisphere paper - where the first problem was the statistics (caught by post-publication comments), and only later the fabricated data came to light - the mere presence of a secret review does not mean the work is worth anything.

P.S.: yes, I work in the fields of biology and pharmacology and have written and reviewed a number of papers myself.


I still think it's a fair comment that most Hacker News readers are not the audience for pre-prints. It's simply not possible for people untrained in the art of reading papers in a specific field to make reasoned judgements.

Out of curiosity, do you happen to have handy the proportion of papers submitted for publication that do not pass peer review?

I've worked in the scientific publishing field (though, admittedly, as a lay person) and remain generally unimpressed by the process. At best, I tend to see a paper having been accepted for publication as a positive signal - certainly not as a clear indicator.


It depends highly on the journal. In my field (optics) for a 'high impact' journal (which a paper claiming to show a way to curb the covid-19 pandemic would certainly qualify for) the accept rate is about 10% after multiple rounds of review/revision.

After your paper gets rejected from a top journal you apply to a mid-tier journal (usually for reasons related to impact, but this is tied to your claims that are not watertight being struck out in the review) which has about a 50% accept rate after you remove the unproven claims.

Then when you don't get accepted there you go to a 'just publish my paper' journal, which comes in 2 flavors:

The respectable move is to publish in a technical archive (nature scientific reports is a good example of this) where you only have to satisfy that the work is technically sound, so you remove all of the unsubstantiated claims (title goes from 'we cured covid' to 'statistical analysis of covid patients--results inconclusive') and publish your measurements and methods.

The less respectable way out is to publish in a journal with a fast-track review process that just accepts anything as long as you pay the fees. Usually people in the field are smart enough to ignore papers in those journals, but bean counters and media aren't.


> the accept rate is about 10% after multiple rounds of review/revision

This is part of what I'm driving at here - "accepted for publication in this journal" is not the same as "peer reviewed". Broadly speaking when someone says that a paper has been peer reviewed, they only mean that a third party has examined the paper and certified that they did not find any significant issues with the methodology. The GP didn't say "this hasn't even been accepted for publication" - they said "this hasn't even been peer reviewed yet". Those are very different things!

I would see acceptance into a top-tier journal - or even a reputable journal - as a very, very strong signal.

I'm still curious what proportion of papers meet the bar for inclusion into a journal but are filtered out by the peer review process. I suspect it's vanishingly small.

> Then when you don't get accepted there you go to a 'just publish my paper' journal [...]

Right, which is why I always take the conclusions of a paper with a grain of salt inversely proportional to the prestige of the journal in which it was published. If I'm looking at a paper in Nature, I'm as confident as is reasonable that the conclusions of the paper are correct (or at least well-founded). If I'm looking at a paper that hasn't been published in a journal of any type, then I essentially discard their conclusion and consider the data and the author's analysis (with a healthy dose of skepticism).

In this case, even as someone who is definitely not a professional in the field, I identified what I believe to be glaring methodological errors: https://news.ycombinator.com/item?id=23690529


> I would see acceptance into a top-tier journal - or even a reputable journal - as a very, very strong signal.

Not anymore, I'd say. Both The New England Journal of Medicine and Lancet were fooled by the Surgisphere scandal, and they didn't even notice the statistical errors, let alone the fabrication of the data.

Lancet took years to remove the Wakefield paper on autism and vaccines, and still has other papers with questionable statistics up.

What I consider a strong signal is good data with clear conclusions and well-outlined limitations (these are often as important as the results themselves). Peer review can help with that, but it's not a magic bullet. In that sense top tier journals can be "dangerous" because sometimes they focus on good storytelling instead of presenting the data correctly.


At face value this is undeniably true, but:

1. it's less so the farther `you` gets from the average HN reader

2. it smacks of (politically) motivated reasoning. The point of posting anything on here is to open discussion on it, not to shut it down


There must be a prize for this kind of work.

"- 360,304 patients with suspected SARS-CoV-2 infection

- Of these 360,304 cases, 26,815 were confirmed by a positive PCR test. The rest had negative PCR tests.

- In the set of patients with case definition, 1,292 received HCQ (at least 2 grams per month)

- The proportion of HCQ chronic treatment was higher in negative patients (0.36% vs. 0.29%, P = 0.04)

- We were able to show that patients taking HCQ have had reduced odds of SARS-CoV-2 infection"

No joke, that is the basis for their claim. In effect: "among those who tested negative, there were proportionally more people receiving HCQ than among those suspected cases who tested positive." I don't understand how that can actually prove anything. One can surely find plenty of things that are proportionally more common among those who tested negative, without any of them meaning anything.
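For what it's worth, the figures quoted above are enough to sketch the crude (unadjusted) odds ratio. The counts below are back-of-the-envelope reconstructions from the stated percentages (0.29% of positives and 0.36% of negatives on chronic HCQ), not the paper's exact table; note the crude OR comes out much closer to 1 than the adjusted 0.51, which the authors obtained from a regression adjusting for age, sex, and co-medication:

```python
# Reconstructed counts (assumptions from the quoted summary, not exact data)
total = 360_304               # suspected SARS-CoV-2 cases
positive = 26_815             # confirmed by positive PCR
negative = total - positive   # the rest tested negative

hcq_pos = round(positive * 0.0029)   # positives on chronic HCQ
hcq_neg = round(negative * 0.0036)   # negatives on chronic HCQ

# Crude odds ratio of infection, HCQ vs. no HCQ
odds_hcq = hcq_pos / hcq_neg
odds_no_hcq = (positive - hcq_pos) / (negative - hcq_neg)
crude_or = odds_hcq / odds_no_hcq
print(f"crude OR = {crude_or:.2f}")
```

The sanity check works: hcq_pos + hcq_neg lands within a few patients of the 1,292 HCQ users the paper reports.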


That's how screening for drug treatments often works. You look for those kinds of correlations. You move on from there. With many drugs, we don't even know how or why they work for certain things, but we prescribe them anyway.

There's nothing wrong with the claim; it seems to be factually accurate. The authors are not claiming that HCQ is an effective treatment - they say the data suggests that HCQ is protective.


> they say the data suggests that HCQ is protective.

Only, I fail to see that the data does that. And their explicit claim is, which I specifically quoted directly from the paper:

"We were able to show that patients taking HCQ have had reduced odds of SARS-CoV-2 infection"

It's no different from saying "there were proportionally more people who believe in Santa Claus among those who tested negative, so belief in Santa Claus reduced the odds of SARS-CoV-2 infection."

It just doesn't.

Even if A is true, in such a construct B doesn't follow. As was already noted, those being treated could also have been less willing to do risky things. Or it could have been mere coincidence, as in the Santa Claus example.


> It's not different than "there were more people believing in Santa Claus proportionally in those negatively tested, so that means that Santa Claus belief reduced odds of SARS-CoV-2 infection."

If people who believe in Santa Claus have a 0.5 hazard ratio of contracting HIV, that suggests belief in Santa Claus is protective against HIV. If we swap "belief in Santa Claus" for some random drug, it would be the same - you just couldn't as obviously tell that it is implausible.

Just because data suggests one thing or another doesn't mean that whatever the data suggests is true. It's still just a correlation, but you always start with a correlation.
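As a toy illustration of why a crude association can't distinguish protection from confounding, here is a hypothetical simulation (all numbers invented, no connection to the paper's data): "believers" have no direct protection at all, they are simply half as likely to be exposed, yet the crude odds ratio still comes out well below 1:

```python
import random

random.seed(0)

# Hypothetical: belief has NO causal effect on infection, but
# believers are exposed half as often (a lifestyle confounder).
n = 200_000
inf_b = inf_n = tot_b = tot_n = 0
for _ in range(n):
    believer = random.random() < 0.1
    p_infection = 0.05 if believer else 0.10  # believers go out less
    infected = random.random() < p_infection
    if believer:
        tot_b += 1
        inf_b += infected
    else:
        tot_n += 1
        inf_n += infected

odds_b = inf_b / (tot_b - inf_b)
odds_n = inf_n / (tot_n - inf_n)
print(f"crude OR = {odds_b / odds_n:.2f}")  # well below 1, with zero causal effect
```

A study design can only break this symmetry by adjusting for the confounder (exposure) or randomizing the "treatment", which is exactly what an observational screen like this one cannot do.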

You're simply not used to the jargon but feel entitled to criticize the authors for just doing their work. Please don't do that.


I claim that their statement

"We were able to show that patients taking HCQ have had reduced odds of SARS-CoV-2 infection"

is not true. It's not a matter of jargon; it's a false statement given the data they presented. To "show" that the "odds" are reduced relative to something, they have to establish the baseline odds of that something. I don't see them doing that at all, or that their paper actually demonstrated "odds."


I just realized you're the guy from the other thread. You have a rather hostile way of interpreting other people's writing.

Please, consider:

https://en.wikipedia.org/wiki/Principle_of_charity


Their prize is being promoted all over the internet by cheeto licking covid truthers.


