Another odd thing: one of the co-authors of the study, Andrew Bogan, appears to manage a hedge fund, and he then published a WSJ op-ed on the study. He didn't note in the op-ed that he was involved in the study (as a co-author, no less!). There's a lot of potential for conflict of interest here. These authors seem to have an agenda and an end result in mind, no matter the data.
The more I think about this, the more outrageous I find it. It's a form of astroturfing to advance the beliefs of the study's authors. (Note that the senior author Bhattacharya, and of course Ioannidis, were advancing this theory before data collection began. This means their analysis deserves even more scrutiny.)
> Declaring a hypothesis publicly and then gathering data to see if it’s supported or not is good science
Not for something like this. This study isn't about running an experiment to see if treatment A or B is better. It's an observational study that is supposed to determine the prevalence of SARS-CoV-2 infection in the population. There is no hypothesis here; that is to say, there shouldn't be anything to prove. Whatever answer you measure, you measure (ignoring any other data collection issues).
But if you declare “I think the rate of infection is X”, and you end up with a rate of X, then no one knows if you’ve really found the rate to be X or if you put your thumb on the scales.
This should have been a fact finding mission to establish some of the basic numbers that will be needed to design the next wave of experiments or to guide policy.
Very beautifully said. I always believe that bad scientists will say with certainty that "this is the reason." Good scientists will say it's probable; they usually don't offer certainty. Policymakers want certainty, which is why there's always a tiff between good scientists and policymakers. My principle: when we are not sure of the data, simple models usually beat complex models.
Is it though? Why declare publicly at all then? You’d just be giving the impression that when you do collect the data you’d be cherry picking, exactly like what’s happening here.
> It’s widely accepted as best practice by mainstream scientists.
Simply declaring your hypothesis is not best practice. Publishing the details of your experiment--the protocols, analytical methods, etc.--is what's best practice, AFAIU. Few scientists run an experiment they expect to fail, and without publishing the details ahead of time it's trivial to fudge an outcome, so simply pronouncing your hypothesis is entirely uninteresting.
This is one of the sources of the replication crisis. People would run experiments to test a hypothesis but find nothing conclusive; they would then p-hack their data until they found something worth publishing and back-fit a hypothesis to it.
The solution is to publish the hypothesis and test methodology ahead of time and publish your data afterward.
> The process of the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments or empirical observations based on those predictions.
They made a hypothesis and conducted a study to validate it. Perhaps they are wrong. Would be far from the first time studies are proven wrong. At least they did something. Make a better study. Make a hundred better studies. Prove them wrong!
I think the user is referring to the conflict of interest and the lack of interest in proper ethical conduct by one of the co-authors. If they won't follow generally accepted ethical guidelines, what other rules will they ignore or bend?
The "Note that the senior author Bhattacharya and of course Ioannidis were advancing this theory before data collection began" part is not. That's how science is done: you advance a theory, then you conduct experiments to validate or invalidate it.
I think there's a difference between postulating a hypothesis, and then testing it, which is what I believe you're referring to, versus going around advocating a particular position as being true and then "testing" it.
When people are strong advocates of a particular position, they tend to find ways to design or process experimental data in ways that support that position. The ways in which bias can seep into the experiment are not always easy for outsiders to see.
So it comes down to whether their "advancing this theory" went beyond what a reasonable scientist would do in postulating a hypothesis that needs testing.
It's one thing to advance a theory in academic papers during normal times. It's another to pen widely-read op-eds advancing said theories in the middle of a pandemic. Their theory and op-eds have already influenced, and are still influencing, policy - and it's clear they want to influence policy. Hence, it merits more scrutiny than the average study. These results are being used by policymakers to make decisions - skepticism and criticism are most definitely warranted.
I'm in no position to conduct studies, though I am certainly doing things in my wheelhouse to help. Other studies will clear up matters, but results may be weeks away. And these results may be used to end quarantines in the intervening time.
It's not just a matter of conducting a study and "perhaps they are wrong". If it can be shown that their study made unreasonable assumptions, and there was a conflict of interest, then any conclusions drawn from the study need to be taken with a grain of salt. When a primary researcher in the study has a financial interest in a specific outcome, it is at best a poorly designed study, and should be suspect. When the author has, prior to actually collecting data, advanced a desire to influence policy as if the results were a foregone conclusion, that is a problem.
Many of the people who read the Wall Street Journal love those articles. There are 1,400 comments.
I read the Journal every day. News like the Stanford study is weaponized.
They tend to do hit pieces on climate change too.
Here’s a recent WSJ comment:
“ USC just replicated the Stanford study proving that the COVID death rate is 0.1% - 0.2% and the true infection rate is 50x the official numbers.
This means every model has been wrong, lockdowns were foolish, and this is a bad flu.
In light of this new information I don't understand why the narrative hasn't immediately shifted to an unconditional opening of the economy everywhere. Please, enlighten me.”
EVEN IF the rate is 0.1%, this virus spreads much faster than the flu because many carriers have no symptoms. We still need to flatten the curve to prevent overwhelming the hospitals.
The WSJ's opinion page is mindbogglingly bad. Their pro-Trump stance borders on worship, and almost every article is dedicated to either promoting Trump or defending one of his positions (in this case, lifting lockdowns).
The comment section is even worse. Breitbart-esque.
It's hard to find a consistently reliable and credible source these days. Last year I switched from an FT subscription to Bloomberg after the FT moved all the adult content (non-blog type articles from senior staff that aren't just reiterating a liberal Twitter consensus) to a much more expensive subscription level. But Bloomberg has too much content. I can barely wade through it and don't know where to start. I'm not a trader--I just want an objective, consistent, and timely eye on global political-economic happenings beyond what comes off the AP wire. sigh Maybe I should go back to the FT.
A comment from Australia's previous Prime Minister, who led a government on the right, on Murdoch's newspapers https://theconversation.com/e-136746:
> News Corp “I think was well described as ‘a political organisation that employs a lot of journalists’”; The Australian “defends its friends, it attacks its enemies, it attacks its friends’ enemies, and the tabloids do the same.”
> Turnbull says his fatal flaw, according to media barons, was not that he was “too liberal” but his lack of deference and his personal wealth, because all the billionaires liked a politician who depended on them.
That last statement has all the hallmarks of a conspiracy theory - no evidence, yet so appealing to the intellect.
For what it's worth, the rest of their reporting is still fairly high quality; from what I understand, newsrooms and editorial boards are entirely separate entities at most news companies.
If you're looking for deeper analysis-type articles, I would recommend The Economist--they only publish once a week, but they provide broader views on political and socioeconomic trends that daily papers lose amongst all the details. (They are owned by several rich families including the Rothschilds, though, and some of their articles have a globalist bias.)
I use the Bypass Paywalls extension[1] to read FT premium articles; the daily "US markets open higher/lower/flat on <insert post-rationalization here>" on the FT's front page does get tiring.
I like the Economist, but the Economist isn't timely. If you pick a dozen of the best political-economic reporters, especially the ones following the scholarship, aggregate their articles, and connect the threads, you've got an Economist issue 6 months to 2 years before the Economist does, except the Economist articles will elide all the messiness so that the concepts seem more definitive.
Quote from ad: "We are looking for participants to get tested for antibodies to COVID-19."
A quote from someone who participated in the study:
"I participated in the study because I had been sick the week before and was very curious. In the intake questionnaire they asked if I had recent symptoms. I'm unpleasantly surprised that they seem not to have made an effort to use that data to unbias the study."
"I was part of this study and that is totally why I signed up! People I talked to who tried to sign up had similar reasons. Lots of subjects at the testing site wearing masks, more than you see at the grocery, more evidence that a lot of us were more conscious about transmission"
The symptoms of COVID range from a cold to a flu (and those who are ineligible for viral RNA test kits have no way to tell the difference). If the selection is biased, it is biased toward people who felt cold or flu symptoms in the past few weeks, which is a huge number of people.
So, I'm not getting why selection bias would matter here. If they are counting the number of participants who were positive for the antibodies out of the total number of residents in the county, what's the big deal?
Why would they ask all the healthy people to come get tested? Just assume that all the untested people are negative for the antibodies.
Otherwise you would get a higher infection rate count, which would of course result in a lower mortality rate for the disease.
If the sample was biased, then the results cannot be generalized to the whole population. The purpose of this study was not to find positive cases to treat, it was to estimate the prevalence in the general population.
You're misunderstanding the methodology. They tested a sample and multiplied to get the overall infection rate, under the assumption that their sample was representative.
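Concretely, the extrapolation step looks something like this. A sketch in Python; the 50-positives-of-3,330-tested counts and the ~1.9M county population are approximate figures, used here only to illustrate the multiplication:

```python
# Naive extrapolation: scale the sample's positive rate up to the county.
# Only valid under the assumption that the sample is representative;
# selection bias (e.g. recently-symptomatic people signing up) inflates it.
def extrapolate(positives: int, tested: int, population: int) -> float:
    return positives / tested * population

# Approximate figures: ~50 positives of ~3,330 tested,
# Santa Clara County population ~1.9 million.
print(int(extrapolate(50, 3330, 1_900_000)))  # ≈ 28,528 implied infections
```

The entire headline number rests on that one multiplication, which is why representativeness of the sample matters so much.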
This analysis is pretty damning, and is more credible in context (the topline findings of the original study constituted extraordinary claims, which, if extrapolated, could imply that a majority of all New Yorkers had C19 antibodies).
Some of the underlying ideas here are pretty straightforward. For instance, even with 90+% specificity, if the expected number of false positives exceeds the number of true positives in the sample (as can happen even with good tests when the underlying condition is rare, as it is with C19), you're going to have problems.
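The base-rate effect is easy to see in numbers. A sketch; the sensitivity and specificity values below are illustrative, roughly in the range discussed for tests like this, not the study's exact figures:

```python
# Expected true vs. false positives in a sample, showing how false
# positives can rival or exceed true positives when prevalence is low.
def expected_positives(n, prevalence, sensitivity, specificity):
    true_pos = n * prevalence * sensitivity
    false_pos = n * (1 - prevalence) * (1 - specificity)
    return true_pos, false_pos

# Illustrative: 3,330 tested, 1% true prevalence, 80% sensitivity,
# 98.5% specificity.
tp, fp = expected_positives(3330, 0.01, 0.80, 0.985)
print(round(tp, 1), round(fp, 1))  # ≈ 26.6 true vs. 49.5 false positives
```

With these plausible inputs, false positives nearly double the true positives, so the raw positive count says very little about prevalence.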
Other antibody studies have consistently found similar results to the Stanford one. The Stanford numbers are very high - most have found closer to 20-30x reported cases than 50-85x - but it's virtually guaranteed that a huge fraction of New Yorkers have C19 antibodies today. I'd be pretty surprised if it's less than 20%.
We don't know if these antibody tests are specific for covid-19 only or in fact for any coronavirus. We don't know if they randomly test positive either due to something else. There are a lot of unknowns. These antibody tests are not FDA approved.
They validate the antibody tests against blood collected before the outbreak. If the tests detect other coronaviruses, they should get positive tests from blood that existed before SARS-CoV-2.
Yes, there are still unknowns; that's the point of science. FDA approval is irrelevant.
We lost a month at the beginning of this pandemic waiting for FDA approval of the tests confirming coronavirus was spreading in the US. I am unconvinced that this approval offered any value at all, and we can't afford to lose another month waiting to see how far it's spread.
Your statement amounts to "we have to do something. This is something. Therefore we have to do it."
Doing the wrong thing can be extremely dangerous in this situation. Consciously choosing to wait until you have greater certainty that the action taken is correct isn't the same as inaction, not when the downside risks of getting it wrong are so high.
I disagree that theorizing and gathering data can be extremely dangerous in this situation. If we're worried the Stanford study is flawed, the right response is to urgently do more studies, not sit on our hands waiting for some ponderous FDA approval process.
There's a certain limited amount of qualified research capacity representing a bottleneck. So, urgently doing more studies to refute one that has flaws and bias by the author isn't the right type of action to take if it will take resources from other, more productive endeavors.
Theorizing and gathering data is fine; going on to write a WSJ op-ed pushing it in a way you hope will influence policy goes a far step beyond proper scientific research protocols.
Standard scientific research protocols are heavily biased towards inaction. We can't afford that bias in the middle of a global emergency; it matters a lot if necessary action gets delayed for a month or even a week. Imagine how bad things would have gotten if we had made the same demands a month ago - if we had refused to institute social distancing until randomized controlled trials proved it helped for this disease.
There was no need to delay social distancing on account of scientific uncertainty over the practice. We already knew social distancing was an effective method of preventing spread. Heck, we've known it since the 1918 Spanish flu - since quarantines themselves began. (And still, it was delayed, and still is delayed in some places.)
The point is that there is a resource bottleneck on "action". There are limited resources. You qualified your own statement as necessary action. How do you know what is necessary? Options must be weighed, the most promising chosen, as many, but not all paths can be pursued. I'm not saying "do nothing", I'm saying that we can't do everything. And then, yes, in areas of great uncertainty, where wrong action can cause more harm than choosing to wait, then we should not take action merely for the sake of "we can't do nothing!!!" emotional response to the crisis.
The linked article is the source. The analysis in the linked article placed the specificity within a range of "they could all be false positives" to "they might mostly be legitimate results".
I've seen it mentioned in a few articles criticizing unapproved tests, but it was more about the potential for this than actual testing. I think one was on CNN, but a quick search doesn't turn it up. There's just too much noise in the news-sphere right now to easily find a specific article from a week or two ago if you don't remember something ultra-specific about it. You get, um, lots of false positives in your search results ;)
They are! I don't mean to deny that, there's a large discrepancy which Stanford and other researchers in Santa Clara County should work to resolve. But when newspapers and governments report numbers that are off by at least a factor of 20, a factor of 2-4 isn't the most pressing concern.
They report people who tested positive as cases. The testing is quite limited and almost never done for people without symptoms. It is typically done for people with strong symptoms. Existence of asymptomatic infected people is widely known at this point.
So I would expect underreporting of cases. The additional 2-4x factor from a study that is supposed to figure out underreporting is actually a big deal.
Because reported cases were never assumed to be everyone sick (or infected), and I've never seen anyone pretend they represent all infected people. Meanwhile, this study presents itself as an estimate of immune people in the population.
I've never seen anyone assert the specific claim "reported cases capture everyone infected". But I've regularly seen people claim that we know it's a very deadly disease because the number of deaths is 1-2% the number of reported cases, or claim that there's some burden of proof which other studies have to give to justify deviating from the reported count.
This is why you don't compare the current situation to the known fatality rates of another pandemic for which the dust has settled and we already know everything. A better (not perfect) method is the CFR, where you compare the current CFR to the CFR of the other pandemic while that pandemic was ongoing.
If we take something like H1N1, the CFR during that pandemic was significantly lower than what we face right now [0] and was highly dependent on healthcare infrastructure. We're seeing CFRs, e.g. in NYC, far in excess of those seen in even the worst-hit places with H1N1.
1.) The case fatality ratio (CFR), i.e. the number of deaths per the number of reported cases, is NOT 1-2%. And pretty much never was. It was much higher.
2.) We knew it is deadly from observing meltdown in Italy and from observing China. Currently from observing New York.
3.) When you complain that reported cases "underreport" what exactly are you claiming? And what exact unfair burden of proof is there on this scientific study?
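On point 1, the crude arithmetic is easy to check. A sketch in Python, using rough US figures from around late April 2020; the numbers are approximate and for illustration only:

```python
# Crude CFR = confirmed deaths / confirmed cases.
# Rough, illustrative US figures from late April 2020.
reported_cases = 800_000
reported_deaths = 45_000
cfr = reported_deaths / reported_cases
print(round(cfr * 100, 1))  # ≈ 5.6%, well above the 1-2% sometimes quoted
```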
If the point is that cases are underreported, there isn't much of a difference between 20x, 30x, and 80x. The post was likely emphasizing that other studies have found even crazier numbers so 20-30x isn't unreasonable.
Many of the other studies have also been with populations that had relatively low numbers infected. Fortunately New York is about to do broad antibody testing themselves, so we should get better answers soon.
Almost every claim that's ever been made about the coronavirus hasn't passed peer review and has been criticized heavily. If you only accept non-controversial peer-reviewed information, you're pretty much going to be stuck with "COVID-19 is a new pandemic respiratory virus".
As flawed as this is, it's in line with other studies around the world. You can nitpick and critique each one for something, but we now have a whole body of evidence using different techniques and different methods that are all stating the number of cases is vastly undercounted, and the IFR is under 1%.
There's a world of difference between an IFR of "under 1%" and a claimed IFR of 0.1%.
It's been clear since the Diamond Princess that IFR would end up, very roughly, around 0.5%. But 0.1% is further outside the reasonable range than even 1%, because 0.1% has already been entirely excluded by the number of confirmed COVID-19 deaths in NYC.
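The NYC exclusion argument is a one-line back-of-envelope calculation. Figures are approximate as of late April 2020 and labeled as such in the code:

```python
# Lower bound on IFR: even if literally every NYC resident had been
# infected, deaths / population already exceeds 0.1%.
# Approximate figures, late April 2020: ~8.4M residents,
# ~10,000 confirmed COVID-19 deaths.
nyc_population = 8_400_000
nyc_deaths = 10_000
ifr_floor = nyc_deaths / nyc_population  # IFR at 100% infection
print(round(ifr_floor * 100, 2))  # ≈ 0.12%
```

Since nowhere near 100% of NYC was infected, the true IFR must be well above this floor, which rules out 0.1%.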
I'm curious how all of these people claiming that Covid is at 0.1% or something close reconcile the fact that it has been the leading cause of death in the United States for multiple days ... and that's without most parts of the country having reached their peak.
I think NYC skews it. NYC, which has a loose definition of COVID-19 deaths, is responsible for half of the US deaths. There's also really no other metro area in the US which rivals NYC in population density. After all of this, I think we're going to find the IFR for NYC to be closer to 1%, while the rest of the country is under 0.5%.
The critique is the same for all papers and systematically impacts all estimates, so seeing more consistent results doesn't invalidate the criticism.
That said, it's quite possible the number of cases is undercounted! Someone should do a meta-analysis to obtain a better estimate, given the testing uncertainty.
No professional was ever under the delusion that the IFR was >= 1%. Perhaps you meant CFR[1], which by definition does not reflect total number of infections. See, e.g., https://en.wikipedia.org/wiki/Case_fatality_rate
[1] Where 'C' means "case" or "confirmed", which in the context of medical terminology and despite some ambiguity categorically is not synonymous with infected.
Anthony Fauci has stated multiple times in the past that he thought the "fatality rate" of Covid-19 would end up "around 1%." He didn't make a distinction between CFR and IFR, but the general public's understanding of "fatality rate" is obviously that of IFR.
> but the general public's understanding of "fatality rate" is obviously that of IFR
Looking at my Nextdoor group, I'm going to disagree with you on this. There are a lot of people screaming, "4.7% of people will die!" Even for a CFR, that's high.
Specifically, the Imperial College model with the "two million deaths" conclusion assumed an IFR of 0.9% (adjusted for age distribution) in every scenario. Maybe not quite a delusion, but the only IFR number they modeled is starting to look pretty close to one.
The derived IFR is still <1% (0.66%), but I'll admit it seems unjustifiably high given the inconsistent reporting out of China. It would have been better characterized, at least qualitatively, as somewhat pessimistic, given all the assumptions.
Ah right, the 0.9% is after adjusting for the age distribution of Great Britain:
> These estimates were corrected for non-uniform attack rates by age and when applied to the GB population result in an IFR of 0.9% with 4.4% of infections hospitalised (Table 1).
I believe that originally the WHO, albeit based on incomplete and confusing data out of China, said it was 3-something percent. It got corrected pretty quickly as the situation became clearer.
However, the press keeps reporting the CFR as the "death rate" without clarifying the definition or the distinction from the IFR. That then gets amplified on social media, and it's become a shit show. "Dr. X [who's really a chiropractor] said on his YouTube video that 5% of people will die!" Even worse, this distinction is still not clear to politicians on all sides.
So I agree that professionals have basically agreed the worst case is the IFR is under 1%. However, if you go to non-HN sites, you'll find that you'll have to fight the terrible messaging that's out there.
A broken clock and stuff. The problem is that this study is polluting the info-sphere, the flaw is so obvious that it should never get any wider audience.
The fatality rate is below one percent. It's quite a bit higher for the old and sick, much lower for healthy people. The only rational and sustainable solution is to isolate the vulnerable and let the other people live their lives with the appropriate hygiene and safety measures as well as extensive testing.
Based on what? It seems like there is always an intrinsic value judgement here because reopening for young people means more old people will die, even with isolation. That's maybe a necessary cost to pay, but don't elevate your moral judgements to the "rational" when they are anything but.
I think the criticism in this thread and elsewhere is a bit too harsh. It’s by no means a perfect study, nor the last word, but hopefully will motivate further studies.
I volunteered on this study and talked with hundreds of the participants, at least 200 and possibly as many as 400. Two reported previous COVID symptoms, unprompted.
The bigger problem was socioeconomic bias. Judging from the number of Teslas, Audis, and Lamborghinis, we also skewed affluent. Against the study instructions, several participants (driving the nicest cars, I might add) registered both adults and tested two children. In general, these zip codes had a lower rate of infection. It's very hard to understand which way this study is biased, and a recruiting strategy based on grocery stores might be more effective, but it's difficult to get zip code balance.
There has been additional validation since this preprint was posted and now there’s 118 known-negative samples that have been tested. Specificity remains at 100% for these samples. An updated version will be up soon on medrxiv.
It is polluting the info-sphere - because people expect it to mean much more than it actually does. If the newspaper articles mentioned that the result has like 90% chance of being just a random fluke [1] - then nobody would care to read them.
1. It is of course much more complex; the 90% comes from the high end of the confidence interval. After some more thinking, I have made here a manipulation similar to the authors': they take the low end of the CI, I take the high end. The study does have some information value, but it is way overstated in the media, and that requires a correction.
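For what it's worth, the "could all be false positives" point can be checked with a standard interval on the test's specificity. A sketch using the pooled validation counts reported for the preprint (399 of 401 known-negative samples testing negative; treat the counts as approximate):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

# Pooled specificity validation: 399 true negatives out of 401 samples.
lo, hi = wilson_interval(399, 401)
print(round(lo, 4), round(hi, 4))
# The lower end is ~0.982, i.e. a false-positive rate up to ~1.8%,
# which exceeds the study's ~1.5% raw positive rate: within this
# interval, all of the observed positives could be false.
```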
This strikes me as a classic Reddit-style conspiracy theory. I have no idea who Andrew Bogan is, but he didn't play a major role in the execution of the study. I'm sure once the paper's published, the contribution section will specify what he did. Remember: this is a preprint!
I hate that the word “conspiracy” is now lobbed at others to immediately invalidate what they have to say, even with evidence.
Did you skim through Bogan’s op-ed? He references the study without ever mentioning once that he played a role in it (no matter how small it was).
You don’t need a tin foil hat to ask yourself what incentives a hedge fund manager would have in downplaying the severity of the virus by participating in the creation and media distribution of a study that does exactly that.
The author provides an insight into the motive for the study. If these authors decided to come together to do this study with the intention of downplaying the lethality of the virus then it poses a very serious ethics violation. Stanford should investigate.
Is it wrong to have a theory that the virus is less lethal than widely assumed and to investigate this theory? Are scientists not allowed to investigate theories that are contrary to dominant political or religious dogma? What if they end up being right?
I have wondered about the selection bias issue, and I was hopeful that this writeup would give a good look at this and other potential issues with the study.
But when I read it, I was a bit turned off by the author's attitude — which seems to be that he or some of his colleagues should have been consulted by the study authors because they are "statistics experts".
He refers to this apparent omission multiple times, and he also seems to think he's dunking on the authors when he references Theranos (and the fact that its advisors came from government/law/military). But this study is completely unrelated to Theranos (though they both involve blood and Stanford). Off-topic comments like these left me wondering if his analysis is a fair critique, or if he has an axe to grind.
But he's right though---the authors should have consulted some survey data experts. And the other people he names are, in fact, well-known survey data experts who could have made the work better.
There's always going to be some subcategory of experts they didn't talk to. Do medical studies typically include survey experts signing off on the surveys?
When a survey is a critical aspect of your study & data gathering, then yes: consultation with someone familiar with best practices in survey design is pretty common.
The author, Andrew Gelman, is one of the best statisticians of our times. And his criticism that they should have consulted an expert seems fair, since this blog post clearly shows that the study did not give strong evidence for a high prevalence of Covid-19. It might have all been false positives.
The off-topic comments make more sense when one has read the blog for a longer time. The author has criticized so many studies over the years that he must feel a bit of frustration by now. I guess the hacker news crowd wasn't the intended audience for such comments.
Every study on coronavirus antibodies that's released gets panned here, yet every single one of them shows strong enough evidence that infection is more widespread and death rates lower than widely assumed.
Even if every one of them is flawed, all the information taken as an aggregate paints a picture. In my province (Alberta, Canada) the health authority has recently expanded testing, and as a result there are more confirmed cases and a lower death rate. Other health authorities in the country have strongly suggested infection rates are much higher than confirmed cases (which lowers the death rate, since every single death is being accounted for in our country).
So there's concerns with this study, and maybe another, but there's no evidence to counter the conclusion we're seeing again and again.
I don't think anyone doubts that infections are much more common than the recorded rate - we all know that a significant number of people are asymptomatic, hence never tested and recorded.
The problem is that those observations are typically used to conclude that "this is just a bad flu" and advance political demands such as "liberate X" and "immediately reopen the country".
The question that truly matters is "which model is less wrong" and the overflowing ICUs in NYC and Europe (that's the bottom line) provide a clear answer - we should err on the side of caution.
> The problem is that those observations are typically used to conclude that "this is just a bad flu" and advance political demands such as "immediately reopen the country".
And why not? Should studies and observations be dismissed just because the result isn't what some people want?
>The question that truly matters is "which model is less wrong" and the overflowing ICUs in NYC and Europe
No overflowing ICUs here. Emergency rooms are way under capacity.
Where I am, we have a population of 4 million. Only 40 ICU visits during the whole pandemic. Only 59 deaths and the majority were individuals over the age of 80. Over half were in nursing homes. And yet there's a vocal group who don't want anything to reopen.
Where is the line where we reopen? 25,000 Albertans die every year. 275,000 Canadians die per year. 2.8 million Americans die every year. It's not reasonable to wait until coronavirus is completely eliminated when economic hardship itself is correlated to a higher mortality rate.
Edit - just looked at some closer stats for my region: only 3 deaths under the age of 60. Population 4 million. With 3k official cases and likely 30k or more total cases.
Congratulations! It's working. As to when and how it is appropriate to reopen without spiking the curve and killing millions - many of whom would die from easily treatable and unrelated conditions because they wouldn't be getting the attention they need - I think we should defer to the experts.
This thing was never on pace to kill millions. Maybe if it was actually as deadly as SARS and that was a valid concern, but it's proven to be nowhere near as deadly as SARS.
India was one of the first countries Covid-19 reached, and Thailand was the first. Both had limited spread despite being very densely populated. If it was going to spread further, it would have already.
Based on the available evidence it doesn't spread well in hot climates so no, I don't think it's reaching 500k. Maybe there's a chance if the US keeps fucking up.
Singapore has 11 deaths total from Covid... In a city state of 5 million.
Also, warm climate doesn't mean there's no air conditioned buildings or absolutely no spread, but the stats do suggest it makes spread more difficult.
500k excluding the US seems pretty good for me. That means you expect 5x more deaths than up to now, despite deaths peaking in all the hardest hit countries that aren't the US...
> 500k excluding the US seems pretty good for me. That means you expect 5x more deaths than up to now, despite deaths peaking in all the hardest hit countries that aren't the US...
Yes. You seem to be under the impression that the virus has saturated the world and has stopped spreading for some, to me, completely unfathomable reason
So. What'll you say.. $50? $100? $1000? Money goes to charity, or for personal gain?
$100 works for me for an internet bet. I think charity, can just provide proof so don't have to bother with transferring money. And I'm ok with the terms. If you have a favourite charity can go with that.
> ...and Thailand was the first. Both had limited spread despite being very densely populated
Cases in Thailand absolutely exploded a few weeks ago which is why they shut down their borders, went on lockdown, canceled New Years, and instituted a curfew.
Not related to your overall point, but if every study is flawed, you cannot take the aggregate as more accurate than the sum of its parts. Maybe if we did a meta-analysis by aggregating all the data.
"John Newton, Public Health England’s director of health improvement, said:"
"A number of companies were offering us these quick antibody tests, and we were hoping that they’d be fit for purpose, but when they got to test, they all worked but were just not good enough to rely on.
“The judgment was made [that] it’s worth taking the time to develop a better antibody test before rolling it out, and that is what the current plan is.”"
"Newton told the committee that the tests trialled so far had lacked sufficient sensitivity to identify people who had been infected. “We set a clear target for tests to achieve, and none of them frankly were close.”"
It looks like the test used in this study (and the LA study) was actually manufactured by Hangzhou Biotest Biotech Co., Ltd and is not FDA approved. Premier Biotech is simply the US distributor. [1]
Given the news reports of Chinese companies shipping tons of faulty tests this does raise a serious question as to the reliability of this data. [2]
The manual for the test also indicates it will produce a positive test result for other coronaviruses, which seems like a huge red flag. [3]
> Positive results may be due to past or present infection with non-SARS-CoV-2 coronavirus strains, such as coronavirus HKU1, NL63, OC43, or 229E.
> Results indicate that over 50% of infected are asymptomatic.
If infections double every five days, and it takes 5 days for symptoms to show, then a huge number of samples will be presymptomatic.
The same thing happened with the Diamond Princess, where initial testing showed there were many “asymptomatic” people, but they then developed symptoms and the true “asymptomatic” number dropped to a much lower figure.
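The presymptomatic point above can be sketched with simple arithmetic (a rough model with illustrative numbers, not figures from the study):

```python
# Rough sketch: if cumulative infections double every `doubling_days`
# and symptoms take `lag_days` to appear, then everyone infected within
# the last `lag_days` is still presymptomatic when sampled.
def presymptomatic_fraction(doubling_days=5.0, lag_days=5.0):
    # Cumulative infections grow as 2**(t / doubling_days), so the share
    # infected in the last `lag_days` is 1 - 2**(-lag_days / doubling_days).
    return 1 - 2 ** (-lag_days / doubling_days)

# With 5-day doubling and a 5-day symptom lag, half of all cases
# caught in a snapshot survey haven't shown symptoms yet.
print(presymptomatic_fraction(5, 5))  # 0.5
```

So a snapshot survey can label half its positives “asymptomatic” even if nearly everyone eventually develops symptoms.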
The GP post can be incorrect without lying. Such accusations are not conducive to positive discussion, and run counter to the guidelines under which HN operates.
Mm, indeed I meant to say "also contagious" in the first part, not "most contagious," pardon me, I've edited to reflect. I mean, I did provide a link that said as much.
Well, maybe positive but not necessarily able to infect. If they used PCR tests, those can come back positive on broken-up, inactivated virus for a while after you've cleared the infection.
If I could upvote this a million times I would. If 50% of those tested positive are asymptomatic, then a good first-order approximation is that the false positive rate is 2x the actual prevalence, and the results say very little about prevalence that actual hospitalizations don't say.
People, and to a large degree the media, are putting a huge emphasis on testing, not always in a way sensitive to the essential difficulty of evaluating test accuracy, especially in epidemiological settings.
Would it be disingenuous to say that false positives in the current case fatality rate would bring the rate down because hospitals are using symptoms combined with PCR for deaths?
From what I've heard, at least in some areas, a Covid test is often performed only after a (usually very fast) flu test is performed. So a death resulting from a person negative for flu but also false-positive for Covid would seem very unlikely. As would any death of this sort from a person who came up positive. Much more likely would be that false positives cluster among those with mild or no symptoms (with the mild symptoms instead coming from a more mundane infection)
If false positives were pushing the published death rate up, it's important to remember there are also deaths among people infected but never tested that would be pushing the rate down. I don't know if they would cancel each other out though. I would however really like to see published data from the CDC about reported pneumonia deaths just prior to the "official" outbreak to see if there was an otherwise unexplained increase.
I don't know but the problem is more with low infection rates. Suppose you have 3% false positives and 3% false negatives, and 0.1% of people are infected.
The false negatives will be 3% of 0.1%. The false positives will be 3% of 99.9%. You exaggerate your infection rate by about 30X even though the test is equally inaccurate in both directions.
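That arithmetic can be sketched in Python (the 3% error rates and 0.1% prevalence are the illustrative figures from the comment above, not real test parameters):

```python
# Apparent prevalence from an imperfect test applied to a population
# where `true_prev` are actually infected.
def apparent_prevalence(true_prev, fpr, fnr):
    true_pos = true_prev * (1 - fnr)    # infected, correctly flagged
    false_pos = (1 - true_prev) * fpr   # uninfected, wrongly flagged
    return true_pos + false_pos

# 3% false positives, 3% false negatives, 0.1% true prevalence:
measured = apparent_prevalence(0.001, 0.03, 0.03)
print(round(measured / 0.001, 1))  # ~31x overestimate
```

The false positives swamp the signal because almost everyone tested is uninfected; the same test at 10% true prevalence would be off by far less.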
What criteria were used for testing? Completely random? If it was volunteer-based, their method of obtaining and screening the volunteers is very important as well.
Diamond Princess was a very old population, and started off this whole scare campaign that this was going to be Spanish Flu Part 2. It's like doing study within just a nursing home. It won't be representative.
No, Wuhan's medical system getting overloaded started the scare campaign, and then Italy's medical system getting overloaded confirmed that this is, in fact, the real McCoy.
It did not confirm anything was the real McCoy. What's confirming things is the latest serological studies that show the data from Italy was far from representative, and so was the data from Wuhan. The data from the rest of China was much more in line (CFR 0.6%) with the serological studies we're seeing of late (IFR 0.25-1%).
The data from Italy isn't representative of anything other than Lombardy's overwhelmingly old and sick population, Italy's unfortunate response to the situation and the fact the disease hits older folks hundreds of times harder than younger folks. Italy's CFR is about 2 orders of magnitude higher than the IFR we should be basing our broader response on.
Well, all the antibody studies around the globe have several things in common:
* relatively small numbers of participants (< 5000, and therefore only dozens of participants with positive results)
* focusing on relatively small geographical areas
* working with antibody tests that have high uncertainty regarding their specificity
One argument that strongly contradicts the narrative that a huge number of people already are/were SARS-CoV-2-positive is the low rate of positive PCR tests in Germany. Germany performs hundreds of thousands of PCR tests per week but still mainly tests people with some symptoms. If SARS-CoV-2 were that prevalent, you would expect a large proportion of those tested to be positive, but it was only 4% by late March [1]. Every expert I've heard admits that there is a significant number of undiagnosed cases. But 30x-60x seems quite unrealistic if only 4% of people with symptoms test positive.
I get that this study may be flawed, but there are several upcoming studies showing similar findings with respect to a likely lower CFR than expected. The issue is with what magnitude.
Because when you claim that 2-3% of the population has been infected in an area where 1/30000 of the population died from the virus, it implies that the virus has a 1/600 infection/death rate.
Since ~1/500 New Yorkers have died from this virus, that would imply that 120% of the population of NYC have been infected, and that new NYC cases will drop to zero in two weeks.
Which is, obviously, nonsense. One of the numbers here doesn't fit the facts, and it's probably the one that claims that 2-3% of the population of Santa Clara was infected, with the overwhelming majority not having any symptoms.
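The back-of-the-envelope check above can be sketched in Python (all figures are the approximate ones from this thread, not official statistics):

```python
# Santa Clara's claimed prevalence vs. its death rate implies an IFR;
# applying that IFR to NYC's death rate implies an impossible infected
# fraction. Numbers are the thread's rough estimates.
claimed_prevalence = 0.02         # study's low-end estimate for Santa Clara
deaths_per_capita_sc = 1 / 30000  # approx. Santa Clara deaths / population
implied_ifr = deaths_per_capita_sc / claimed_prevalence

deaths_per_capita_nyc = 1 / 500   # the comment's figure for NYC
implied_infected_nyc = deaths_per_capita_nyc / implied_ifr
print(f"implied IFR: 1 in {round(1 / implied_ifr)}")        # 1 in 600
print(f"implied NYC infected: {implied_infected_nyc:.0%}")  # 120%
```

An implied infected fraction above 100% means at least one of the inputs (the prevalence estimate or the IFR it implies) has to be wrong.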
Data from the Diamond Princess points to something between 20% and 50% of infected people being asymptomatic - not 95%.
Try 1 in 600. There are also huge error bars on that ratio, because the denominator (population size of New York), is a rough estimate, at best.
In any case, this is nit-picking: the obvious source of error here is trying to cross-apply the IFR from a limited sample in one city to another, completely different city. If you’re only off by a factor of 20%, that’s a pretty strong indication that you’re on to something real.
New York is testing a lot more than California, so I expect their ratio of undiscovered cases to be lower. But it’s becoming clearer and clearer that the true IFR for this virus is substantially lower than previously estimated.
Nobody is estimating that 100% of New York has already been (or even will be) infected, so something is off by much more than 20%.
With an optimistic, but at least plausible, estimate that 40% of NYC have been infected, then the IFR is around 0.4% which actually is within 20% of the 0.5% conservative guess that people have been making.
Who are “people”? Who is defining “optimistic”, and “plausible”? Where are you getting these numbers?
The LA study implies a ratio of 30-50x the number of confirmed cases.
Even at the high end of that range, given the current confirmed infection count in nyc (141.2k), then about 7M people would have been infected. That’s not 100% of the population, and it’s entirely plausible.
> Comparing deaths onboard with expected deaths based on naive CFR estimates using China data, we estimate IFR and CFR in China to be 0.5% (95% CI: 0.2-1.2%) and 1.1% (95% CI: 0.3-2.4%) respectively.
7 million is 80% of the population, which is at the high end of high estimates for the total portion of the population expected to be infected in the end. That would be more plausible if daily deaths were well into the long tail, but NYC appears to be just past the hump and hundreds/day are still dying.
I would say 30x undercounting (48% infected) is highly optimistic but still plausible if you want to embrace that.
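This thread's NYC arithmetic, sketched in Python (the 141.2k confirmed count is the figure quoted above; the 8.8M population is my assumption, chosen to match the percentages in the thread):

```python
# Confirmed cases times an assumed undercount factor, as a share of
# an assumed NYC population. Figures are the thread's, not official.
confirmed = 141_200
population = 8_800_000  # rough NYC population (assumption)

for factor in (30, 50):
    infected = confirmed * factor
    print(f"{factor}x undercount -> {infected / population:.0%} of NYC infected")
# 30x -> 48%, 50x -> 80%
```

Which is why a 50x undercount sits at the very edge of plausibility for NYC, even though it may be plausible elsewhere.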
Ok, so you’re taking one paper and assuming it is correct, and using that to dismiss more recent data.
Keep in mind that not even two weeks ago, the best estimate of IFR in China was 0.66%. It keeps dropping. Also, the confidence interval on the paper you’re citing extends to 0.2%.
I’m not saying that the factor is 50x, just that it’s plausible.
I don't assume that paper is correct. It's the first example I found of people estimating 0.5% since you apparently hadn't noticed that number being bandied around for the last month. It's a number I've been seeing since early March, along with 40-70% of the population ultimately being infected. I saw one guy suggesting up to 80%. If NYC is already at 80% and rising, that's pretty surprising.
I would like to get an idea of what the IFR really is, not which extreme end of the range of uncertainty looks the best or worst. If it's a little less than 0.5%, great. But when some paper comes out of left field and suggests it's closer to 0.05%, sure, I'm skeptical, especially when mortality is already above that fraction of the entire population in several places.
> But we didn’t then write up a damn preprint and set the publicity machine into action.
The latter part is crucial here. I don’t think you should have to apologize about mistakes in a preprint. Papers get improved by the review process. And most of us just upload them to get around journal paywalls, or to make it easier to share with our colleagues what we are working on.
But when a University hears that you’ve got a result on a hot topic, dollar signs light up in their eyes, and they go to work. Scientist beware.
I hope we can find a balance where scientists don’t rush something out just because it’s a hot topic, yet are also not paralyzed from working on something because of the dangers of the spotlight.
The lead professor (Eran Bendavid) in charge of this study wrote an op-ed a month ago in the WSJ (with another contributor Jay Bhattacharya to this paper) about how we shouldn't shut down over coronavirus: https://www.wsj.com/articles/is-the-coronavirus-as-deadly-as...
I would be very surprised if they weren't aware of what they were doing by releasing their pre-print.
All these confidence interval discussions (both from the original study and from the critique) have no value besides entertaining professors who have more knowledge of math than common sense. Nothing good will ever come from buzzwords such as "Agresti-Coull 95% interval".
Bayesian techniques are not much better since no one will ever agree on the prior.
Just treat the study as some super rough point estimate. Adjust for biases such as selection bias if you can. Look at other studies too. Add your personal opinions (e.g., on whether conflicts of interest are relevant here). Complex statistical arguments won't buy you much more than that.
The comments below the article are also incredibly interesting.
I'll now wait on feedback from the authors to the concerns expressed here. But also, the focus will be on many more serology studies in the coming months. Looking forward to their results.
In Sweden they ran antibody tests on blood donors and found that 11% had antibodies. There was no ad or survey, so there couldn't have been any selection bias.
The OP is, for lack of better words, so academic. He wants an apology? OK, the study has flaws X, Y and Z. How about propose and conduct a better study? ASAP? Throw darts on a map if you have to.
There are millions of people kicked out of a job. People defer medical procedures indefinitely. Kids skipping school for months on end. We will, sooner or later, run out of basic necessities as well. The world doesn't run on money or theories. It runs on us, real people, shuffling our hands and turning sun and soil into food and heat and clothing. Right now we are grounded at home. This can't go on forever. We are running against the clock. Do something about it!
We are already drowning in misinformation. We have no clue how deadly this disease is, and yet we are using what appears to be [gross?] overestimations to drive public policy to the tune of trillions of dollars printed with a flick of a pen.
Could people stop looking at the death rate? If you have millions of people in intensive care for 3 to 5 weeks, it is equally detrimental to the economy. Plus, the more people are infected, the more likely it is to mutate. Did you take those two points into account?
January. February. March. April. It's been almost 4 months since we know about Covid-19. From the rock I'm living under, it appears that the scientific community at large has produced remarkably little reliable data. Where are the epidemiological studies?
Epidemiological studies are known as “models”. Those are being produced at a rapid clip, and are being updated daily as we get more data. As with most models, they start out with huge error bars and get better over time.
Every epi would LOVE to have widespread test data, but we don’t. Oh well.
These antibody tests only became feasible in the past two weeks, when the tests were actually developed and validated, and then this test was run.
There’s a ton of science being done, if you stop and listen to what is being published.
Conducting experiments is expensive in both money and effort. Let's all sit in our comfy chairs and build fancier models with garbage data instead. If we could divine just the right formula, it will magically paper over the holes in the data. Sounds like every other scientific area I'm aware of. For whatever reason, the experimentalist is a dying breed.
Edit: Do you happen to have a link to a good aggregator for listening to "what's being published"?
You do know that epidemiologists are statisticians, right? They, um, don’t do experiments and never have and never will. They crunch numbers and fit them to models. Which is what they are doing.
If you want to see more data, well yes, so does everyone. What magic tool would you like people to use to get that data? As I explained above, we seem to have a shortage of swab tests to determine infection, and antibody tests only became available about a week ago, at which point people started to use them.
Not quite sure what more you are asking to be done here?
That attitude reminds me of a certain type of software engineer: we are engineers, our time is too valuable to write tests, hire someone else to do so. When I hear that, I run away. Fast. The world needs results, and there are few. How about we stop making excuses and roll up our sleeves instead?
Edit: here's the WSJ op-ed (paywalled): https://www.wsj.com/articles/new-data-suggest-the-coronaviru...