Finding SARS-CoV-2 carriers by optimizing pooled testing at low prevalence [pdf] (medrxiv.org)
76 points by FuckButtons 33 days ago | 27 comments



People seem to be missing the point of this article. Pooled testing is accepted and routine. The point of the article is that by constructing pools and subpools in a particular way, you can dramatically speed up the process: all of the testing paths that would normally have to occur in series are exercised in parallel, up front, so that the positive individual(s) are identified in a single round of testing. That gives a crucial advantage in a time-sensitive situation such as contact tracing for infectious diseases.
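To make that concrete, here is a minimal toy sketch of the idea (my own illustration, not the paper's exact protocol, which also handles more than one positive per batch): L^D samples sit on a D-dimensional grid, one pool is made per "slice" of the grid, all D*L pools are run in parallel, and with at most one positive sample the positive slices spell out its coordinates directly.

    # Toy one-round hypercube pooling: 10 x 10 x 10 grid, 1000 samples, 30 pools.
    L, D = 10, 3

    def pools_for(sample):
        # the D pools a sample belongs to: one slice per dimension
        return [(axis, coord) for axis, coord in enumerate(sample)]

    def run_pools(positive_sample):
        # simulate one parallel round of all D*L pooled tests with a single positive
        hits = set(pools_for(positive_sample))
        return {(axis, value): (axis, value) in hits
                for axis in range(D) for value in range(L)}

    def decode(results):
        # read the positive sample's coordinates straight off the positive slices
        # (assumes exactly one positive; more positives need extra structure)
        return tuple(value for axis in range(D)
                     for value in range(L) if results[(axis, value)])

    positive = (3, 7, 2)
    assert decode(run_pools(positive)) == positive
    print(f"{L**D} samples resolved with {D * L} pooled tests in one round")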


Yes, and it's already been done for covid. In May, China wanted to test all 11m people in Wuhan, so they used pooled testing:

https://www.livescience.com/pooled-sampling-covid19-in-wuhan...


The algorithm presented in this paper is a significant improvement over what was done in Wuhan. In a low-prevalence population it reduces the cost of testing an individual to around $0.75 and can find infected individuals in a single step, without having to do subsequent assays to narrow down which samples tested positive.


Array testing is an improvement over Dorfman, but this has been widely known for decades. Blood banks regularly use high dimensional arrays for testing. Here they test the hypercube planes in serial instead of parallel, but this increases latency and decreases sensitivity.
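For reference, the expected number of tests per sample under plain Dorfman two-stage pooling (pool of n, retest every individual only if the pool comes back positive) is easy to work out. A rough sketch, assuming perfect tests; the prevalence values are just illustrative:

    def dorfman_tests_per_sample(n, p):
        # 1 pooled test per group of n, plus n individual retests if the pool is positive
        prob_pool_positive = 1 - (1 - p) ** n
        return 1 / n + prob_pool_positive

    for p in (0.001, 0.01, 0.05):
        best = min(range(2, 101), key=lambda n: dorfman_tests_per_sample(n, p))
        print(f"p={p:.3f}: best pool size {best}, "
              f"{dorfman_tests_per_sample(best, p):.3f} tests per sample")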


I think the authors are overly optimistic about our ability to detect highly dilute samples. Pooling studies in dengue fever using RT-PCR found that sensitivity fell faster than could be explained by dilution effects alone. At 100:1 or 20:1 pooling the false negative rate will be unacceptable.

Edit: false positive -> false negative


False negative rate maybe?

In general, titers of the virus in NP swabs should be high enough to withstand 10, maybe 20-plex pooling. However, as you allude to, there are plenty of other effects that can influence PCR. If there is one sample, for example, that contains a PCR inhibitor, like heme, it may then inhibit all the samples it is pooled with.

Additionally, swabs are not just swabs of virus; they also pick up highly variable amounts of bacteria and human material, which can also affect the efficiency of the PCR.


> False negative rate maybe?

Yes, thanks, fixed.

Check out Table 1 in this paper. It isn't very encouraging. https://onlinelibrary.wiley.com/doi/full/10.1002/jmv.25971


This is very, very, very easy to test experimentally though, right?


It's already been done no? That's how Wuhan tested 11 million residents in May.


Wuhan tested in batches of 10.


Do they stick the same q-tip up everyone’s nose?


That would be a very effective way to achieve herd immunity quickly. Typically the swab is stored in a liquid viral transport medium. You perform an RNA extraction on this liquid, pool the product, and run RT-PCR on it.


I'm not saying you're wrong, but, if that's the case, why haven't we used the same technique already to ramp up testing?


The interview covers this to an extent: the reason is a lack of appreciation of the potential and an overestimation of the complexity. Multiplexing all the samples correctly is somewhat complicated, too complex a scheme for an individual lab scientist to track by hand, so it requires either a robot or some form of semi-automation; I think in Rwanda they were using a smartphone app to tell the scientists which batch to put each sample in. If you could convince the world's public health bodies, and get saliva sampling off the ground in combination, you could reach the levels of testing needed to actually do national-level surveillance.
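The per-sample bookkeeping itself is simple to automate. Something like the following toy sketch (my own, reusing the 10 x 10 x 10 layout from the sketch above, not whatever the Rwanda app actually does) is all the logic needed to tell a technician which pool tubes a given sample goes into:

    L, D = 10, 3   # same toy 10 x 10 x 10 layout as above

    def pool_labels(sample_index):
        # map a flat sample index (0 .. L**D - 1) to the D pool tubes it goes into
        labels = []
        idx = sample_index
        for axis in range(D):
            idx, coord = divmod(idx, L)
            labels.append(f"axis{axis}-slice{coord}")
        return labels

    print(pool_labels(372))   # ['axis0-slice2', 'axis1-slice7', 'axis2-slice3']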


This has been used at the University of Nebraska Medical Center for a few months now. They do 5-way pooling.


Aha! Someone must have read my comment on HN. https://news.ycombinator.com/item?id=22616288. Where's my citation lol


I was listening to the BBC's Science in Action podcast[0] and came across an interview with the mathematician whose group had developed this.

By mapping individuals onto a hypercube, they were able to reduce the number of tests needed to identify the positive individuals by a very large percentage relative to non-pooled testing, as long as the positive rate in the population was low.

[0] https://www.bbc.co.uk/sounds/play/w3cszh0k


A link for those who prefer reading: https://www.nature.com/articles/d41586-020-02053-6

It seems like this method generalizes to other detection problems. Does anyone know of other applications of this technique?



Isn't this just an extension of one of those dumb brainteasers people sometimes ask in interviews? (You have N rats and K vials, P of which contain poison. Can you determine which vials contain poison?)
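For the P = 1 case the answer is the classic binary-labelling trick, which is exactly one-round group testing with noiseless tests. A quick sketch (the numbers are just for illustration):

    import math

    K = 1000                           # vials
    rats = math.ceil(math.log2(K))     # 10 rats cover 1000 vials

    def rats_that_die(poisoned_vial):
        # rat i sips from every vial whose i-th bit is 1
        return {i for i in range(rats) if (poisoned_vial >> i) & 1}

    def decode(dead_rats):
        # the set of dead rats spells out the poisoned vial's index in binary
        return sum(1 << i for i in dead_rats)

    assert decode(rats_that_die(613)) == 613
    print(f"{K} vials, {rats} rats, one round")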


Like so many clever puzzle answers, this doesn't take into account the effect of measurement error. The tests apparently have a false negative rate of 20% as it is. Now start mixing samples together, causing more dilution of the positive samples, and the false negative rate is bound to go up further. Now you have to do studies to figure out what the false negative rate is going to be, and put that into your model as well. This could get impractical fast.


I’m not a virologist, but my understanding is that the false negative rate comes mostly from swab collection and is not related to the concentration during the PCR. There are also thresholds in the process (mainly the number of reaction cycles) that can be tuned to account for lower concentrations.
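As a back-of-envelope check (my numbers, not the paper's): diluting one positive sample 1:n should shift its Ct by roughly log2(n) cycles under near-ideal amplification, so whether a pooled positive still clears the cycle cutoff depends on how close its original Ct already was to that cutoff.

    import math

    assay_cutoff = 40                  # assumed cycle cutoff; varies by assay
    for n in (5, 10, 20, 100):
        shift = math.log2(n)           # ~1 extra cycle per doubling of dilution
        print(f"1:{n} pooling -> Ct shift of about {shift:.1f} cycles; "
              f"samples with an original Ct above ~{assay_cutoff - shift:.1f} risk being missed")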


That is correct.


> this doesn't take into account the effect of measurement error.

So then how are you otherwise interpreting the paragraph discussing robustness to measurement error? If I understand it correctly, it is one of the major benefits of the proposed method.


> tests apparently have a false negative rate of 20%

No, there are different kinds of tests, from different manufacturers, and the materials are handled differently. Please do not spread this as 'fact'.


Didn't realize you're on HN. Your channel is great!

Things like the pantograph make me think about computation as a physical process.


Let's imagine, for the sake of argument, that the tests really do have a false negative rate of 20%. Imagine also that the tests are being used to protect a community of 100,000 people, that no more than 10,000 tests are available during some period, and that the infrastructure can never run more than e.g. 1,000 tests per day. All of these could be valid assumptions in real life: the number of tests is limited, the infrastructure is limited, etc., which is why pooling is being discussed in the first place. The first question is then: does it make more sense to test everybody once, or to test those who are regularly exposed and capable of passing the virus to many others more than once?

I believe the answer to the first question is "test those regularly exposed more than once" (e.g. in hospitals, it would probably be preferable to test hospital workers regularly if you don't want them to pass the infection to patients who aren't infected).

Only then comes the second question: "should I pool the tests and test more often, or not pool the tests and test infrequently?" Again, I believe the answer is "better to test more often" when the tests are done on people without any symptoms.

Because if the goal is to catch those who could transmit without knowing they are infected, and who are regularly exposed (e.g. hospital workers), then even if a single test gives a false negative one time in five, testing ten times more often significantly increases the chance of catching those who get infected early enough to minimize their opportunity to infect a lot of people (see the rough calculation below).

The practice is of course not easy: you have to be able to avoid depending on e.g. 10 workers at once when only one of them is potentially infected, until "which one is it" is resolved.

In short, pooling can work well when the actual prevalence is low, as the papers also recognize.

I guess the main idea is: the reasonable goals for different kinds of testing aren't the same, the logic of "what makes sense" should be adapted to each goal, and no approach should be declared wrong without carefully considering what the real goal of the tests is.

Another scenario where one wants to use tests is to decide when people who are already sick are no longer capable of infecting anybody. There the false negative rate is also worrying. Again, the goal can change depending on how empty the hospital beds are: in situations like New York's there weren't enough places in hospitals, and "making room for new patients" was the priority, which, it was admitted, allowed more people to get infected, since still-sick people were discharged only to infect others.
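To put a rough number on the repeated-testing argument above (assuming, unrealistically, that false negatives are independent between tests; in reality they are correlated with low viral load early in infection, so treat this as an upper bound on the benefit):

    fn_rate = 0.2                      # assumed per-test false negative rate
    for k in (1, 2, 3, 5):
        p_missed = fn_rate ** k        # only valid if misses were independent
        print(f"{k} test(s): chance an infected person is never flagged = {p_missed:.4f}")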



