Edit: false positive -> false negative
In general, titers of the virus in NP swabs should be high enough to withstand 10, maybe 20-plex pooling. However, as you allude to, there are plenty of other effects that can influence PCR. If there is one sample, for example, that contains a PCR inhibitor, like heme, it may then inhibit all the samples it is pooled with.
Additionally, swabs are not just swabs of virus; they also pick up highly variable amounts of bacteria and human material. These can also affect the efficiency of PCR.
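To make the dilution concern concrete: pooling k samples dilutes each one roughly k-fold, and since each PCR cycle (ideally) doubles the target, a k-fold dilution delays detection by about log2(k) cycles of quantification (Ct). A rough sketch, assuming near-ideal amplification efficiency and no inhibitors:

```python
import math

def ct_shift(pool_size):
    """Approximate extra Ct cycles from k-fold dilution, assuming
    ideal (2x per cycle) PCR efficiency -- a best-case estimate."""
    return math.log2(pool_size)

shift_10 = ct_shift(10)  # about 3.3 extra cycles for 10-plex
shift_20 = ct_shift(20)  # about 4.3 extra cycles for 20-plex
```

So a high-titer sample survives pooling easily, but a sample already near the assay's detection limit can be pushed past it by those few extra cycles.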
Yes, thanks, fixed.
Check out Table 1 in this paper. It isn't very encouraging. https://onlinelibrary.wiley.com/doi/full/10.1002/jmv.25971
By mapping individuals onto a hypercube, they were able to reduce the number of tests needed to identify the positive individuals by a very large percentage relative to non-pooled testing, as long as the positivity rate in the population was low.
It seems like this method can be generalized to a detection problem. Does anyone know of other applications of this technique?
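A minimal sketch of the idea (one round, exactly one infected individual; the paper's actual scheme handles multiple positives and measurement error with further rounds). Individuals are points of an n_side**n_dim grid, and each pool mixes everyone sharing one coordinate value, so a round costs n_dim * n_side tests instead of n_side**n_dim:

```python
import itertools

def decode_single_positive(n_side, n_dim, infected):
    """Hypercube pooling sketch: with exactly one infected individual,
    the single positive slice along each dimension spells out their
    coordinates. Returns (coordinates, number of pooled tests used)."""
    people = list(itertools.product(range(n_side), repeat=n_dim))
    tests_used = 0
    coords = []
    for d in range(n_dim):
        positive_slices = []
        for v in range(n_side):
            pool = [p for p in people if p[d] == v]
            tests_used += 1
            # a pooled PCR comes back positive iff it contains an infected sample
            if any(p in infected for p in pool):
                positive_slices.append(v)
        assert len(positive_slices) == 1, "multiple positives: needs another round"
        coords.append(positive_slices[0])
    return tuple(coords), tests_used

# 81 people, one infected: 12 pooled tests instead of 81 individual ones
who, n_tests = decode_single_positive(n_side=3, n_dim=4, infected={(2, 0, 1, 2)})
```

The savings collapse as prevalence rises, since multiple positives light up ambiguous slice combinations, which is why the low-positivity caveat matters.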
So then how are you otherwise interpreting the paragraph discussing robustness to measurement error? If I understand it correctly, it is one of the major benefits of the proposed method.
no, there are different kinds of tests, from different manufacturers, and the materials are handled differently. Please do not spread this information as 'fact'.
Things like the pantograph make me think about computation as a physical process.
I believe that the answer to the first question is "test those regularly exposed more than once" (e.g. in hospitals, it would probably be preferable to test hospital workers regularly if you don't want them to pass the infection to patients who aren't infected).
Only then is the second question "should I pool the tests and perform them more often, or not pool the tests and perform them infrequently". Again, I believe the answer is "better to test more often" if the tests are done on those without any symptoms.
Because if the goal is to catch those who could transmit without knowing they are infected, and they are regularly potentially exposed (e.g. hospital workers), then even if a single test gives a false negative 1 time in 5, testing them 10 times more often significantly increases the chance of catching those who get infected early enough to limit how many people they can infect.
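The arithmetic behind "test more often", assuming the 1-in-5 false negative figure above and (optimistically) that repeated tests fail independently -- in reality a low-titer sample may fool several tests in a row:

```python
# per-test false negative rate: the illustrative "1 of 5" from above
fn_rate = 0.2

def detection_prob(n_tests):
    """Chance that at least one of n_tests independent tests
    catches an infection present in all of them."""
    return 1 - fn_rate ** n_tests

p_once = detection_prob(1)   # 0.8 with a single test
p_three = detection_prob(3)  # 0.992 with three tests over the same period
```

Even two or three repeats drive the miss probability down fast, which is the core of the argument for frequent (possibly pooled) screening of asymptomatic, regularly exposed people.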
The practice is of course not easy: one has to be able to do without e.g. all 10 workers of a positive pool at once, even though only one may be infected, until "which one is it" is resolved.
In short, the pooling could work better if the actual prevalence is low, as the papers also recognize.
I guess the main idea is: the reasonable goals of different testing programs aren't the same, the logic of "what makes sense" should be adapted to each goal accordingly, and no approach should be immediately declared wrong without carefully considering what the real goal of the tests is.
Another scenario where one wants to use tests is to decide when already-sick people are no longer capable of infecting somebody. There the false negative rate is also worrying. Again, the goal can change depending on how empty the hospital beds are -- in situations like New York's there weren't enough places in hospitals, and "making room for new patients" was the priority, which, it was admitted, allowed more people to get infected: still-sick people were discharged, only to infect others.