
We Don’t Need More Blood Tests - fisherjeff
http://fivethirtyeight.com/features/theranos-is-wrong-we-dont-need-more-blood-tests/
======
Floegipoky
It seems like one of the biggest issues here is that a single test for a given
thing isn't enough data to be able to reliably tell whether or not the results
are significant. So people are arguing that we should collect fewer data
points?

I totally disagree with this viewpoint. The medical community's current
approach to testing (positive result, treat ALL THE THINGS!) is an artifact of
the difficulty and cost of performing the tests; the inability of many
providers to apply basic concepts of probability to test results should not be
used as an argument against advancing the state of the art, particularly as
the industry begins baking data-driven clinical decision support into
automated health systems.

If you have 40 tests spanning 20 years saying that you aren't at risk for
Total Scrotal Implosion, and then suddenly, without any symptoms, you get a
result saying your testicles will fall off tomorrow, you have context with
which to interpret this result. Without the historic data there is much
greater risk of you and your healthcare provider agreeing to an unnecessary
knee-jerk scrotalectomy.

Less data is never the answer. Just my 2 cents.

~~~
paviva
You are incorrect.

All kinds of "shotgun testing" (i.e. indiscriminate testing for everything
like you recommend) have been studied, and proven worthless at best, and more
often than not, actively harmful.

First, there is the issue of tests' limitations and extremely low predictive
power. For instance, if testing positive on A makes it 20 times more likely
that you'll get B, and the prevalence of B is 1 in 1,000,000 in the general
population, your own personal risk remains low enough that nothing has changed
-- except that you will panic and pursue unnecessary interventions to reduce
this risk. That is precisely the reason tests are ordered when you already
have symptoms: if your pretest probability of disease is 10%, a positive test
result means you most probably have it, and it is worth doing something about
it.
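The base-rate arithmetic behind this can be sketched in a few lines of Python (a toy illustration; the 95%/95% test characteristics are made up for the sake of the example, not taken from the article):

```python
# Bayes' rule for a diagnostic test: the post-test probability of disease
# depends heavily on the pretest probability (prevalence or clinical suspicion).
def post_test_probability(pretest, sensitivity, specificity):
    """Probability of disease given a positive test result."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test (95% sensitive, 95% specific) at two pretest levels:
screening = post_test_probability(0.000001, 0.95, 0.95)  # symptomless screening
workup = post_test_probability(0.10, 0.95, 0.95)         # 10% clinical suspicion

print(f"{screening:.4%}")  # ~0.0019%: a positive result changes almost nothing
print(f"{workup:.0%}")     # ~68%: now worth doing something about
```

Same test, same positive result, completely different rational response -- which is exactly why tests are ordered on symptomatic patients.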

Second, all known treatments (including "preventive" ones) carry non-zero
risk. When you don't have symptoms, whether or not you test positive, your
risk of dying from the disease remains lower than your risk of dying from an
intervention to prevent it -- thus, you gain nothing by testing.

Let's take for instance a 40-year-old woman who gets an ECG done for no good
reason. It shows signs of heart disease, which could be a normal variant or a
sign of disease. The lady is worried, so she goes on with a stress test just
to be sure. She tests positive (a sizeable proportion of those tests are false
positives, for multiple reasons), so she decides to follow up with a coronary
angiogram to see if there's any blockage. The angiogram is normal, but a
coronary artery is perforated during the procedure (a 1/10,000 risk), and she
dies on the table, when she never had any health problems beforehand. This
kind of stuff happens all the time.

Finally, from an ethical stand-point, as long as healthcare -- and the
individual's stupid testing choices -- are paid for collectively, individual
choices should be severely restricted.

If we were in a country without any kind of state-sponsored healthcare, where
you'd get to pay for any self-harm from your own pocket, I'd argue for free-
for-all testing for anyone without any oversight.

~~~
bobowzki
I'm a physician. This is the thing most people don't get about tests. It's
tiring to see these startups with their flawed agenda.

~~~
petra
But what if we could redesign healthcare systems in a way that doesn't expose
people to all their test results, only critical ones, but gives the
doctor/system that data in order to better watch their patients?

Would that be useful?

~~~
bobowzki
Well, it's a good point but how do you decide what's critical?

To answer my own question; I believe the future lies in machine learning
algorithms processing symptoms and tests (I guess a symptom is also a test in
the sense that it's the answer to a question).

Most of the time there's also no simple answer to be found. The right answer
depends on many factors, including the capabilities of your
hospital/country/economy and the state of the science.

~~~
paviva
> The right answer depends on many factors including the capabilities of your
> hospital [...]

Absolutely!

I'll add that the physician himself is a kind of test, in the sense that his
own sensitivity/specificity for diagnosing a disease can be calculated.

It is well known -- and I'd argue it's a feature, not a bug -- that the exact
same patient with the exact same symptoms will get a different work-up
depending on whether he's seen by a GP, an emergency physician, or, say, a
heart surgeon. The reason is pretty simple: because disease prevalence is
different in those three practices, the doctor has to order more or fewer
tests to get the same predictive power. E.g., when every patient has heart
disease, every ECG change is probably a sign of disease, whereas when almost
nobody has any heart problems, ECGs are pretty meaningless.

------
bigchewy
For those who haven't had a chance to read about the negative impact of
testing, check out Atul Gawande's Overkill -
[http://www.newyorker.com/magazine/2015/05/11/overkill-atul-gawande](http://www.newyorker.com/magazine/2015/05/11/overkill-atul-gawande)

It explains why one of the most dangerous things for a healthy person to do is
get tested. I've been in healthcare analytics for almost a decade now and see
the same thing in the population data.

Theranos and other pro-testers are usually well intentioned but fundamentally
misguided and haven't looked at population wide data sets, which tell a
different story.

~~~
grandalf
What Gawande describes is more of a systematic irrationality in the healthcare
system than a flaw with any particular test.

The decision to over-treat is part of the tradeoff we get when physicians are
viewed as authority figures. Their inaction (not doing a treatment) is viewed
as a delegitimization of the patient's needs, and so there is social pressure
to treat, even when harm could be caused. This is a psychological blind spot
that equally affects patient and physician.

But with respect to tests, so long as a test has a known false positive and
false negative rate, its result can be accurately factored into a
probabilistic model of a patient's overall health.

Our healthcare system is biased toward acute conditions and extreme
interventions. Things like early disease progression and wellness are
generally not even considered relevant to most doctors.

The reasoning approach of an evidence-based differential diagnosis which is
taught to medical students is a powerful heuristic, but it is designed to work
within the constraints of acute illness and (potentially) urgent intervention.
So of course it fails when test results are considered without appropriate
measures to improve the signal-to-noise ratio of the first branch of the
decision tree.

With any kind of broad-spectrum, speculative testing, any result would need to
be considered over time and in the context of many other factors. It is not a
drop-in replacement for any step of the traditional differential.

~~~
gvb
> its result can be accurately factored into a probabilistic model of a
> patient's overall health.

But _my_ health with respect to an illness is _not probabilistic[1]_. I either
have the illness or don't have the illness. Probabilities are not useful when
the sample size is one (me).

[1] Pedantic: it is probabilistic, but the probability is either 0% or 100%
because the confidence interval sucks.

~~~
TeMPOraL
It's probabilistic because _you don't know_ whether or not you have an
illness. It's like tossing a coin - it's 100% on one side and 0% on the other,
but you don't know which side it is on until you check - that's why we say a
fair coin has a 50% probability of landing on either side when tossed.

Now the test you use with that (mathematical) coin is 100% accurate. Tests in
medicine are not. They're more like "oh I see you sort of seem to have X; X
has been known to occur a bit more in people suffering from Y than in those
not suffering from Y". Hence the uncertainty.

------
jimrandomh
I strongly disagree. We need more blood tests, and we need them badly. We need
them in forms which people can self administer, outside of the prescription-
and-professional-blood-draw model.

The history of blood glucose testing is informative here. It used to be a
hospital lab test, like most other tests: done fasting, infrequently, to
diagnose diabetes. Today, that test is done with an over-the-counter test kit,
and diabetics do it multiple times per day. This provides information that the
infrequent, hospital version couldn't provide at all: how blood glucose
responds to meals.

There are many other tests which, if they could be frequent and self-
administered, would enable people to make new discoveries. The common
metabolic tests, for example - HDL, LDL, etc - are very closely analogous to
glucose tests, in that they respond to meals and that response is probably
more informative than the fasting test. But there's nothing analogous to a
glucose tolerance test for cholesterol, because that requires ten tests in a
row and that's too expensive.

One of the more common complaints I hear from friends is about migraines.
Blood tests are worthless there because it's impossible to get a blood draw
during an actual migraine. Same for most mental illnesses; comparing blood
tests between a bipolar person's manic and depressive phases would be
_fascinating_, but no one does it.

And that's not even mentioning micronutrient status screening. The rates of
micronutrient deficiencies in the United States found by the National Health
and Nutrition Examination Survey (NHANES) are shocking; it's the twenty-first
century and the rate of iodine deficiency is 9%.

~~~
pak
_The history of blood glucose testing is informative here._

You're conflating a _diagnostic_ test with a test that patients need to
control _dosing_ (of insulin). To make a diagnosis of diabetes, such frequent
testing is not any more informative. Better tests, such as HbA1C, have been
developed to indirectly measure blood glucose levels over a 3-month timescale,
which is more appropriate for diagnosis.

I don't think there's any evidence yet that people being able to monitor their
lipid levels while eating provides any useful medical information, unless you
have some kind of (incredibly rare) inherited lipid metabolism deficiency.

_comparing blood tests between a bipolar person's manic and depressive phases
would be fascinating, but no one does it._

There in fact has been plenty of work on this, but in a research setting,
where it belongs. See section 6 of
[http://www.ncbi.nlm.nih.gov/pubmed/27017833](http://www.ncbi.nlm.nih.gov/pubmed/27017833)

_it's the twenty-first century and the rate of iodine deficiency is 9%._

Micronutrient deficiencies are usually a result of dietary choices. This
problem is more easily solved by encouraging everyone to take a daily
multivitamin, which would be completely prophylactic, than by encouraging the
same population to subscribe to a series of blood tests that may or may not
reveal the problem, and would require follow-up action. Again, think about it
from a population health perspective.

~~~
shanusmagnus
The fact that you immediately jumped to the assumption that the only useful
thing one could do with glucose testing is to diagnose diabetes or plan
insulin doses is indicative of the failure of imagination endemic to the
system.

Glucose testing is useful for all kinds of things; the fact that you yourself
(and most doctors) don't know that, or think that other people can't be
trusted with their own data without some Credentialed Professional to
interpret it for them, is both insulting and limiting.

I don't want to go back to a world where AT&T had to anticipate the ways I'd
want to use telecom. Although sadly in some respects we've never left it.

~~~
pak
Whoa, let's not put words in my mouth here. First of all, I said nothing about
denying people access to their medical data. Once the tests are done, yes,
it's the patient's data (and in the US, HIPAA concurs). We're not in
disagreement there.

Secondly, there may be all kinds of other uses for glucose tests that one
could _research_, but consumers running tests on themselves in an
uncontrolled manner is not research. I would never say that no other uses will
ever be discovered, but let's do that scientifically, please. My specific
issue was with how diabetic glucose self-testing was used as rhetorical
evidence that more blood tests help people, while failing to note that those
tests are done to dose (potentially dangerous, fast-acting) medications, not
to "keep tabs" on anybody's diabetes in a diagnostic sense, as was implied by
the omission.

You say below that "people are coming around on glucose in the same way that
we now understand that the cardio signal [...] are predictive of an enormous
number of physiological and psychological phenomena." That's a lovely
hypothesis, but please tell me who these people are, and please show me the
evidence of the predictive value.

Until then, the Credentialed Professionals are perfectly justified in
shrugging their shoulders at post-prandial glucose data from healthy patients
(who, contrarily, will demand that needless and dangerous follow-up procedures
are ordered for them), and the companies selling consumers these tests will
not be helping anybody become healthier. I could go on, but this comment sums
up the societal effects better than I could, even referencing your "ideal" of
the ECG for screening.
[https://news.ycombinator.com/item?id=11694341](https://news.ycombinator.com/item?id=11694341)

~~~
shanusmagnus
Really? Okay.

Dunstan, D. W., Daly, R. M., Owen, N., Jolley, D., De Courten, M., Shaw, J., &
Zimmet, P. (2002). High-intensity resistance training improves glycemic
control in older patients with type 2 diabetes. Diabetes care, 25(10),
1729-1736.

Mäntyselkä, P., Miettola, J., Niskanen, L., & Kumpusalo, E. (2008). Glucose
regulation and chronic pain at multiple sites. Rheumatology, 47(8), 1235-1238.

Newcomer, J. W., Haupt, D. W., Fucetola, R., Melson, A. K., Schweiger, J. A.,
Cooper, B. P., & Selke, G. (2002). Abnormalities in glucose regulation during
antipsychotic treatment of schizophrenia. Archives of General Psychiatry,
59(4), 337-345.

Nybo, L. (2003). CNS fatigue and prolonged exercise: effect of glucose
supplementation. Medicine and science in sports and exercise, 35(4), 589-594.

You have a lot more faith in Credentialed Professionals than I do, apparently.
Or a lot less faith in anybody else.

------
ssivark
Abstracting out the essence of this article: It's arguing about the danger of
ubiquitous "big data" in certain contexts.

The human population is around 8 billion, which is about 2^33. And 33 is
roughly the number of pairs you can form from 9 objects (C(9,2) = 36).
Heuristically, this means that if we test for more than about 9 boolean
variables (high/low), purely by randomness we will start seeing correlations
due to the "limited" population size of 8 billion. If we start treating people
just based on those observations, we would seriously wreck human health.
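The multiple-comparisons effect being gestured at here is easy to demonstrate with simulated data (a toy sketch: 40 purely random boolean "tests" on 1,000 people, so there is no real signal to find):

```python
import random

random.seed(0)
n_people, n_tests = 1000, 40
# Every "test result" is an independent coin flip: pure noise by construction.
data = [[random.random() < 0.5 for _ in range(n_tests)] for _ in range(n_people)]

def agreement(i, j):
    """Fraction of people whose results on tests i and j agree (0.5 = no correlation)."""
    return sum(row[i] == row[j] for row in data) / n_people

pairs = [(i, j) for i in range(n_tests) for j in range(i + 1, n_tests)]
strongest = max(abs(agreement(i, j) - 0.5) for i, j in pairs)
print(f"{len(pairs)} pairs checked; strongest chance 'correlation': {strongest:.3f}")
```

Once enough pairs are checked, some pair always looks correlated by chance alone, which is exactly why diagnosing from raw correlations is tricky.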

Diagnosis (decision making) based on correlations (rather than causality) is
very tricky.

~~~
harigov
For the record, we _are_ treating people based on exactly these kinds of
observations, albeit correlated across a much smaller number of people.
However, we use probability and other observational and subjective evidence to
determine the right course of action. I agree with another comment made above
that "less data is never the answer". We may make some mistakes with small
amounts of data, but the only way we will ever learn is by forming hypotheses
and acting on them. We should tread carefully, but not so carefully that we
never step forward.

------
tokenadult
A lot of the comments here reveal not having read the fine article (which is
not very long). If you look at the graphic labeled "How Accurate Tests Can Be
Mostly Wrong," about halfway down the displayed Web article, you will see a
very carefully worked out (and realistic) example about how even a very
accurate medical test can result in mostly false positive indications of a
disease--all that is necessary for that, mathematically, is that there is a
low base rate of the disease. Examples like this have been commonplace in
books about statistical reasoning for making medical decisions for more than a
decade, and I have shared links to Hacker News before that make this same
point. This is something everyone needs to know (but the investors in Theranos
didn't know) to make sound decisions about how much testing to do and what to
do with test results.

Other authors who write about this issue are cited in the article kindly
submitted for our discussion. I urge everyone here to read a lot of the
writings of Dr. John P.A. Ioannidis,[1] who is quoted in the article.

[1] [https://med.stanford.edu/profiles/john-ioannidis?tab=publications](https://med.stanford.edu/profiles/john-ioannidis?tab=publications)

------
chuckcode
Pretty disappointed with 538 on this article. The basic argument is that we
shouldn't do more medical testing because doctors aren't sure how to interpret
the results, and the public isn't sophisticated enough to understand them and
too lazy to do anything even if they did.

Personally I feel that if we had more data over a larger group of people we
would be able to learn what leads to disease better. Further I'm a little
shocked that the medical industry can get away with a stated policy that
patients should wait until their symptoms are acute and then get a minimum
number of tests so their doctors don't get confused. I'd like to see us have
more data about our health even if we're not sure what to do about it right
away. Ignorance is easier to implement but isn't always bliss.

~~~
nemothekid
> _Personally I feel that if we had more data over a larger group of people we
> would be able to learn what leads to disease better._

The article is pretty much arguing that there isn't currently a method that
will accurately give us more data.

How useful is "more data" when 73% of it is garbage?

------
pidge
Think about it as getting a time series of previous measurements to aid in the
detection of anomalies. If you just give me the metrics of a production system
at an instant in time, it's hard to say whether anything's wrong. But when I
know the response time has been in the same range for the past year, and then
increases to 2x and stays that way for a week, I can be pretty confident that
something broke.

Whether we're anywhere near feasibly getting that resolution of data is a
different question.
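The monitoring analogy can be made concrete with a tiny baseline check (a toy sketch, not a real alerting system):

```python
# Flag a reading only when it deviates from this series' OWN history,
# rather than from some population-wide reference range.
def is_anomalous(history, reading, threshold=3.0):
    """True if `reading` is more than `threshold` standard deviations
    from the mean of the past values."""
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    std = variance ** 0.5 or 1e-9  # avoid division by zero on a flat series
    return abs(reading - mean) / std > threshold

response_times = [101, 98, 103, 99, 100, 102, 97, 100, 101, 99]
print(is_anomalous(response_times, 103))  # False: within normal variation
print(is_anomalous(response_times, 200))  # True: 2x the usual value
```

The same reading of 103 would be meaningless in isolation; it's the per-subject baseline that makes it interpretable, which is the argument for longitudinal testing.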

------
melling
The argument is that some blood tests aren't that effective and you get false
positives. For certain tests that might be true. However, aren't we still
better off if we can perform all tests more cheaply and on a more regular
basis? Having a few tests that aren't effective does not invalidate the entire
goal.

~~~
blakesterz
That's exactly what I was thinking. Sure, if we look at a population then
maybe we don't see a remarkable result. But surely we've helped SOMEONE here.

From the article: "which means that only 27% of positive test results are
right"

Yes, 27% is not great, but that's still more than 1/4 of the people who tested
positive getting a correct result and a chance to live longer or better or
whatever. More than 1/4 of those people got some kind of benefit. Sure, we'd
rather see 75%, but 27% is SOMETHING and that's worth, well, something I
guess. Does the cost mean it's worth it?

~~~
brianwawok
Maybe, maybe not. For example, the treatments for prostate cancer can be worse
than not treating early-stage elevated PSAs.

Being told you have AIDS, then 24h later being told "just kidding, false
positive"... do you think that has no consequences on your life?

~~~
amelius
> For example, the treatments for prostate cancer can be worse than not
> treating early-stage elevated PSAs.

I suppose that positive test results need not be followed directly by
treatment, but can also be used to trigger further testing.

~~~
brianwawok
In the specific case of prostate testing, all they can do is repeat the test.
And it likely comes back with the same results.

Many men in their 40s and 50s have high test results. Those same men can live
30, 40 more years with the condition. Or they can die of prostate cancer in 3
years. Or they could get treatment, and lose all use of their prostate for the
rest of their life.

It seems like prostate cancer testing is the poster child for "the test tells
you something, but often you statistically don't want to act on the test...".

I have no plans of ever having the test done.

------
jrapdx3
After decades of practicing medicine and seeing results of countless lab
tests, I tend to agree with many of the comments here. The article isn't wrong
at its core, but it is sensationalizing some very important issues.

The idea that lab studies yield false positives (and negatives too) is hardly
novel. Of course test results can be misleading or easily misconstrued. We
_know_ a single test, or even a set of tests, is rarely definitive. We _know_
interpreting tests is an exercise in probabilistic thinking, and careful
practitioners rely on test results only to the extent warranted.

I often get questions like "so what does this test mean?" For a single
anomalous reading, probably not much. I answer "it's only a test": confirming
a diagnosis is a laborious process of making sure the facts align as best as
can be determined. That is, the gamut of history, direct observation, and a
variety of lab/imaging measurements, looking at a clinical situation from
several angles, needs to converge.

In many practice domains lab/measurement technologies provide tremendous
benefit. Think about the contributions of imaging (CT, MRI), endoscopy
(colonoscopy, etc.), and yes, advances in medical laboratory science also save
untold lives every day. Everyone here on HN knows all technologies can and
will be misused but that doesn't mean they are not valuable and worthy.

I take my own advice to never forget: "it's only a test", and any test is no
more useful than the limits of its credibility.

------
erikb
I have a very high chance of diabetes and other blood-sugar-related health
problems, like heart disease. I certainly think that cheap, repetitive blood
tests would help me improve my health in the same way having my phone count my
steps helps me have a more active life. Please make it work. Maybe that
start-up won't make it; then please make another one. Thanks.

------
lordnacho
Something is not clear to me. How do you know that a false positive is, in
fact, a false positive?

I presume there are some other pieces of information that provide the
reference value.

So when someone gets a positive test, why not put them through the next test
to see if they keep coming up positive?

What's the argument for not testing in the first place? It will panic people?
If that's the case, why not make it random whether you're called to an extra
test?

~~~
Jtsummers

      > What's the argument for not testing in the first place?
    
      The wider the pool of people being tested, the greater
      the chance of false positives, which is why screening
      guidelines generally limit the population to be screened.
      The more independent tests you do at once, each with its
      own chance of error, the larger the chance that at least
      one of those tests produces an incorrect result, said
      Rebecca Goldin, director of STATS.org and a professor of
      mathematical sciences at George Mason University.
    

It's not just panic, but that if the false positive rate is too high then
untargeted testing becomes overwhelming and counterproductive. There's a slim,
very slim, chance that as a man I could develop breast cancer. But there's
little value in testing me as it's not present in my family history (male or
female).

So let's say (in this example) that the false positive rate is _only_ 1% in
men and the true incidence rate is 0.01%, and we test 100 million men:

    
    
     10,000 men have breast cancer, great we've detected it
    999,900 men don't, but test positive and are subjected to more testing
    

That more testing is likely costly, time consuming, and potentially invasive.
This acts as a net drain on medical resources because we've just performed
nearly 1 million unnecessary mammograms. They take technician and physician
time. They take machine time. And so on.

So false positive rates may be low, but when applied to a massive population
the total number of false positives can overwhelm the system.
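The arithmetic in the example above works out like this (assuming, as the example implicitly does, that the test catches every true case):

```python
population = 100_000_000
incidence = 0.0001          # 0.01% of men actually have the disease
false_positive_rate = 0.01  # 1% of healthy men test positive anyway

true_positives = round(population * incidence)
false_positives = round(population * (1 - incidence) * false_positive_rate)
ppv = true_positives / (true_positives + false_positives)

print(f"{true_positives:,} real cases detected")              # 10,000
print(f"{false_positives:,} healthy men sent for follow-up")  # 999,900
print(f"positive predictive value: {ppv:.1%}")                # ~1.0%
```

Even with a false positive rate of "only" 1%, roughly 99 out of every 100 positive results in this screened population are wrong.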

------
jerryhuang100
the article and most comments here are more about EH's VISION of empowering
regular people with cheap and on-demand blood tests. but the major problems
the regulatory bodies, medical practitioners and diagnostics experts have are
mostly about theranos' underlying TECHNOLOGY, its basic SCIENCE and daily lab
PRACTICES. to this date none of these three has been established at theranos
(non-peer-reviewed, non-core patents, criticized by cms & fda w/ banning
threats). in short, it just does not work (yet?)

it's like someone who always talks about building beautiful resorts or
gigantic mining operations on Mars, but fails to develop rocket propulsion
systems or space travel vehicles. if anyone is going to bring us to Mars, it
must be elon musk.

------
nommm-nommm
See also Science Based Medicine - a skeptical look at screening tests.

[https://www.sciencebasedmedicine.org/a-skeptical-look-at-screening-tests/](https://www.sciencebasedmedicine.org/a-skeptical-look-at-screening-tests/)

------
amelius
> which means that only 27% of positive test results are right

The negative test results are still right 99.8% of the time.

~~~
pessimizer
But throwing the blood sample in the trash and just saying "You don't have it"
is right 98% of the time. Would you support that test? Because I'm willing to
charge you for it.

------
bane
Yeah sure, but you know...what about _disruption!_

