Elsewhere it was mentioned that it's 90% sensitivity and 95% specificity, which is not good enough for something as serious as cancer. Hopefully they can improve on it.
Medical tests are always going to be messy, because biology is messy. It's ultimately up to the physician to order tests and interpret the results. Here's a list of tests for diagnosing breast cancer, none of which approach the numbers you are hoping for (a needle biopsy is only performed once a mass is found).
If it's very cheap, fast, and convenient, then it's enough for at least screening the population regularly. Catching only part of the cancers is already quite valuable.
Sensitivity is what fraction of the affected people are actually found. Here: 90%, so 10% are missed.
Specificity is what fraction of the unaffected people are detected as such. Here: 95%, so 5% wrongly detected ("false positives").
In Europe, there are 60 cases of lung cancer per 100 000 people.
That makes 54 correctly detected per 100 000, missing 6 cases.
That also means roughly 5000 people incorrectly suspected of lung cancer (5% of the ~99 940 unaffected).
Update: using the accuracy from the article itself (99%), we would still get a total of about 1000 false negatives (affected but not detected) and false positives (unaffected but suspected) combined. Incidence is still 60/100 000.
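For anyone who wants to check the arithmetic, here's a minimal Python sketch of the numbers above (assuming, as this thread does, that the article's "99% accuracy" applies as both sensitivity and specificity):

    # Incidence and test figures are from this thread, not from Toshiba.
    def screen(population, cases, sensitivity, specificity):
        healthy = population - cases
        true_pos = cases * sensitivity           # correctly detected
        false_neg = cases - true_pos             # missed
        false_pos = healthy * (1 - specificity)  # wrongly suspected
        return true_pos, false_neg, false_pos

    # 90% sensitivity / 95% specificity:
    print(screen(100_000, 60, 0.90, 0.95))  # ~54 detected, ~6 missed, ~4997 false positives

    # 99% / 99%, reading the article's "99% accuracy" as both:
    tp, fn, fp = screen(100_000, 60, 0.99, 0.99)
    print(fn + fp)                           # ~1000 combined false results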
Yup. And some of those follow-on tests could be quite invasive and risky in their own right. With that kind of false positive rate, you'd be risking the lives of a not-insignificant number of people.
Either that, or driving up healthcare costs significantly, as those 5000 people are going to need an MRI, CAT scan, or something else to rule out cancer.
What follow-on test for suspected lung cancer would risk the lives of a not-insignificant number of people?
An MRI without contrast has no impact. An MRI with contrast has relatively little impact. A biopsy would only be done if the MRI with contrast lit up areas of concern. At the point a PET is ordered, you have narrowed the false positive pool substantially and probably want the scan no matter what.
Just telling millions of people per year that they might have cancer is going to have some crazy societal impact. Say the MRI is booked for two weeks out: they're going to majorly stress about it the whole time, regardless of how low you tell them the risk is.
I typically have several MRIs per year. I'm not claustrophobic and don't stress about them, but sometimes MRIs cause me physical discomfort, even without contrast. With contrast, it's slightly worse, it seems.
I think it's related to a genetic disorder that causes me to retain high levels of iron, but I have nothing to back this up.
The discomfort in the MRI comes in the form of waves of warmth I feel throughout my abdomen. It's mostly annoying/distracting and not painful. The loudness of the MRI is far worse than the slight warming I feel. Less annoying than the pee-your-pants feeling of a CT with contrast.
I believe the stress would come from the uncertainty of whether or not they have cancer, not from going into the MRI machine.
Just looking at myself, I feel a great level of distress over other, less serious, variations of uncertainty.
Having been told I might have cancer, and then not getting to know for certain this very instant would be nothing short of nerve-wracking...
The warmth is caused by the RF pulses and has nothing to do with the magnet or your iron levels. Basically, the scanner behaves the same way as a microwave oven.
That's why you have to enter the patient's weight and size before scanning: to keep the SAR (specific absorption rate) within acceptable values and limit body heating.
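To illustrate the point (with made-up numbers, not any scanner's actual calculation): whole-body SAR is roughly the absorbed RF power divided by the patient's mass, and the scanner has to keep it under the regulatory cap, which is 2 W/kg whole-body in normal operating mode under IEC 60601-2-33.

    SAR_LIMIT = 2.0  # W/kg, whole-body, normal operating mode

    def whole_body_sar(absorbed_rf_watts, mass_kg):
        # The same RF power heats a lighter patient more.
        return absorbed_rf_watts / mass_kg

    for mass_kg in (50, 80, 110):
        sar = whole_body_sar(150, mass_kg)  # 150 W is an arbitrary example
        print(f"{mass_kg} kg: {sar:.2f} W/kg, within limit: {sar <= SAR_LIMIT}")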
My father has hemochromatosis, which causes high iron uptake. He didn't discover this until his fifties, and it's like he has rust in his joints. I did 23andMe and I only have one allele, so I don't have it. If you suspect you do, a DNA test will tell you. Please get that checked out. You can deal with it if it's caught early in your life.
I've been tested, and I have hemochromatosis. That's what my next MRI is for: to check on my liver.
So far, for me, it's monitoring. Some dietary changes, such as cutting back on red meat and high-iron vegetables (I mostly cut out spinach, which sucks because I've loved spinach my entire life, especially the Greek dish spanakopita).
My blood iron levels fluctuate a lot based upon diet. I've mostly got it under control, but normal for me is still at the high end of the normal range.
It probably went undiagnosed for a long time because I used to give blood regularly, but had to stop when my work hours changed to 7am-7pm and I could not make it to a facility to donate. That's when blood tests started showing a problem. Not yet at scheduled phlebotomy, but that's probably the best option if it becomes an issue.
We don't really know. Data that show X dose of radiation increases the probability of developing cancer by Y% is based on studies on survivors of atomic blasts. The number of people exposed to an amount of radiation equivalent to a handful of CT scans is pretty small relative to the effect size that's claimed, so there is reason to be skeptical.
It doesn't have to instantly and directly kill somebody to be bad in aggregate.
This kind of analysis has been done, most memorably for breast cancer screening. The conclusion I recall was that it did more harm than good a few years ago (opportunity cost of unnecessary spending, pain and complications of biopsy, unnecessary mastectomy, psychological damage, etc. etc.). The follow-up tests and analysis also have an error rate and no treatment is zero cost.
That's also why there was no screening for thyroid cancer after above-ground nuclear weapons testing. Thyroid cancer itself isn't that hard to treat. So the risk of follow-on testing outweighed cancer risk.
And that's assuming that "99% accurate" really means 99% sensitive and 99% specific.
To amplify your point,
99% sensitivity on 100 000 people with an incidence of 60 means 59.4 true positives; assuming you can't detect 0.4 of a person and floor to an integer, that's 59 detected and 1 false negative.
99% specificity on the same pool means 999 false positives, same assumption.
You mentioned that re: the ~1000 total, but here's the kicker:
across the total population, that's 59 true positives + 999 false positives.
So if I test positive, absent any other knowledge, the chance it's a true positive is 59/(999 + 59), or around 6%.
Probably enough for follow-up testing, but an interesting demo of why a test's statistical accuracy is meaningless unless you also know the actual incidence. 99% becomes not many percent right quick.
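The same calculation in a few lines of Python, for anyone who wants to plug in a different incidence (the ~6% figure is the positive predictive value):

    cases, healthy = 60, 100_000 - 60
    true_pos = cases * 0.99            # ~59 people
    false_pos = healthy * (1 - 0.99)   # ~999 people
    ppv = true_pos / (true_pos + false_pos)
    print(f"{ppv:.1%}")                # 5.6%: most positives are false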
The article says 99% accuracy for 13 cancer types.
Some cancers like pancreatic are a death sentence because it's usually caught too late.
"Toshiba says its device tests for 13 cancer types with 99% accuracy from a single drop of blood"
"The test will be used to detect gastric, esophageal, lung, liver, biliary tract, pancreatic, bowel, ovarian, prostate, bladder and breast cancers as well as sarcoma and glioma."
Cancer diagnostics do not rely on a single test. There's a number of (more expensive) metabolic and endocrinology panels, imaging, biopsies, etc. that follow to validate.
The idea is that you have something cheap and easy up front before or in parallel to further downstream diagnostic procedures.
I think it's pretty intuitive once your attention is called to the issue. The specific numbers aren't even that important. The general principle is that a screening test for any condition that is rare compared to the error rate isn't going to be very useful due to the large number of false positives. And most severe conditions we want to know about are rare.
Even though those false positive rates are high, I don't think that in itself is enough reason to dismiss the idea.
You'd still be able to identify a pool of people who, as a group, will develop this cancer at a rate 20x above the normal population. That still seems like a big deal. For instance, if I discovered I had a genetic factor that made me 20x more likely to get a particular cancer, I think I would want to be tested for it as a precaution. This seems like the same thing.
(Now if the only further test you can do is itself super invasive or risky, that obviously has to be weighed into the decision too).
It's kind of silly to argue from first principles when you can ask the people who study this stuff, epidemiologists. 90% detection and 95% specificity would be horrible numbers if you asked any of them.
Isn’t the gain of this kind of machine that you are testing many people that weren’t tested at all before? Some of those will get a scare for nothing, but more will find out earlier that they have cancer.
If all it takes is a drop of blood (as opposed to more invasive tests) to know with ~90% accuracy if I have cancer or not (and when the machine says I do, then do a more accurate follow up test) then it’s far more likely more people will get diagnosed sooner.
That would decrease the number of people correctly identified. And you're also assuming that misclassification is a random phenomenon. In the real world that wouldn't be the case: some people's blood may get misclassified by the system every single time.
Is the "error" for these tests independent when performed on the same person, spaced out over a long enough time period?
Say you run the test every day/week/month: can you look at the aggregate results, or do the failure cases depend on the individual?
Many diagnostic blood tests for cancer markers are more useful as a data series over time than as single point in time tests, because individuals have variable "normal" or baseline levels. You establish an individual baseline for each marker (for each patient) and then re-test every 6 months. If a number moves from its baseline, then you investigate.
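A hypothetical sketch of that idea (the marker values and the 3-sigma threshold are made up for illustration, not from any real assay):

    from statistics import mean, stdev

    def flag(history, latest, n_sigmas=3.0):
        # Flag only when a new reading drifts well away from the
        # patient's own baseline, not from a population norm.
        return abs(latest - mean(history)) > n_sigmas * stdev(history)

    readings = [4.1, 3.9, 4.3, 4.0, 4.2]  # six-monthly marker levels
    print(flag(readings, 4.4))  # False: within this patient's normal variation
    print(flag(readings, 6.5))  # True: worth investigating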
Screening techniques are a real problem if their false positive rate is high. Any significant increase in the work-up rate will be a real burden on the health system (so you had better see a corresponding benefit). Worse, the gold standard for follow-up will vary by disease and location, but if it is, say, a biopsy, and you are screening a large population, you are going to start killing people without the disease. Not many, of course, but you have to balance that against the putative improvements.
No, because you'll be telling 1 out of 20 healthy people that they tested positive for cancer. Or, if it's 95% specificity per test, which it probably is, roughly half of healthy people will test positive for at least one of the 13 cancers.
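The compounding is easy to check (assuming independent errors across the 13 per-cancer tests, which may not hold in practice):

    p_all_clear = 0.95 ** 13          # a healthy person passes every panel
    print(f"{1 - p_all_clear:.0%}")   # 49% flag positive for at least one cancer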
This is true for any medical act you perform on a massive scale really. Including vaccination, supplementation, etc. Everything is a cost/benefit ratio.
Yes, and this math has been done on many of those interventions (certainly for vaccination) and the benefit holds up. That statistical base rates are problematic for cancer detection is a pretty well-understood problem.
What's the sensitivity of current screening modalities at the stage where these blood tests can be used? Apologies if this is answered elsewhere in the thread.
I think they are creating microfluidic chips that analyze miRNAs. So I imagine it's an orchestra of pumps, valves, etc., pushing very, very small amounts of liquid around and using an array of techniques to detect things, all dictated by microcontrollers. It's as interdisciplinary as you can get!
It should be noted that papers like this are actually in great abundance. What makes me happy is when a big company (e.g. Toshiba or Olympus) takes over, because it means a return is to be made: they're going to pay for the patents, finally bring academic papers to fruition, and, by putting their weight behind it, probably make it work to solve problems like cancer (by detecting it super early, making it easier to take out).
That's the important point here. RNA analysis by chips or sequencing is nothing new. But there's always a long way from the lab to the clinic, and it's capital-intensive to fund a startup in this field.
Not even in trials yet. GRAIL is running a 100K-person trial, due to read out next year. Also, there's not one paper showing this kind of prediction in the clinic, which GRAIL, Freenome, Guardant, etc. all have.
No meaningful papers published + huge claims + a single drop of blood = Theranos II.
Don't confuse a US-centric view of medical technology development, western medical journals, US trials etc with something developed in Japan. This is Toshiba, working with the (Japanese) National Cancer Center Research Institute and you can be sure there's extensive research behind it, even if it's in Japanese.
"Planned for details of the technology will be announced at the "42nd Japan Annual Meeting of the Molecular Biology Society[5]" to be held in Fukuoka on December 3 to 8 days."[0][1]
Thanks. None of these are peer-reviewed articles or scientific conference presentations (although it will be great to see what comes out of the meeting).
The original poster was pointing out that this is, so far, vaporware, because Toshiba has not presented any actual scientific results, unlike GRAIL, Guardant and others. Everything seen on this so far is just marketing (all of the scientific literature links on Toshiba's website are to cancer epidemiology surveys, not articles about this technology). The parent poster has yet to back up their assertion that there is "extensive research" behind it. I've been unable to find any publications or patent applications on this, so I'm very interested if someone has links.
This is part of the damage Theranos has done. Even when a legitimate advance comes along in the same field, people are reluctant to believe it because of the hype that Theranos created.
Not exactly. What Theranos has done is to make clear in its aftermath why the problem of making a diagnosis from a single drop of blood is difficult. In short, the sample size is too small and may not contain biomarkers of the diseases of interest even if they are present in the body. Unless this advancement in its application can show how and why this is not a problem, there will be resistance towards accepting what is being put out without further substantiating evidence.
Jokes aside, my understanding is that diagnosing diseases using a drop of blood was once a promising and legitimate application of microfluidic & lab-on-a-chip technology and many believed it was the future. The problem of Theranos was that the company was being completely delusional about its serious limitations, and now everything that involves "a drop of blood" has a bad name afterwards.
Is there anyone here who works in this area of research? Could you give us an overview of the actual progress of the technology today? What can it do today, and what are its limitations?
A future iteration of this product should be a household item that you use during your morning routine, even before brushing your teeth. Preventative measures like this should be more widely available and built into people's everyday routines.
Ah, we are getting ahead of ourselves. This isn't a preventative measure. It is a diagnostic device. The distinction is important because what will become increasingly apparent is we don't have good tools to deal with cancer when it is detected very early. One of the major problems is working out which of these 'detection events' is going to progress into clinically meaningful disease. We can't just take out (or even biopsy) the prostate of everyone with a hit.
Wait a second. That HEAVILY depends on the different aspects of detection here. For starters, mammography is not performed every year on women of every age. Why?
First, you need near-perfect rates on both 1) true and false positives and 2) true and false negatives; otherwise statistics will screw your results over hard. [1]
Second, the benefits need to outweigh the issues with invasive testing (taking your blood every day, 365 days a year, for XX years is bound to introduce some risk of infection, etc.).
This is very exciting. I worked on these kinds of technologies a bit in graduate school (before switching gears completely and getting a PhD in astrophysics). In the coming decades, we will be able to detect who is on the path to disease before they ever begin showing any symptoms at all. Granted, curing these diseases post-detection is a whole different matter.
Theranos wanted to do any and all blood tests from a single drop of blood. Toshiba is limiting themselves to just a single kind of test, which I'm guessing the science agrees is possible with such a small blood sample.
The problem with Theranos was that at first glance everything about it was physically too small. A single drop of blood is too little, and the cost of running the tests was too low, to do all of the separate tests it was supposed to be able to do and still be within the realm of the possible.
The Toshiba machine doing one type of test from one drop of blood at a cost of ~$184 is well within the range of what should be possible in the real world.
The signal to noise ratio is still devastatingly bad.
Most cancers aren't throwing large amounts of detectable crap into your bloodstream. If they were, people would be doing routine cancer tests when you do things like cholesterol screening.
The fact that nobody can do this with vials makes me suspicious of being able to do this with drops.
The finger prick is an interesting non-sexual way to positively identify masochists. Everything about the finger prick takes away control while maximizing pain. A nurse can't/won't give you an arm jab until you relax your arm but they'll detonate a finger jabber randomly with an audible 'snap' while you're helplessly watching it all happen in front of you.
Finger pricks aren't that bad. I've been hospitalized several times with cyclical vomiting. It throws my entire body chemistry way off and often looks like I'm diabetic (I'm not). So I typically get several finger pricks for blood sugar tests daily when I'm admitted. The finger pricks are nearly painless. I don't typically even notice them. It really only gets bad if they use the same finger repeatedly and bruising starts occurring. It's really not bad at all if you rotate fingers.
The IV is far more painful; of course, it is also a fairly large gauge needle. I still have a track mark in my arm from my last hospital stay, due to an infiltrated IV (a blowout) 3 months ago. It also took about 3 weeks for the swelling in my forearm to go down after that.
I also get weekly allergy shots. Slightly more painful than a finger prick, but far less than a typical vaccination.
This kind of device should join perpetual motion machines on the very short list in patent law where a working model is required as part of the patent application.
In its favor is the fact that no Toshiba executive will be able to distract VCs and senior statesmen the way the top Theranos executive could.
A number of companies are working on cfDNA (cell free DNA) approaches to early cancer detection. GRAIL and Freenome are probably notable examples.
Far fewer seem to be looking at miRNA, as Toshiba is here. I know of one other company in Japan with a novel approach. If any bioinformatics people are interested, email me and I'll introduce you; they're hiring.
This is great. I hope screening tech like this becomes commonplace and widely available. Early detection could save many lives and greatly reduce later-stage treatment costs.
Wow, maybe Toshiba is able to do what Theranos failed to do. And of course, with that failure rate, they can drive up healthcare costs as well. Win-win for everyone... cough