Hacker News new | past | comments | ask | show | jobs | submit login
Detection of L-Ventricular Systolic Dysfunction from Electrocardiographic Images (doi.org)
26 points by johntfella on Aug 14, 2023 | hide | past | favorite | 19 comments



It's a great demonstration of using AI to see signals that are not apparent to practicing clinicians, but I'm not sure how novel the algorithm is. Mayo Clinic was testing such an algorithm in field clinical trials already a couple of years ago: https://www.mayoclinicproceedings.org/article/S0025-6196(22)...


This is a terrible demonstration of an unnecessarily complicated solution using pictures of signals instead of the actual signal data for something that is very unlikely to change patient management.

The Mayo study is only marginally less questionable, with uncontrolled confounders and an outcome measure carefully chosen to show a positive result for a tool patented by the Mayo Clinic. It's also doubtful that it changes patient management, and they chose not to look at that in their study despite having access to the information.


The images are the actual data. Vectorizing/normalizing/transforming the representation of data by means of moving it into an image space is a completely valid approach in ML and can have many advantages, not the least of which is being able to take advantage of models and research that have been done with image data.

After all, this is the same thing that we do for human doctors reading an ECG. Or at least I've never heard one exclaim "This chart is useless, can I please have the CSV?"
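To make "moving a signal into image space" concrete, here's a minimal numpy sketch (mine, not from the paper) that rasterizes a 1-D trace into a 2-D array an image model could consume:

```python
import numpy as np

def signal_to_image(signal, height=64):
    """Rasterize a 1-D signal into a 2-D binary image.

    Each column is one time step; the row index encodes the
    min-max-normalized amplitude -- a toy stand-in for plotting
    an ECG trace on graph paper.
    """
    sig = np.asarray(signal, dtype=float)
    norm = (sig - sig.min()) / (sig.max() - sig.min() + 1e-12)
    rows = np.round((1.0 - norm) * (height - 1)).astype(int)
    img = np.zeros((height, sig.size), dtype=np.uint8)
    img[rows, np.arange(sig.size)] = 1
    return img

# A synthetic "beat": flat baseline with one spike.
beat = np.zeros(100)
beat[50] = 1.0
img = signal_to_image(beat)
print(img.shape)  # (64, 100)
```

Stacking the 12 leads this way (or simply feeding the plotted ECG page) yields something a standard image model can ingest.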


> The images are the actual data. Vectorizing/normalizing/transforming the representation of data by means of moving it into an image space is a completely valid approach in ML and can have many advantages.

I'm aware that this is a valid approach in general.

> not the least of which is being able to take advantage of models and research that have been done with image data.

Essentially all of the research, clinical and computer science, that we would be able to take advantage of is based on digital ECG recordings and numerical data, not digitized ECG images.

This is the first publication to my knowledge using an image-based approach, and their innovation is partially validating a deep learning model that is as accurate as explainable models and algorithms described as early as the mid-2000s, which can already be clinically implemented.

> After all, this is the same thing that we do for human doctors reading an ECG. Or at least I've never heard one exclaim "This chart is useless, can I please have the CSV?"

So because a human is incapable of analyzing digital signals, the best approach for ML is to artificially impose the same limitation?

I would love to hear a reason why it may be a better approach to analyze a noisier, less rich digitized-image representation of an ECG rather than the raw digital signals.


> Essentially all of the research, clinical and computer science, that we would be able to take advantage of is based on digital ECG recordings and numerical data, not digitized ECG images.

This project isn't using any of that. It's what one would call a 'novel' approach. It may upset you that they are not building on the existing body of medical knowledge that you might consider important. They are instead building on the body of knowledge that exists in a different space.

> This is the first publication to my knowledge using an image-based approach, and their innovation is partially validating a deep learning model that is as accurate as explainable models and algorithms described as early as the mid-2000s, which can already be clinically implemented.

Then it's notable they achieved this result given they did not inherit any of the previous research.

> I would love to hear a reason why it may be a better approach to analyze a noisier, less rich digitized-image representation of an ECG rather than the raw digital signals.

Once again, transforming data into an "image" representation does not automatically imply a lossy process or a process that introduces noise. There are ML models which operate on the raw bitstream from a CCD camera just as well as ML models which operate on "frames" of image data. Both approaches are valid ways to "see the world."


Because you can take advantage of pretrained CNNs and perform transfer learning, which is significantly more data-efficient than training from scratch, which is what you'd likely have to do with raw digital signals. This paper is not unique in this approach and many papers have obtained SOTA results by processing digital signals as images.
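As an illustration of what "train only a small head on frozen features" means, here is a deliberately toy numpy sketch: a fixed random projection stands in for the pretrained CNN backbone, and the labels are constructed to be recoverable from those frozen features, so training the head alone suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained, frozen feature extractor. In practice this
# would be e.g. an ImageNet-pretrained CNN backbone with weights frozen;
# here it's a fixed random projection followed by a nonlinearity.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.tanh(x @ W_frozen)  # (n, 64) inputs -> (n, 16) features

# Toy data whose labels are, by construction, recoverable from the
# frozen features -- the point is only that the backbone is never updated.
X = rng.normal(size=(200, 64))
F = features(X)
y = (F[:, 0] > 0).astype(float)

# "Transfer learning" here = gradient descent on the small head only.
w, b = np.zeros(16), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

acc = ((F @ w + b > 0) == (y > 0.5)).mean()
print(acc)  # close to 1.0 -- the labels are separable in the frozen features
```

The data-efficiency argument is that the expensive part (the backbone) comes pre-trained, so the labeled ECGs only have to pay for the small head.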


The complexity/dimensionality of the data representation increases considerably when going from a time series to images of said time series. Sure, one can then use transfer learning to manage this complexity, but do you have any references for this approach being more data-efficient overall?


It's highly likely that they analyzed images because they could not reliably access the digital signal traces. ECG machines, including networked "digital" ECGs, store digital ECGs only in proprietary formats and do not make them accessible as raw data.

The Mayo models were built using digital signal traces.


I have no opinion on the value of the study as a matter of changing practice, or patient outcomes. I referenced it to illustrate that AI algorithms for diagnosing LV function from ECGs are already in the field, being tested.


Let me guess, they just rediscovered that Q waves in V1-V3 are predictive of LVSD?


Yes, but with machine learning this time!

> This approach represents an automated and accessible screening strategy for LV systolic dysfunction, particularly in low-resource settings.

If the low-resource setting means there is no one to interpret a 12-lead that suggests LV dysfunction, what are the chances the individual will have reliable access to an echo or, further down the line, an ACEi to slow remodelling?


> If the low-resource setting means there is no one to interpret a 12-lead that suggests LV dysfunction, what are the chances the individual will have reliable access to an echo or, further down the line, an ACEi to slow remodelling?

Why would a human be needed to interpret anything, if the detection can be done by software?

It may be legally required, but that can change with the stroke of a pen.

As for getting access to an echo, why wouldn't it be possible to also have that done by software?

Then for ACEi: if people can already easily purchase illegal drugs, why do you think they won't be able to buy ACEi?

In a "low-resource setting", I think enforcing drug laws may also be affected by the "low resources": people who want to avoid heart problems may be strongly incentivized to disregard the already poorly enforced laws to acquire whatever they need that may increase their lifespan.


> Why would a human be needed to interpret anything, if the detection can be done by software?

ECG machines already do this based on the electrical signal data rather than using a picture of the ECG.

> As for getting access to an echo, why wouldn't it be possible to also have that done by software?

You need an ultrasound machine, which at the very least is a point-of-care model (~$1,000-2,000), as well as an operator competent in acquiring the images. Ironically, if you have one of these, the software already exists to do what this model is doing with higher accuracy, and it provides substantially more information.

> Then for ACEi: if people can already easily purchase illegal drugs, why do you think they won't be able to buy ACEi?

I can't imagine a location having a healthcare provider, ultrasound machine, and ACEi accessible while still using an ECG machine obsolete enough to require this.


Using pictures instead of signals, allowing automation with existing (or obsolete) tools, is a true innovation IMHO.

> I can't imagine a location having a healthcare provider, ultrasound machine, and ACEi accessible while still using an ECG machine obsolete enough to require this.

I can imagine many locations having no healthcare provider (or maybe just a nurse) and people putting on a vest/belt/whatever whose electrodes are hooked to an obsolete machine, to get a quick estimation of their risk, using special software running on their smartphone to interpret the pictures.

Updating software running on the machine would be hard and risky.


> Using pictures instead of signals, allowing automation with existing (or obsolete) tools, is a true innovation IMHO.

> I can imagine many locations having no healthcare provider (or maybe just a nurse) and people putting on a vest/belt/whatever whose electrodes are hooked to an obsolete machine, to get a quick estimation of their risk, using special software running on their smartphone to interpret the pictures.

So the innovation is that: a low-resource location with no medical expertise (and, again, using a 20-year-old ECG machine that's somehow still functional) is going to jerry-rig a vest (noting that 12 leads require accurate placement), take a picture of the resultant ECG with a smartphone, and use a model that's not been validated on an average-risk person or noisy ECG data to analyze said picture?

Or we can keep it simple and just use a $50 single-lead ECG that plugs into a smartphone and/or is already incorporated into wearables, requiring zero medical expertise for accurate use.

https://www.medrxiv.org/content/medrxiv/early/2022/12/04/202...

> to get a quick estimation of their risk

This is my point about not understanding medical relevance.

Phenomenal, you know that you have a risk of left ventricular systolic dysfunction. Now what? What's the next step? Where are you going to get the echo or medical professional?

> Updating software running on the machine would be hard and risky.

You don't have to update the software, you just have to use a machine from the 2000s.


> Phenomenal, you know that you have a risk of left ventricular systolic dysfunction. Now what? What's the next step? Where are you going to get the echo or medical professional?

You don't understand, because you keep assuming echo or medical professionals will be needed to keep doing what is now medically relevant.

If the same tweaks can be done at the next step (automating readings from obsolete machines, say by recognizing some heart landmarks to align and measure Doppler flows purely through software), then yes, the end result is still an "intervention" ("take this pill").

But if everything leading to that intervention can be optimized, or even large parts of it, money will be saved.

The current approach is not set in stone: this new approach could help those who have no healthcare professional and can't even pay for the extras (modern EKG, confirmation by echo, etc.) but can pay for a basic EKG + ACEi if needed.

Those same people, who would get nothing under the current approach, could at least be better treated.


> You don't understand, because you keep assuming echo or medical professionals will be needed to keep doing what is now medically relevant.

Yes, I believe in the laws of physics, which state that what you are describing is impossible.

> say by recognizing some heart landmarks to align and measure Doppler flows purely through software

No amount of AI will change the fact that you cannot derive volumetric information from the electrical rhythms of the heart. You cannot obtain doppler or volumetric flow information from an ECG, you need ultrasound (i.e. an echo).

This model is almost certainly learning that certain ECG waveform changes, reflective of other diseases and physiological changes, place you at higher risk of LVSD. We've known these relationships since the 1990s, probably earlier.

> The current approach is not set in stone: this new approach could help those who have no healthcare professional, can't even pay for the extras (modern EKG)

If you live in a developed country there is no such thing as modern vs old EKG. I would strongly wager the same holds true in most developing countries as these older machines are unlikely to still be functional.

> (confirmation by echo etc.) but can pay for a basic EKG + ACEi if needed.

There are multiple causes of LVSD. They have different treatments.


I should point out that ECG machines that use heuristics to provide on the spot diagnoses already exist, are widespread and are way easier to implement.

[0] https://www.nejm.org/doi/full/10.1056/NEJM199112193252503


So what they're doing here is using deep learning on pictures of ECGs instead of the electrical signals used by machines that provide heuristics.

The proposed use case/workflow seems to be that (somehow) someone, somewhere is using an ECG machine that doesn't provide an automatic preliminary interpretation (i.e. > 20 years old) that is (somehow) still operational and the operator doesn't know how to interpret an ECG. They would then presumably upload a picture of the ECG to a platform that can run a deep learning model on the image. This is also apparently happening in a place where a clinician is then available and echo (for confirmation, quantitative EF and etiology) as well as medications are still accessible to impact patient management/outcomes.

This reads like something done by pure CS folks who don't understand how medicine works, yet the authors include cardiologists.

Ignoring the glaring validity issues of a study population that only included patients who had an indication for echo, the only explanation I can see for why someone would do this is to puff up the author's h-index as this will undoubtedly be cited in several "emerging applications of AI in medicine" papers.



