Hacker News

I am an emergency physician without any formal software training and for the last three months I’ve been trying to build a program that segments the wall of the heart from an ultrasound video and then identifies regions that aren’t moving (an early sign of heart attack).

There are many similarities between this man’s project and mine. And if I had his knowledge I might have cracked my problem by now and would have a new way to detect heart attacks early.




Hi! You should consider contacting Stephen Aylward at Kitware. His corner of the company specializes in connecting clinicians who have expertise in point-of-care ultrasound with machine learning programmers paid through Kitware and funded by NIH grants. I worked for him for the last three years, focused on assessing intracranial pressure with ultrasound and assessing pneumothorax with ultrasound.


Heck yes, will do!

For intracranial pressure did you look at ocular ultrasound? How did that work out? Did you have ICPs from actual bolted patients for the gold standard / ground truth? That would be incredibly useful especially in patients on a ventilator who can’t provide a neuro exam.

Pneumothorax seems tricky since really it requires a lung point for diagnosis, and those can be hard to find. Did you simply look for lung sliding? That’s all a physician really needs to make an informed diagnosis.

I love hearing about this. Thank you for the work you do


Do you realize that there are thousands of scientific publications in medical image processing that address exactly the same concrete problem that you have?


Yes, however much of that research is for formal ultrasound obtained by a professional sonographer with an expensive machine. I'm interested in bedside ultrasound performed by an emergency physician with a mediocre machine (e.g., Butterfly).

It seems the primary way to detect regional wall motion abnormalities is with speckle tracking, which requires way too much post-processing for a clinician.

A system that segments the left ventricle and finds akinetic regions in real time from a parasternal long axis view or an apical four chamber view would be pretty nifty.

If you know of a paper or system that does this now then please let me know. I would love for someone else to have solved this, haha.

My email is Davidm.Crockett [at] Utah.edu


I love this comment deeply, because it is a view into someone else's world, someone who is chipping away at real problems and making "the future" happen. A future where more people are saved from death through incredible-yet-easy-to-use technology.

Kudos for advancing the human race.


This paper might be useful [0]. It measures the thickness of a tissue. If you do that over time you can detect the parts that aren't moving. I studied under Yezzi for a while. Cool dude.

[0] http://iacl.ece.jhu.edu/~prince/pubs/2003-TMI-Yezzi-Thicknes...


Awesome, thank you for the reference. That's the essence of finding regional wall motion abnormalities. When looking at the ultrasound I look for parts of the myocardium that don't change in size relative to their neighbors during the cardiac cycle.
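A minimal sketch of that idea, assuming per-segment wall thickness has already been measured over one cardiac cycle (the input array, segment layout, and threshold below are all hypothetical, not from the paper):

```python
import numpy as np

def flag_akinetic(thickness, rel_threshold=0.5):
    """Flag myocardial segments whose thickness barely changes over the
    cardiac cycle compared with their neighbors.

    thickness: array of shape (n_segments, n_frames), per-segment wall
    thickness over one cardiac cycle (hypothetical input; in practice it
    would come from a segmentation + thickness-measurement step).
    Returns a boolean array: True where a segment's thickness excursion
    is below rel_threshold times the mean excursion of its neighbors.
    """
    excursion = thickness.max(axis=1) - thickness.min(axis=1)
    flags = np.zeros(len(excursion), dtype=bool)
    for i in range(len(excursion)):
        # Neighbors along the ventricular wall (clamped at the ends).
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(excursion)]
        neighbor_mean = np.mean([excursion[j] for j in nbrs])
        if neighbor_mean > 0 and excursion[i] < rel_threshold * neighbor_mean:
            flags[i] = True
    return flags
```

Comparing each segment to its neighbors (rather than to a fixed cutoff) is meant to mirror the "relative to their neighbors" judgment described above.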


Please post your contact info in your profile so that folks can reach you. Hopefully one of us can help.


OP is the author of the post, in case that helps any.


Done. Appreciate it!


If you have any examples of what you're talking about, can you send them to me? That sounds like something I'd love to poke at. My email is my username at gmail.


Correct me if I'm wrong (I haven't worked with health data science like this) ... wouldn't an end-to-end approach work better from the get-go?

What I mean is, rather than developing a segmentation algorithm and then a motion detection algorithm, why not just feed a bunch of frames into a CNN and have it directly predict "heart attack risk"?

Or is the segment-then-motion-detect approach necessary because of its better explainability?

I guess I view the end-to-end approach as being less fiddly than the more traditional computational imaging approach. And it has a bonus. If data is available, you could feed it historical ultrasound data from patients that later had heart attacks. With that, it's possible it will learn other features of an ultrasound that predict future heart attack.
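As a rough illustration of the end-to-end shape (raw frames in, risk score out, no explicit segmentation or motion-detection stage), here is a toy stand-in using logistic regression over flattened synthetic frame stacks. A real version would be a 3D CNN on actual echo clips; every input, label, and "abnormality" signal here is fabricated for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.1, steps=500):
    """Fit logistic regression by gradient descent.
    X: (n_clips, n_frames * H * W) flattened pixel data; y: 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def make_clip(abnormal):
    """Synthetic clip: 8 frames of 16x16 noise; 'abnormal' clips get a
    brighter patch as a stand-in signal (purely illustrative)."""
    clip = rng.normal(size=(8, 16, 16))
    if abnormal:
        clip[:, 4:8, 4:8] += 2.0
    return clip.ravel()

X = np.stack([make_clip(i % 2 == 1) for i in range(40)])
y = np.array([i % 2 for i in range(40)], dtype=float)
w, b = train(X, y)
train_acc = ((sigmoid(X @ w + b) > 0.5).astype(float) == y).mean()
```

The point of the sketch is the pipeline shape: the model sees only raw pixels and labels, so whatever features separate the classes are learned rather than hand-engineered.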


That would require a dataset of ultrasounds from people having active myocardial infarctions, which we don’t have, and would take at least a year of academic coordination to assemble.

The current datasets are just labeled anatomy at end systole and diastole.


Hmm, curious. I suppose I would wonder, then, how any algorithm could be developed if there's no dataset. What I mean is ... once you've developed something, how do you test how well it performs? How do you analyze the effectiveness of the algorithm, the false positive rate, etc.?


I could assemble a small dataset of less than a hundred ultrasounds to test the algorithm. A big dataset that could train an AI would require quite the effort.

Great questions, and you highlight the need for shared ultrasound data.
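For a small hand-labeled test set like that, evaluation could be as simple as per-study sensitivity and specificity. A minimal sketch (the function name and inputs are hypothetical):

```python
import numpy as np

def confusion_stats(pred, truth):
    """Sensitivity/specificity over a small labeled test set.
    pred, truth: boolean arrays, one entry per ultrasound study
    (True = wall motion abnormality present)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity, fp, fn

# Example: 5 studies, one false positive.
sens, spec, fp, fn = confusion_stats(
    [True, True, False, False, True],
    [True, False, False, False, True],
)
# sens == 1.0, spec == 2/3, fp == 1, fn == 0
```

With fewer than a hundred studies the confidence intervals on these numbers would be wide, which is itself a useful thing to report.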


In addition to posting your contact info please also check out this course designed for people from other fields who want to learn machine learning.

https://www.fast.ai/


Fast.Ai is incredible. Love it. I've bumbled my way through it and was able to make a unet learner trained from this dataset -- https://www.creatis.insa-lyon.fr/Challenge/camus/ -- in order to segment the left ventricle from an apical four chamber view. It's a start, but I still have a long way to go.


Awesome!


Wow, sometimes I understand the downvotes but this one doesn’t make any sense...


Agreed! Thanks for posting this



