
I trust the current post on the physics and EE side of things. That said, I'm also very curious about the computer graphics side these days. It seems to me that historically, most of the complexity was handled through sophisticated mathematical abstractions at the hardware level, making the research very costly. How much post-processing can be done to infer more information from less costly inputs? (I'm thinking of ideas like super-resolution as used in satellite imagery.) Nowadays very powerful GPUs can be had for not much and could be used at this stage of the process. Maybe you've heard of such ideas, I don't know. I've seen NVIDIA involved in computer vision for the medical field, but not a lot of it filters through to me.
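To make that concrete, here's a rough sketch of the post-processing idea in Python. Everything in it is illustrative: random data stands in for a cheap acquisition, and plain interpolation stands in for the trained model a real super-resolution pipeline would use.

    # Sketch: infer a finer image from a cheap, low-resolution input.
    # Interpolation is a placeholder for a learned model.
    import numpy as np
    from scipy.ndimage import zoom

    low_res = np.random.rand(64, 64)       # cheap acquisition (placeholder data)
    high_res = zoom(low_res, 4, order=3)   # cubic upsampling to 256x256

    # A learned approach would replace zoom() with a network trained on
    # paired low/high-resolution scans, inferring detail that simple
    # interpolation cannot recover.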



There is work going on in this sort of arena, and I've wondered why there hasn't been more. I think there are probably a few reasons.

First, medical imaging tends to have relatively low resolution and high noise compared to optical images, so there is a danger of over-processing.

Second, and somewhat related: I think clinical practice and regulators distrust black boxes where processing happens in the background. You're taking responsibility away from a doctor and putting it on your system, so there is a greater burden of validation.

Third is cultural. Doctors get a ton of training to interpret the signal in these images, and I think there is both a badge of honor in this and a barrier to adjusting to something new. In my Master's, one of the medical device design instructors talked about a prototype of a digital stethoscope with huge improvements to acoustics. Doctors hated it - even though the sound was better, it wasn't the sounds they were used to. That said, this was before the Eko stethoscope, which is digital and seems to be doing well. Maybe an example of market timing.


Quick answer - yes, GPUs are now reaching the point of being useful, but imaging techniques such as super-resolution methods, ultrafast imaging, and others demand even more computation. I expect to see advances in the next few years because of this, and at some point (maybe 10+ years from now) GPU power will exceed the demand from ultrasound, so even the highest-end systems can use GPUs for all their needs. This is not lost on the community; people are working on exactly that kind of research.
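For a sense of where that computation goes, here's a toy delay-and-sum beamformer - the inner loop at the heart of ultrasound image formation. All parameters are invented for illustration; swapping NumPy for a drop-in GPU array library like CuPy is one common way to put a GPU on this.

    # Toy delay-and-sum beamforming for a straight-down plane-wave transmit.
    import numpy as np

    c = 1540.0                 # speed of sound in tissue, m/s
    fs = 40e6                  # RF sampling rate, Hz
    n_elem = 128
    elem_x = (np.arange(n_elem) - n_elem / 2) * 0.3e-3  # element positions, m
    rf = np.random.randn(n_elem, 4096)                  # placeholder channel data

    def beamform_pixel(px, pz):
        # Round-trip delay per element: transmit time down to depth pz
        # plus receive time from the pixel back to each element.
        dist_rx = np.sqrt((elem_x - px) ** 2 + pz ** 2)
        delays = (pz + dist_rx) / c
        samples = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
        return rf[np.arange(n_elem), samples].sum()

    # One pixel; a full frame repeats this for ~1e5-1e6 pixels, and
    # ultrafast/super-resolution modes do it thousands of times per second.
    value = beamform_pixel(0.0, 30e-3)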


There are imaging modes that have recently been improved thanks to the computing power available in CPUs and GPUs. For example, on some systems you can now see the microvasculature in far more detail than before.
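For the curious, one family of these techniques (ultrasound localization microscopy) works by localizing individual microbubbles with sub-pixel precision over thousands of frames and accumulating the positions into a finer grid. A toy sketch of that idea, with placeholder thresholds, sizes, and data:

    # Toy localization-microscopy loop: segment bright blobs, refine each
    # to a sub-pixel centroid, accumulate into an upsampled density map.
    import numpy as np
    from scipy.ndimage import label, center_of_mass

    UPSCALE = 8                              # localization grid vs. native pixels

    def localize_bubbles(frame, thresh):
        # Label connected blobs above threshold (candidate bubbles) and
        # compute each blob's intensity-weighted, sub-pixel centroid.
        labels, n = label(frame > thresh)
        return center_of_mass(frame, labels, range(1, n + 1))

    density = np.zeros((128 * UPSCALE, 128 * UPSCALE))
    for _ in range(1000):                    # thousands of frames per map
        frame = np.random.rand(128, 128)     # placeholder contrast frame
        for y, x in localize_bubbles(frame, thresh=0.999):
            density[int(y * UPSCALE), int(x * UPSCALE)] += 1

    # "density" now approximates vessel structure well below the native
    # pixel size - hence the appetite for compute mentioned above.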



