Sorry about the lack of technical knowledge of the CS stuff; first post here, and I haven't really put in the time to learn about CS and AI yet.
No need to apologize, you made an interesting and relevant comment. The fact that you feel the need to apologize is more telling of the culture you perceive in this site than of your lack of knowledge in any area.
I'm just getting down to writing some posts about the big question the New Yorker article introduces but doesn't really make any real progress with: will machines actually replace doctors?
Hope you enjoy them :)
I think the overall tone of the article is a bit harsh on the machines, especially considering the coda.
Same with self-driving cars. Self-driving cars could be proven to be 100,000% safer than human drivers, but until it is legally mandated, people will prefer humans behind the wheel because "what if the self-driving car runs a red light and kills someone?", ignoring the hundreds of thousands of humans who run red lights and kill people.
>why we were not using algorithms to assist with diagnosis already.
On the bright side, we increasingly are! I think it's budgeting and legal issues, more than anything, that keep it from being more widespread.
I've read a very interesting article about autonomous driving recently, a discussion of the ethics and social obstacles in the way of adopting autonomous cars. The author makes the case that while algorithms might perform a task more safely than humans overall, they will inevitably make mistakes. And those mistakes may well be ones that would be trivial for a human to prevent.
Algorithms making lethal mistakes that a human professional would be extremely unlikely to make will be the hardest part to swallow if we adopt automation in critical roles, even if the total number of deaths can be reduced that way.
Even if they are safe across the board, statistically proving to people that autonomous machines perform correctly and aren't prone to obvious mistakes would take decades and billions of hours of operation, since lethal accidents due to human error are relatively rare events. So we are faced with a chicken-and-egg problem in that regard.
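To get a feel for the scale of that proof burden, here's a rough back-of-the-envelope sketch. All numbers are illustrative: a human fatality rate of roughly 1.1 per 100 million vehicle-miles is a commonly cited US ballpark, and the autonomous rate is simply assumed to be half that. The formula comes from requiring the upper 95% confidence bound on the observed autonomous rate (Poisson counts, normal approximation) to fall below the human rate:

```python
def miles_to_show_safer(r_human, r_auto, z=1.96):
    """Rough miles of driving needed so the upper 95% confidence
    bound on the observed autonomous fatality rate (Poisson counts,
    normal approximation) falls below the human rate."""
    return (z ** 2) * r_auto / (r_human - r_auto) ** 2

# Illustrative rates: ~1.1 fatalities per 100M vehicle-miles for
# humans (rough US ballpark); autonomous assumed twice as safe.
r_h = 1.1e-8
r_a = r_h / 2
miles = miles_to_show_safer(r_h, r_a)
print(f"{miles / 1e6:.0f} million miles")  # on the order of 700M miles
```

Hundreds of millions of demonstration miles just for this one comparison, and the required mileage blows up further as the claimed safety advantage shrinks, which is the chicken-and-egg problem in numbers.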
One other wrinkle... Having an algorithm strictly following rules and procedures to the T means that we're taking moral/human decision making out of the mix. This matters in situations where two different diagnoses may have the same treatment, but different outcomes on the patient's livelihood aside from their health. E.g., a human doctor may note that a tumor is just big enough to be considered cancerous, but marks it down as "pre-cancerous" in their official diagnosis. The treatment is still the same, but it keeps the more serious diagnosis off the patient's record, which can help them when dealing with insurance.
One follow-up argument to this may be that humans would have the final say in a given diagnosis, but I bet this wouldn't always be the case, particularly in lower income scenarios.
The final question to me is whether the risk of losing moral agency outweighs the risk of incorrect diagnosis. I think there will be variance in machine vs. human accuracy rates, and situations that call for a deeper understanding of the human condition.
Most of the automation in a modern airliner, for example, isn't there to make the plane "fly itself" from point A to point B -- it's there to do things machines are good at in order to reduce the workload of the human crew, so they have more time/energy to focus on the things humans are good at.
Flight plans, for example, are still decided on by humans, even if a programmed computer carries out many aspects of the plan once decided upon. Same for en-route deviation from the prior plan in order to either avoid bad weather or reach good weather (where "weather" is kind of a broad term, and includes things like winds which would help/hinder an airliner). Although "autoland" is a feature, it's not something that's used every time and can be moderately complex to set up, since a bunch of factors have to be decided/input by a human. And of course the final decision to land or go around is made by a human.
Which drives home the fact that automation, in aviation, is a partnership between humans and machines, with humans becoming more productive by offloading some of the work.
A lot of fields should be looking at that as a model.
As a lay person who's visited too many people in hospitals, this sounds like a reasonable description of the modern hospital room as well.
* BP and pulse monitors do a job which doctors and nurses used to do manually
* non-invasive oxygen saturation monitors do a job which could not be done easily in the past
* automated IV drip monitors do a job nurses used to have to do manually
* wrist bands with bar codes and an accompanying scanner coupled to a database serve as a "second opinion" or double check on the medication a patient is being given
An analogy is instant replay in football. Some plays are just so close that replay can't overrule the field judge, and wouldn't be able to even if the field judge had made the opposite ruling!
That being said, you're betting on humans to be irrational, and that is always a good bet unfortunately.
Also, my oncologist didn't know whether some vitamins could actually help with my cancer, and he was skeptical that my diet could affect my IGF-1 levels. He couldn't tell me how I got cancer and couldn't tell me any way to prevent it.
It makes me think there's a potential future problem of establishment AIs vs. contrarian AIs. How would an AI determine what is best given contradictory information?
My research thesis is in this area; I'm hoping for a nice career doing modelling to predict illness. I'd like to have a well-paying job doing something that actually helps people instead of just making somebody richer.
Also, it doesn't matter if the AI sometimes gets it wrong; the algorithm gives, say, 80% accuracy and tells you, based on your genetic makeup, whether you should take the surgery route or the chemotherapy route.
1. It's to assist the doctor. It can also perhaps be cheaper to diagnose with than a doctor. If, say, the algorithm says there's an 80% chance you have cancer, then you should go to your doctor and have it checked. If it says no, then don't go. You have another tool to evaluate your health, keeping in mind that it's a tool and an aid, not a replacement for a doctor.
The main concern is genetic discrimination, which the GINA law addresses. And medical algorithms usually err on the side of false positives: they would rather get it wrong by saying you have cancer than by saying you don't when in reality you do.
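Worth noting that "80% accurate" interacts with base rates in an unintuitive way. A quick Bayes' rule sketch with made-up numbers shows why a positive result from such a screening tool can still mostly mean a false alarm when the disease is rare, which is exactly why "err toward false positives, then confirm with a doctor" is the sensible design:

```python
def posterior(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    p_pos_given_d = sensitivity          # true positive rate
    p_pos_given_not_d = 1 - specificity  # false positive rate
    p_pos = prevalence * p_pos_given_d + (1 - prevalence) * p_pos_given_not_d
    return prevalence * p_pos_given_d / p_pos

# Hypothetical screening tool: 80% sensitivity, 80% specificity.
# With a 1% base rate, most positives are false positives:
print(f"{posterior(0.01, 0.80, 0.80):.1%}")  # only ~4% of positives are real
```

So the "80% accurate" tool flags far more healthy people than sick ones at low prevalence; the doctor's follow-up is what turns that into a useful workflow rather than a scare machine.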
Anybody know of any companies that do this? Please send them my way. ^__^
So here is the warning: Medical startups are hard, really hard.
- It is difficult to apply research done on 32x32 cat images to 15-megapixel X-rays.
- It is difficult to get data without year-long contracting efforts.
- It is very difficult/expensive, if not impossible, to use cloud resources due to HIPAA and data-localization rules, so you need to build your own on-premise GPU grids like we did.
- It is difficult doing most medical things in the USA unless you are on the revenue side (e.g., collections, increasing yield, etc.). We got so many raw-deal partnership offers in the US that we went overseas to trial our product.
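On the first point: models trained on small benchmark images can't just be pointed at a full-resolution X-ray. One common workaround (a hypothetical sketch, not a description of any particular pipeline) is to slide a window over the large image and classify overlapping patches, which also shows why compute costs balloon:

```python
import numpy as np

def iter_patches(image, patch=256, stride=128):
    """Slide a window over a large grayscale image, yielding
    (row, col, patch) tuples for patch-wise classification."""
    h, w = image.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

# A 15-megapixel X-ray is roughly 3900x3900 pixels; even with a
# coarse stride, each image becomes hundreds of model evaluations.
xray = np.zeros((3900, 3900), dtype=np.float32)
n_patches = sum(1 for _ in iter_patches(xray))
print(n_patches, "patches per image")
```

One forward pass per tiny benchmark image turns into hundreds per X-ray, before you even start aggregating patch scores back into a per-image finding.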
The entire medical system in the US is corrupt from the ground up, geared to maximize revenue with minimal lawsuit risk. Patient care rarely enters the conversation internally. I thought financial services was bad (my past career), but at least the metrics were all agreed upon by everyone. In medicine, everyone has an agenda, often diametrically opposed to other parties.
I.e., they are unfamiliar with how to secure (and KEEP) FDA approval for their products. It actually comes as a surprise to many of them that they even NEED FDA approval. Then they are further shocked by how invasive the process is, and how long it can take. And don't even get me started about the look on their faces when they realize that, despite all of that, you're still held CRIMINALLY accountable for any bugs in your software. (Stay away from anything that might injure anyone if you are unsure of your process's ability to reliably prevent bugs. Things like RTP are no-go areas for all but the most meticulous medical software developers.)
Don't get into medical software unless you have an extremely long, and very well funded, runway. This is not an industry where 2 guys in a garage can innovate and side-step the regulations. And whatever you do, stay away from anything that could actually harm the patient if it is wrong.
FDA is definitely the biggest hurdle of them all. I'd shudder if we had to deal with that 600 pound gorilla.
It's sad but one of those things that is only apparent once you're neck deep in it.
The people who want to truly help patients are swept aside by those who are in positions to benefit themselves or their friends at the expense of others, including patients. Think administrators, bureaucrats, regulators.
When this thread was raised the other day, I jokingly said how machines could never replace doctors.
The reason is that you can't sue a machine that makes a mistake. Poor medical outcomes would be written off the way random flies hit by cars on a highway are: nobody's fault.
The powers that be will always insist a doctor be there to sign off. There needs to be a fall guy.
Self-driving cars will face the same challenge w/r/t liability insurance when there isn't a driver. It might be an insured & bonded set of algos and systems.
For my thesis I'm trying to design better medical software. Good luck with yours!
If you happen to disagree, step away from your doubt for a sec and listen: we can, and will, cure these diseases with AI. That's the whole point. We imbue our intelligence into a machine and voila, the machine does what we ask it to do with greater expediency and more acumen than an individual can manage alone. We don't garden with machines and say, "Wow, this took fifteen thousand people to build, should we use it so we can do other stuff instead?"
Nope. We say, thank you John Deere, I'll take 2. While we're at it, let's look a little deeper and think about how civilization functions in general. Is that not what we do? We connect, we decide to work together, and next thing you know, we improve our quality of life: otherwise known as a corporation (or conglomerate if you want another version, ya heard?).
So, is AI good for medicine, yes: it is.
Here's a free thought to prove my point. Using my intellect today I deduced that anxiety is absurdity masked as truth. Imbue that into some AI, you'll heal the minds of the world, ok?
Peas in the ground, head in the air, thoughts with the 1.
Most machine-learning companies have misunderstood the role of radiologists. Their first job is not diagnosis. It is to find the right domain, the right view of the data, from which to formulate a diagnosis. So the problem is to map from one set of parameters (say, how an X-ray is taken) to another, more promising set of parameters for the same X-ray, to get the view they need. It's not just "see an image, find a tumor."
If you're not in it for the money, or if fame alone is enough, I'd suggest going upriver: individual diagnostic data is rare because of medical privacy and market forces, but there is boundless open data in, for example, genomics and proteomics. If you search for [bioinformatics competition], you should find a nice selection of opportunities with good data availability and clearly defined objectives. ML is slowly revolutionising this field, although it's a good idea to pay attention to what happened before: there were some seriously smart people working on these problems for a long time, and they found some rather clever ways to extract the most value from data with the tools available at the time.