User side of false negatives: You miss skin cancer. The user delays a visit to a doctor by a month. The user dies in 3 months, and her relatives sue the hell out of the model creator.
User side of false positives: The model thinks a blemish is malignant. The user spends a few grand to verify that it is not, and blames you for scaring her.
Doctor side of false responses: The fucking engineers do not know what they are doing. Please, my patients, do not use that thing. We doctors and our responsible patients should unite against the stupid AI.
Arguably, the user's side is a moral question. The doctor's side matters more for adoption.
BTW, what are the accuracy, false positive, and false negative rates for an AI model alone, a doctor alone, and a doctor equipped with an AI helper model?
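I don't have real numbers, but before you even measure anyone, base rates stack the deck against screening apps. A toy calculation (every figure below is invented for illustration, not taken from any study):

```python
# Hypothetical numbers: sensitivity/specificity are illustrative, not from any study.
prevalence = 0.01        # assume ~1% of screened lesions are actually malignant
sensitivity = 0.95       # P(model flags | malignant)
specificity = 0.90       # P(model clears | benign)

n = 100_000              # screened users
malignant = n * prevalence
benign = n - malignant

true_pos = malignant * sensitivity          # cancers caught
false_neg = malignant * (1 - sensitivity)   # cancers missed -> the lawsuit scenario
false_pos = benign * (1 - specificity)      # benign moles flagged -> the "few grand" workups

ppv = true_pos / (true_pos + false_pos)     # P(actually malignant | flagged)
print(f"false negatives: {false_neg:.0f}, false positives: {false_pos:.0f}")
print(f"PPV: {ppv:.1%}")                    # ~8.8%: most positive flags are scares
```

Even at 95%/90%, roughly nine out of ten flags are false alarms at that prevalence, which is exactly the dynamic that turns doctors against the tool.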
Liability, I guess. Similarly, I've imagined making a local-first web app for quickly evaluating STI risk for non-monogamy.
Both would constitute medical advice, and you can't get in trouble for not giving advice. Imagine if I missed one bug (or the underlying math was simply wrong!) and someone got HIV.
There are times when harm reduction is an obvious win (Narcan, I hear, is very cool), but in this case it's hard to justify.
It's just, if someone says "I use protection with strangers, I test twice a year, I'm on prep" then it only tells you that they were negative a couple months before their last negative test. What if they have unprotected sex with a steady partner, who has unprotected sex with a steady partner, who hasn't been tested since a one-night stand where they maybe forgot to follow procedures with a stranger?
If you can draw a circle around a group of people and say that none of them have sex outside the circle anymore, and they all tested negative a few months after their last encounter outside the circle, it should be fine. With monogamy that circle is two people. If you can't draw that circle, the math gets confusing quick. How many degrees between me and a bug chaser?
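To see how fast it blows up, here's a toy chain calculation. Every number is an invented placeholder, not a clinical figure:

```python
# Toy model: risk reaching you through a chain of steady partners.
# All parameters are invented placeholders -- do not use for real decisions.

def chain_risk(p_source: float, p_link: float, links: int) -> float:
    """P(infection propagates to you) if the source is `links` partnerships away."""
    return p_source * p_link ** links

# You rarely know the real parameters, so bound them instead:
low = chain_risk(p_source=0.01, p_link=0.05, links=3)
high = chain_risk(p_source=0.30, p_link=0.50, links=3)
print(f"3 degrees out: somewhere between {low:.6%} and {high:.4%}")
# The spread covers several orders of magnitude -- that's the
# "math gets confusing quick" problem in one calculation.
```

The point estimate isn't the problem; it's that every extra degree multiplies in another probability you can't verify, so the uncertainty swamps the answer.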
The best models may be owned (and perhaps somehow patented) by some large medical company like Merck or Stryker.
I always have a side project or two going on, and this would be a neat one, but I would need a LOT of pictures of people’s moles to train a >95% accurate model to detect cancer. I’m not sure how one would go about getting those unless they work for a hospital or large health company, and obviously they’d frown upon stealing their images for a side project.
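For what it's worth, if the images existed, the usual move would be fine-tuning a pretrained network rather than training from scratch, which lowers the data requirement a lot. A rough torchvision sketch, with the dataset path, folder layout, and hyperparameters all hypothetical:

```python
# Sketch of fine-tuning a pretrained CNN for benign/malignant classification.
# Dataset path, layout, and hyperparameters are placeholders; real performance
# depends entirely on data volume and labeling quality.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Expects moles/train/benign/*.jpg and moles/train/malignant/*.jpg (hypothetical)
train_set = datasets.ImageFolder("moles/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                     # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)       # new benign/malignant head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the head is the cheap first pass; it still doesn't get you around the problem of sourcing the labeled images in the first place.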