Specifically: you will require vast amounts of human health records to train your model, that model will interact with my health records, and I trust you just about as far as I can throw you as a steward of my data or the public's. You will expose me to risk, intentionally or accidentally, and face no meaningful punishment for it.
Now as a person, of course I’m sure you are kind and responsible and we’d have a lovely lunch (and I mean that). It sounds like a fascinating problem to solve. As a group though, acting within a regulatory regime that doesn’t value privacy at all - excepting that one law from 1996 with more holes than my socks - you just can’t be trusted.
Would you accept personal responsibility for any downside risks your product introduces, "like for like" with the actual damage caused? Say, if a doctor relying on your product caused a death?
It's a productivity AI agent meant for workers who already deal with medical records and build medical chronologies manually, primarily in the legal space.
I get why you’d want to distinguish yourself from competition that relies heavily on RAG, but “chatting” is putting it mildly.
I’d use RAG with prompts like “what was billed but did not produce a record?” Or I’d rehydrate the context of, say, the hospital’s own model for predicted drug interactions. I could see that being lucrative if the model produced those results without traceability.
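To make that concrete, here’s a rough sketch of the “billed but no record” check. The `billing_items` and `clinical_notes` structures are made up for illustration, and a real pipeline would use an embedding index plus an LLM over the retrieved notes rather than this naive keyword match.

```python
# Minimal sketch of the "billed but no supporting record" cross-check.
# Data shapes are hypothetical; a real RAG setup would swap the keyword
# retrieval for an embedding index and hand the retrieved notes to an LLM.

from datetime import date

# Hypothetical billing line items (code, description, service date).
billing_items = [
    {"code": "99213", "desc": "office visit, established patient", "date": date(2023, 4, 2)},
    {"code": "71046", "desc": "chest x-ray, 2 views", "date": date(2023, 4, 2)},
]

# Hypothetical clinical notes pulled from the chart.
clinical_notes = [
    {"date": date(2023, 4, 2),
     "text": "Established patient office visit for cough; exam unremarkable."},
]

def retrieve_supporting_notes(item, notes, window_days=1):
    """Naive retrieval: same-or-adjacent date plus keyword overlap with the billed description."""
    keywords = set(item["desc"].lower().replace(",", "").split())
    hits = []
    for note in notes:
        if abs((note["date"] - item["date"]).days) <= window_days:
            words = set(note["text"].lower().replace(",", "").replace(";", "").split())
            if len(keywords & words) >= 2:  # crude relevance threshold
                hits.append(note)
    return hits

# Flag anything billed with no supporting documentation.
for item in billing_items:
    if not retrieve_supporting_notes(item, clinical_notes):
        print(f"Billed but no matching record: {item['code']} ({item['desc']}) on {item['date']}")
```

Run as-is, it flags the chest x-ray because nothing in the chart supports it; the point is that the output is traceable back to specific billed items and specific notes, which is exactly what I’d want from a tool in this space.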