
I totally agree on the hallucination front; that is a big problem.

But the privacy concern is smaller if we are only talking about "using" (not training) these medical models. You could upload your entire medical history into such a model's context window, then provide your latest symptoms to get a diagnosis from the LLM. At no point has the LLM remembered any information about you (nothing is "within" the model weights themselves); it exists only in the context window for the duration of the request.
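To make the point concrete, here is a purely illustrative sketch of that inference-only pattern. Everything here is hypothetical (`call_llm` is a made-up stand-in for a real inference endpoint): the patient data exists only in the request that is built per call, and once the call returns and the variables go out of scope, nothing persists on the client side.

```python
# Illustrative sketch of stateless, inference-only use of a model.
# All patient data lives in the request payload (the context window);
# `call_llm` is a hypothetical stand-in for an inference endpoint.

def build_prompt(medical_history: str, symptoms: str) -> str:
    """Assemble a single-request prompt; the model sees this text once."""
    return (
        "Patient medical history:\n" + medical_history + "\n\n"
        "Current symptoms:\n" + symptoms + "\n\n"
        "Provide a possible diagnosis."
    )

def call_llm(prompt: str) -> str:
    # Stand-in for a real inference call; model weights are frozen,
    # so the call cannot "absorb" anything from the prompt.
    return "(model output)"

def diagnose(medical_history: str, symptoms: str) -> str:
    prompt = build_prompt(medical_history, symptoms)
    response = call_llm(prompt)
    # `prompt` and `response` go out of scope after return -- if the
    # provider also discards the interaction, no patient data persists.
    return response
```

The caveat in the comment above is the real crux: this only holds if the provider actually discards the interaction rather than logging it or using it for future training.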

Anyway, just pointing out that there is a world of difference between training these models and using them. If you are only using them, there is very little privacy risk, assuming the whole interaction is discarded afterward.



