
This is terrifying. I have no idea how accurate it is, but the hallucinations of regular ChatGPT worry me to no end when it’s something important like healthcare.



Yep, that's exactly why we're building it. Our goal is to make generative output explainable and interpretable. Right now we're doing this by showing the retrieved context, which is fed directly to the LLM to perform generation. You can then verify for yourself whether the generated content is grounded in the retrieved document or hallucinated.
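
For anyone curious what "showing the retrieved context" looks like in practice, here is a minimal sketch of the retrieve-then-generate loop. The keyword-overlap retriever and the `call_llm` stand-in are both hypothetical illustrations, not the poster's actual stack, which presumably uses semantic search and a real model API:

    # Minimal retrieval-augmented generation sketch (illustrative only).

    def call_llm(prompt: str) -> str:
        # Placeholder: a real system would call an LLM API here.
        return "[generated answer based on the context above]"

    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        # Naive keyword-overlap scoring; real systems use embeddings / semantic search.
        q_terms = set(query.lower().split())
        scored = sorted(documents,
                        key=lambda d: len(q_terms & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def answer(query: str, documents: list[str]) -> None:
        context = retrieve(query, documents)
        prompt = ("Answer using ONLY the context below. "
                  "If the context is insufficient, say so.\n\n"
                  + "\n---\n".join(context)
                  + f"\n\nQuestion: {query}")
        # Surface the retrieved passages alongside the generation,
        # so the reader can check whether the answer is supported by them.
        print("Retrieved context:")
        for doc in context:
            print(" -", doc)
        print("\nAnswer:", call_llm(prompt))

    answer("What is the recommended adult dose of drug X?",
           ["Drug X: adults take 200 mg twice daily with food.",
            "Drug Y: not recommended for patients under 12."])

The point is only that the context shown to the user is exactly what was fed to the model, which is what makes the output checkable.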


So this is just like Google, which shows me quick answers highlighted from different sources and links to them?


Yeah, like Bard or BingGPT but more focused.


This does appear to be doing some sort of semantic search retrieval as well; try a query real fast. With that said, yeah, I'm not trying to actually use this.


Yes, it's retrieval-augmented generation, aka IR + NLP.


There's no problem if it learned properly.


> hallucinations of regular ChatGPT worry me to no end

Why? I mean, I see a lot of potential in an LLM providing hints to doctors, based on the symptoms, about things the doctors would not think of themselves. I also assume doctors would quite easily notice the majority of baseless hallucinations. So there remain two problematic cases: first, where both the doctor and the LLM miss the actual reason for the symptoms, and second, where the LLM makes a mistake that leads the doctor to apply the wrong remedy without realizing it. The first is something you can't blame the LLM for, and the second, well, what makes you think it outweighs the benefits?



