
> If this tool is even 1% lower on average, it's already better than a human.

Without diving into the source data behind the 'wiendlaw.com' article (are they authoritative sources? Can we believe their interpretation of a biomedical journal article?), it is facile and likely wrong to conclude, a priori, that a distribution of errors with a 1% lower mean (your hypothetical LLM's) is therefore preferable to another distribution of errors (that of humans). Averages say nothing about the shape of the tails.
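
To make that concrete, here is a minimal sketch (hypothetical numbers, not from any source in this thread): two error-severity distributions with the tool's mean forced 1% below the human's, where the tool's heavier tail still produces far more catastrophic errors.

    # Illustrative only: same-ish means, very different tail risk.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Hypothetical human errors: frequent but mild (light-tailed).
    human = rng.gamma(shape=4.0, scale=1.0, size=n)   # mean ~4.0

    # Hypothetical tool errors: heavy-tailed (mostly negligible,
    # rare catastrophic outliers), rescaled to a 1% lower mean.
    tool = rng.lognormal(mean=0.2, sigma=1.3, size=n)
    tool *= 0.99 * human.mean() / tool.mean()

    threshold = 15.0  # severity beyond which an error is "catastrophic"
    print(f"human mean {human.mean():.2f}, tool mean {tool.mean():.2f}")
    print(f"P(human > {threshold}): {(human > threshold).mean():.5f}")
    print(f"P(tool  > {threshold}): {(tool > threshold).mean():.5f}")

With these (made-up) parameters the tool wins on the average yet crosses the catastrophic threshold orders of magnitude more often, which is exactly why the mean alone cannot settle the comparison.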

It is often said that 50% of biomedical facts are wrong; the trouble, of course, is determining which half of the textbook to disregard. While this trope is cute (and itself incorrect), it is difficult to imagine how an LLM trained on past knowledge could create the new knowledge required to establish new facts.




>It is often said that 50% of biomedical facts are wrong. The trouble, of course, is determining which half of the textbook to disregard.

So abduction? Good thing that's what LLMs do best, I guess.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425828/


Most people can’t create new knowledge from established facts in the first place. Have you met some of the providers? They have a hard time distinguishing a rock from a pillow.



