> FRCOphth Part 2 questions were sourced from a textbook for doctors preparing to take the examination [17]. This textbook is not freely available on the internet, making the possibility of its content being included in LLMs’ training datasets unlikely [1].

I can't believe they're serious. They didn't even write any new questions?


I’m sure they’ll be surprised by how many books that aren’t freely available on the internet are in common training sets.

It would have taken them a few minutes to learn about the current lawsuits around book piracy.

I agree with you. That they didn’t try asking novel questions or even look into what training data was used makes this paper bunk.


Funny thing is that this textbook can easily be found on LibGen. I don't know a lot about LLM datasets, but they probably include books from these shadow libraries, right?


Nice catch. GPT-3, at least, was trained on “Books2”, which is widely suspected of containing all of LibGen and Z-Library. If the questions are in LibGen, this whole paper is invalid.
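
For what it's worth, this kind of contamination check is easy to run yourself: normalise both texts and scan for long verbatim n-gram overlaps, roughly the 13-gram test OpenAI described for GPT-3. A minimal sketch in Python (the file names are placeholders, and n=13 is just borrowed from the GPT-3 paper):

    import re

    def ngrams(text, n=13):
        # Crude normalisation: lowercase, strip punctuation, split on whitespace.
        words = re.sub(r"[^a-z0-9\s]", " ", text.lower()).split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    # Placeholder file names: plain-text dumps of the exam questions and of a
    # candidate corpus (e.g. the textbook as it appears on a shadow library).
    questions = open("frcophth_questions.txt", encoding="utf-8").read()
    corpus = open("candidate_corpus.txt", encoding="utf-8").read()

    corpus_grams = ngrams(corpus)
    hits = [g for g in ngrams(questions) if g in corpus_grams]
    print(len(hits), "overlapping 13-grams")  # any hit suggests contamination

Zero hits wouldn't prove the questions are clean (paraphrases slip through), but any hits would settle the question the other way.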


Considering that textbooks are probably the single highest quality source of training data for LLMs, I would be very surprised if OpenAI wasn't buying and scanning textbooks for their own training data (including books that aren't even in Books2).


It’s highly unlikely that they would spend hundreds of thousands of dollars buying their own copies when it’s not even remotely clear that doing so would be enough to resolve the copyright-infringement cases they are facing.


Not just that, but wouldn't the references for the textbook be mostly research papers that are freely available?


I didn't even bother checking Libgen, because if it wasn't in Libgen, it'd be in some other dataset.


"Large language models approach expert-level clinical knowledge and reasoning in ophthalmology: A head-to-head cross-sectional study" (2024) https://journals.plos.org/digitalhealth/article?id=10.1371/j...

> [...] We trialled GPT-3.5 and GPT-4 on 347 ophthalmology questions before GPT-3.5, GPT-4, PaLM 2, LLaMA, expert ophthalmologists, and doctors in training were trialled on a mock examination of 87 questions. Performance was analysed with respect to question subject and type (first order recall and higher order reasoning). Masked ophthalmologists graded the accuracy, relevance, and overall preference of GPT-3.5 and GPT-4 responses to the same questions. The performance of GPT-4 (69%) was superior to GPT-3.5 (48%), LLaMA (32%), and PaLM 2 (56%). GPT-4 compared favourably with expert ophthalmologists (median 76%, range 64–90%), ophthalmology trainees (median 59%, range 57–63%), and unspecialised junior doctors (median 43%, range 41–44%). Low agreement between LLMs and doctors reflected idiosyncratic differences in knowledge and reasoning with overall consistency across subjects and types (p>0.05). All ophthalmologists preferred GPT-4 responses over GPT-3.5 and rated the accuracy and relevance of GPT-4 as higher (p<0.05). LLMs are approaching expert-level knowledge and reasoning skills in ophthalmology. In view of the comparable or superior performance to trainee-grade ophthalmologists and unspecialised junior doctors, state-of-the-art LLMs such as GPT-4 may provide useful medical advice and assistance where access to expert ophthalmologists is limited. Clinical benchmarks provide useful assays of LLM capabilities in healthcare before clinical trials can be designed and conducted.



