
Really? GPT-3.5 and later models are fairly capable of part-of-speech (POS) tagging and text analysis. I have never seen that issue with the more advanced models, although smaller/older ones tend to imagine things about the text.

Last year I wrote a paper about using LLMs to generate definitions for unknown words based on context, and the models did a fairly good job. https://ieeexplore.ieee.org/abstract/document/10346136/ if anyone is curious.
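For a rough idea of what that kind of setup looks like, here is a minimal sketch of building a context-based definition prompt. The prompt wording, function names, and model name are my own illustrative assumptions, not the exact setup from the paper:

```python
# Sketch: ask an LLM to define a word from its sentence context.
# Prompt wording and model name below are illustrative assumptions.

def build_definition_prompt(word: str, context: str) -> list[dict]:
    """Chat messages asking a model to define `word` as used in `context`."""
    system = ("You are a lexicographer. Define the target word concisely, "
              "based only on how it is used in the given sentence.")
    user = f'Sentence: "{context}"\nTarget word: "{word}"\nDefinition:'
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

# With an OpenAI-style chat client, the call would then look roughly like:
#   resp = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_definition_prompt("glabrous", "The leaf was glabrous."))
#   definition = resp.choices[0].message.content
```

Constraining the model to the given sentence is what makes this context-based rather than a dictionary lookup: the same word can get different definitions in different sentences.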

I would like to see prompts where the models fail in this way. The field is moving quite fast.


