This was my take.

I think anyone claiming that detecting LLM-generated text is easy is flat-out lying to themselves, or has spent only a few tokens and very little time actually playing with it.

Take semi-decent output, give it a single proofread and a few edits... and I don't fucking believe anyone who says they'll detect it. They'll absolutely catch some of the most egregious examples, but assuming that's all of it is near-willfully naive at this point.


