
That's simply not how LLMs work; they're actually awful at reverse engineering of any kind.



Are you saying that they can't explain the contents of machine code in a human-readable format? Are you saying that they can't be used in a system that iteratively evaluates combinations of inputs and checks their results? A rough sketch of that kind of loop is below.
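For what it's worth, here's a minimal sketch of the kind of loop being described. Everything in it is hypothetical: the ./target binary, the seed inputs, and the prompt format are assumptions, and the actual LLM call is deliberately left out since the API would depend on the client being used.

    # Hypothetical sketch: exercise a target binary with combinations of
    # candidate inputs, collect the results, and hand the observations to
    # an LLM to describe in plain language.
    import itertools
    import subprocess

    CANDIDATE_INPUTS = ["0", "1", "-1", "AAAA"]   # assumed seed values

    def run_target(arg: str):
        # Run the hypothetical binary ./target with one argument and record
        # its exit code and output.
        proc = subprocess.run(["./target", arg], capture_output=True,
                              text=True, timeout=5)
        return proc.returncode, proc.stdout.strip()

    def build_prompt(observations):
        # Format the observations into a prompt for whatever LLM client is in use.
        lines = ["Describe what this binary does, given (input, exit code, output):"]
        lines += [f"  {inp!r} -> ({rc}, {out!r})" for inp, rc, out in observations]
        return "\n".join(lines)

    observations = []
    for a, b in itertools.combinations(CANDIDATE_INPUTS, 2):
        combined = a + b                      # one simple way to combine inputs
        rc, out = run_target(combined)
        observations.append((combined, rc, out))

    prompt = build_prompt(observations)
    # The prompt would be sent to an LLM here; the client/API is omitted.
    print(prompt)

The point is only that the LLM sits at the explanation step; the iteration and checking are ordinary scripting.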


Just that they're horrible at it



