
Author here! We've experimented with building this; see my comment elsewhere in this thread: https://news.ycombinator.com/item?id=42348651

One of the surprising things is that LLMs regularly "fix" things that no other system can, like when we both add the same sentence to a doc. It's interesting stuff.
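
To make the duplicate-sentence case concrete, here's a toy sketch (hypothetical names, not our real merge code): a naive line-based three-way merge duplicates the shared addition, while a semantic pass can collapse it.

    # Toy sketch of the duplicate-insertion conflict; not our actual system.
    base   = ["The cat sat."]
    ours   = ["The cat sat.", "It purred."]   # we added a sentence
    theirs = ["The cat sat.", "It purred."]   # they added the same sentence

    def naive_merge(base, ours, theirs):
        # Concatenate each side's additions; the shared sentence gets duplicated.
        added_ours = [l for l in ours if l not in base]
        added_theirs = [l for l in theirs if l not in base]
        return base + added_ours + added_theirs

    def dedup_merge(base, ours, theirs):
        # Treat identical concurrent insertions as a single edit.
        merged = naive_merge(base, ours, theirs)
        seen, out = set(), []
        for line in merged:
            if line not in seen:
                seen.add(line)
                out.append(line)
        return out

    print(naive_merge(base, ours, theirs))  # ['The cat sat.', 'It purred.', 'It purred.']
    print(dedup_merge(base, ours, theirs))  # ['The cat sat.', 'It purred.']

String-equality dedup like this is crude (it would also collapse intentional repeats); the interesting part is that an LLM can judge semantic equivalence, which is exactly what diff3 and CRDT-style merges can't do.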

With that said, I'm not sure this specific LLM is providing the "right" answer. It seems like AN answer! But I think the real solution might be to ask the user what to do.
