Hacker News

> Any tool, software language, or AI coder will still be limited by the clarity and completeness of the specs presented.

> Nothing will ever turn your fuzzy intent into your clear best interest.

I've seen a lot of SWEs saying this, and while it's true to an extent, it misses a lot. Good engineers don't simply turn a given spec into code; indeed, there are somewhat deprecating terms for positions like that, such as 'code monkey'.

A good engineer does not require the spec they receive to be absolutely precise. They will recognize the intent and make good judgments about what the requester most probably wants. They will ask clarifying questions when important information is missing, or when a decision needs to be made and it isn't clear what the requester wants.

LLMs can't do this very well right now, but it doesn't seem like a stretch to say that they will be able to. Will they be able to turn half-baked, very underspecified requests into exactly what the requester is looking for at the press of a button? No. But I think they can get quite good at filling in the blanks, and often already are. Current LLMs still have a way to go before they can reliably recognize deficiencies in a prompt and ask for more information, but that seems within reach.
