> In my experience the trouble with LLMs at the professional level is that they're almost as work to prompt to get the right output as it would be to simply write the code.

Yeah. It's often said that reading (and understanding) code is harder than writing new code, but with LLMs you always have to read code written by someone else (something else).

There is also the adage that you should never write the most clever code you can, because understanding it later might prove too hard. So it's probably for the best that LLM code often isn't too clever, or else novices unable to write the solution from scratch will also be unable to understand it and assess whether it actually works.

Another adage is that "programs must be written for people to read, and only incidentally for machines to execute". Code written by machines goes directly against this.

I still use ChatGPT for small, self-contained functions (e.g. the intersection of a line and a triangle), but I clearly mark the inside of the function as ChatGPT-generated and record the prompt that produced it.
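
For illustration, here's roughly what that looks like: a minimal Python sketch of a line-triangle intersection (Möller–Trumbore, without the t >= 0 check since a line is infinite), with the provenance comment on top. The prompt text, function name, and choice of algorithm here are my own assumptions, not output from an actual ChatGPT session.

    import numpy as np

    # --- ChatGPT-generated (hypothetical example) ---
    # Prompt: "Write a Python function that returns the intersection
    # point of an infinite 3D line and a triangle, or None if they
    # don't intersect."
    def line_triangle_intersection(p, d, v0, v1, v2, eps=1e-9):
        """Intersect the line p + t*d with triangle (v0, v1, v2).

        Returns the intersection point as a numpy array, or None if the
        line is parallel to the triangle's plane or misses the triangle.
        """
        e1 = v1 - v0
        e2 = v2 - v0
        h = np.cross(d, e2)
        a = np.dot(e1, h)
        if abs(a) < eps:            # line parallel to triangle plane
            return None
        f = 1.0 / a
        s = p - v0
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:      # outside along edge e1
            return None
        q = np.cross(s, e1)
        v = f * np.dot(d, q)
        if v < 0.0 or u + v > 1.0:  # outside along edge e2
            return None
        t = f * np.dot(e2, q)       # any t is valid for an infinite line
        return p + t * d
    # --- end ChatGPT-generated ---

    # Example: the line through the origin along +z hits this triangle
    # (which lies in the plane z = 1) at (0, 0, 1):
    #   line_triangle_intersection(
    #       np.zeros(3), np.array([0., 0., 1.]),
    #       np.array([-1., -1., 1.]), np.array([1., -1., 1.]),
    #       np.array([0., 1., 1.]))        # -> array([0., 0., 1.])

The point of the marker comments is that a later reader knows exactly which lines no human wrote, and the recorded prompt makes it cheap to regenerate or cross-check the function if a bug turns up.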
