Hacker News

Quick note that this has not been my experience. LLMs have been very useful with codebases as far from CRUD web apps as you can get.




This is a consistent pattern.

Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.

Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?


I would imagine it has a lot to do with the programming language and other technologies in the project. The LLMs have tons of training data on JS and React. They probably have relatively little on Erlang.

Mass of training material doesn't equal quality, though. The amount of poor React code out there is not to be underestimated. I feel like LLM-generated Gleam code was way cleaner (after some agentic loops due to syntactic misunderstandings) than TS/React, where the model is so biased toward producing overly verbose slop.

Even if you're using JS/React, the level of sophistication of the UI seems to matter a lot.

"Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.

But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.


This has been my experience, effectively.

Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposes a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't exactly a showcase of best UX practices.
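The glue code in that kind of tool is easy to sketch. A minimal version, assuming a made-up line-based protocol (the `SET` command, the framing, and the device address are all hypothetical stand-ins for whatever the real proprietary protocol uses):

```python
import socket

# Placeholder address; the real device and port would differ.
DEVICE_ADDR = ("192.168.1.50", 9100)

def build_command(channel: int, on: bool) -> bytes:
    """Frame a toggle command in a hypothetical ASCII line protocol."""
    state = "ON" if on else "OFF"
    return f"SET {channel} {state}\n".encode("ascii")

def send_command(cmd: bytes, addr=DEVICE_ADDR, timeout=2.0) -> bytes:
    """Open a TCP connection, send one command, return the device's reply."""
    with socket.create_connection(addr, timeout=timeout) as sock:
        sock.sendall(cmd)
        return sock.recv(1024)
```

Wrap `send_command` in a few toggle callbacks in whatever UI toolkit is handy and you have exactly the class of tool described: good enough, even if nobody would call it polished.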

Other times, I very much care for things to be done in a very specific way. Sometimes due to regulatory constraints, other times because of visual or code consistency, or for other reasons. In those cases, getting the AI to produce exactly what I need feels like an exercise in herding incredibly stubborn cats. It will get done faster (and better) if I do it myself.


It's like when your frat house has a filing cabinet full of past years' essays.

Protestant Reformation? Done, 7 years ago, different professor. Your brothers are pleased to liberate you for Saturday's house party.

Barter Economy in Soviet Breakaway Republics? Sorry, bro. But we have a Red Square McDonald's feasibility study; you can change the names?


There was actually a good article about this the other day that makes sense to me; it comes down to function vs. form, kinda: https://www.seangoedecke.com/pure-and-impure-engineering/


