
> What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?

The difference is the hope of getting out of that situation. If you've inherited a messy and incoherent code base, you recognize that as a problem and work on fixing it. You can build an understanding of the code through first reading and then probably rewriting some of it. This over time improves your ability to reason about that code.

If you're constantly putting yourself back into that situation by relegating the reasoning about code to a coding agent, then you won't develop a mental model. You're constantly back at Day 1 of having to "own" someone else's code.


The key point is "relegating the reasoning". The real way to think about interfacing with LLMs is "abstraction engineering". You still should fully understand the reasoning behind the code. If you say "make a form that captures X, Y, Z and passes it to this API", you relegate how it accomplishes that goal and everything related to it. Then you look at the code and realize it doesn't handle validation (check the reasoning), so you have it add validation and toasts. But you are now working at a narrower level of abstraction, because the bigger goal of "make a user form" has been completed.

Where this gets exhausting is when you assume things you know are necessary are handled but don't verify them - maybe it lets you submit an email form with no email, or validates the password as an email field for some reason, etc. But as LLMs improve their assumptions, or as you manage context correctly, the scale tips towards this being a useful engineering tool, especially when what you are doing is a well-trodden path.
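To make the "check the reasoning" step concrete, here is a minimal sketch of the kind of validation you would go back and confirm the agent actually added - TypeScript, with hypothetical field names, a hypothetical API path, and a console stand-in for a toast library:

  // Hypothetical form shape; the real fields and API are assumptions.
  interface UserForm {
    email: string;
    password: string;
  }

  function validateUserForm(form: UserForm): string[] {
    const errors: string[] = [];

    // Require an email and check its basic shape; skipping this is
    // what lets an empty email through.
    if (!form.email.trim()) {
      errors.push("Email is required.");
    } else if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email)) {
      errors.push("Email is not a valid address.");
    }

    // Password rules belong here, not an email regex applied by mistake.
    if (form.password.length < 8) {
      errors.push("Password must be at least 8 characters.");
    }

    return errors;
  }

  // Only call the API once validation passes; surface errors otherwise.
  async function submitUserForm(form: UserForm): Promise<void> {
    const errors = validateUserForm(form);
    if (errors.length > 0) {
      errors.forEach((e) => console.warn("toast:", e)); // stand-in for a toast
      return;
    }
    await fetch("/api/users", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(form),
    });
  }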


I find this to be too rosy a story about using agentic coding to add to a codebase. In my experience, miss a small detail about the code and the agent can go out of control, creating a whole new series of errors you wouldn't otherwise have had to fix. And even if you don't miss a detail, the agent eventually forgets because of its limited context window.

This is why I’ve constrained my use of AI agents to mostly “read-only and explain” use cases, with very strict conditions for letting them write. In any case, from whatever productivity gains you supposedly “get” in the write scenarios, you should subtract the cost of fixing the output later and/or paying for a larger context window or better reasoning. It’s usually not worth the trouble to me when I have plenty of experience and knowledge to draw from and can write the code as it should be myself.


So there’s another force at work here that to me answers the question in a different way. Agents also massively decrease the difficulty of coming into someone else’s messy code base and being productive.

Want to make a quick change or fix? The agent will likely figure out a way to do it in minutes rather than the hours it would take me.

Want to get a good understanding of the architecture and code layout? Working with an agent for search and summary cuts my time down by an order of magnitude.

So while I agree there’s a lot more “what the heck is this ugly pile of if/else statements doing?” and “why are there three modules handling transforms?”, there is a corresponding drop in the cost of adding features and paying down tech debt. Finding the right balance in the agentic coding world requires a different mindset and a different set of practices to develop.


In my experience this approach is kicking the can down the road. Tech debt isn't paid down, it's being added to, and at some point in the future it will need to be collected.

When the agent can't kick the can any more, who is going to be held responsible? If it is going to be me, then I'd prefer to have spent the hours understanding the code.


> who is going to be held responsible?

This is actually a pretty huge question about AI in general

When AI is running autonomously, where is the accountability when it goes off the rails?

I'm against AI for a number of reasons, but this is one of the biggest. A computer cannot be held accountable, therefore a computer must never make executive decisions.


The accountability would lie with whoever promoted it. This isn't so much about accountability as it is about who is going to be responsible for doing the actual work when AI is just making a bigger mess.

The accountability will be with the engineer that owns that code. The senior or manager that was responsible for allowing it to be created by AI will have made sure they are well removed.

While an engineer is "it", they just have to cross their fingers and hope no job-ending skeletons are resurrected until they can tag some other poor sod.


So not really any different from how things work without AI.

> You're constantly back at Day 1 of having to "own" someone else's code.

If only there were some people in software engineering in this situation before AI… oh wait.

In the current times you’re either an agent manager or you’re in for a surprise.


> In the current times you’re either an agent manager or you’re in for a surprise.

This opinion seems to be popular, at least in this forum if not in general.

What I do not understand is this:

  In order to use LLMs to generate code, the engineer
  has to understand the problem well enough to
  formulate prompt(s) that produce usable output
  (code).  Assuming the engineer has that level of
  understanding, along with knowledge of the target
  programming language and libraries, how is using
  LLM code generation anything more than a typing saver?

The point is an engineering manager is using software engineers as typing savers, too. LLMs are, for now, still on an exponential curve of capability on some measures (e.g. task duration with 50% completion chance is doubling every ~7 months) and you absolutely must understand the paradigm shift that will be forced upon you in a few years or you'll have a bad time. Understanding non-critical code paths at all times will simply be pointless; you'll want to make sure test coverage is good and actually test the requirements, etc.
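As a sketch of what "actually test the requirements" can mean, here is a hypothetical Jest-style test pinned to behaviour rather than to code paths; the handleSignup name and response shape are assumptions:

  import { describe, it, expect } from "@jest/globals";

  // Hypothetical handler under test; name and signature are assumptions.
  import { handleSignup } from "./signup";

  describe("signup requirements", () => {
    it("rejects a request with no email", async () => {
      const res = await handleSignup({ email: "", password: "correct horse battery staple" });
      expect(res.status).toBe(400);
    });

    it("accepts a well-formed request", async () => {
      const res = await handleSignup({ email: "ada@example.com", password: "correct horse battery staple" });
      expect(res.status).toBe(201);
    });
  });

Tests like these stay valid while an agent rewrites the non-critical paths underneath them.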


