> It still requires a technical person to use these things effectively.
I feel like few people think critically about how technical skill gets acquired in the age of LLMs. Statements like this ignore that the people who are most productive already have experience & technical expertise. It's almost as if there's a belief that technical people just grow on trees, or that every LLM response somehow imparts knowledge just by being read.
I can vibe code things that would otherwise take me a large time investment to learn and build. But I don't know how or why any of it works. If I get asked to review it to ensure it's accurate, it takes me so long that it would just be easier to actually learn the thing in the first place. It feels like those most adamant about being more productive in the age of AI/LLMs don't consider any of the side effects of its use.
A non-technical PM asked me (an early-career SWE) to develop an agentic pipeline / tool that could ingest 1000+ COBOL programs from a massive 30+ year old legacy system (many of which have multiple interrelated subroutines) and spit out a technical design document that can help modernize the system in the future.
- I have limited experience with architecture & design at this point in my career.
- I do not understand the business context of a system that old, or any of the decisions made over that time.
- I have no business stakeholders or people capable of validating the output.
- I am the sole developer being tasked with this initiative.
- My current organization has next to no engineering standards or best practices.
No one in this situation is interested in these problems except me. And my situation isn't unique: everyone high on AI is looking to cram LLMs & agents into everything without any real explanation of what problem it solves or how to measure the outcome.
I admire you for thinking about this kind of issue; I wish I could work with more individuals who do :(
This resonates a lot, and I think your example actually captures the core failure mode really well.
What your PM asked for isn’t an “agentic pipeline” problem - it’s an organizational knowledge and accountability problem. LLMs are being used as a substitute for missing context, missing ownership, and missing validation paths.
In a system like that (30+ years, COBOL, interdependent routines), the hardest parts are not parsing code — they are understanding why things exist, which constraints were intentional, and which tradeoffs are still valid. None of that lives in the code, and no model can infer it reliably without human anchors.
This is where I have seen LLMs work better as assistive tools rather than autonomous agents: helping summarize, cluster, or surface patterns — but not being expected to produce “the” design document, especially when there is no stakeholder capable of validating it.
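To make the assistive version concrete, here's a minimal sketch that clusters programs by lexical similarity so a human can pick where to dig in, with no LLM in the loop at all. The source directory, file extension, and cluster count are all assumptions for illustration, not a recommendation:

```python
# Sketch: cluster COBOL sources by lexical similarity so a human can
# explore related groups. Paths and cluster count are assumptions.
from pathlib import Path

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

SRC_DIR = Path("cobol_src")  # assumed location of the sources
N_CLUSTERS = 12              # arbitrary; tune to the codebase

paths = sorted(SRC_DIR.glob("*.cbl"))
texts = [p.read_text(errors="ignore") for p in paths]

# TF-IDF over COBOL-style identifiers (hyphens allowed) is crude,
# but it's deterministic, cheap, and easy to sanity-check.
vectors = TfidfVectorizer(token_pattern=r"[A-Za-z][A-Za-z0-9-]+").fit_transform(texts)
labels = KMeans(n_clusters=N_CLUSTERS, n_init="auto", random_state=0).fit_predict(vectors)

for cluster in range(N_CLUSTERS):
    members = [p.name for p, label in zip(paths, labels) if label == cluster]
    print(f"cluster {cluster}: {members}")
```

The point isn't that k-means is the right tool; it's that the output is a map for a human reviewer, not a deliverable that has to be trusted on its own.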
Without well-defined inputs, review, and ownership, the output might look impressive, but it's effectively unverifiable. That's a risky place to be, especially for early-career engineers being asked to carry responsibility without authority.
I don’t think the problem is that LLMs are not powerful enough — it is that they are often being dropped into systems where the surrounding structure (governance, validation, incentives) simply isn’t there.
You can ask AI to focus on the functional aspects and create a design-only document. It can do that in chunks. You don't need to know about COBOL best practices right now; that's an implementation detail. Is the plan to modernize the COBOL codebase or to rewrite it in a different language?
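If you do go the chunked route, here's a rough sketch of the shape (assuming the `openai` Python client; the model name, prompt, chunk size, and file layout are all placeholders):

```python
# Sketch: per-program functional summaries, to be merged and reviewed
# by a human. Assumes the `openai` Python client (>=1.0); the model
# name, prompt, and chunk size are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()       # reads OPENAI_API_KEY from the environment
CHUNK_CHARS = 12_000    # crude stand-in for a real token budget

PROMPT = (
    "Describe the functional behavior of this COBOL program: inputs, "
    "outputs, files touched, and apparent business rules. Do not guess "
    "at intent; flag anything ambiguous for human review."
)

def summarize(source: str) -> str:
    # Naive character chunking; a real pipeline would split on
    # divisions/sections and count tokens properly.
    chunks = [source[i:i + CHUNK_CHARS] for i in range(0, len(source), CHUNK_CHARS)]
    parts = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": chunk},
            ],
        )
        parts.append(resp.choices[0].message.content)
    return "\n\n".join(parts)

out_dir = Path("summaries")
out_dir.mkdir(exist_ok=True)
for path in sorted(Path("cobol_src").glob("*.cbl")):
    summary = summarize(path.read_text(errors="ignore"))
    (out_dir / f"{path.stem}.md").write_text(summary)
```

The merge step is deliberately left to a human: per-program summaries are reviewable in a way that one giant generated design document isn't.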
First off, what you shared is cool, thank you. Especially considering it captures problems I need to address (token limitations, context transfer, managing how agents interact & execute their respective tasks).
My challenge specifically is that there is no real plan. It feels like this constant push to use these tools without any real clarity or objective. I know a lot of the job is about solving business problems, but no one asking me to do this has any idea or defined acceptance criteria to say the outputs are correct.
I also understand this is an enterprise / company issue, not that the problem is impossible or the idea itself is bad. It's just a common theme I am seeing where this stuff fails in enterprises because few are actually thinking about how to apply it... as evidenced by the fact that I got more from your comment than I otherwise get attempting to collaborate in my own organization.