It's interesting to see that a lot of senior folks are against this, arguing that it falls apart if you're working on a larger software project. Two things can be said against that:
1. Context sizes are going to grow. Gemini, with its 2M-token context, is already doing amazing feats.
2. We all agree that we should break bigger problems into smaller ones. So if you can isolate a problem into something that fits in an LLM's context, no matter how large the surrounding software system is, you can make a lot of quick progress by leveraging LLMs for that isolated piece of software (see the sketch after this list).
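To make point 2 concrete, here's a minimal sketch of what carving out an LLM-sized slice of a larger codebase could look like. Everything in it (the names, the token budget, the four-characters-per-token heuristic) is my own assumption for illustration, not an established API:

```python
# Hypothetical sketch: assemble only the files relevant to one isolated
# sub-problem, staying under a model's context budget. All names and the
# 4-chars-per-token heuristic are assumptions, not a real library.
from pathlib import Path

CONTEXT_BUDGET_TOKENS = 100_000  # leave headroom below the model's limit

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English and code.
    return len(text) // 4

def build_isolated_context(relevant_files: list[Path]) -> str:
    # Concatenate the relevant files, stopping before the budget is blown.
    parts, used = [], 0
    for path in relevant_files:
        text = path.read_text()
        cost = rough_token_count(text)
        if used + cost > CONTEXT_BUDGET_TOKENS:
            break  # the slice no longer fits; narrow the problem further
        parts.append(f"# file: {path}\n{text}")
        used += cost
    return "\n\n".join(parts)
```

The point isn't the helper itself; it's that the isolation step (deciding which files are "relevant") is the part that still takes engineering judgment.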
Compartmentalizing the code only matters if it works in the first place.
The main argument from senior folks is probably that vibe coding won't cut it for actual sizeable problems. There is complexity that can't be abstracted away just because we want it to be.
I think we're overestimating how much humans can keep in context. Throughout my career I've seen many instances where folks got it completely wrong and missed the context.
You're right, but it's also why vibe coding doesn't work IMHO.
We had the same situation with TDD: can we hand out succinct specs and ignore what happens in the code as long as the specs are met? For anything beyond Hello World, the answer was no, absolutely not.
It still mattered that the logic was somewhat reasonable and that you weren't building a Rube Goldberg machine that gave the right answers to the given tests 95% of the time, especially as the tests didn't cover all the valid inputs/outputs, nor all the possible error cases in the problem space.
It's precisely because there's so much happening that we need simple blocks we can trust, not black boxes that are beyond our understanding from the start.
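To illustrate the Rube Goldberg point with a deliberately silly toy (my own made-up example, not something from the thread): an implementation can satisfy every test in the spec while embodying none of the intended logic.

```python
# Toy example: a lookup-table "add" that passes the given tests
# without implementing addition at all.

def add(a: int, b: int) -> int:
    # Rube Goldberg logic: memorize exactly the cases the tests check.
    memorized = {(1, 1): 2, (2, 2): 4, (10, 5): 15}
    return memorized.get((a, b), 0)

# The spec, expressed as tests: all of these pass...
assert add(1, 1) == 2
assert add(2, 2) == 4
assert add(10, 5) == 15

# ...but the implementation is wrong everywhere the tests don't look:
print(add(3, 4))  # prints 0, not 7
```

That's the failure mode: green tests, untrustworthy block.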
I honestly think it comes down to preferred work style in some cases. Once a codebase gets decently complex, it's true that you have to supervise the AI extremely closely to get a good result. But at least for me, I tend to enjoy doing that more than writing the code myself. I'm a fast typist of prose, and I like to plan my coding ahead of time anyway, so whether the AI does the typing or I do is kind of immaterial.
You don't understand software engineering. The goal isn't to produce a bunch of code. The goals are actually:
1. Build a computable model of some facet of reality in order to achieve certain goals.
2. Realize a system that manifests the model while satisfying a set of further constraints, such as resource limits and performance targets.
3. Ensure a community of system owners comprehends the key decisions made in the system and model design, the degree to which certain constraints can be relaxed or tightened, and how the system can evolve such that its behavior remains predictable over time.
Basically none of that, in my view, is supported by LLM-driven "vibe" coding. Sure, your hobby project might be fine to treat like an art project, but, oh, I don't know, how about software for train communications, or aircraft guidance systems? Do you want to vibe code your way through those? Do you want a community of engineers who only dimly understand the invariants satisfied in individual components and across the system as a whole?
LLM fanatics are totally ignorant of the actual process of software development. Maybe one day we'll find a way to use them that produces predictable, well-specified, and safe software, but we are not there yet.
Honestly, after the first line I decided to stop reading your comment. But to answer that one line: I probably do. You've probably used software that I've worked on.