Really? Yes, it is very common.

The problem with this approach is that it does not scale to large systems. If you don't spend much time thinking in the abstract about how it will work and what might go wrong, then by the time you have written enough code to find out, you may have gone a long way down the wrong path, and not all architectural-level mistakes and oversights can be patched over.

No-one does this perfectly -- even people using formal methods will overlook things -- but on a big project, if you don't put much effort into thinking ahead about how it should work and into identifying problems before you have coded them, you are likely to end up where many projects in fact find themselves: with something that is nominally close to completion but very far from working. Those that are not canceled end up looking like legacy code even when brand new.




If you want low-quality solutions that kinda work on the first try, then sure, go for it. Your approach will inevitably lead to technical debt that can never be paid off.

Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.


> Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.

To come up with the right smaller pieces, you have to think about how they will work together to achieve the big picture. That means interfaces and their contracts, and if you get them wrong, you end up with pieces that don't fit together, and do not, collectively, get the job done.

Big problems cannot be effectively solved in a bottom-up manner, and perhaps the most pervasive fallacy in software development today is the notion that the principle of modularity means you only have to think about code in small pieces.
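
To make the "contracts" point concrete, here is a minimal Python sketch (all names are hypothetical, not from any real project): each module is individually reasonable, but the two authors read the same get() contract differently, so the pieces don't fit when composed.

    # Toy sketch (hypothetical names): two modules built against different
    # readings of the same interface contract.
    class Store:
        """Agreed interface: get(key) -> value."""
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            self._data[key] = value

        def get(self, key):
            # This author read the contract as "raise KeyError on a missing key".
            return self._data[key]

    def lookup_with_default(store, key):
        # This author read the same contract as "return None on a missing key",
        # so there is no error handling here.
        value = store.get(key)
        return value if value is not None else "default"

    store = Store()
    print(lookup_with_default(store, "missing"))  # KeyError: the pieces don't fit

Each piece passes its own author's mental test; the mismatch only shows up when they are composed, which is exactly the kind of problem that is cheaper to catch by thinking about the contract up front than by discovering it after both sides are built.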


That's my point: you CANNOT possibly come up with the right smaller pieces until you have a solution that you have verified works.

What do you think other engineering disciplines do? They create a proof of concept, verify it works, and then create the real thing. That is why "real" engineering companies have hundreds of tools to test stuff.

I really don't understand why people want software to be different. If you're writing some shitty throwaway web app, then sure, go ahead and don't prototype anything; just hire a "software architect" who designs something and use that.

But if you want something that actually works, that approach is completely useless. Prototype, verify, start over if necessary. That is the way to write quality software.


> That's my point: you CANNOT possibly come up with the right smaller pieces until you have a solution that you have verified works.

That's beside the point. The point is that coding is not the only way to verify a design, especially at the architectural level.

> I really don't understand why people want software to be different.

It seems to be you who wants software to be different. In other engineering disciplines, making prototypes is expensive and time-consuming, so engineers try to look ahead and anticipate problems. Prototyping in software is cheaper, but not so cheap (especially at the architectural level) that thinking ahead isn't beneficial.


In my opinion, the big difference and problem is that space and resources are virtually unlimited, and a "product" can keep changing indefinitely too. And if something fails, in most cases it fails very differently than in other engineering disciplines. I agree it would be great to write more prototypes and all that, but hey, capitalism: good_enough/shitty makes money, so that's where we are. (Also, CS is rather new; we are still figuring out a lot of things, still getting deeper into the mess.)



