Hacker News

> What's the point of spending the time and money on beautiful engineering, if it's going to be scrapped tomorrow anyway (or even before it's finished)?

Good luck maintaining that code. Don't forget about the team: a messy codebase and/or poor requirements will eventually break a team's morale, until the day new devs come along and demand a refactor.

I'm starting to love the idea of a waterfall model for software. People can still iterate on the problem with wireframes, even high-fidelity ones, plus speaking with customers, market research, etc.

A lot of value, time, and money is saved when you have a decent level of accuracy in the specification.




It really depends on what you're building. There's a reason civil engineering isn't "Agile" - this also applies to major IT systems. A tiny difference in requirements can have massive impacts.

Honestly, in these cases, there's nothing better than waterfall, partly to save development time, but mostly for contractual protection: if a small change can cost millions more (and from experience, it can), you need to know who is responsible for paying those millions...


> There's a reason civil engineering isn't "Agile"

There are plenty of reasons why engineering isn't "agile" and why agile is only usable in software development. One of the main ones is that software development projects manage a single resource: man-hours. Building the wrong or inadequate solution has its cost, but rebuilding something from scratch does not require additional resources to be allocated to the project: just keep the same team working on it and results will pop up.

Engineering projects are very different from software development projects. Materials and components are the driving cost of a project, and there is plenty that must be done right from the very start. It's unthinkable to scrap a machine or building or tunnel midway through, and it's unacceptable for any disaster to happen at all. Engineering either gets it right at every single stage of a project or there are serious consequences to deal with, which in some cases might even mean criminal charges. If for some reason a prototype crashes during development, the project might be forced to shut down.


I 100% agree with you. Software is "soft". Making changes to a code base is very cheap. Iteration just makes sense in software even for critical systems. If you build a suboptimal bridge, you have to live with it. If you write suboptimal software, you can test the actual product and fix it before you even ship. You can't really test a bridge outside of simulations. GP's comparison is apples to oranges in a lot of ways.


> You can't really test a bridge outside of simulations.

Just to pick a few nits: bridges are indeed tested during and after construction. It used to be standard practice to inaugurate bridges with test runs at near-limit loads, consisting of getting a fleet of military vehicles or water tankers to cross the bridge while surveyors monitored the bridge's response.

Nowadays non-destructive testing techniques are favoured for a number of reasons, including the fact that sensor rigs can also be used throughout the structure's lifetime to help determine its fatigue life.


>you need to know who is responsible for paying those millions

In the macro view from 10000 meters high, it's always the client that pays. Even if he gets away without paying once, he'll pay it back (and more!) in stuffing on other projects. Because without that, the service provider goes under.

Of course, occasionally, inexperienced service providers that do not stuff their projects do go under. This puts evolutionary pressure on the ecosystem so that the surviving service providers are selected over many such trials to know to stuff projects and invoices.


stuff -> pad


Software that is used, changes, because successful software influences the world around it, which in turn changes its requirements. So typically, even if a team gets it right the first time (which by itself requires enormous effort), the once perfectly specified requirements will have changed shortly after release.

Edit: sibling is right as well, it depends on what you're building. Sometimes there is nothing better than waterfall.


> Software that is used, changes,

Yes, but... A good model can support larger changes than a bad model. For example, a well-designed relational model can support iterative change better than a slapped-together system using CSV. So does a system that supports a consistent mental model for the end user.
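A toy sketch of that relational-vs-CSV point, using Python's built-in sqlite3 (all table and column names here are hypothetical, just for illustration): a normalized schema can absorb a new requirement by adding a table, while a flat CSV layout would force rewriting every row to fit new columns.

```python
import sqlite3

# Start with a small normalized schema: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE "order" (id INTEGER PRIMARY KEY,
                      customer_id INTEGER REFERENCES customer(id));
""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.execute('INSERT INTO "order" VALUES (10, 1)')

# New requirement arrives: an order can now have MULTIPLE shipping
# addresses. In the relational model this is a new table; in a one-row-
# per-order CSV it would mean restructuring the whole file.
conn.execute("""CREATE TABLE shipping_address (
    order_id INTEGER REFERENCES "order"(id),
    line TEXT)""")
conn.execute("INSERT INTO shipping_address VALUES (10, '12 Main St')")
conn.execute("INSERT INTO shipping_address VALUES (10, '3 Side Rd')")

# Existing tables and queries are untouched; the new data joins in on demand.
rows = conn.execute("""
    SELECT c.name, a.line
    FROM customer c
    JOIN "order" o ON o.customer_id = c.id
    JOIN shipping_address a ON a.order_id = o.id
""").fetchall()
print(rows)  # one row per shipping address
```

The point isn't SQL versus files per se; it's that the normalized model left room for this class of change, so iteration stayed cheap.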

This is the fundamental skill: abstraction. Finding the right abstractions, sustaining simplicity while opening up the ability to change, is extremely difficult and hard-fought.

Unfortunately, due to a constant influx of new developers, its value is underappreciated. Requirements for this skill are (non-exhaustive): excellent communicative abilities, combined with a predilection for logical reasoning, good technical understanding, some psychological understanding, and perseverance for when a model proves unsuccessful.

From my experience, the best systems we have designed and implemented started with long sessions at the whiteboard, often followed by some tech 'spikes' [1].

[1] https://en.wikipedia.org/wiki/Spike_(software_development)


Survivorship bias/fooled by randomness. If a system can be abstracted to a model that can account for the bizarre stuff that the business throws at it, then the domain you are operating in is either Simple or slightly Complicated (as defined by the Cynefin framework). The real problem lies in Complex/Chaotic.


Beautifully said. Anyone can understand this when it's put this way.


Users want plausible deniability. And some need it. With iteration-based development you have the potential to avoid the "yeah, it meets the spec, but it is not what we wanted" problem.


This was the exact attitude I held for the first 5 years of my career until I realized that:

* There's a ridiculous amount of code that either gets tossed or goes almost totally unused, and time spent on beautiful engineering there is 100% waste. This is, I think, why languages with incredibly strict type systems that try to 'force' good design up front never end up being widely deployed. They make prototyping way too expensive.

* Only juniors demand to rewrite or refactor anything. Being a really good developer means being able to figure out creative ways to fix the most enormous technical debt incrementally and knowing that you shouldn't ever have to 'demand' to do that because you can just do it.

* Pretty much every project I've ever seen that does anything useful starts out with nasty code because the programmers who worry about making it beautiful before it's useful never make it useful to begin with.

* People who dream up architecture before writing the code dream up the worst architectures. The best architectures, on the other hand, happen as a result of a lot of time spent tediously refactoring code.



