The best thing probably would be to try to figure out whether you'd hit that point in the near future, so you can start work on the rewrite or overhaul ahead of time instead of being forced to make the new code a rush job as well because everything's breaking down around you.
* Release early, release often, evolve fast or even pivot. What is good for performance and availability is often bad for simplicity and flexibility. Build with an intention to rebuild when the time comes.
* It's like throw-away prototypes, only in production. When your business grows, you may have to throw away some or all of your previous code base (as eBay did, twice). This does not mean that the previous solutions were bad: not at all, they were adequate for the previous step.
* Modularity (e.g. microservices) is good, but it may add complexity. A monolith may be a fine sacrificial architecture, to be eventually replaced with something better (and more complex and expensive to build).
- "performance is a feature" ... But any feature is something you choose at the expense of other features.
- Good modularity is a vital part of a healthy code base, and modularity is usually a big help when replacing a system. Indeed one of the best things to do with an early version of a system is to explore what the best modular structure should be so that you can build on that knowledge for the replacement.
- Design a system for ten times its current needs, with the implication that if the needs grow beyond that order of magnitude, it's often better to throw the system away and replace it from scratch.
- Microservices imply distribution and asynchrony, which are both complexity boosters. I've already run into a couple of projects that took the microservice path without really needing to — seriously slowing down their feature pipeline as a result. So a monolith is often a good sacrificial architecture, with microservices introduced later to gradually pull it apart.
Another example: a guy had a giant WordPress site with a bazillion plugins. It worked, but when something went wrong he couldn't figure out a damn thing, and he couldn't add anything more. He rewrote it with a CMS and got a 100x boost in how fast he could get things done.
Sometimes the old just needs to be replaced with the new.
Fred Brooks distinguishes between a program, a programming product, a programming system, and a "programming systems product" (which I believe is what Joel was thinking about when he talked about Netscape).
A product (more useful than a program):
- can be run, tested, and repaired by anyone
- is usable in many environments, on many sets of data
- must be thoroughly tested
Brooks estimates a 3x cost increase for this.
To be a component in a programming system (collection of interacting programs like an OS):
- input and output must conform in syntax and semantics to precisely defined interfaces
- must operate within resource budget
- must be tested with other components to check integration (very expensive, since the number of interactions grows exponentially with n).
Brooks estimates that this too costs 3x.
A combined programming systems product is therefore 9x more costly than a simple program.
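Taking Brooks' multipliers at face value, the four quadrants of his grid can be tabulated from a base estimate. A minimal sketch (the function name and the 2-person-month base figure are my own illustration, not from Brooks):

```python
# Brooks' rough cost multipliers from The Mythical Man-Month:
# program -> product costs ~3x, program -> system component costs ~3x,
# and a programming systems product combines both for ~9x.
def brooks_estimates(base_effort_months: float) -> dict:
    """Return effort estimates for each of Brooks' four quadrants."""
    product_factor = 3.0  # generalization, testing, documentation
    system_factor = 3.0   # interfaces, resource budgets, integration testing
    return {
        "program": base_effort_months,
        "programming product": base_effort_months * product_factor,
        "programming system": base_effort_months * system_factor,
        "programming systems product":
            base_effort_months * product_factor * system_factor,
    }

# A quick program estimated at 2 person-months becomes 18 as a
# full programming systems product.
for name, months in brooks_estimates(2.0).items():
    print(f"{name}: {months:.0f} person-months")
```

The point of the exercise isn't the exact factors, but that the multipliers compound: a team that estimates only the "program" quadrant underestimates the shipped system by nearly an order of magnitude.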
"Once you have gained traction, never ever allow yourself to be caught without ship-able product"
Also, old code indeed rusts. Frameworks, APIs, binaries, and libraries change, sometimes in very subtle ways, so a perfect piece of code that was written for PHP 3.2, Apache 1.1, and MySQL 3 and left untouched for a decade just refuses to work when PHP ticks from 5.3 to 5.4.
"The IRS says the costs of developing computer software so closely resemble research and experimental expenses that they warrant similar accounting treatment. As a result, a taxpayer may use any of the following three methods for costs paid or incurred in developing software for a particular project, either for the taxpayer's own use, or to be held by the taxpayer for sale or lease to others:
The costs may be consistently treated as current expenses and deducted in full.
The costs may be consistently treated as capital expenses that are amortized ratably over 60 months from the date of completion of the software development.
The costs may be consistently treated as capital expenses and amortized ratably over 36 months from the date the software is placed in service.
Under this method, the cost may also be eligible for a bonus first-year depreciation allowance."
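As a rough illustration of the two capitalization options above (my own sketch, not from the IRS text; the cost figure is hypothetical), ratable amortization just spreads the capitalized cost evenly over the recovery period, so the 36-month option front-loads deductions relative to the 60-month one:

```python
def ratable_amortization(cost: float, months: int) -> list[float]:
    """Evenly spread a capitalized software cost over `months` periods."""
    return [cost / months] * months

cost = 120_000.0  # hypothetical development cost

# Option 2: 60 months from completion of development.
over_60 = ratable_amortization(cost, 60)
# Option 3: 36 months from the date placed in service.
over_36 = ratable_amortization(cost, 36)

print(f"60-month: {over_60[0]:,.2f}/month")  # 2,000.00/month
print(f"36-month: {over_36[0]:,.2f}/month")  # 3,333.33/month
```

Either way the full cost is eventually deducted; the choice mostly shifts when the deductions land, which is why the first option (expensing in full immediately) is attractive for fast-moving projects.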
ERPs merit their own chapter.
Just like the sexy new iPhone 6 looks like the thing, in a few years it will look old and dated. Everything looks good in its time.
The same goes for software; however, simple fundamentals do remain and grow stronger. Hopefully iteration simplifies things over time, but as a whole a system can look complex because it has to meet the needs of the current market. Everything we do now won't be used in the future, it does expire, but it is useful as a step: there is no future version without this version.
For some things you need compatibility, and that means that writing a replacement must be done extremely carefully and will require significant effort.
As a system ages, it becomes more and more complicated because it is serving more and more purposes. Accordingly, making upgrades while preserving backward compatibility becomes harder and harder. I remember hearing about an old AT&T switch with software so intricate that, in the end, it required one meeting per line of code changed. And so progress grinds to a halt.
At some point, you either break backward compatibility or face stagnation. Careful modularization and upgrades can push back the boundary, but not forever. Even good practices, good people, and a good organization can only tame so much complexity.
Your tired, has-been COBOL is so infrequently used that on the TIOBE index it outranks D, Erlang, Common Lisp, Haskell, Scala, Go, Lua, ...
In light of that, I'd be careful with predictions like "few or no new projects will be done in C++". It might be a long while before we reach that point.
Is that for new projects, though?
You won't get stuff to post on HN, but the bank accountant will be very happy.
Or that since COBOL 2002 there is support for OOP, and the standard just got a new revision this year.
Most enterprise-level corporations seldom write entirely new applications; they mostly write extensions to existing systems.
So the choice of language depends a lot on the pool of available skills. If the employees are COBOL guys, the new modules will be COBOL as well.
COBOL had a purpose, which was making programs more readable / understandable. We now have much better ways to do that, and it's a primary area of research.
C++ has a purpose, which is to maximize abstraction without imposing any unnecessary runtime cost. But we're not focused on doing better at this, since most applications don't really care about performance. C++ may become somewhat of a niche language only used to build the invisible low-level stuff, but it will never go the way of COBOL until something better at its purpose comes along (I've heard good things about Rust?).
> Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as a soap bubble?
which seems like an echo.