Sacrificial Architecture (martinfowler.com)
120 points by resca79 on Oct 20, 2014 | 32 comments



The sentence that I found the most interesting is this one: "The right architecture to support 1996-ebay isn't going to be the right architecture for 2006-ebay. The 1996 one won't handle 2006's load but the 2006 version is too complex to build, maintain, and evolve for the needs of 1996." This is something to keep in mind when you are bashing the old codebase and desperately want to rewrite it.


I think it's crucial here to identify the point where a particular architecture is breaking down and no longer serves the needs, so you can figure out when you should rewrite. Oftentimes the bashing of an old codebase occurs not while it's still adequate, but rather when it's still in use long after its expiration date. What starts as a quick hack can survive remarkably long, but sooner or later technical debt makes things so complicated that thinking about a rewrite is probably not the worst thing to do ... if you have the time and money to do so (but if you don't, you'll just sink more and more time and money into an inadequate solution that breaks down catastrophically sooner or later).

The best thing probably would be to try to figure out whether you'll hit that point in the near future, so you can start work on the rewrite or overhaul ahead of time, instead of being forced to make the new code a rush job as well because everything's breaking down around you.


tl;dr:

* Release early, release often, evolve fast or even pivot. What is good for performance and availability is often bad for simplicity and flexibility. Build with an intention to rebuild when the time comes.

* It's like throw-away prototypes, only in production. When your business grows, you may have to throw away some or all of your previous code base (as eBay did, twice). This does not mean that the previous solutions were bad: not at all, they were adequate for the previous step.

* Modularity (e.g. microservices) is good, but it may add complexity. A monolith may be a fine sacrificial architecture, to be eventually replaced with something better (and more complex and expensive to build).


The last point is especially good, and from my experience I can say: leave the most complicated part of the system as a monolith until it's functional, and do detailed design of the easy parts. You can never really predict which dependencies will arise within the complex part.


I believe the "e.g." in the last point is especially important: Even if you don't use microservices, you can (and should!) still make your architecture modular.


Main points that I keep repeating to our team:

- "performance is a feature" ... But any feature is something you have to choose versus other features.

- Good modularity is a vital part of a healthy code base, and modularity is usually a big help when replacing a system. Indeed one of the best things to do with an early version of a system is to explore what the best modular structure should be so that you can build on that knowledge for the replacement.

- Design a system for ten times its current needs, with the implication that if the needs exceed an order of magnitude then it's often better to throw away and replace from scratch

- Microservices imply distribution and asynchrony, which are both complexity boosters. I've already run into a couple of projects that took the microservice path without really needing to — seriously slowing down their feature pipeline as a result. So a monolith is often a good sacrificial architecture, with microservices introduced later to gradually pull it apart.
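
To make the modularity point above concrete, here is a minimal sketch in Python. The names (OrderStore and friends) are made up for illustration and come from neither the article nor this thread; the idea is just that the sacrificial version hides its crudest piece behind a small interface, so a later replacement only has to re-implement that interface.

    # Minimal sketch, assuming a hypothetical order-handling module.
    from abc import ABC, abstractmethod

    class OrderStore(ABC):
        """The seam: callers only ever depend on this interface."""

        @abstractmethod
        def save(self, order_id: str, payload: dict) -> None: ...

        @abstractmethod
        def load(self, order_id: str) -> dict: ...

    class InMemoryOrderStore(OrderStore):
        """Good enough for the sacrificial version: a dict behind the seam."""

        def __init__(self) -> None:
            self._orders: dict[str, dict] = {}

        def save(self, order_id: str, payload: dict) -> None:
            self._orders[order_id] = payload

        def load(self, order_id: str) -> dict:
            return self._orders[order_id]

    def checkout(store: OrderStore, order_id: str, payload: dict) -> None:
        # Business logic talks to the interface, so swapping in a database-
        # or service-backed store later doesn't require touching this code.
        store.save(order_id, payload)

    checkout(InMemoryOrderStore(), "order-1", {"total": 42})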


"Microservices imply distribution and asynchrony, which are both complexity boosters."

In many cases you are right. While starting with a microservice architecture may be premature optimization, microservices are a pattern that fits well when the business logic will grow in complexity and in the number of entities/actors. At the same time, microservices give you a chance to scale a single service without touching any other service.


On the opposite side, there is http://www.joelonsoftware.com/articles/fog0000000069.html "the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch."


Rewriting from scratch when the old code is completely thrown away is a very expensive process. Joel is advocating a safer approach where you take the long route and refactor the system gradually. This is actually pretty close to what Martin is saying too where you gradually transform your legacy system into something new by refactoring the old architecture and, if/when necessary, splitting bits from the old monolith system into microservices (for example).
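
A rough sketch of what that gradual splitting can look like in practice (sometimes called the strangler fig approach). This is just an illustration with made-up service names and URLs, not anything from Joel's or Martin's articles:

    # Sketch of a strangler-style router in front of a legacy monolith.
    LEGACY_MONOLITH = "http://legacy.internal:8080"

    # Each entry represents one slice of functionality already
    # extracted from the monolith into its own service.
    EXTRACTED_PREFIXES = {
        "/search": "http://search-service.internal:9001",
        "/payments": "http://payments-service.internal:9002",
    }

    def upstream_for(path: str) -> str:
        """Pick the backend that should handle a given request path."""
        for prefix, service_url in EXTRACTED_PREFIXES.items():
            if path.startswith(prefix):
                return service_url
        return LEGACY_MONOLITH

    assert upstream_for("/search?q=ebay").endswith(":9001")
    assert upstream_for("/account/settings") == LEGACY_MONOLITH

Each new entry in the table is a small, reversible step, so you're never caught without a shippable product while the old monolith shrinks.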


I've rewritten a couple of systems. I think sometimes a refactor is a lot more expensive. E.g. the old system was written in ActionScript/Flash, the old devs had left, and there were lots of little bugs. Best to just rewrite it in HTML/JS so it works on mobile and you have full control.

Another example: this dude had a giant WordPress site with a bazillion plugins. It worked, but when something went wrong he couldn't figure out jack shit and couldn't add more. Rewrote it with a CMS and he got a 100x boost in how fast he could get things done.

Sometimes the old just needs to be replaced with the new.


Are you sure you rewrote a "system"? Or a program?

Fred Brooks makes a distinction between a program, a product, a system, and a "programming system product" (which is what I believe Joel was thinking about when he was talking about Netscape).

"A product (more useful than a program):

can be run, tested, repaired by anyone

- usable in many environments on many sets of data.

- must be tested

- documentation

Brooks estimates a 3x cost increase for this.

To be a component in a programming system (collection of interacting programs like an OS):

- input and output must conform in syntax, semantics to defined interfaces

- must operate within resource budget

- must be tested with other components to check integration (very expensive since interactions grows exponentially in n).

Brooks estimates that this too costs 3x.

A combined programming system product is 9x more costly than a program.

http://www.cs.usfca.edu/~parrt/course/601/lectures/man.month...

http://books.cat-v.org/computer-science/mythical-man-month/t...


Greatly misunderstood essay. What Joel says is:

"Once you have gained traction, never ever allow yourself to be caught without ship-able product"

Also, old code indeed rusts. Frameworks, APIs, binaries, and libraries change, sometimes in very subtle ways, so a perfect piece of code that was written for PHP 3.2, Apache 1.1, and MySQL 3 and left untouched for a decade just refuses to work when PHP ticks from 5.3 to 5.4.


The article made me wonder how code bases can be amortized by bean counters. What are the various accounting approaches?


It's an interesting question. First of all, it varies by country, and I guess even by state (or whatever equivalent your country has).

I found the following links:

https://www.cspcpa.com/2012/07/17/overview-of-tax-rules-for-...

http://www.nysscpa.org/cpajournal/2002/0402/features/f044602...

Edit: Stanford link

http://web.stanford.edu/group/fms/fingate/staff/capitalequip...

"The IRS says the costs of developing computer so closely resembles research and experimental expenses that it warrants similar accounting treatment. As a result, a taxpayer may use any of the following three methods for costs paid or incurred in developing software for a particular project, either for the taxpayer’s own use, or to be held by the taxpayer for sale or lease to others:

The costs may be consistently treated as current expenses and deducted in full.

The costs may be consistently treated as capital expenses that are amortized ratably over 60 months from the date of completion of the software development.

The costs may be consistently treated as capital expenses and amortized ratably over 36 months from the date the software is placed in service.

Under this method, the cost may also be eligible for a bonus first-year depreciation allowance."

ERPs merit their own chapter.
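
As a back-of-the-envelope illustration of the two ratable schedules quoted above, assume a hypothetical $600,000 development cost (simplified, and obviously not tax advice):

    # Hypothetical $600,000 in capitalized software development costs.
    cost = 600_000

    monthly_60 = cost / 60  # amortized ratably over 60 months from completion
    monthly_36 = cost / 36  # amortized ratably over 36 months from placed-in-service

    print(f"60-month schedule: ${monthly_60:,.2f} per month")  # $10,000.00
    print(f"36-month schedule: ${monthly_36:,.2f} per month")  # $16,666.67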


It is all iterative and relative to the current time: hardware, software, network speeds, access, mobile, etc.

Just like the sexy new iPhone 6 looks like the thing right now, in a few years it will look old and dated. Everything looks good in its time.

The same goes for software; the simple fundamentals, however, do remain and grow stronger. Hopefully iteration simplifies things over time, but as a whole a system can look complex because it has to meet the needs of the current market. Everything we do now won't be used in the future, it does expire, but it is useful as a step: there is no future version without this version.


I think this can apply to some things and not others. I mean, imagine replacing GCC. Obviously it'd be better to replace it; I've heard it's one of the worst codebases to contribute to. Or look at OpenSSL, even.

For some things you need compatibility, and that means that writing a replacement must be done extremely carefully and will require significant effort.


GCC will probably never be replaced. The current C++ standard is so monstrously complicated that implementing a full compiler would be an enormous task. But it doesn't matter, because eventually few or no new projects will be done in C++. Consider the fate of COBOL, which was once a shiny new language, a real step up from coding business applications in assembler. It's now a tired old has-been, used only for patching up legacy systems.

As a system ages, it becomes more and more complicated because it is serving more and more purposes. Accordingly, making upgrades while preserving backward compatibility becomes harder and harder. I remember hearing about an old AT&T switch with software so intricate that in the end it required one meeting for every line of code changed. And so progress grinds to a halt.

At some point, you either break backward compatibility or face stagnation. Careful modularization and upgrades can push back the boundary, but not forever. Even good practices, good people, and a good organization can only tame so much complexity.


LLVM would be an enormous task to write, indeed. Thankfully, it's done already :)

Your tired has-been COBOL is so infrequently used that on Tiobe, it outranks D, Erlang, Common Lisp, Haskell, Scala, Go, Lua, ...

In light of that, I'd be careful with predictions like "few or no new projects will be done in C++". It might be a long while before we reach that point.


"Your tired has-been COBOL is so infrequently used that on Tiobe, it outranks D, Erlang, Common Lisp, Haskell, Scala, Go, Lua, ..."

Is that for new projects, though?


There are guys earning enough for early retirement thanks to specializing in mainframes, COBOL and similar systems.

You won't get stuff to post on HN, but the bank accountant will be very happy.


As far as I'm aware, that's all maintenance and extension of existing projects. Is COBOL being used for new projects anywhere? That was my question, and the only thing relevant to the original claim.


You didn't answer his question, though. His question was whether COBOL was being used for new development. Is anyone, when writing a new program from scratch, doing so in COBOL? I hope not.


I guess you aren't aware of COBOL for Java and .NET.

http://www.gtsoftware.com/netcobol-2/

http://www.fujitsu.com/global/products/software/developer-to...

http://www.microfocus.com/products/micro-focus-developer/vis...

Or that since COBOL 2002 there is support for OOP, and the standard just got a new revision this year.

Most enterprise-level corporations seldom write entirely new applications; rather, they write extensions to existing systems.

So the language depends a lot on the pool of available skills. If the employees are COBOL guys, the new modules will be COBOL as well.


I guess a lot depends on how you draw the line of "new project".


So LLVM is what, chopped liver?

COBOL had a purpose, which was making programs more readable / understandable. We now have much better ways to do that, and it's a primary area of research.

C++ has a purpose, which is to maximize abstraction without imposing any unnecessary runtime cost. But we're not focused on doing better at this, since most applications don't really care about performance. C++ may become somewhat of a niche language only used to build the invisible low-level stuff, but it will never go the way of COBOL until something better at its purpose comes along (I've heard good things about Rust?).


You know, as much as I've heard some variation on that phrase, I've yet to see the benefits in practice--at least not from the last couple of revisions of the language.


Are you speaking of COBOL or C++?


C++.


I tried to hack on g++ as part of a project years ago, but really struggled to make any contribution. The codebase is indeed monstrously complex, but the core development team is also very protective. A secondary effect of the growing complexity in an open source project is the decreasing willingness of the core team to bring in new people, because of the time/effort cost involved and the potential risks. If the team naturally atrophies over time, you end up with increasing stagnation.


I've enjoyed much of Fowler's previous output, but this post didn't grab me. Assuming anyone building a modern web-oriented service is going to use at least some kind of service-oriented architecture (SOA), e.g. service-specific containers, this post is kind of toothless, since individual subsystems are by definition possible to retire, fail, scale, etc. without throwing the baby out with the bathwater.


And in the preface to the 2nd edition of SICP, I noticed this quote from Alan J. Perlis:

> Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as a soap bubble?

which seems an echo.


I'm a fan of Martin; his articles give me the big picture. Happy to share it here.



