
The Lava Layer Anti-Pattern (2014) - jxub
https://mikehadlow.blogspot.com/2014/12/the-lava-layer-anti-pattern.html
======
lliamander
I sometimes see legacy systems as old cities:

- The classic old town (the older parts of the system that tend to use older
coding practices and technologies, but have most of the bugs stamped out and
more or less "just work")

- The slums (the parts that tend to be bug-prone but are impossible to fix,
i.e. no one wants to touch that code)

- The apartments/row houses (parts of the code that involve lots of the same
types of objects/classes/modules that follow a similar pattern)

- The art district (the place where someone tried some odd/experimental
libraries or code patterns)

- The residential district with windy roads and lots of courts and loops
(places in the code where there are lots of objects calling each other with
really deep stack traces; easy to get confused about where you are when
debugging)

Legacy systems are bound to have inconsistencies, but as you become more
familiar with a system you begin to notice localized instances of consistency.
These localized instances of consistency become the "districts" in our mental
map of the system.

To my mind, if you're working on mostly new code (within a large existing
system), then unless it is already a highly consistent code base, I would just
use whatever conventions make the most sense to the developers working on it.

~~~
titanomachy
I love this. I think I might add some tour-guide comments to our codebase.
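
Something like this, maybe (district names and history entirely invented):

    // === Welcome to the Old Town ===
    // Everything in this namespace predates the Great Rewrite. It looks
    // odd, but most of the bugs were stamped out years ago and it more or
    // less "just works". Prefer small, local changes, and don't modernise
    // anything here without a pinning test first. The newer districts
    // start over in the V2 namespaces.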

~~~
caf
"You probably shouldn't leave any private methods parked near here."

~~~
Groxx
"private methods are routinely broken into and used for nefarious purposes"

that'd describe quite a lot of code I've seen.

~~~
caf
"Over there is our haunted module - it's been empty for years, and the
callbacks aren't even wired up anymore, but people swear they've seen
breakpoints triggering in the upstairs windows during late night solo
debugging sessions"

------
moring
I often feel lost when reading such articles because they never quite seem to
acknowledge how bad legacy systems can be. The following examples all
originate from a single project I worked on in the past.

Example 1: You cannot "favor consistent over new/better" because one of the
main problems with the old code is that it is horribly inconsistent already.

Example 2: You cannot "favor consistent over new/better" because the legacy
style you would try to be consistent with is so bad that you cannot even
understand tiny fragments of the code, let alone write new code in a
consistent way.

Example 3: The legacy system was built in a way that makes it impossible to
store the code in any kind of VCS, so being consistent with that means
breaking well-established best practices.

Example 4: For some part of the system, nobody knew how to make the magic code
generators produce code that is consistent with the legacy code (and if you
try to write that code without the generators, you are in for a trip to hell).

That said, the article helped a lot in that I now know the name for a problem
I somewhat recognized but couldn't describe well.

~~~
hinkley

        The legacy system was built in a way that makes it impossible to store the code in any kind of VCS
    

Come again?

~~~
Terr_
Maybe it's in something like Smalltalk?

~~~
floaterpig
Another alternative is one of the PICK systems, where the source code is (or
at least used to be) stored in the database. And from the little I saw, COBOL
on some systems didn't exactly lend itself to flat files either (or at least
the OS vendor shunned CVS/SVN/Git in favour of some hugely expensive and
utterly inferior product...).

~~~
Terr_
Ah, right, or "business logic eval()ed from database text" systems.

~~~
jupp0r
Or, more commonly - stored procedures.

~~~
Terr_
Stored procedures are a little easier to handle, provided your deployment
system does a wipe/replace, similar to overwriting scripts/executables.

I was thinking of a grimmer scenario, where the system has a disquieting
aspect of polymorphic, run-time code editing... central to its "flexible"
production behavior. There are more things in heaven and earth than are
dreamt of in sane philosophy.
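
For the morbidly curious, a minimal sketch of what that looks like (the Rules
table is invented, and I'm assuming Roslyn's CSharpScript for the eval):

    using System.Data.SqlClient;
    using System.Threading.Tasks;
    using Microsoft.CodeAnalysis.CSharp.Scripting;

    public static class RuleEngine
    {
        // "The code" is a row in a table, editable in production. Nothing
        // ever exists on disk for a VCS to track.
        public static async Task<bool> EvaluateAsync(string connectionString, string ruleName)
        {
            string body;
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Body FROM Rules WHERE Name = @name", conn))
            {
                cmd.Parameters.AddWithValue("@name", ruleName);
                conn.Open();
                body = (string)cmd.ExecuteScalar();   // e.g. "1 + 1 == 2"
            }

            // Whatever the table holds right now *is* the production behaviour.
            return await CSharpScript.EvaluateAsync<bool>(body);
        }
    }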

------
hinkley
Print this out and post it on a wall where you see it every time you leave
your desk or come back:

Refactoring is a bottom up process.

You make local changes, and those reveal the paths of least resistance in the
code. Contiguous refactors start suggesting further refinements or even new
features, and the improvement spreads and spreads across the app.

By the time you are making structural changes to the app, the avalanche should
have already started and it is too late for the pebbles to vote. If that isn't
happening for you, put it down to impatience born of frustration with the
rate of change.

One of the best ways I know to speed this process up without violating the
'rules' is to start with the build scripts and work your way through the
initialization code of the app, chipping away at smells until it starts
looking right. With good code to the 'left', you have a beachhead (and a line
in the sand) you can use to push out into various subsystems, improving as
you go.
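
To make 'local changes' concrete, a tiny invented example of the kind of
refactor that gets the avalanche moving:

    // Before: every call site split the line and parsed the fields inline.
    // After: one small, local extraction. Nothing structural has changed,
    // but later refactors now have a seam to converge on.
    public static class RecordParser
    {
        public static (int Id, decimal Amount) Parse(string line)
        {
            var parts = line.Split(',');
            return (int.Parse(parts[0]), decimal.Parse(parts[1]));
        }
    }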

------
lmm
It's the reality of a large codebase that there will be parts that followed
the best practices of 5 years ago, best practices of 10 years ago, and so on;
that's not an anti-pattern (indeed I'd be horrified if the code from 5 years
ago wasn't noticeably worse than today's code - that would imply that the
industry and the team had made no progress in the past 5 years).

What makes the single supporting example given for this supposed "pattern" bad
is that it's full of churn, parts rewritten in a different way that wasn't
better, just different. The idea that this is an antipattern relies on the
fallacy that all choices are tradeoffs and there's no such thing as a better
way of doing things. Whereas actually e.g. NHibernate is enough of an
improvement over DataSet that an application that's half NHibernate and half
DataSet is much, much nicer to work on than one that's all DataSet, despite
the inconsistency.
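
For anyone who hasn't lived that particular contrast, a rough sketch of why
(hypothetical Order entity; the NHibernate mapping is assumed to be configured
elsewhere):

    using System.Data;
    using NHibernate;

    // Hypothetical entity; NHibernate mapping configured elsewhere.
    public class Order
    {
        public virtual int Id { get; set; }
        public virtual decimal Total { get; set; }
    }

    public static class Example
    {
        // DataSet: stringly typed; column names and casts checked at runtime.
        public static decimal TotalFromDataSet(DataSet ds)
        {
            DataRow row = ds.Tables["Orders"].Rows[0];
            return (decimal)row["Total"];
        }

        // NHibernate: typed entity; renames and type errors fail the build.
        public static decimal TotalFromNHibernate(ISessionFactory factory, int id)
        {
            using (ISession session = factory.OpenSession())
                return session.Get<Order>(id).Total;
        }
    }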

The real antipattern in the story is making technology choices without team
consensus/buy-in. If one developer adds a code-generation framework that only
they can maintain, it should be rejected during code review. There's very
little value in one developer unit testing on their own if the rest of the
team doesn't care about maintaining tests. That's the real problem, and none
of the suggestions in the article address it.

------
jknoepfler
In The Mythical Man-Month, Fred Brooks asserts that conceptual consistency is
the single most important quality of a large, successful software project.
Although I was skeptical at first (it seemed a little too believable to be
true), experience has led me to believe that Brooks is correct. If a change
makes a project conceptually incoherent, it should be rejected, and a new
project started. I think this article is one very good illustration of this
phenomenon, albeit in an agile rather than a waterfall setting.

As a corollary, I think people often use agile as an excuse for introducing
conceptual incoherence. I think this is almost always a mistake, and
represents laziness, short-sightedness, and immaturity on the part of devs and
managers alike.

edit: I'll add that I think under-investment in quality principal engineers is
what gets one into this mess in general. If you replace your architect and her
copilot with a handful of code monkeys and one or two arrogant senior devs,
you get crap stew, unless you're careful to manage around your team's lack of
experience and maturity.

~~~
squiggleblaz
I was thinking of making a comment here, describing this as precisely my
experience. I work on an old legacy system. I have done so for some time, from
junior to lead. [Edit: wait, I meant to say "I have discovered this same
principle, but you expressed it better than me"; instead, I just described my
experience.]

And I can see from experience that every refactor we did that was "this is the
latest and greatest and we'll write the new feature like this and we'll just
start migrating everything over to this eventually" has been an utter failure.

But the changes that were by their nature spread throughout the system
automatically have been so much easier to work with. You don't even notice
them. (Which makes it harder to get credit for it. If I have to train a new
dev up today, I can say "the system is a little crappy, but we're trying to
make it better", but they'll never see that we've made huge progress. What
they will see is the four different database abstractions we've got going on,
and they'll curse me, because that's where the work shows.)

It is better to have a crappy core that you gradually fix than to have a
crappy core that stays as it is, plus five other crappy cores, one from each
different dev. And whatever problems are caused by the shitty database
abstraction some idiot dev created are always going to be there, so you might
as well just live in that world.

~~~
jknoepfler
I really appreciate your taking the time to write up your experiences in
response, thank you. I think it's really important to continually articulate
hard, possibly unpopular truths learned through experience. Otherwise, how
will our children's children have anywhere to look for guidance? (lol... but
seriously)

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=8772641](https://news.ycombinator.com/item?id=8772641)

~~~
lkrubner
Just curious, but why not automate your link to the previous discussion? Why
does Hacker News rely on people like you to post links to previous
discussions? If there have been 3 discussions of an essay over the course of
8 years, why not list all 3 discussions when someone posts it again in the
year 2021?

~~~
djur
This already exists -- the "past" link under the submission title.

~~~
danield9tqh
I don't think the 'past' link conforms to current best design patterns. We
should refactor it so that it automatically posts a comment on the article.

------
scarface74
I came into a position where my first job was to make a certain process
scalable. The code had all of the smells of bad design - huge monolithic
classes, in a huge monolithic solution with unrelated projects. The less
mature me would have said this crap needs to be rewritten. Now I would like to
consider myself more practical:

1. Fork the monolithic repo and start taking out unneeded projects and
classes, recompiling often.

2. Encapsulate the entire executable in a Docker container.

3. Use AWS’s ECS, Fargate, and Auto Scaling.

Now we have scalability.

For maintainability, every time you touch a part of the code, extract the
functionality into a Lambda microservice.

The code that never changes doesn’t need to be touched, and you slowly
decrease the size of the monolith; it becomes easier to find bottlenecks and
make changes without affecting other parts of the code. Replace “Lambda” with
microservices/separate modules etc. as appropriate for your use case.
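
For concreteness, here's roughly the shape of one extracted piece as a .NET
Lambda handler (all names invented; the Amazon.Lambda.* packages are assumed):

    using Amazon.Lambda.Core;

    [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

    namespace Billing
    {
        public class InvoiceRequest  { public int CustomerId { get; set; } }
        public class InvoiceResponse { public decimal Total { get; set; } }

        public class Function
        {
            // Single-purpose handler: logic that used to live somewhere in
            // the monolith, now behind one small, independently deployable
            // and versionable boundary.
            public InvoiceResponse Handler(InvoiceRequest request, ILambdaContext context)
            {
                context.Logger.LogLine($"Invoicing customer {request.CustomerId}");
                return new InvoiceResponse { Total = 0m };   // real calculation elided
            }
        }
    }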

~~~
twic
> For maintainability, every time you touch a part of the code, extract the
> functionality into a Lambda microservice.

Funny; for maintainability, every time I touched a Lambda microservice, I
would integrate it into a single codebase.

~~~
scarface74
The end goal is to get rid of servers completely. By keeping the services
small you enforce a culture of small, loosely coupled, single-purpose
functionality. That's especially helpful when you're dealing with either
junior developers or developers who have been at one company for 10 years and
never learned how to properly structure code.

~~~
zdragnar
In my (albeit limited) experience, projects which consist primarily of lambdas
suffer one of two problems:

- the code is awful, because lambdas were an excuse to keep bad developers in
their own playpens (aka juniors and those who don't learn)

- the code is just fine, and would be easier to maintain if most of the
lambdas were combined back into one or more "monoliths"

~~~
scarface74
I agree; if you don't have junior developers (or, worse, outsourced
developers), it is easier to maintain a well-constructed monolith that has
separate, focused modules and clear boundaries between them. Microservices
help contain the damage of poor programming skills.

~~~
zdragnar
I guess my point wasn't that damage containment was a perk. I view it instead
as a lack of support, either cultural or structural, from the more
experienced developers.

Microservices / FaaS may help mask it, but you're still stuck with a lot of
bad code... only now there's less oversight and/or accountability.

~~~
scarface74
Yes, but the biggest issue with maintaining "bad code" is that it's brittle.
You make a change in one place and it breaks something else downstream. With
a microservice, the invariants are well known and the boundaries are clear.
It's easy to know whether you are introducing a breaking change, and you can
just create a new version at a different endpoint.

~~~
zdragnar
You're not wrong, but I think you're missing my point. This is how we end up
with a hackathon golang microservice saving a company 50k a month when a
simple correction to the poorly written code would have done the same thing.

Granted, I only skimmed bits of that article, so I'm not sure those were the
exact details, but it makes for a nice analogy.

------
jupp0r
Interesting article, but it seems to leave out some really important points:

1. Tests

I really like Michael Feathers' definition of legacy code as code without
automated tests. I've worked with some pretty bad legacy code bases, but the
ones with decent test coverage were mostly easier to change and to refactor
in smallish steps. (A sketch of such a test is at the end of this comment.)

2. Documentation

Good documentation describing the reasons for the architectural decisions made
(and, somewhat more importantly the reasons why other ways of solving the same
problem were dismissed) could have prevented most of the bad choices made in
that story.

3. Management

Where was management in the story? They should have seen the red flags (high
turnover on that project, ...), at least done exit interviews with people
leaving, recognised the risk of the tech debt in the code base, and taken
appropriate action.
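
As promised under point 1, a sketch of the kind of characterization
("pinning") test that makes untested legacy code safer to change
(LegacyPricer is invented; xUnit assumed):

    using Xunit;

    // Stand-in for some untested legacy code (hypothetical).
    public static class LegacyPricer
    {
        public static decimal Quote(int customerId, int quantity)
            => quantity * 5.80m + 0.02m;   // nobody remembers the 0.02m
    }

    public class LegacyPricerCharacterizationTests
    {
        // 17.42m was captured by running the current code, not derived from
        // a spec. A failure means behaviour changed; whether that's a fix or
        // a regression now becomes a conscious decision.
        [Fact]
        public void Quote_ForKnownInput_PinsCurrentBehaviour()
        {
            Assert.Equal(17.42m, LegacyPricer.Quote(customerId: 7, quantity: 3));
        }
    }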

------
flukus
At a previous job I inherited a steaming pile of legacy that needed
improvement (it was broken and clients were complaining). I went with an
approach of being explicit about new layers, with classes like ComponentV2.
This is an often-maligned approach, but I found it works quite well.

Basically, all new code gets written against V2, old code gets slowly
migrated as requirements or opportunity allow, and sooner or later you hit a
point where only a few places are referring to V1, so you can bite the bullet
and remove them entirely.

Semantic versioning like this within a project gets beaten out of us early as
something source control should handle, but source control doesn't handle
long, slow migrations of code to new layers.
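
A minimal sketch of the convention (names invented); both versions compile
side by side while call sites migrate, and the [Obsolete] warnings keep the
remaining work visible at every build:

    using System;

    [Obsolete("Superseded; new code should use ReportGeneratorV2.")]
    public class ReportGenerator
    {
        public string Generate(int accountId) => "";   // old, tangled implementation
    }

    public class ReportGeneratorV2
    {
        // All new call sites start here; V1 shrinks by attrition until the
        // last reference goes and it can be deleted outright.
        public string Generate(int accountId) => $"report for account {accountId}";
    }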

------
hamilyon2
That article is a classic worth rereading. In my view, the lava layer
antipattern is a lack of architecture. A situation like this could be helped
by assigning a single role to make architectural decisions and by maintaining
documents describing them.

Another aspect: despite good intentions, an urge to use the newest techniques
and a disregard for the bigger picture signal a lack of seniority. Everyone
loves to use the latest tech, but it takes courage, experience and confidence
to slowly improve a big project without reaching for the "latest and
greatest". You need someone really experienced in charge to do that.

------
jammycakes
> _TL:DR Successive, well intentioned, changes to architecture and technology
> throughout the lifetime of an application can lead to a fragmented and hard
> to maintain code base. Sometimes it is better to favour consistent legacy
> technology over fragmentation._

Nice idea in theory, sometimes impossible in practice.

A few years ago I came onto a project that had a very clearly defined
separation of concerns, with a business layer, data access layer, presentation
layer, and Entity Framework. This was resulting in a number of SQL queries
that took over a minute to run and caused web pages to time out.

I ended up cutting right through the layers, bypassing Entity Framework
altogether and replacing it with hand-crafted SQL. This ended up cutting down
the query time from six minutes to three seconds.
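
The cut-through itself was nothing exotic, just plain ADO.NET and
hand-written SQL on the hot path; roughly this shape (table and columns
invented):

    using System.Data.SqlClient;

    public static class OverdueInvoiceQuery
    {
        // One hand-tuned query where the ORM-generated SQL was the bottleneck.
        public static int CountOverdue(string connectionString)
        {
            const string sql = @"
                SELECT COUNT(*)
                FROM Invoices
                WHERE DueDate < SYSUTCDATETIME() AND PaidDate IS NULL;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }
    }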

~~~
TickleSteve
Abstractions and hard interfaces rarely result in increased efficiency.

Breaking through the barriers and merging layers often allows a more
efficient solution, in the same way that denormalisation increases
performance by ignoring the "rules".

~~~
jammycakes
Well, in the example I've just given, it reduced query times from six minutes
to three seconds. If that isn't increased efficiency, then I don't know what
is.

The fact is that sometimes you have to ignore the "rules," because the "rules"
were designed to serve a purpose that does not apply in your particular case,
or perhaps never even applied at all in the first place.

The problem with trying to separate your business layer from your data access
layer is that it's often difficult if not impossible to identify which
concerns go into which layer. Take paging and sorting for example. If you
treat that as a business concern, you end up with your database returning more
data than necessary, and your business layer ends up doing work that could
have been handled far more efficiently by the database itself. On the other
hand, if you treat it as a data access concern, you end up being unable to
test it without hitting the database.
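
One workable compromise is to treat paging as a data access concern but keep
it behind a narrow signature, so the database still does the heavy lifting
(SQL Server OFFSET/FETCH; names invented):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class CustomerQueries
    {
        // Paging happens in SQL, so only one page ever crosses the wire.
        // Put an interface over this if business-layer tests need to avoid
        // touching a real database.
        public static IReadOnlyList<string> GetNamesPage(string connectionString, int page, int pageSize)
        {
            const string sql = @"
                SELECT Name FROM Customers
                ORDER BY Name
                OFFSET @Skip ROWS FETCH NEXT @Take ROWS ONLY;";

            var names = new List<string>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@Skip", page * pageSize);
                cmd.Parameters.AddWithValue("@Take", pageSize);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        names.Add(reader.GetString(0));
            }
            return names;
        }
    }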

You need to realise that software development always involves trade-offs.
Blindly sticking to the "rules" is cargo cult, and it never achieves the end
results that it is supposed to.

~~~
TickleSteve
(I was agreeing with you, not down-voting you).

~~~
jammycakes
My apologies :)

Incidentally I wrote a whole series of blog posts a while ago where I cast a
critical eye over the whole n-tier/3-layer architecture and explained why it
isn't all that it's made out to be.

[https://jamesmckay.net/category/n-tier-deconstructed/](https://jamesmckay.net/category/n-tier-deconstructed/)

~~~
digaozao
Man... I read your blog. It's all true. I find it so hard to explain this to
junior devs. They read a lot of best practices and take them as a religion.
Lots of unneeded code gets created.

Anyway, I would like to see more skeptical devs.

------
TickleSteve
Not really a software anti-pattern, as it does not result in any particular
software mechanism.

If anything, this is a development-process anti-pattern (not even software
specific). It's also extremely obvious and non-specific, so it's doubtful
that it's worth naming as an anti-pattern.

~~~
squiggleblaz
If it's so extremely obvious, why does it happen time and time again? I think
I've spent all of my time arguing against this process. It's bad. It's better
to improve on a bad design than to make a bad design worse by adding another
design to it. But it's hard to notice that.

~~~
TickleSteve
Because it's extremely difficult to do something about it.

It's normal for people to change what they're working on; this is inevitable.

------
kazinator
Disagree with comments below blog. A Dilbert comic dated 2013 was not yet
"classic" in 2014, and arguably still isn't. Classic Dilbert is 1994-ish
vintage.

~~~
zaksoup
Really, classic Dilbert is Scott Adams' blog posts about how Donald Trump is
using hypnosis to control the electorate. What happened to that dude!?

~~~
DonHopkins
Whatever went wrong with his brain, it happened a long time before Trump ran
for president.

Scott Adams Poses As His Own Fan On Message Boards To Defend Himself:
[http://comicsalliance.com/scott-adams-plannedchaos-sockpuppe...](http://comicsalliance.com/scott-adams-plannedchaos-sockpuppet/)

In April 2011, Scott Adams, creator of Dilbert, created an anonymous account
at Metafilter, then proceeded to vigorously & furiously praise himself, and
insult other commenters. It wasn't the first time he'd done this, but it was
the first time he got caught.
[http://mefiwiki.com/wiki/Scott_Adams,_plannedchaos](http://mefiwiki.com/wiki/Scott_Adams,_plannedchaos)

Dilbert creator outed for using sock puppets on Metafilter and Reddit to talk
himself up (he is also plannedchaos on reddit)
[https://www.reddit.com/r/comics/comments/gqzgx/dilbert_creat...](https://www.reddit.com/r/comics/comments/gqzgx/dilbert_creator_outed_for_using_sock_puppets_on/)

As far as Adams' ego goes … he has a certified genius I.Q., and that's hard to
hide. -plannedchaos^H^H^H^H^H^H^H^H^H^H^H^HScott Adams
[https://rationalwiki.org/wiki/Scott_Adams](https://rationalwiki.org/wiki/Scott_Adams)

~~~
zaksoup
Wow. I was just sorta giggling quietly at what seemed to be a mild case of
blogs-about-weird-things. I had no idea how deep the rabbit hole went.

