
Managing your manager when it comes to legacy code - henrynothank
http://blog.ndepend.com/dealing-legacy-code-developers/
======
afarrell
> Start adding unit tests around the legacy code

Adding unit tests is often hard with legacy code because it wasn't designed
to be granularly testable. It is often better to figure out what interface
the code provides and then add tests at that layer of abstraction. Then you
can refactor more safely, so the code becomes unit-testable.
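A minimal sketch of that idea as a characterization test, with a made-up
`legacy_quote` function standing in for the legacy interface (all names and
numbers here are illustrative, not from the article). The tests pin down what
the code *currently* does, not what it "should" do, so a later refactor can be
checked against them:

```python
import unittest


def legacy_quote(units, sku):
    # Stand-in for tangled legacy code; in practice you'd import the real
    # module rather than touch its internals.
    if sku not in ("A-1", "B-2"):
        return None
    price = units * 10
    if units > 100:
        price = price * 9 // 10  # undocumented 10% bulk discount
    return price


class QuoteCharacterizationTest(unittest.TestCase):
    """Record the current behaviour at the interface boundary."""

    def test_bulk_discount_kicks_in_over_100_units(self):
        self.assertEqual(legacy_quote(150, "A-1"), 1350)

    def test_unknown_sku_returns_none(self):
        self.assertIsNone(legacy_quote(1, "NOPE"))
```

Run with `python -m unittest`; once these pass, internal refactors that break
them are flagged immediately.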

~~~
YZF
Right... So add system/integration tests.

After all, what you really care about is the behaviour of the system. It can
be counter-productive to unit test pieces of legacy code, because you may end
up refactoring things at a higher level and breaking all your unit tests.

If it's clear that a piece of the legacy system stands on its own as a "unit"
and has reasonable interfaces then by all means add unit tests to that. The
biggest value though IMO in a "legacy" system is to add system level tests.
Those are often easier to put in than unit tests, will catch problems
involving multiple components, and will teach you about how the system is
supposed to behave.

~~~
kartan
> add system/integration tests.

For me this is clearly the way to go. If you add unit tests to a badly
designed interface, you only make it harder to fix: now you need to change
not just the interface but also the tests, increasing your technical debt.

System tests give the best cost/value ratio. Add unit tests only to stable
code that is not going to change often.

System tests cover the same code as the unit tests. Their only disadvantage is
that it may take more time to work out what is failing when something breaks,
since they cover a broader amount of code.

------
louhike
I'm reading Working Effectively with Legacy Code. I cannot recommend this book
enough. It gives practical advice on how to improve legacy code in small
steps, without rewriting everything from scratch (which is too often treated
as a magical solution that will fix everything).

~~~
eldavido
+1. Really can't recommend Feathers enough. Along with "Refactoring" by
Fowler, these books have influenced my development style a ton even on non-
legacy code, by providing structured approaches to improving code quality, and
a shared language for talking about it.

Without looking at the book, a few things from Feathers I think about often:

\- The concept of "sensing" is useful when thinking about test design. I
alternate between state-based TDD vs. a more behaviorist/mockist style
depending on whether the code's job is fundamentally about managing state, or
routing requests to other components (a surprisingly useful distinction)

\- Converting methods to "virtual" can be pretty useful when it's too hard to
introduce an interface, but you want to mock/stub out a component

\- "Sprouting" is a nice way to think about how a class can "Grow" out of
existing code, same for methods and parameter clumps (from Fowler)

\- It's also a darned funny book. "The case of the irritating parameter",
"Help, this code just calls a bunch of other code", "This class is impossible
to get under test", etc. I love the writing style and how he stops just short
of hyperbole while still making me grin.
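The subclass-and-override seam from the second bullet might be sketched like
this (Python purely for illustration, since every Python method is already
overridable; `BillingJob` and its methods are invented for the example, not
from the book):

```python
class BillingJob:
    """Imagined legacy class whose send_invoice() hits the network."""

    def run(self, customers):
        sent = []
        for c in customers:
            self.send_invoice(c)  # the hard-to-test dependency
            sent.append(c)
        return sent

    def send_invoice(self, customer):
        raise RuntimeError("talks to a real SMTP server in production")


class SensingBillingJob(BillingJob):
    """Test double: records calls instead of sending mail ('sensing')."""

    def __init__(self):
        self.invoiced = []

    def send_invoice(self, customer):
        self.invoiced.append(customer)


# The override gives us a seam without extracting an interface first.
job = SensingBillingJob()
assert job.run(["alice", "bob"]) == ["alice", "bob"]
assert job.invoiced == ["alice", "bob"]
```

In languages where methods aren't virtual by default, marking the dependency
method virtual is what makes this same move possible.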

------
eldavido
Isn't the job of a dev manager to make reasoned judgments about legacy code,
tech debt, etc.?

Do people really work in environments where "managers" aren't capable of
understanding these things? If you do, I'd highly suggest getting a job where
the person supervising you has at least an ounce of development experience. As
developers, we don't have to put up with this.

~~~
x0x0
That's incredibly hard to figure out during an interview.

\-- someone whose boss is king anti-midas: everything he touches is brittle
duct-taped shit that breaks if you glance at it crosseyed. I just spent 4
hours looking for queries in our codebase to finally discover a bash script
loads query templates from a sql db (completely non-source-controlled,
naturally), seds parameters into them, and runs them, and of fucking course,
doesn't escape quotes quite right...

------
partycoder
Imagine you live with 3 people and there's a pile of dishes that need to be
washed. In this analogy, dishwashers do not exist.

There are many approaches to this situation:

\- Eat from a dirty dish, risking getting sick.

\- Wash the dish you are going to use, and leave the rest unwashed. The rest
of the dishes will be harder to wash later.

\- Wash all the dishes regularly, so no piles of dirty dishes accumulate.

\- Toss all the dirty dishes, buy new dishes or use disposable dishes.

\- Hire a cleaner.

In this analogy:

\- a dish is code

\- a dirty dish is code with technical debt

\- eating from a dirty dish is extending technical debt

\- washing a dish is refactoring or cleaning up code

\- hiring a cleaner is delegating the problem to some 3rd party.

\- a dishwasher would be a futuristic AI that does your job; that is what will
end up happening.

Now, washing dishes is something that everyone benefits from.

\- When the decision is shared among more than two people, the result is the
volunteer's dilemma: if everyone competes by being a free-rider/bystander,
everyone loses.

\- If there are only two people, the result is the game of chicken: if both
compete, both lose.

The way you manage this situation is:

\- Punish short-term thinking, reward long-term thinking.

\- Create a code review process, a coding standard, and make people
accountable for cleaning up their own mess.

~~~
jonesb6
\- person one soaks the dishes in a basin filled with soap + water

\- person two removes half the dishes and goes over them with a sponge or
towel, then places them in the cabinet.

\- person three removes the last of the dishes, goes over them with a sponge
or towel, then places them in the cabinet.

Communism, it works on paper.

~~~
cname
What you described has literally nothing to do with Communism. I can't find
any meaning in your comment. Is it a joke?

~~~
jonesb6
I was just adding another possible solution. I thought it would suggest that
there are likely unknown solutions to most problems, which is applicable to
the original article.

That said, my definition of communism is off. Sharing work for mutual gain?
Not close enough to Marx to count. But then, how close do governments and
societies really get to true Marxism? Yet they call it communism anyway.

------
codingdave
Or, you know... read the legacy code and update it. That will help you truly
understand it instead of holding it up as some twisted pile of magic. That,
in turn, will help you know which parts are truly gnarly, and which are just
old-school enough not to match your current way of thinking, yet functional.

And one of two things will happen: you will either learn the code well enough
to effectively refactor or rewrite it over time, or you will learn that it is
not as scary as you first thought, and just maintain it like the professional
programmer that you are.

None of this is meant to justify the maintenance of truly horrendous code. I
have a horror story of an 85,000-line stored procedure that we replaced with a
short, simple data-migration script. Those things do happen. But they are the
exception.

------
joeframbach
At the next grooming/pointing/sprint-planning meeting you have, put down 2-4
hours to "investigate the issue", where the acceptance criterion is "a
breakdown of how long it takes to get this done". This tells the manager that:

1\. you are taking it seriously and not just looking for excuses.

2\. if it is going to take 6 weeks to fix, then spending 2-4 hours upfront to
investigate is well worth it.

3\. they can trust your 6-week estimate at face value, without digging into
the details, if they see that you spent 2-4 hours working on that analysis.

~~~
calcsam
Even better, don't give an answer right away, but say that you need 2-4 hours
to investigate to be able to give an estimate, because that part of the
codebase is really knotty.

That prepares a manager for bad news.

------
city41
This is just an ad for ndepend. Disappointed to see it so high on HN.

~~~
ngoede
And the product announcements for Docker etc. that regularly appear on Hacker
News are not?

~~~
city41
A product announcement is far more upfront and honest with its reader. It
wasn't until the last third of this post that any action items emerged, 4 of
the 5 being to use a tool that starts at $337.

Not to mention Docker is open source and has a valid free (as in beer) tier.

I'd also complain about a Docker blog post that lulled you in and then
recommended you buy a Docker Datacenter account.

------
Too
I found the skyscraper or tower analogy very good for describing the issue. If
your 7th floor is built out of cardboard, you can't "just add" an 8th floor on
top of it, and you can't quickly hack at the structural core of something on
floor 2, because if you do you eventually end up with a game of Jenga.

------
tty7
I like to do integration tests and just match expected output.
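That "match expected output" style is often called a golden-master or approval
test. A minimal sketch, with a made-up `legacy_report` function standing in
for the real program (all names and paths are hypothetical):

```python
import tempfile
from pathlib import Path


def legacy_report(month):
    # Stand-in for invoking the real legacy program and capturing its output.
    return f"report for {month}\ntotal: 42\n"


def check_golden(month, golden_dir):
    """Record the output on the first run; diff against it on later runs."""
    golden = Path(golden_dir) / f"report_{month}.txt"
    actual = legacy_report(month)
    if not golden.exists():
        golden.parent.mkdir(parents=True, exist_ok=True)
        golden.write_text(actual)  # first run records the golden master
        return "recorded"
    assert golden.read_text() == actual, "output drifted from the golden master"
    return "matched"


d = tempfile.mkdtemp()
assert check_golden("2017-01", d) == "recorded"
assert check_golden("2017-01", d) == "matched"
```

The recorded files go into source control; any behaviour change then shows up
as a readable diff rather than a mystery.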

~~~
iofj
This is brilliant advice. This whole unit test madness needs to stop.

This post advocates:

1) unit test everything

2) complain you can't change anything without breaking a lot

3) complain that the program doesn't work even when the unit tests say "it
should".

Well, that's pretty much the expected result, isn't it? Of course unit tests
break if you change a method, even if the resulting program works better. And
the built program doesn't work even though all the unit tests pass? Well,
that's just not what a unit test tests... so of course it doesn't.

An alternative tactic could be:

1) understand the problem domain. Writing an administrative system? You
should be a better accountant than the worst accountant who works in that
department. If a feature request comes in and you can't answer "why do they
want this", it's time to spend some time in their team.

2) get an integration test. Something that emulates the software doing what
it's meant to do at a large scale with as many real components as possible.
E.g. an administration system should just be asked to run an administration,
and have the result checked. The result should be checked in the sense that it
should match what the accounting handbook says it should say, NOT if the code
worked as expected.

You should try to do realistic things. E.g. have the test read the production
database, then enter the first 50000 customers into the test in a random
order, constantly doing things like getting a customer balance in between.
Insert 10 test customers and see how their accounts evolve and whether this
matches the business rules. Check whether the total system still balances out
to zero at every point. Put load on the system, demanding that it performs at
a minimally acceptable level.

3) change things.

4) find out what breaks on a functional level. Get a report that when the
current customer list was added one record at a time, 5 customers weren't in
the database afterwards. Fix that.

5) Run the integration test again. It passes.

6) Confidently go to your boss and declare that this software will work. You
can say that, because you have just demonstrated that it works.
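Step 2 above might be sketched like this, with a toy in-memory `Ledger`
standing in for the real accounting system (everything here is hypothetical;
the point is asserting the business rule, not the implementation):

```python
import random


class Ledger:
    """Toy double-entry ledger, standing in for the real legacy system."""

    def __init__(self):
        self.accounts = {}

    def post(self, debit_acct, credit_acct, amount):
        # Every posting debits one account and credits another.
        self.accounts[debit_acct] = self.accounts.get(debit_acct, 0) - amount
        self.accounts[credit_acct] = self.accounts.get(credit_acct, 0) + amount

    def balance(self, acct):
        return self.accounts.get(acct, 0)


def integration_check(ledger, customers):
    # Enter customers in random order, interleaving balance reads, then
    # assert what the accounting handbook says: a double-entry system
    # must sum to zero at every point.
    customers = list(customers)
    random.shuffle(customers)
    for name, amount in customers:
        ledger.post("cash", name, amount)
        assert ledger.balance(name) == amount
        assert sum(ledger.accounts.values()) == 0


integration_check(Ledger(), [(f"cust{i}", i * 10) for i in range(1, 11)])
```

Note that the assertions reference the business invariant, not how any
particular method happens to work.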

Some bad programmers hate integration tests, because they find problems unit
tests won't ever find, yet won't tell you where the problem is. They don't
help programmers who don't understand what the program is doing; they don't
directly report why things are happening. They won't let developers get away
with "I just changed the part I understand", and don't let them mark the bug
as fixed after that. Entering a new customer takes 50s after your
field-addition change? Integration tests will find this, and create an easily
debuggable situation. Integration tests will catch that code that is multiple
directories and modules apart, maybe even separated by an RPC, won't
cooperate. Integration tests, in short, tell you that the program does what
it's built to do.

~~~
louhike
The point of unit tests is to give you confidence in making changes without
breaking the behaviour of the methods, and without having to run integration
tests or user tests. The advantage of unit tests is that they are quick to
write and run. Their purpose is different from that of integration tests. You
are not supposed to deliver an application just because the unit tests pass.

~~~
iofj
My point is not that you should never have unit tests. They can be useful
(though rarely). I'm not suggesting that if you write a new data structure
you shouldn't have a few tests that only really exercise one or a few
methods. But unit tests do not guarantee the program does what it's supposed
to do.

"without having to do integration test or user tests" doesn't exist. It's
fiction. It leads to all the bad places to be in software development. It
leads to you explaining to a manager or PM that "it matches the requirements,
I don't accept that you have a problem", when the software just lost your firm
a million dollars.

Software development in reality: you do not get accurate requirements. Your
design does not match reality (and the 1st version is so far off it's not
even internally consistent: it likely can't be built at all, never mind
perform its intended function). Getting better at software development, after
a while, stops being about improving your ability to collect requirements. It
does not make your v1 designs better. You simply learn to deal with change:
to provide good places in your designs for as-yet-unknown changes, to go out
weekly and ask for changes in requirements, knowing that either your design
can accommodate them or you'd rather hear about them sooner than later.

------
cdevs
I can relate. "Here be dragons".

