
How to avoid brittle code - zabil
https://www.go.cd/2016/03/24/how-to-avoid-brittle-code.html
======
jameslk
I see unit tests recommended all the time, but in practice I've found them
significantly less helpful than integration and acceptance tests. Maybe it's
just a nomenclature thing? It's obvious that if you change a "unit" of code,
its tests are going to fail. That's not terribly helpful while refactoring.
What's more useful is to find out what else breaks, and that's where the
other tests come in.

I'm not saying that unit tests aren't useful. They definitely help, for
example, if you're using a dynamic programming language or if you're
practicing TDD. I would just place a higher priority on integration and
acceptance tests.

~~~
njharman
> It's obvious if you change a "unit" of code, its tests are going to fail.

No, it's not. Not at all. The whole point is that you can change the guts of
a unit around and know you haven't broken things, because the unit tests
still pass. The unit test is testing the "contract" (explicit or implied)
the unit has made with other units about its inputs and outputs.

Unit tests should fail (and require updating) only if the unit's INTERFACE
changes.

But if your unit tests are failing every time you touch your code, then 1)
your code is not cleanly isolated and independent, and/or 2) you have poor
unit tests.
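A minimal sketch of that distinction (the function and tests here are hypothetical, not from the article): the test pins down the unit's observable contract, so the guts can be rewritten freely without the test turning red.

```python
def word_counts(text):
    """Return a mapping of word -> occurrence count (case-insensitive)."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def test_word_counts_contract():
    # These assertions describe the interface only. Swapping the body
    # for collections.Counter(text.lower().split()) keeps them green.
    assert word_counts("") == {}
    assert word_counts("a b a") == {"a": 2, "b": 1}
    assert word_counts("A a") == {"a": 2}  # case-insensitivity is part of the contract

test_word_counts_contract()
```

A test that instead peeked at the dict-building loop would break under exactly the refactorings the contract test is meant to protect.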

~~~
kazinator
If nothing in an application depends on some aspect of the behavior of a
unit's interface, what is the point of testing it?

You only need that sort of test in a programming language or public library,
because you're publishing the behavior to unknown numbers of users, and it has
to work, as documented. If they find too many things that don't work according
to the documentation, they will correctly suspect that you aren't testing, and
lose confidence.

But if some API is internal to an application, and no code depends on some
aspect of its interface contract, then that aspect effectively doesn't exist.

It may happen that someone discovers this aspect in the documentation and
tries using it. At that time it is either found broken, or found working. If
it is found broken, it can be fixed. Or else it can be stricken from the
documentation (and another way can be found in the application code to meet
its requirements).

If the aspect is found correct, or fixed to reflect the documentation, and is
then used in the application, then that use constitutes a test. The feature is
tested indirectly, through some regression test case which targets something
in the application, which depends on that aspect of the API contract lower
down.

If you design units with all kinds of detailed aspects in their interface
contracts which end up not being used at all, and test all of them, it can be
argued that you're wasting time: you're basically working on your library
design hobby, at the expense of the project, which just serves as a vehicle
for it.

~~~
prodigal_erik
Unit tests can verify assumptions that the implementation is internally
relying on. E.g., say a greedy algorithm relies on a collection being sorted,
even though the public interface does not return results in any particular
order. If I make a mistake such that an edge case leaves the collection not
sorted correctly, a test of the public API can't tell unless it happens to
trigger the edge case whose importance might not be obvious.
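A sketch of that situation (the class and its invariant are invented for illustration): the greedy answer depends on an internal collection staying sorted, and a white-box test asserts that invariant directly instead of hoping a black-box test happens to trigger the edge case.

```python
import bisect

class MeetingPicker:
    """Greedily counts non-overlapping intervals. Correctness relies on
    the internal _by_end list staying sorted by end time."""

    def __init__(self):
        self._by_end = []  # (end, start) tuples, kept sorted by end

    def add(self, start, end):
        bisect.insort(self._by_end, (end, start))

    def max_non_overlapping(self):
        count, last_end = 0, float("-inf")
        for end, start in self._by_end:  # the greedy scan needs this order
            if start >= last_end:
                count, last_end = count + 1, end
        return count

# Black-box test: only checks the public answer.
p = MeetingPicker()
for s, e in [(5, 7), (1, 3), (2, 6)]:
    p.add(s, e)
assert p.max_non_overlapping() == 2

# White-box test: asserts the hidden invariant directly, so a broken
# add() is caught even on inputs where the public answer is still right.
assert p._by_end == sorted(p._by_end)
```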

~~~
kazinator
> _a test of the public API can't tell unless..._

_"Can't tell unless"_ means exactly the same thing as _"can tell"_. You
have a way to get to that corner case through the public functionality.

If there really is no way to tell through the current public API, then the
bug doesn't exist.

~~~
prodigal_erik
I think I'm arguing for letting white-box unit tests make assertions about
implementation details in a simple and clear way, rather than having to
somehow devise a test using only the black-box API that happens to fail if the
assertion doesn't hold. In other words, you can only guarantee parts of the
internal state are valid if you know how to infer them from outside.

Are you arguing that all unit tests should be black-box?

------
3pt14159
This is a great post and I agree with almost everything in it 100% except for
this part:

> And—yes—you can read this as my being unlikely to use Rails the next time
> I'm building a traditional multi-page application.

Really? The way I do it is to write a client library for the API and run the
tests in RSpec on top of the client library. Sure, I'm going through the
whole stack, but the thing operates as fast as you can serve normal users!
Surely your large project can handle 5000 concurrent requests? While that
might seem like a lot of buildout (scalable testing servers), I'd argue the
opposite is a lot more buildout: writing an application _without_ Rails.

Granted, my approach might be most feasible for a Rails API + Ember, but I
really don't see the speed of my test runs as what's holding me back.

The other thing that is nice is that once you start needing to speed up
endpoints by pulling them into a compiled language, you can keep your tests!
Everything is going through the API anyway! So feel free to Nim or Rust the
CPU-heavy endpoint or Redis-cache the write-once-in-a-while-and-keep endpoint.
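A rough sketch of that pattern, in Python rather than Ruby/RSpec (the client class, URL, and injectable fetch hook are all assumptions for illustration): the test suite targets a thin client, so the server behind it can be swapped without rewriting the tests.

```python
import json
import urllib.request

class ApiClient:
    """Thin client wrapping the HTTP API. Tests target this interface,
    so the server behind base_url (Rails today, Rust tomorrow) can
    change without touching the test suite."""

    def __init__(self, base_url, fetch=None):
        self.base_url = base_url.rstrip("/")
        # fetch is injectable so tests can stub out the network
        self._fetch = fetch or (lambda url: urllib.request.urlopen(url).read())

    def get_json(self, path):
        return json.loads(self._fetch(self.base_url + path))

# In a real suite this would hit a test server; a stub stands in here.
fake = lambda url: b'[{"id": 1}, {"id": 2}]'
client = ApiClient("http://localhost:3000", fetch=fake)
widgets = client.get_json("/widgets")
assert [w["id"] for w in widgets] == [1, 2]
```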

~~~
edwinnathaniel
It will hurt when your end-to-end tests (minus UI) must run per check-in,
and it gets worse as your codebase grows.

Some companies invest in huge infrastructure to speed things up
(parallelizing until you can't...), but in that situation, suddenly devs can
no longer build locally without biting their tongue.

Setting up end-to-end tests "per test-case" is also tedious (should the user
be logged in for this test case to run? What should the state be before the
tests run, and what should it be after? What if the state of the system has
the expected changes _and_ more, e.g. side effects?)

~~~
3pt14159
> Should the user be logged in for this test case to run?

Why not? This is a simple helper function or easily parallelizable block.
Same with creating a sub-resource before modifying it. People act like this
is the end of the world, but it isn't. And if you like, you can still write
your (in my controversial opinion) silly unit tests and parallelize those
too!

Whether a dev can build a project locally or not does not relate to whether or
not the tests can be run in parallel.
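A sketch of such a helper (the fake API and its endpoints are hypothetical stand-ins; a real suite would hit a test server): the "logged-in" precondition collapses to one reusable call per test case.

```python
class FakeApi:
    """In-memory stand-in for the application's API, for illustration."""

    def __init__(self):
        self.users = {}

    def create_user(self, role):
        uid = len(self.users) + 1
        self.users[uid] = role
        return uid

    def login(self, uid):
        return f"token-{uid}"

def logged_in_session(api, role="member"):
    """Helper: create a throwaway user with the given role and log in.
    Each end-to-end test states its precondition in one line."""
    uid = api.create_user(role)
    return {"user_id": uid, "token": api.login(uid), "role": role}

api = FakeApi()
session = logged_in_session(api, role="admin")
assert session["role"] == "admin"
assert session["token"] == "token-1"
```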

~~~
edwinnathaniel
Oh, for sure, it's a simple helper function, but the ramifications are huge!

1\. Which user? Which role should the test be written as?

2\. Suddenly bugs show up because expectations between roles differ (usually
the bug is in the test framework).

No, it's not the end of the world; it's just that programmer productivity
goes down... people can still work on the project, it just isn't as fast as
it used to be...

~~~
3pt14159
Compared to unit tests, where the concept of the user doesn't even exist or
needs to be shimmed in with a proxy object, I really don't think the user
differences in ACLs are the deciding factor between the two.

As for programmer productivity, proper parallelization of these tests
results in sub-second test suite runtimes. Maybe more if there is a new
dependency the testing servers need to install, but it really isn't that bad.

------
derFunk
I liked the part about "technical debt". Yet I don't like this statement: "You
should aspire to upgrade your dependencies and frameworks all the time". It's
too generic.

I'm thinking about third-party dependencies here (e.g. via package
managers). In my opinion, it's best practice to stick with one stable
version and then stop updating it, to avoid wasting time on refactoring
forced by changed dependency behaviour. Monitoring the changelogs of your
dependencies for security fixes, and reacting to those, is of course still
reasonable. Yet upgrading just because you should "upgrade as often as
possible" is not.

Maybe it's different for projects lasting 10 years or more; maybe other
rules apply there. My experience is with 2.5-year projects on average.

Maybe I'm working with dependencies that get updated every week or so, and
maybe the author's dependencies are updated every half-year.

~~~
sokoloff
If you're not tracking the changes in your dependencies, you're moving that
work (finding and fixing the problems) to your future self/team. That might
be a win if future you won't need to continue supporting/developing the
thing. I think it's a loss in other cases. IME, you end up trying to swallow
a sea of changes all at once, sometimes triggered by something you can't
control. "Oh, I need to take the new rev of foo; that needs a newer version
of bar and baz. Baz needs a new quux..." Suddenly, you've got a lot of
fixing and regression testing to do, and if the need for the new foo comes
at a bad time, you may lose control over your schedule predictability
entirely.

Moving an expense from today to the future is debt.

~~~
shoo
The point that the need to upgrade dependencies is sometimes forced by
events beyond our control is a good one.

Debt (technical or otherwise) in itself is not a bad thing if used sensibly
for some gain. Allocating resources to maintain dependencies is not
necessarily the best short or long term investment.

These are all heuristics.

~~~
derFunk
Documented technical debt is not a bad thing. Undocumented technical debt
(where everybody except the "debt creator" is unaware of it) is hell.

------
ktRolster
After you get done writing a section of code, take a few minutes to look at it
and see if there is any way it can be improved. Move functionality around,
rename some variables. Do it right away, while it's still fresh in your mind.

~~~
collyw
I find the best time to do this is when I come back to code that I wrote a
few months ago. If it doesn't make sense immediately, then it probably needs
refactoring (I wrote the code, so I ought to understand it quickly).

Ideally it should be refactored right away, as you say, but often it takes
forgetting the code and then rereading it to see how unintuitive it was the
first time.

~~~
kentt
I like that tactic. I will steal it.

In a similar vein, I like to take a break if possible, before a larger commit,
then come back to it and see if I should refactor first.

Two benefits of this are (1) a fresh mind to see what you might have missed,
especially since when I'm done with a piece of functionality I often just
want to commit it and move on, and (2) it gets me back into the flow of
development quickly.

------
gshrikant
While the advice seems well-reasoned and complete, I am not sure how the
test-before-you-implement philosophy extends to GUI/graphical development.

I work on high-level embedded applications where we design and simulate the
application on the development machine before flashing it on the target. The
usual test method would be to compare the expected/actual images and check to
see if everything matches the requirement.

In this case, though, it is usually not possible to test something
automatically before implementation, since there is nothing to compare the
results (images) with.

Can anyone suggest how TDD-like methods work in this case?

~~~
khattam
You will find this video by Uncle Bob helpful:
[https://www.youtube.com/watch?v=HhNIttd87xs](https://www.youtube.com/watch?v=HhNIttd87xs)

The gist is: make sure your UI does nothing other than display the view
model. The business logic can then be tested without a GUI. Please watch the
full video if you are interested in learning more.
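A minimal sketch of that separation (the view model fields here are invented for illustration, not from the video): all display decisions live in a pure function, so ordinary unit tests cover the logic without any GUI toolkit.

```python
def build_account_view_model(balance_cents, overdrawn_limit_cents=0):
    """Pure function: turns domain state into exactly what the UI shows.
    The GUI layer only copies these fields onto widgets."""
    overdrawn = balance_cents < overdrawn_limit_cents
    return {
        "balance_text": f"${balance_cents / 100:.2f}",
        "balance_color": "red" if overdrawn else "black",
        "warning_visible": overdrawn,
    }

# The display logic is fully covered by unit tests like this one,
# with no framework, rendering, or image comparison involved.
vm = build_account_view_model(-1250)
assert vm == {"balance_text": "$-12.50",
              "balance_color": "red",
              "warning_visible": True}
```

The same idea applies to the embedded/image-comparison case above: the more decisions you pull out of the rendering layer into plain data, the less you need to diff screenshots.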

------
mchahn
I have often wondered if a total rewrite of an app every few years would
take less effort than putting in the work at the beginning to make it
upgradeable. Designing for upgradeability often fails anyway, especially
when the team changes.

~~~
gumby
That's not the first choice you should make. First, a fresh code base is
going to have new bugs. Second, you risk second-system syndrome
([https://en.wikipedia.org/wiki/The_Mythical_Man-Month](https://en.wikipedia.org/wiki/The_Mythical_Man-Month)),
where you over-engineer it, make a bunch of new decisions, and may never
ship.

Not to say that it isn't sometimes time to throw the old infrastructure
away, but it's not a decision to take lightly.

~~~
mchahn
> you risk the second system syndrome

But a major cause of that syndrome is adding new features to legacy code. If
the code were new, that wouldn't be a factor.

------
partycoder
Software development is not a Ponzi scheme. Employees who sabotage new
employees suck. Making yourself 10x more productive than average by making
everyone else 0.1x as productive as you is not in the best interest of the
company.

Feeding a self-perpetuating circle of unethical, sabotaging losers is not in
the best interest of the company. It's a cancer that needs to be excised as
soon as possible, with an example made by firing them to the sound of a
trumpet.

Hire people that see their job as something more than a paycheck. Hire people
that care.

Hint: people who talk about their weekends, pastimes, vacations, cars, etc.,
and change topic or mood when engaged with work-related challenges, are most
likely checked out. Those are the people who won't care and will drown your
company in their ego.

~~~
cowardlydragon
What you described at the end is a perfect middle-manager Machiavelli, who
"works" by sitting at their desk all day making others look bad.

Either that or they are a soulless drone who hates all people around them and
because of their excessive (uncreative and diminishing returns be damned)
obsession demands untoward shares of the "rewards" and is surrounded by
"morons". These people also worship Ayn Rand.

------
a_imho
There are some good points in there, but there has to be a limit on how many
times you can sell unit tests. I prefer advice that goes "we did it this way
and our results were the following" over the generic, we-renamed-this-idea-
once-again consultant speech of "you should totally do this".

------
reledi
> Upgrade everything, all the time

This is dangerous advice, though there's some good advice underneath:
updating often addresses pain points iteratively and is easier than doing
one major upgrade every year, for example.

But you should not be blindly updating your dependencies every week. It's a
distraction and some updates require dev time to fix things (hopefully your
integration tests catch some of these).

It's also dangerous to update everything without looking at changelogs.
Maybe the new version of one dependency brings in ten new dependencies, or
security vulnerabilities, or it's a completely different codebase, or the
package was removed, like what happened in the recent npm fiasco.

~~~
andy_ppp
Dangerous to do it and dangerous not to. The React ecosystem, for example,
is a particularly fast-moving fish; I think major releases in backend
languages are far easier to stay up to date with. I wonder if this applies
to infrastructure too; always being on the latest version of Ubuntu, for
example, seems overkill. Having said that, I think I generally agree with
the sentiment of the article.

------
wellpast
> Upgrade everything, all the time

In my experience, rationality is better. Everything we do to our code assets
has an associated cost. Therefore, everything we do should have a clear
justification -- unless of course you have world enough and time to spend.
It would be pointless to upgrade from your old MVC framework if there's no
clear, prioritized problem it would solve. Professional developers acquire
tools and instincts over time -- not blind rules like these -- and always
use their brains when solving problems.

~~~
njharman
> Everything we do to our code assets has an associated cost.

Indeed. Not upgrading (incrementally, in small, manageable chunks) is one of
the highest costs you can incur. Almost everything in software dev is done
in small, iterative chunks for good reason.

~~~
wellpast
> Not upgrading (incrementally in small, manageable chunks) is one of the
> highest costs to incur.

Absolutely not true as a generalizable statement. Not even close. I've
worked on several large software projects, from systems dev to full-stack
distributed web systems, that have run solidly for years and years on
non-latest versions. Our productivity would have been severely hampered by
slavish adherence to a rule like this.

------
bigethan
This article is great.

Similarly, Working Effectively with Legacy Code by Michael Feathers
([http://amzn.to/1UxwVdL](http://amzn.to/1UxwVdL)) is a great programming
book. I appreciate it because it's really nothing but patterns for dealing
with bad code (mostly Java, but most of it translates to other languages).
Very little why (which I already know), lots of "how to fix X" -- a great
signal-to-noise ratio.

