
Reusable Software? Just Don't Write Generic Code - molf
http://josdejong.com/blog/2015/01/06/code-reuse/
======
Fargren
I believe the only reliable way to make reusable code is to reuse it as you
write it. If your team is working on several projects at the same time which
have some overlap, then by all means if something could be useful for three or
more projects, get together with everyone else and strive to write something
you can all use. It will be reused immediately, so that's clearly reusable.
But don't try to write something you'll reuse in the future; you don't have
enough information to know if you are building the right thing, and you won't
know if you did until you try to reuse it.

~~~
JohnSz
I have regrets most times I did not follow this advice.

------
paol
Premature generalization is the root of all evil. (With apologies to Don
Knuth)

The original formulation may have been a real problem at the time it was made,
but in our time running into ugly code due to misguided optimization seems
pretty rare. Code that attempts - and almost always fails - to be generic, and
instead ends up overcomplicated and awkward to use, on the other hand...

It's not just 3rd-party code either; in my own teams it's not rare that I find
myself arguing for KISS and YAGNI. Sometimes I prevail, sometimes the
premature generalizers do.

~~~
arethuza
There used to be a fashion in some parts [1] that rather than building an
application to do X you would build a framework for applications of that type.

Of course, the vast majority of these in-house application frameworks ended up
being used for precisely one application, or were overtaken by frameworks that
succeeded in the wider market.

1. Particularly the enterprise Java world in the bad old days.

~~~
userbinator
_rather than building an application to do X you would build a framework for
applications of that type._

...and then someone comes up with the idea of building a framework for
building frameworks to do X:
[http://discuss.joelonsoftware.com/default.asp?joel.3.219431.12&](http://discuss.joelonsoftware.com/default.asp?joel.3.219431.12&)

~~~
arethuza
Indeed, I managed to control myself by _not_ posting that link as it sometimes
upsets people.

NB I did comment on that thread on Joel On Software.... :-)

------
userbinator
I've always found the idiom "reinventing the wheel" to be somewhat amusing,
since the wheels on a car are very different from the wheels on a bike, office
chair, train, etc., and you certainly can't substitute one for the other (or
you can try, but it usually doesn't work out so well). I see the fact that a
"generic wheel" does not exist in the real world as evidence that it is
impossible to have such a thing in software too. Every application is unique,
and good software engineering depends on being able to see what is unique and
what can be reused from something else.

~~~
steverb
I think the distinction between reinventing the wheel and implementing a
domain specific wheel is important.

------
Zibulon
Weird. To make software more generic, I usually reduce the number of
assumptions and edge case handling that goes in the APIs. If you start adding
options to handle more cases, that's just not "generic". But in any case, we
should start from the use cases, and avoid mission creep... and an interesting
question becomes: how much do you need to design for future use cases, and how
much do you need to design for the use cases currently at hand?

~~~
the_af
Indeed. I think the article uses the term "generic software" in a way
unrelated to, say, "generic programming" (which is a good thing and, like you
say, it actually means _reducing_ assumptions). What the author probably means
is "big ball of mud software that must have catch-all solutions and extension
points to handle all possible cases". Unfortunately the terminology is
confusing.

~~~
Zibulon
Sounds to me like the real issue is either mission creep, or a bad
programmer/designer who doesn't see that the "generic" piece of code now needs
to be split into components. In my practice, a related issue that comes up
often is trying to anticipate future use cases. Maybe a failure to anticipate
reasonable future use cases is what leads to the mess: the first solution is
not "generic" enough, so more and more options are added as new use cases are
discovered. But I find it hard to balance future use cases against immediate
requirements.

------
rumcajz
Rule of 3 should apply to interfaces. Don't make an interface unless there are
at least 3 implementations.

~~~
josephcooney
I disagree. I think interfaces can be a powerful way to think about what you
want to do without having to focus too much on how it will be done,
compartmentalise that 'part' and then move on to something else.
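
As a sketch of that workflow in Java (the `ReportStore` name and its methods
are hypothetical, invented for illustration): callers can be written and
reviewed against the contract before any real storage code exists.

```java
import java.util.HashMap;
import java.util.Map;

// The 'what': a contract callers can be written against today.
interface ReportStore {
    void save(String id, String body);
    String load(String id);
}

public class DesignFirst {
    // The 'how' can be a throwaway in-memory version for now, to be
    // swapped for a real implementation later without touching callers.
    static ReportStore inMemory() {
        Map<String, String> mem = new HashMap<>();
        return new ReportStore() {
            public void save(String id, String body) { mem.put(id, body); }
            public String load(String id) { return mem.get(id); }
        };
    }

    public static void main(String[] args) {
        ReportStore store = inMemory();
        store.save("r1", "quarterly numbers");
        System.out.println(store.load("r1"));  // quarterly numbers
    }
}
```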

------
jokoon
I agree. What's the point of making generic code? Do you really think your
code will be reused in the future for similar cases? If not, better to do what
you've been asked and not delve into abstraction.

Time goes by pretty quickly, things change, and so much code is made obsolete
every year. Remember, it's easier to write code than to read it. Thus, write
code that is readable and does the job, not code that does something abstract
to "lead the way for similar cases".

To me, inheritance and object-oriented programming should let people make
libraries, not end-user applications. Applications are imperative and follow
scenarios and use cases; they're not OOP. Most programmers should only deal
with OOP and inheritance when they use libraries, not in other cases. You
don't need to make a library very often; thus, don't reach for inheritance or
try to write a library.

~~~
Zibulon
Maybe that's too short-sighted. Constantly rewriting code has a big cost. A
huge cost, actually. I see it at my work every day: there is a pervasive
(fashionable?) attitude that no code lives more than 3 years, and we have
incessant "migrations" to new and better systems. You wouldn't believe how
costly those migrations are, including through losing good people who don't
want to suffer through them (and I am sympathetic to their plight; it's really
horrible)...

------
amouat
The first half of the post was great, but I don't fully agree with the points
about Java.

Whilst inheritance is a problem, it seems to me we've been preferring
composition over inheritance for a long time now. Single inheritance is
mentioned as if it were a bad thing, but I'm not sure anyone who has seen (or
worse, tried to implement) multiple inheritance in C++ would agree. Finally,
I'm not convinced that having an interface for a single concrete class is a
bad thing. Sure, it adds some boilerplate (Java's main problem), but it forces
the developer to think about the public API and makes testing easier.
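
A minimal Java sketch of that testing benefit (the `Clock` interface and all
names here are made up for the example, not `java.time.Clock`):

```java
// An interface with, for now, a single real implementation...
interface Clock { long now(); }

class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

public class InterfaceTestDemo {
    // ...lets callers be exercised with a fixed clock in tests,
    // instead of depending on the real system time.
    static boolean isExpired(Clock clock, long deadline) {
        return clock.now() > deadline;
    }

    public static void main(String[] args) {
        Clock fixed = () -> 1000L;  // test double via a lambda
        System.out.println(isExpired(fixed, 500));   // true
        System.out.println(isExpired(fixed, 2000));  // false
    }
}
```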

------
finishingmove
The first part of the article conflates genericity with feature-richness. I
don't necessarily agree that creating generic interfaces means bloat, but I
think a lot of the pains the author mentions come from the way OO languages,
especially the older ones, do generics (yes, inheritance abuse). Haskell is
one example of a language that tries to achieve abstract and generic
interfaces that are elegant (and type-safe!). Another beauty of such
high-level languages is that you can always create a DSL to match the
abstraction level of your problem domain.

~~~
seanmcdirmid
Haskell's type system is great, unless you need semi-unification (equivalent
to nominal subtyping and imperative assignment); then it's not so great
anymore. Unification is pretty limiting.

~~~
tome
Could you explain a bit more? A quick search suggests that semi-unification is
equivalent to Milner-Mycroft typeability, which is polymorphic recursion,
which Haskell does indeed have.

[http://homepages.dcc.ufmg.br/~camarao/SUP/](http://homepages.dcc.ufmg.br/~camarao/SUP/)

[http://en.wikipedia.org/wiki/Polymorphic_recursion](http://en.wikipedia.org/wiki/Polymorphic_recursion)

~~~
seanmcdirmid
Right, but not with type inference. One of the reasons Scala's type system
is quite complicated is that it can do semi-unification to achieve local type
inference, at least (semi-unification is undecidable in general, and Scala
makes a best effort, I believe).

How does Haskell deal with refs?

~~~
tome
I'm not sure what you mean. I'm not aware of any type inference difficulties
with IORefs or STRefs. Polymorphic recursion, on the other hand, does often
(or maybe always) require an explicit type signature.

~~~
seanmcdirmid
If you were to do type inference across mutable reference assignments, you
would have to do some semi-unification. I don't think Haskell does; even ML
treats refs very strangely.

------
jrochkind1
> _Then, a new, fresh alternative arises and moves your library into oblivion.
> This new library is awesome. It doesn’t suffer from a bloated API, it just
> works. It does more or less the same as your library but is faster and
> easier to use. Over time though, this new library will go through the same
> cycle as yours. It gets bloated too and will be replaced with something
> fresh at some time._

I have been recognizing this cycle more and more, I think it's almost a
fundamental law of software engineering -- the war between flexibility and
over-engineering/over-abstraction.

~~~
thibauts
As the maintainer of such a library I think it's your duty to refrain from
adding every requested feature. If your users need more flexibility then break
down your library into building blocks that will allow them to build exactly
what they need by composition.

Reusable code isn't generic, reusable code is focused and built out of
(re)composable parts. _Reusability IS composability_.
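
A small Java illustration of that idea (the names are invented for the
example): instead of one catch-all entry point with options, expose small
blocks the caller composes.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ComposeDemo {
    // Two focused building blocks; no options, no flags.
    static final Function<String, String> TRIM = String::trim;
    static final Function<String, String> LOWER = String::toLowerCase;

    public static void main(String[] args) {
        // The caller composes exactly the pipeline this use case needs,
        // rather than the library offering a normalize(trim, lower, ...) API.
        Function<String, String> normalize = TRIM.andThen(LOWER);
        List<String> cleaned = List.of("  Foo ", " BAR").stream()
            .map(normalize)
            .collect(Collectors.toList());
        System.out.println(cleaned);  // [foo, bar]
    }
}
```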

~~~
jrochkind1
I have had the experience where I _think_ I (or a distributed open source team
on a project I commit to) am doing what you suggest -- but then
retrospectively, I see it still resulted in too much complexity through over
abstraction/engineering.

I think you've got the right idea, but it still doesn't always make it easy.

The places I have found the most success in my design, even looking back, are
when I constructed a library for a domain I was already _well_ familiar with
(often having experienced how another library handled it), _and_ where I had
the time to do it slowly and carefully, deliberately considering each design
choice and the simplest possible way to achieve each desired path while still
allowing for flexible reconfiguration. When working with other developers,
this often involves some initial disagreement about the best solution that
accomplishes the goal without adding unneeded complexity. And sometimes even I
start getting impatient with the slowness of the deliberate process and
consensus building -- it's easier to do by myself (which takes even more
time).

These are rare circumstances that allow for such slow coding though, given the
pressure to get things out quickly, get them used, and then quickly iterate.
We have a (hard-won, in reaction to other harmful development ideologies)
overall ethos in our industry(ies) of getting things out quickly without
needing to get them perfect or fully understand where they will lead, and then
iterating based on feedback. That works pretty well for UI, although it can
still contain the same pitfalls. It is even harder for infrastructural
architecture, though; it's probably still possible as long as you are careful
and deliberate with your iterations (although knowing when to break backwards
compat is still a trick), but, anyway, I don't think it's easy.

~~~
thibauts
I completely share your experience. It almost always takes me two complete
drafts before getting an architecture right anyway. This probably looks awful
to people who like to "ship" more than they like to "build". I like to build.

Converging towards the simplest representation of a problem's solution takes
time, and a strong ability to let go of your mental pictures in order to
escape local maxima. What we do is essentially rewiring our brains around a
problem. In this respect we are bound by biological processes. There is no
shortcut. Every path that looks like a shortcut will lead to conceptual
pollution that grows like the square of the team size and seeds what we call
technical debt.

~~~
jrochkind1
Yes, you said it well.

It requires a very thorough understanding of the domain you are creating a
solution in. And time and effort.

The actual context most software engineers find themselves in does not usually
encourage (or even allow?) that kind of development.

Creating quality software really is more 'expensive'.

~~~
thibauts
Yes it tends to be more expensive at first. But look, as a community we can't
even (yet) agree on what makes quality. It is hard to sell something you can't
clearly explain.

------
jonpress
I think this applies to almost everything except for software 'environments'
(such as operating systems, virtual machines, etc...) and frameworks (to some
extent).

Software environments and frameworks need to be generic because they have to
support a very wide range of software behaviors on the layer above.

Another point: people tend to think that the terms 'monolithic' and 'modular'
are mutually exclusive when describing software - but that's not the case.

Most operating systems are monolithic by nature, but many of these OSes are
also modular in the sense that you can replace or customize various parts of
them without breaking the system.

The Linux kernel does a lot of different things - It's monolithic by design.
It worked out in this case - Otherwise we'd all be using MINIX. Monolithic
isn't always a bad thing.

------
j_m_b
"Islands of functional code in a sea of imperative code." - paraphrased from
Erik Meijer

I've found that my most reusable code is composed of pure functions. The
imperative stuff is always changing depending on the circumstances but my
functions are a solid base upon which I build.
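
A tiny Java sketch of that split (the function and all names are made up):
the pure core stays the same across projects; only the imperative shell
around it changes.

```java
import java.util.List;

public class PureCore {
    // Pure: no I/O, no mutation, no hidden state -- reusable anywhere.
    static double mean(List<Double> xs) {
        return xs.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    public static void main(String[] args) {
        // Imperative shell: where the data comes from and where the result
        // goes differs per project; the pure function underneath does not.
        List<Double> readings = List.of(1.0, 2.0, 3.0);
        System.out.println("mean = " + mean(readings));  // mean = 2.0
    }
}
```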

------
sarciszewski
This reminds me of a blog post by Anthony Ferrara:

[http://blog.ircmaxell.com/2014/10/an-open-letter-to-php-fig.html](http://blog.ircmaxell.com/2014/10/an-open-letter-to-php-fig.html)

------
Rexxar
Concerning generic code, I think we should use a rule inspired by the quote
attributed to Einstein: _"Everything should be made as simple as possible, but
not simpler"_.

The generic code rule: _"Everything should be made as generic as possible,
but not more generic than necessary"_. That means that every time we see a
pattern in the code, we can try to write a generic version. But we should
never write a generic version of something if we don't need it somewhere in
the code.

This rule respects, IMHO, both the DRY and YAGNI principles.
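
As a hedged Java sketch of that extraction step (all names invented): only
after the same filtering loop has shown up for both integers and strings does
the generic version earn its place in the code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class ExtractGeneric {
    // Written only once the pattern already exists twice in the code base.
    static <T> List<T> keep(List<T> xs, Predicate<T> p) {
        List<T> out = new ArrayList<>();
        for (T x : xs) {
            if (p.test(x)) out.add(x);
        }
        return out;
    }

    public static void main(String[] args) {
        // The two concrete cases that motivated the generic version:
        System.out.println(keep(List.of(1, 2, 3, 4), n -> n % 2 == 0));       // [2, 4]
        System.out.println(keep(List.of("a", "long"), s -> s.length() == 1)); // [a]
    }
}
```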

~~~
jrochkind1
I wonder if the converse guideline would actually be more useful: Everything
should be made _no more_ generic/abstract than actually necessary.

Phrasing it as "should be made as generic as possible" is just what leads us
down the primrose path to over-engineered complexity, no?

Abstraction maybe doesn't at first seem like it should be the enemy of
simplicity -- certainly we can think of cases where abstraction leads to
beautiful simplicity, and that's in fact a large part of what makes software
engineering work, and what we enjoy about it.

But in actual life, the pursuit of "as generic as possible" (even if qualified
by "not more generic than necessary") seems to often be the primary motivation
leading to conflict with "as simple as possible but not simpler."

~~~
chriswarbo
You've made a subtle change of meaning there. By going from:

> should be made as generic as possible

To:

> should be made no more generic/abstract than actually necessary.

You've introduced a subjective judgment call. The result is a tautology, since
we could rephrase it to:

> should be made no more generic/abstract than it should

Whether or not we agree with the quotes from Einstein or the parent, at least
they're (mostly) objective, and tell us how to proceed: if anyone finds a way
to simplify or generalise a result (without sacrificing correctness), they
should do it.

~~~
jrochkind1
I don't know if it's any different in objective/subjective, I think, as
applied to software engineering, anyway, there's always a subjective element.

I suppose the corresponding operationalization to the way I've phrased it is:
Don't add genericness unless you are _unable_ to find a way to proceed with
what is needed without it.

I think (and I think the OP thinks) the problem we have more often is people
adding too much abstraction, not too little. If so, we need a rule that
discourages, rather than encourages, the problem.

------
jwl
I have fallen into this pitfall myself a couple of times: trying to write
reusable generic code on the theory that we would be able to reuse it later
for a similar project, but in practice it almost never happens. New projects
end up being way too different anyway, and in the end it took more time to
make generic reusable code usable by two projects than to simply write the two
projects with completely separate code bases.

~~~
zo1
Until you realize that you have X different implementations of the "same"
functionality, all because no one bothered to look at existing code, or to
make their code _a little bit_ generic so that those who look at it later know
how to adapt it to their needs.

In essence, both sides are partially to blame. There are times when generic
code is unneeded, and other times when it will get used in the future. The
trick is to know the difference before it's too late and you have X different
variations of similar functionality.

------
cubano
_The component will have a more extensive API and configuration. This requires
more knowledge on the clients side, and makes debugging harder as more things
can go wrong. Developing and debugging the component itself becomes harder as
one needs to account for more and more edge cases._

Back "in the day", devs and managers used to call this _job security_, and it
was a preferred outcome for most of the people I worked with.

~~~
pc86
It's difficult to tell what your tone is, but it is horrible working with
people who are concerned only or primarily with job security. It leads to
people holding on to information, being hesitant or outright refusing to
document procedures and processes, and absolutely denying that there may be a
better way to do something than the way they happen to be an expert in.

~~~
cubano
Yes, I couldn't agree more.

But, unfortunately, this was the mindset in corporate IT 10-15 years ago, and
in fact the last onsite contract job I worked a few months ago seemed to have
this attitude, so I was thinking that in some ways things are still the same.

I couldn't really stand it back then, which is why I moved to
contracting/consulting.

~~~
pc86
I've only been in the job market for 6/7 years, and only half of that in a
very corporate environment. Luckily it's (in my limited experience) an
individual character trait and not any systemic cultural issue so far.

~~~
cubano
I, too, have noticed a big change along those lines, which is why I was very
careful to use the hated "back in the day" phrase.

There is no doubt that, due to the pace of technological change nowadays, the
idea that devs and managers can hide behind a bloated solution and rake in
profit is pretty much long gone.

------
chipsy
I treat software like gardening. Don't abstract until it's mature enough.

~~~
PeterHorne
How does your second sentence apply to gardening?

~~~
ealexhudson
Let it all grow before starting to remove things.

------
marcosdumay
> Maximizing genericity complicates use

Not exactly. Maximizing genericity leads to a Turing-complete language. An API
is a set of restrictions you place over a language; completely generalizing it
leads to no API at all.

------
jontro
Sometimes it's necessary to use interfaces in Java even if you will have just
one implementation. AOP and proxy creation require this in some cases.

IDE support for this is great, so you do not have to type it twice.
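
A minimal sketch of the proxy case (`Greeter` is a made-up interface):
`java.lang.reflect.Proxy` only accepts interface types, which is one reason
single-implementation interfaces keep appearing in Java code.

```java
import java.lang.reflect.Proxy;

interface Greeter { String greet(String name); }

public class ProxyDemo {
    // Proxy.newProxyInstance rejects concrete classes with an
    // IllegalArgumentException -- only interfaces can be proxied.
    static Greeter logged(Greeter real) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            (proxy, method, args) -> {
                System.out.println("calling " + method.getName());
                return method.invoke(real, args);
            });
    }

    public static void main(String[] args) {
        Greeter g = logged(name -> "Hello, " + name);
        System.out.println(g.greet("world"));
    }
}
```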

~~~
chriswarbo
I think the point is to use interfaces where necessary, but not where
unnecessary. If proxying/AOP/etc. require interfaces, then use interfaces (or
don't use proxying/AOP/etc.; the same argument can be applied there too!)

------
xamde
Sometimes generic code is clearer than non-generic code. Although this is
rare.

~~~
the_af
I'd say the opposite is actually true. Often generic code is clearer and less
error-prone than non-generic code. This is because generic code reduces the
number of assumptions you make (e.g. if you can't know the actual type of the
elements of a container, there are fewer operations you can apply to them. A
function that concatenates two lists cannot accidentally try adding 1 to each
element, because it doesn't know whether the elements are numbers. Hence, a
whole class of mistakes is eliminated before you get the chance to commit
them).

For example, (barring side-effects for simplicity) there is one and only one
possible implementation for a function with signature:

    f: T -> T

Where T is a generic type. Contrast this with:

    g: Int -> Int

which has many implementations, many of them possibly not what you intended.
If what you intended was the identity function for integers, better to use
_f_, because it has fewer possible implementations than _g_. It's even easier
for the reader to understand: since _f_ doesn't mention integers, there's no
point in checking whether it does any operation on them. Of course this is a
trivial example, but the same point applies to more complex functions.
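
The same point can be sketched in Java generics (method names invented):

```java
public class GenericDemo {
    // Barring null and reflection tricks, the only total implementation of
    // this signature returns its argument: the type T can't be inspected.
    static <T> T f(T x) {
        return x;
    }

    // Int -> Int admits many implementations; nothing in the type stops
    // this one from quietly not being the identity you intended.
    static int g(int x) {
        return x + 1;
    }

    public static void main(String[] args) {
        System.out.println(f("unchanged"));  // unchanged
        System.out.println(g(41));           // 42
    }
}
```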

