
Invented here syndrome (2015) - delian66
https://mortoray.com/2015/02/25/invented-here-syndrome/
======
imagist
The longer I work, the more NIH I get. There are two reasons for this:

1\. The ease of creating a library these days is resulting in a proliferation
of utterly crap libraries, so bringing on a new library is more and more of a
liability, both to future reusability of code and to security.

2\. As I get better at programming, I more and more often realize that I can
write a better version of the library, or just one that better fits my needs.

These combine to create a situation where I'm less and less likely to want to
import something.

~~~
MaulingMonkey
I'm the opposite.

1\. The proliferation of libraries enables devs to get very, very picky about
their libraries - the slightest wart or lack of understanding will cause them
to label a library "utterly crap". But anything they create to replace it will
run afoul of another programmer's picky tastes even worse! (And to what degree
they may have a point, exactly 0 devs will be previously familiar with the
warts and misfeatures of this new code.)

2\. As I get better at project management, I realize that my fellow devs and I
are constantly underestimating the ongoing maintenance burdens of new
code. It's hard enough getting us to make sufficiently pessimistic time
estimates for the initial _implementation_ that PMs don't have to smile and
multiply by large integers - nevermind accounting for the next few years of
adding additional debug logic, logging, fixing edge cases, supporting legacy
decisions, adding a full set of fuzzing and regression tests, etc.

I wonder if we're regressing toward the same mean, or away from it...

~~~
imagist
> 1\. The proliferation of libraries enables devs to get very, very picky
> about their libraries - the slightest wart or lack of understanding will
> cause them to label a library "utterly crap". But anything they create to
> replace it will run afoul of another programmer's picky tastes even worse!
> (And to what degree they may have a point, exactly 0 devs will be previously
> familiar with the warts and misfeatures of this new code.)

Ah yes, the "it's all subjective" argument. The thing is, it's not subjective.
Sure, it may be hard to objectively measure the qualities of a library, but
that doesn't mean we should give up and assume there's no underlying objective
truth.

My "picky tastes" are for libraries that are battle-tested, performant, and
scalable. If that runs afoul of someone else's tastes, frankly, I don't care.

> 2\. As I get better at project management, I realize that my fellow devs
> and I are constantly underestimating the ongoing maintenance burdens of new
> code. It's hard enough getting us to make sufficiently pessimistic
> time estimates for the initial implementation that PMs don't have to smile
> and multiply by large integers - nevermind accounting for the next few years
> of adding additional debug logic, logging, fixing edge cases, supporting
> legacy decisions, adding a full set of fuzzing and regression tests, etc.

I definitely agree that the ongoing maintenance burden of new code is
consistently underestimated, but it doesn't follow from that that we should
prefer third-party libraries to our own code. A new third party library is
still new code, and it's not often a reliable assumption that someone else
will maintain that code forever. Further, general-purpose third party
libraries aren't tailored to our needs, so they often contain _more_ code than
we would write. The result? We often end up with a larger maintenance burden
by using a library. And finally, even if someone else maintains your library,
that doesn't mean they will maintain the integration point between your code
and theirs. Keeping up with changes in a library that doesn't value backward
compatibility highly can be just as costly as maintaining a library yourself.

~~~
crpatino
> ... but it doesn't follow from that that we should prefer third-party
> libraries to our own code.

Agreed. The big tragedy of Open Source is that after its initial wave of
success it got flooded with freeloaders. If you allow me to repeat myself:
the one flaw of the GPL is that it did not offer provisions to prohibit
distribution in binary form. "You want free? Better know how to unpack the
tarball and build the project from scratch... all by yourself (no build
scripts beyond a Makefile permitted, thank you very much)".

Of course, such an overzealous GPL would also need to make provisions to
accept dual licensing with nice, commercial, royalty-seeking licences. That way,
everybody gets what they value the most: You want freedom, you pay with sweat;
you just want access, you pay in cash.

~~~
imagist
That would pretty much defeat the entire goal of free software. Setting up a
technical barrier to usage is counter to free software goals.

~~~
crpatino
And what are those goals? Providing free labor to corporate interests? Giving
away free products to cheap people who can't be bothered to recognize for one
second the personal effort invested by the producers?

Creators are going to create; they cannot help it. I do not say that you
cannot provide altruistic value to others, but my opinion is that you should
care about your own people first. And the big mistake of programmers as a
profession is that we do not see other fellow programmers as our own people.

------
V-2
> _Perhaps a library covers only 95% of the requirements_

95% isn't "only". That's pretty darn high. I wish that in-house code covered
95% of requirements more often.

If we're talking about open-source (I can't see why not), you can just fork
the library in question and adjust it to fulfill the missing 5%.

> _Being available in a package manager, being downloaded by thousands of
> people, or having a fancy web page, are no indications of a good product._

1\. Being used by thousands of people isn't by itself a guarantee that it's
good, but it surely makes it more likely.

2\. Obviously no one sane would pick third party libraries based on how fancy
some web page is. That's a strawman argument.

~~~
imagist
> 95% isn't "only". That's pretty darn high. I wish that in-house code
> covered 95% of requirements more often.

> If we're talking about open-source (I can't see why not), you can just fork
> the library in question and adjust it to fulfill the missing 5%.

The percentage isn't relevant if working around the design of the library to
fulfill the missing 5% takes 30x as long as just writing the code yourself.
Additionally, libraries often do a lot of things you _don't_ need, so you get
a bunch of bloat along with the stuff you want.

> Obviously no one sane would pick third party libraries based on how fancy
> some web page is. That's a strawman argument.

We can argue about the sanity of people, but the fact remains that people on
teams I've worked on have chosen libraries based on the fanciness of their
webpage.

~~~
V-2
> _The percentage isn't relevant if working around the design of the library
> to fulfill the missing 5% takes 30x as long as just writing the code
> yourself._

Of course, but one should be aware of a tendency to _overestimate_ the effort
required for tweaking third party code (understanding somebody else's code is
always more difficult), and _underestimate_ the effort required for writing
the thing from scratch.

It's closely related to the infamous "Big Rewrite" problem.

> _the fact remains that people on teams I've worked on have chosen libraries
> based on the fanciness of their webpage_

I believe you, but I find it difficult to believe these would be the same
people who tend to introspect on their profession a lot, eg. by following
software engineering blogs, so the author is kind of preaching to the choir in
my view.

~~~
imagist
> Of course, but one should be aware of a tendency to overestimate the effort
> required for tweaking third party code (understanding somebody else's code
> is always more difficult), and underestimate the effort required for writing
> the thing from scratch.

It's not just the cost of tweaking, it's the cost of keeping it up-to-date,
too. I agree with the general gist of what you're saying, just wanted to point
out there's more to it.

> I believe you, but I find it difficult to believe these would be the same
> people who tend to introspect on their profession a lot, eg. by following
> software engineering blogs, so the author is kind of preaching to the choir
> in my view.

On the contrary, I think people who don't introspect about what they're doing
are _more_ likely to read tech blogs. How else would they find out what the
latest fad libraries are so they can treat them as best practices? :)

~~~
pkolaczk
I agree that understanding someone else's code is typically easier than
writing that code from scratch. But that's just part of the equation. Another
part is how much of that code you have to understand or write. Understanding a
1 MLOC framework with all its quirks may be much harder than writing a
thousand lines of code implementing the feature that you want.

So if you need only 5% of the functionality of a huge general-purpose library,
it may actually still be faster to write that 5% of the functionality yourself
than to understand, tweak, and later maintain the library.
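
The 1 MLOC vs. 1 KLOC tradeoff above can be put into toy numbers. This is only
a sketch; every line count and per-line cost below is an invented assumption,
not data from the thread:

```python
# Toy cost comparison: understanding a slice of a huge framework vs.
# writing the feature yourself. All numbers are illustrative assumptions.

READ_COST = 1.0    # effort units per line you must understand (assumed)
WRITE_COST = 10.0  # writing a line costs ~10x reading one (assumed)

lines_to_understand = 50_000   # say 5% of a 1 MLOC framework concerns you
lines_to_write = 1_000         # the same feature, written from scratch

adopt_cost = lines_to_understand * READ_COST   # cost of adopting the library
write_cost = lines_to_write * WRITE_COST       # cost of writing it yourself

print(adopt_cost, write_cost)   # 50000.0 10000.0
print(write_cost < adopt_cost)  # True: under these assumptions, writing wins
```

Flip the assumed ratios (a small, well-documented library, or a hard feature)
and the comparison flips too, which is the whole point of the disagreement.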

~~~
imagist
I think we're agreeing. :)

------
mamon
One cause of "invented here syndrome" is the proliferation of crappy
developers. I've seen it many times in the companies I worked for (as a senior
dev / architect): the code that is required by some web framework or library
is quite clean and well structured, while the code at the core, the business
logic, is a mess.

Tell a junior to mid-level developer to write any significant piece of an
application and a host of bugs and NullPointerExceptions will follow. Finding
a library for everything, and leaving developers only the simple task of
gluing them together, seems like the only way of ensuring a minimum level of
code quality.

~~~
sgift
If all they are allowed to do is glue code together, they will never learn
and will never get better at anything other than gluing code together. Sounds
like a bad idea.

~~~
mamon
Maybe part of the issue is that I personally never work as a full-time
employee. I'm more like a consultant, hired on a per-project basis, so
training less experienced developers is neither in my job description nor in
my schedule.

~~~
Nursie
I work as a consultant on a per-project basis too.

I would tend to consider training up any permanent staff on the team to be
part of my job. Perhaps not on basic technology, but certainly on what we're
doing. That way, when I leave, there's still knowledge in house.

This may seem to limit business opportunities, but I consider it part of a job
well done.

------
whatever_dude
I take a more selfish approach on this debate. I nearly always write my own
code to do something, even when a library already exists... just so I can
understand the problem well. Then, when I finally decide to use a specific
library, I understand more clearly why they're doing things in a specific
way.

Maybe not the most advisable route for commercial work, but one that keeps me
satisfied.

~~~
moron4hire
You're going to need to understand how the system works regardless. So you can
spend the time learning how the internals of someone else's library works, or
you can learn how to solve the problem and write your own. In my experience,
there is rarely a difference between the two. But when I write my own code, I
know how to make it integrate with all my other code better.

------
headcanon
Naturally, like all engineering, it's a balancing act with tradeoffs. We are
responsible for all the code that runs in our application, but we are also
responsible for shipping. It doesn't make sense in this day and age to write
every line myself, but at the same time it is irresponsible to include buggy
dependencies.

One of the first things I do when asked to solve a problem or implement
feature X is to check around to see if there is an existing solution. It's
very likely that I will find something that is at least close to what I want,
and from there I evaluate the library, asking these questions:

1) How complex is the problem that this library solves? How difficult would it
be to implement it in my app?

2) How close is this library to what I want? Is it easy to integrate or do I
have to jump through a lot of hoops?

3) How bloated is the library? Do I have to include a bunch of stuff in the
production build that I'm not using? (not always a problem depending on how
the language imports libraries)

4) Is it well written? Is it actively maintained? Github stars and issue
backlogs don't say everything but can provide good heuristics.

5) Do I expect that I will need to customize or optimize the underlying
behavior in the future?

Ultimately, I think the biggest question here is "Is the library working for
me, or am I working for it?" If the latter, maybe it's worth considering
writing it yourself.
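
The five questions above can be caricatured as a tiny scoring rubric. This is
purely a toy sketch of my own: the parameter names, weights, and threshold are
all invented assumptions, not anything from the comment or a real methodology:

```python
# Toy rubric for the five library-evaluation questions above.
# Each score runs from 0 (argues against adopting) to 2 (argues for adopting);
# the threshold of 7 is an arbitrary illustrative cutoff.

def should_adopt(problem_complexity, integration_fit, leanness,
                 quality, customization_unlikely):
    """Return True if the library seems worth adopting.

    problem_complexity:     how hard the problem would be to build in-house
    integration_fit:        how close the library is to what we want
    leanness:               how little unused code we would ship (2 = lean)
    quality:                how well written / actively maintained it is
    customization_unlikely: how unlikely we are to need deep changes later
    """
    score = (problem_complexity + integration_fit + leanness
             + quality + customization_unlikely)
    return score >= 7

# A hard problem solved by a lean, well-maintained, close-fit library:
print(should_adopt(2, 2, 2, 2, 1))  # True
# A trivial problem plus a bloated library we'd have to fight:
print(should_adopt(0, 1, 0, 1, 0))  # False
```

In practice the answer is a judgment call, not a sum; the sketch only makes
the "is the library working for me?" framing concrete.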

It also depends on the language and ecosystem for me, not to mention
individual libraries. I'm working with javascript primarily right now, and
there are a lot of npm libraries that are so small it makes more sense to
just copy/paste the code into a utility (after doing due diligence on it, of
course) and iterate on top of it. But it doesn't make sense to rewrite, say,
jQuery or React.

edit: grammar

------
alephnil
In addition to requiring that the code you consider including has high enough
quality, it is also important to consider external dependencies. If the
library requires 35 external dependencies, it is likely to be a maintenance
burden in the future, so you had better avoid it. Otherwise you may end up in
the situation where library A upgrades to the version 2.0 API of library C,
while library B continues to use version 1.3, and suddenly you have a
compatibility problem. Other factors can be whether the library is hard to
build, or whether it is only for Windows while your product is for Windows,
Mac, and Linux.
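
That version conflict can be sketched as a tiny dependency check. The library
names A, B, C and the version numbers are the hypothetical ones from the
comment, not real packages:

```python
# Minimal sketch of the diamond-dependency problem described above: two
# libraries in the same project pin different major versions of a shared
# dependency C.

def find_conflicts(requirements):
    """requirements: iterable of (consumer, dependency, version) tuples.

    Returns {dependency: {major_version: consumer}} for every dependency
    pinned at more than one major version."""
    pins = {}  # dependency -> {major version: consumer}
    for consumer, dep, version in requirements:
        major = int(version.split(".")[0])
        pins.setdefault(dep, {})[major] = consumer
    return {dep: majors for dep, majors in pins.items() if len(majors) > 1}

reqs = [
    ("A", "C", "2.0"),  # library A upgraded to the 2.0 API of C
    ("B", "C", "1.3"),  # library B still uses 1.3
]
print(find_conflicts(reqs))  # {'C': {2: 'A', 1: 'B'}}
```

Real package managers do far more (ranges, resolution, lock files), but the
shape of the problem is exactly this: one shared name, two incompatible pins.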

The real question is what will create the least maintenance burden in the
future.

------
Roboprog
I think a related, contributing problem is an inability to create and describe
abstractions.

Time and again, I see projects where the developer(s) is/are unable to do
anything beyond one line after another of calls into low level library
primitives, or maybe filling up a bunch of directories named after design
patterns which plug into a framework.

Beyond that, there seems to be a widespread inability to recognize
repetition and then refactor it out, or to build layers that separate
domain/business logic from low-level details.

Also, the fairy tale of "self-documenting code" is widespread, versus, say,
"literate programming"; even if many of your in-house staff made their own
libraries, the interfaces would be hard to understand. It's amazing how the
act of trying to document an interface forces you to keep it clean and
orthogonal.

Mixing developers with different "styles" (inline vs abstraction) is going to
be frustrating for both extremes.

I tend to agree with the author's opinion, but much like with "The Big Ball
of Mud", you need to understand the forces that cause these patterns to
emerge.

------
dzhiurgis
[https://en.wikipedia.org/wiki/Inventor%27s_paradox](https://en.wikipedia.org/wiki/Inventor%27s_paradox)

------
bitwize
Iron law of open source: you either use a solution with extensive community
support, or you forge your own path, assuming total responsibility for
maintenance and support of any code that comes from off the beaten track
(whether developed in-house or not).

For code that falls outside the company's core competency, is it any surprise
that companies are more inclined to use an external library?

------
erikb
Certainly a big problem of mine, or at least it was in the past. Just as the
not-invented-here people eventually learn that there is good code out there,
people like me eventually learn that you need to write some modules yourself.
Sometimes the 400-line if-else-for crap is exactly what is needed to get a
step further and realize that the customer actually wanted another feature to
begin with.

------
dqv
I came to this realization when I had to make too many concessions to avoid
reinventing the wheel. Between the huge list of dependencies and having to
change a major part of how I do business, I decided to trade one "good enough"
for another.

------
justinlardinois
My first exposure to anything programming-related was some web publishing
courses I took at a community college when I was a teenager. The classes were
intended for people with a graphic design bent rather than a developer bent,
so they were mostly focused on HTML, CSS, and web design-relevant Photoshop
tricks.

I credit those classes with sparking my interest in programming, but they
didn't actually cover it to any extent. Instead there was this attitude that
you could just find a JavaScript file or Perl CGI script that would do what
you need, and when necessary you could shoehorn it in, despite having little
to no understanding of the involved programming languages. GitHub didn't exist
yet, so there wasn't a centralized, trustworthy source of free JavaScript
modules; instead you had to find them on sketchy ad-riddled sites and hope
they did what they said they did. I remember there were even sites that would
_sell_ scripts, a concept that feels totally alien to me now in 2016.

tl;dr Invented here syndrome is a big thing for web designers who aren't
developers

------
Pica_soO
The library will seem to withstand a quick and dirty test of performance and
scalability, and then collapse when near-release production load puts pressure
on it. Then you will rewrite it, and to distribute the crunch, you will keep
doing so from then on.

------
_RPM
Do you think this guy could pass the hazing rituals of Bay Area companies?
Reverse this binary tree for me; that'll tell me if you're a good programmer.

~~~
mortoray
I (the author) can certainly do the silly algorithm tests, but whether I can
contain my snide remarks enough to get the job is uncertain. :)

~~~
_RPM
The funny thing is, when you're interviewing, your blog, open source projects,
etc won't count. I know this might not seem real, but it's the truth.

