
Quit Covering Up Your Toxic Hellstew with Docker - cyberpanther
http://blog.neutrondrive.com/posts/235158-docker-toxic-hellstew
======
scrumper
I read the opening paragraph and thought 'ah well, another boring didactic
angry developer rant.' Just as I was about to close the tab, my eye caught the
start of the second paragraph:

 _This reminds me of my days in the Space Shuttle program._

Which, to put it mildly, is something of a credibility boost. So I finished
the article.

~~~
rsync
"Which, to put it mildly, is something of a credibility boost. So I finished
the article."

Yes, but then when he repeated the line with "this reminds me of my days with
Perl", I suddenly started hearing the entire post in Grandpa Simpson's voice...

~~~
jrockway
"I used an onion as my scripting language, which was the style at the time."

------
dkarapetyan
Yes, simplicity leads to understanding, and I don't understand why more people
don't get this simple concept. I've dealt with codebases with such a
horrendous build process that it doesn't matter what kind of sugar you
sprinkle on top, because making any change is practically impossible. That
complexity has to live somewhere, and if you offload it to the Docker buildfile
it's still in the buildfile. At the end of the day, the problem comes down to
the fact that most developers either don't understand enough to build proper
build pipelines, or they are lazy, or they don't think complexity in the build
pipeline is anything to worry about. Docker does not change those things.

~~~
bdamm
What if the problem actually is complex? Let's say that I'm building an
embedded firmware module for a chipset that can only be compiled with a
proprietary compiler under Visual Studio - and you're a Unix shop with Bamboo
as the continuous integration server?

Then that means the Unix bamboo server needs a Windows agent that can execute
the cross-compiler inside a MS VC environment. It certainly is complex, but
what piece are you going to change?

~~~
fragmede
Honestly? Port it to GCC. (Easier than LLVM, IMO)

That, or switch to a different chipset, this time taking care to select
something that includes unix build tools.

Neither of those is trivial, nor are they cheap. But rewriting their simulator
in C, as described in the article, couldn't have been cheap either.

~~~
new299
Perhaps the instruction set is undocumented? (It happens.) Alternative
chipsets may cost 10x as much (that also happens, and is probably correlated
with documentation quality).

Price is an important factor, and the messy choice is often not unreasonable.

------
r0s
This reminds me of the "Every dev should be senior" mindset that is all too
common in this industry.

I don't see how supposing every DevOps specialist is replaceable by an average
engineer is a real solution.

~~~
yummyfajitas
I think it's something different. "Junior devs shouldn't reinvent the wheel in
production [1] until after they understand the solutions of the senior members
of their field." I.e., don't use MongoDB until you fully understand SQL, and
don't build your own fancy planning algo until you understand linear
programming.

[1] When learning, reinvent all the wheels.

~~~
jrockway
Also, don't use SQL until you fully understand MongoDB.

~~~
yummyfajitas
No. SQL is the culmination of years of research into data storage systems.
It's a well established solution to a large number of problems. MongoDB simply
isn't.

~~~
jrockway
So just use something you don't understand because someone said so. Got it.

~~~
xorcist
No. Trust in what's battle tested in production.

Use MongoDB for all your pet projects, all your development. Joining data from
documents is a good way to learn where SQL came from.

Just don't do it in production before you master it.
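
To make that concrete, here is a minimal sketch (in Python, with toy data made
up for illustration) of the hand-rolled join you end up writing over document
collections: the work a SQL JOIN does for you automatically.

```python
# Two toy "collections" of documents, as you might store in MongoDB.
users = [
    {"_id": 1, "name": "alice"},
    {"_id": 2, "name": "bob"},
]
orders = [
    {"user_id": 1, "item": "keyboard"},
    {"user_id": 1, "item": "mouse"},
    {"user_id": 2, "item": "monitor"},
]

# A hand-rolled inner join: build an index on the join key, then match
# each order to its user. SQL does this (and picks the strategy) for you.
users_by_id = {u["_id"]: u for u in users}
joined = [
    {"name": users_by_id[o["user_id"]]["name"], "item": o["item"]}
    for o in orders
    if o["user_id"] in users_by_id
]

for row in joined:
    print(row["name"], row["item"])
```

Writing a few of these by hand makes the value of a declarative JOIN (and of
the planner choosing the index) obvious in a way no tutorial does.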

------
contingencies
The article is essentially discussing frustration at the hiding - by
abstraction - of technical debt incurred in adopting poor software
architecture and/or development process.

Some potentially relevant quotes[1]:

 _Zymurgy's First Law of Evolving Systems Dynamics: Once you open a can of
worms, the only way to recan them is to use a larger can._

 _Ducharm's Axiom: If you view your problem closely enough you will recognize
yourself as part of the problem._

 _The organization of the software and the organization of the software team
will be congruent._ (paraphrasing of Conway's Law)

 _Separation of concerns ... a necessary consequence of loss of resolution due
to scale ... a strategy for staying sane._ (Mark Burgess, _In Search of
Certainty: The Science of Our Information Infrastructure_ , 2013)

[1] Taken from my fortune clone
[https://github.com/globalcitizen/taoup](https://github.com/globalcitizen/taoup)

------
ilaksh
Actually, the more complex it is, the more beneficial encapsulation can be.

I think maybe I know what his actual problem is: either Docker didn't detect a
change to requirements.txt, or it is _always_ detecting a change.

In the Dockerfile you want to ADD your requirements.txt first, then RUN the
pip install, then ADD .

[http://stackoverflow.com/questions/25305788/how-to-avoid-reinstalling-packages-when-building-docker-image-for-python-project](http://stackoverflow.com/questions/25305788/how-to-avoid-reinstalling-packages-when-building-docker-image-for-python-project)

Also check for a --no-cache flag.
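
The ordering described above can be sketched as a minimal Dockerfile (the base
image, paths, and pip invocation here are assumptions for illustration, not
from the original post):

```dockerfile
FROM python:3
WORKDIR /app

# ADD only requirements.txt first: the pip install layer below is
# rebuilt only when requirements.txt itself changes.
ADD requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

# Adding the rest of the source last means code changes invalidate
# only this layer, leaving the installed packages cached.
ADD . /app
```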

~~~
jessaustin
Thanks for this comment! I read this whole thread primarily to find this
information.

------
gioele
I have been working recently with a PHP application whose installation
instructions are "Run this VirtualBox image inside your network, and without a
proxy in front of it because we are not properly configured".

This PHP application is not trivial, but also not very complex. Yet its
developers do not provide this application as a PEAR package, and replied that
this kind of thing is superfluous these days: VMs are simpler.

People like these developers fail to understand that there is a lot more in a
VM than just their software (from the kernel to all the exposed services) and
that by distributing a VM they are also becoming the maintainers of a very
complex set of dependencies. Not that they care: it took them two months to
release a VM not vulnerable to Heartbleed. Let's see how long it takes before
they release a VM not vulnerable to Shellshock.

------
killertypo
This makes sense; I work in a pretty convoluted SharePoint environment. It is
completely impossible to spin up a development environment without dozens of
scripts and knowing exactly what lists to manually create and what data must
be present inside of them.

This means that new hires are handed a cryptic and seriously out-of-date
document with instructions on how to set up a proper VM environment.

They check out their code.

They deploy...but wait the deploy fails because of missing data, missing
document types, missing lists...

Open up SharePoint, enable some doc types not turned on by default, turn on
some more features, add a list. Deploy again; no, wait, it died again, and now
it's a different doc type and a different feature that the deployment doesn't
turn on by default... etc.

The end result is a mess that requires days to get up and running, not hours.

~~~
hawkice
> i work in a pretty convoluted SharePoint environment

For some reason I intuited that you were not going to talk about how well the
organization has dealt with said complexity. I'm curious if there is any way
to FFI into SharePoint and start using tools that don't actively sabotage
their users.

~~~
bdamm
I'd like to know if anyone has anything nice to say about SharePoint. As a
user I find it miserable in all browsers, and have obviously avoided doing
anything with it as a developer.

~~~
hawkice
It's better than staring into the computer as if it were a dark abyss,
screaming and longing for a way to build something. It's easy to discount how
useful this is for people who really have no tools or means otherwise (not
that more specialized services like, without loss of generality, Wufoo, aren't
way better). The general discontent has to do (mostly) with 'SharePoint
Engineers', which is a bit like having someone build a house with Legos.

~~~
xorcist
I guess a regular wiki is too much to ask for...

(For the omg-cryptic-markup crowd there is always Google Docs. I seriously
wonder why that hasn't crushed all trivial SharePoint instances yet.)

------
jradd
Very valuable concept regurgitated with [flavour-of-the-year] Docker as the
focus.

The article seems to portray Docker as the _cause_ of this anti-pattern
without any sort of context for why Docker and not xyz, or why not your
implementation of xyz.

------
Rapzid
I realize a lot of people use Docker for a lot of different reasons, but the
technology seems more geared toward solving deployment concerns (especially
seeing as it is dotCloud, not Docker, that is a PaaS company).

------
mrfusion
I don't know much about Docker. What is he saying it makes easier?

------
opendais
The problem is the carpenter, not the hammer. Please stop blaming the hammers
in your headlines.

The tool was used incorrectly and it resulted in badness. Bringing Docker into
it is pointless. This would have happened with Salt, Fabric, Chef, Puppet,
etc. with the same team.

~~~
andrewflnr
Even in the headline, it was obvious that Docker wasn't the problem. In the
article itself it was abundantly clear that Docker was just the misused hammer
closest to the front of the author's mind. It's almost as much a defense of
Docker from abusers as anything else.

------
merb
Testing of environments is needed with Docker as well...

