
The sad state of sysadmin in the age of containers - Spakman
http://www.vitavonni.de/blog/201503/2015031201-the-sad-state-of-sysadmin-in-the-age-of-containers.html
======
blfr
This bothers me as well. Even tasks as simple as adding a repository are now
being "improved" with a curl | sudo bash style setup[1].

However, installing from source with make was (and remains) a mess. It may
work if you're dedicated to maintaining one application and (part of) its
stack. But even then it usually leads to out of date software and tracking
versions by hand.

Many people have this weird aversion to doing basic sysadmin stuff with Linux.
What makes it weird is that it's really simple. Often easier than figuring out
another deploy system.

(The neckbeard in me blames the popularity of OSX on dev machines.)

[1] [https://nodesource.com/blog/nodejs-v012-iojs-and-the-nodesou...](https://nodesource.com/blog/nodejs-v012-iojs-and-the-nodesource-linux-repositories)

~~~
stephenr
I agree that the "just curl this into bash" instructions are a nightmare - on
any platform.

I think a lot of this is a result of what I like to call the "Kumbaya approach
to project/team management":

This is where you have a team (either for a single project or a team at a
consulting agency, etc) that is effectively all development-focused staff,
possibly with some who _dabble_ in Infrastructure/Ops. In this environment,
when a decision about something like "how do we get a reliable build of X for
our production server deployment system" needs to be made or a system needs to
be supported, no idea is "bad", because no one has the experience or
confidence to be able to say "that's a stupid idea, we are not making `curl
[http://bit.ly/foo](http://bit.ly/foo) | sudo bash` the first line of a
deployment script"[1]

[1] yes this is an exaggeration, but there are some simply shocking things
happening in real environments that are not far off that mark.

Edit: to make it absolutely clear about what I was referring to with [1]:

The specific point I was making was about running something they don't even
_see_ (how many people would actually look at the script before piping it to
bash/sh?) from a non-encrypted source, and relying on a redirection service
that could remove or change your short url at any time.
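
Even something as basic as the following (hypothetical URL) would be a big
improvement, because you at least get to see what you're about to run as root:

    curl -fsSL -o setup.sh https://example.com/setup.sh   # HTTPS, saved to a file
    less setup.sh                                         # actually read it first
    sha256sum setup.sh                                    # compare to a published checksum, if one exists
    sudo bash setup.sh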

Unfortunately I was stupid enough to ddg it (duckduckgo it, as opposed to
google it) and apparently this _exact_ use-case was previously the
_recommended_ way of installing RVM[2]

[2] [http://stackoverflow.com/questions/5421800/rvm-system-wide-i...](http://stackoverflow.com/questions/5421800/rvm-system-wide-install-script-url-broken-what-is-replacement)


~~~
DannoHung
I think part of this is because there aren't any trusted, fully open source,
artifact repositories that work with the various package indices out there.

Like, most of the way deployment should work is that you come up with some
collection of packages that need to be installed and you iterate through and
install them. Bob's your uncle.

Thing is, all the packages you need live out in the wild internet. Ideally,
you'd just be able to take a package, vet it, and put it in your local
artifact store and then when your production deployment system (using apt or
yum or pip or gems or maven or whatever) needs a package, it looks at your
local artifact store and grabs it and goes about its business. Never knowing
or touching the outside world.

And your developers would all write their apps to deploy through the normal
packaging methods that everyone and their mother is already familiar with and
they could just put them into the existing package index as well.

But you've gotta lay out pretty serious moola (from when I last looked into
available solutions to this) or set up a half dozen different artifact stores
if you want to do things that way. And good luck managing your cached and
private artifacts if you do. And on top of that developers don't necessarily
know how to set up a PyPI or an RPM index or whatever so that the storage is
reliable and you've got the right security settings or whatever else. (I know
I sure don't and I'm not really interested in reading all of the ones I'd end
up needing).
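
The consumption side of that is already trivial; it's the hosting side that's
the pain. A rough sketch of what I mean, with a made-up internal hostname and
placeholder package names:

    # pip: resolve everything from the internal index instead of PyPI
    pip install --index-url https://artifacts.internal.example/pypi/simple/ somepackage

    # apt: only look at the internal mirror
    echo 'deb https://artifacts.internal.example/debian jessie main' \
        > /etc/apt/sources.list.d/internal.list
    apt-get update && apt-get install somepackage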

~~~
tracker1
With docker, as referenced in TFA... you can simply vet a base image, and use
that for your application... upgrades? create a new/updated base image and
test/deploy against that.

~~~
retrogradeorbit
And how do you "simply vet a base image"?

~~~
garthk
Same as everything: look at how it was built. Many of the images are built by
CI systems according to Dockerfiles and scripts maintained in public GitHub
repos. Audit those, then build from them yourself if you're worried about the
integrity of the services and systems between the code and the repository.
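
In practice that can be as mundane as this (repo and registry names are
placeholders):

    git clone https://github.com/someorg/someimage.git
    cd someimage
    less Dockerfile                 # read what actually goes into the image
    docker build -t registry.internal.example/someimage:1.0 .
    docker push registry.internal.example/someimage:1.0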

------
skywhopper
I agree that many of these convenient setups are embarrassingly sloppy, but
it's the sysadmin's responsibility to insist on production deployments being
far more rigorous. No one can tell you how to build hadoop? Well, figure it
out. Random Docker containers being downloaded? Use a local Docker repo with
vetted containers and Dockerfiles only.

I don't even allow vendor installers to run on my production systems. My
employer buys some software that is distributed as binary installers. So I've
written a script that will run that installer in a VM, and repackage the
resulting files into something I'm comfortable working with to deploy to
production.
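
The script is nothing clever; roughly this (paths, names, and the choice of
fpm here are illustrative, not my actual tooling):

    ssh build-vm 'sudo ./vendor-installer.bin --unattended'       # run the installer in a throwaway VM
    ssh build-vm 'tar czf /tmp/vendorapp.tar.gz /opt/vendorapp'   # capture what it actually laid down
    scp build-vm:/tmp/vendorapp.tar.gz .
    fpm -s tar -t deb -n vendorapp -v 1.2.3 vendorapp.tar.gz      # repackage into something I control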

If a sysadmin is unable to insist on good deployment practices, it's a failure
of the company or organization or of his own communication skills. If a
sysadmin allows sloppy developer-created deployments and doesn't make constant
noise about it, then they aren't doing their job properly.

~~~
jetpks
> it's the sysadmin's responsibility to insist on production deployments

What decade are you from? No startups are hiring sysadmins to do any kind of
work anymore. They're hiring "dev-ops" people, which seems to mean "Amateur
$popularLanguage developer that deployed on AWS this one time."

That's the whole problem with the dev-ops ecosystem. None of these dev-ops
people seem to have any ops experience.

~~~
thaumaturgy
> _No startups are hiring sysadmins to do any kind of work anymore._

Then maybe people should be willing to work for more grown-up businesses.

HN tends to get a distorted view of what's important in the tech industry. The
tech industry is way, way, way bigger than startups, and there are still
plenty of companies that recognize the value of good sysadmins.

Let the startups learn their lesson in their own time.

~~~
dikaiosune
The alternative is that many of the startups don't learn this in their own
time, and they go on to become bigger, more successful companies who can set
the tone and shift the market. Of course, if they're actually able to succeed
by doing so, then that says something too. Although the trend of many data
breaches certainly wouldn't decline in that case.

~~~
rudolf0
>Although the trend of many data breaches certainly wouldn't decline in that
case.

Exactly. "Successful" and "profitable" don't imply "secure" or
"well-architected". At least until the lack of those last two comes back to
bite you later and starts eating into your profits.

~~~
Lord_Zero
Sony is a great example of this.

~~~
jjoonathan
Did the PR hit actually translate into a monetary hit and eat into their
profits?

~~~
thaumaturgy
I don't know about the cost of the negative PR, but the compromise itself cost
them $15 million in real costs
([http://www.latimes.com/entertainment/envelope/cotown/la-et-c...](http://www.latimes.com/entertainment/envelope/cotown/la-et-ct-sony-hack-cost-20150204-story.html))
and potentially much more
([http://www.reuters.com/article/2014/12/09/us-sony-cybersecur...](http://www.reuters.com/article/2014/12/09/us-sony-cybersecurity-costs-idUSKBN0JN2L020141209))
once you count the downtime
involved and potential lawsuits, settlements, and other fallout over the
breach of information. IIRC there were some embarrassing emails released
regarding some Hollywood big-wigs, for example.

It _should_ be a huge cautionary tale for any big organization that doesn't
have good internal security, but unfortunately this isn't the first such case
in history, and it almost certainly won't be the last.

But that doesn't mean there aren't other smart businesses out there.

~~~
_yosefk
$15M sounds like a rounding error for Sony. It sounds like a rounding error as
well when compared to the cost of brand-name IT solutions when deployed in a
company of Sony's size.

------
lmm
make is the least-auditable build tool imaginable. You don't have to obfuscate
a Makefile, they come pre-obfuscated; you could put the "own me" commands
right there in "plain" Make. Not to mention that it's often _easier_ to tell
whether a Java .class file is doing anything nefarious than whether a .c file
is. How many sysadmins read the entire source of everything they install
anyway?

Maven, on the contrary, is the biggest single source of signed packages
around. Every package in maven central has a GPG signature - the exact same
gold standard that Debian follows. The problems Debian faces with packaging
Hadoop are largely of their own making; Debian was happy to integrate
Perl/CPAN into apt, but somehow refuses to do the same with any other
language.

> Instead of writing clean, modular architecture, everything these days morphs
> into a huge mess of interlocked dependencies. Last I checked, the Hadoop
> classpath was already over 100 jars. I bet it is now 150

That's exactly what clean modular architecture means. Small jars that do one
thing well. They're all signed.

Bigtop is indeed terrible for security, but its target audience is people who
want a one-stop build solution - not the kind of people who want to build
everything themselves and carefully audit it. If you are someone who cares
about security, the hadoop jars are right there with pgp signatures in the
maven central repository, and the source is there if you want to build it.
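
Checking one of those signatures yourself is a two-minute job (version picked
purely as an example; deciding which signing key to trust is still on you):

    base=https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/2.6.0
    curl -fsSLO $base/hadoop-common-2.6.0.jar
    curl -fsSLO $base/hadoop-common-2.6.0.jar.asc   # the detached GPG signature
    gpg --verify hadoop-common-2.6.0.jar.asc hadoop-common-2.6.0.jar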

~~~
geocar
Makefiles don't really enter into it and getting software signed by the
developer isn't that valuable or useful.

The value of debian is not that they package (or repackage) everything into
deb files but that they resolve versioning and dependency conflicts, slip
security fixes into old versions of libraries (when newer versions break
API/ABI), and make it possible to integrate completely disparate software into
a system. They also have a great track record at it.

Maven does not do any of these things; Maven does nothing to protect the
system administrator from a stupid developer, it just makes it easier for
their code to breed and fester.

You must understand that the sysadmin has an enormous responsibility that is
difficult for programmers to fully appreciate: You don't feel responsible for
your bugs, you don't feel responsible for mistakes made by the developer of a
library you use, and you certainly don't feel responsible for the behaviour of
some other program on the same machine as your software, after all: Your
program is sufficiently modular and scalable and even if it isn't, programming
is hard, and every software has bugs.

But the _sysadmin_ does feel responsible. He is responsible for the decisions
you make, so if you seem to be making decisions that help him (like making it
easy for you to get your software into debian) then he finds it easier to
trust you. If you make him play whackamole with dependencies, and require a
server (or a container) all to yourself, and don't document how to deal with
your logfiles (or even where they show up), how or when you will communicate
with remote hosts, how much bandwidth you'll use, and so on: That's what Maven
is. It's a surprise box that encourages shotgun debugging and using ausearch
features to do upgrades. Maven is a programmer-decision that causes a lot of
sysadmins grief a few months to a few years after deployment, so it shouldn't
surprise you to find that the seasoned sysadmin is hostile to it.

~~~
mike_hearn
Debian has a terrible track record. Just look at the OpenSSL/Valgrind
disaster. As a former upstream developer myself (on the Wine project), all
Linux distros found unique ways to mangle and break our software but Debian
and derived distros were by far the worst. We simply refused to do tech
support for users who had installed Wine from their distribution, the level of
brokenness was so high.

You may feel that developers are some kind of loose cannons who don't care
about quality and Debian is some kind of gold standard. From the other side of
the fence, we _do_ care about the quality of our software and Debian is a
disaster zone in which people without sufficient competence routinely patch
packages and break them. I specifically ask people _not_ to package my
software these days to avoid being sucked back into that world.

As a sysadmin you shouldn't even be running Maven. It's a build tool. The
moment you're running it you're being a developer, not a sysadmin. If there
are bugs or deficiencies in the software you're trying to run go talk to
upstream and get them fixed, don't blame the build tool for not being Debian
enough.

~~~
vacri
I find it weird that you consider 'packaging' to be something a sysadmin
should do, but 'building' to be something they should not do. Aren't they both
forms of 'prepping code for use'?

And then state that you don't want your own software packaged. So, if a
sysadmin is not allowed to build and not allowed to package, how are they
supposed to get your code into production? "curl foo | sh"?

~~~
mike_hearn
I don't consider packaging to be a sysadmin task. On any sane OS (i.e.
anything not Linux/BSD), packaging is done by the upstream developers. That
doesn't happen on Linux because of the culture of unstable APIs and general
inconsistencies between distributions, but for my current app, I am providing
DEBs and woe betide the distro developer who thinks it's a good idea to
repackage things themselves ...

~~~
vacri
Well, we're going to have to agree to disagree there, because I think Windows
packaging is fucking insane.

One of the things I loved about my move to linux and .deb land was that if I
uninstalled something, I _knew_ it was uninstalled. I didn't have to rely on
the packager remembering to remove all their bits, or even remembering to
include an uninstall option at all. Or rely on them not to do drive-by
installs (which big names like Adobe still do, out in the open). And not have
every significant program install its own "phone home" mechanism to check for
updates. The crapstorm that is Windows packaging is a fantastic example of a
place where developers love and care for their own product, but care not a jot
for how the system as a whole should go together.

------
oskarth
At my last job we used a micro-service architecture on AWS (EC2 and RDS).
Using Ansible playbooks for various types of servers and roles for each
service, we created a new server instance every deploy. All servers were
running FreeBSD and using daemontools to control services. For testing,
hotfixes, and manual checking of logs, it was easy to complement with manual
ssh. Save old and new instance in case something goes wrong. Ansible is just a
thin layer on top of shell scripts, and reasonably straightforward to
understand and parameterize. Worked wonderfully in most cases (possible
exception of build server because of shared libraries and a complex workflow
with git pull/trigger, but I don't think that was the fault of the overall
architecture).

That said, I agree that sbt is an abomination and doesn't lend itself to a
sane and secure workflow, unfortunately.

[http://martinfowler.com/bliki/PhoenixServer.html](http://martinfowler.com/bliki/PhoenixServer.html)

[http://www.ansible.com/](http://www.ansible.com/)

~~~
ploxiln
I also learned from a place with good practices, which used daemontools to run
services and a custom deploy system in bash and python which actually did the
right kinds of things. (and I was fully-manually admin-ing my own linux
systems for years before that)

As an early employee at a new place, I'm now using ansible and docker because
nowadays people want to use that stuff, and it is a lot faster to get started
with than writing a new proper deploy system from scratch.

But I build all the docker images we use and version them with the date. I
also don't use ansible roles from ansible-galaxy, and I don't even organize
tasks into roles, just into task-include files. Our ansible tasks use bash
helper scripts wherever necessary to do things the right way, because the
built-in modules are often too granular / not connected enough to fully check
state. I also replaced the docker plugin for ansible, to manage state a bit
better. So overall it's not too bad.
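
The builds themselves are nothing fancy; roughly this (the registry name is
made up):

    TAG=$(date +%Y%m%d)
    docker build -t registry.internal.example/base:$TAG  docker/base/
    docker build -t registry.internal.example/myapp:$TAG docker/myapp/
    docker push registry.internal.example/base:$TAG
    docker push registry.internal.example/myapp:$TAG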

I guess my point is that, having done it all from-scratch first, using some of
the modern automation stuff isn't too bad. But you have to know what not to
use. People new to "devops", using all the fancy stuff now available, who
didn't have the introduction I did... it's not surprising they end up in a
mess and don't even recognize it.

~~~
NathanKP
This is the key. The OP's major complaint is with prebuilt containers from
potentially untrustworthy sources, but he passes this off as a fundamental
problem with containers themselves.

The reality is that you can (and probably should) build your own container
rather than using a public one from docker hub. You know exactly what is in
it, and can trust it completely.

~~~
zobzu
In reality a dev will pass a prebuilt and non-updatable container to the
sysadmin, though. So the OP is exactly right! It doesn't matter where it's
coming from if you can't verify, rebuild, or update it.

------
mercurial
I love a good rant as much as the next guy, but unfortunately, rants are
rarely actionable.

> Maven, ivy and sbt are the go-to tools for having your system download
> unsigned binary data from the internet and run it on your computer.

The root of the problem is that out of the total number of libraries available
in language X, only a small subset is packaged in Debian/RHEL. This may be
more egregious with large, Java enterprisy software, but you could easily end
up with the same problem in Ruby or Python.

You cannot reasonably expect developers to package and maintain all their
dependencies properly. The least bad solution would be to:

\- still use maven to manage dependencies

\- create a Debian/RHEL package incorporating the dependencies (effectively
vendoring them in the package)

Unfortunately, it is not that simple, because you need to make sure that your
vendored-in-the-package dependencies are somewhere where they will not
conflict with another package with the same idea and the same dependencies (or
better, the same idea and a different version of the same dependencies). Which
means you need to keep them out of /usr/share/java and make sure the classpath
points at the right location.
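
Concretely, something along these lines (the package name and paths are only
illustrative):

    # copy every dependency into the package's own lib directory at build time
    mvn dependency:copy-dependencies \
        -DoutputDirectory=debian/myapp/usr/share/myapp/lib
    # a launcher script shipped in the package points the classpath there,
    # so nothing leaks into /usr/share/java
    java -cp '/usr/share/myapp/lib/*:/usr/share/myapp/myapp.jar' com.example.Main "$@"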

However, it seems that developers tend to avoid this kind of rigmarole and
instead go for the "install dependencies as a local user" approach for certain
classes of application (e.g., webapps) because packaging is not fun.

~~~
sagichmal

        > You cannot reasonably expect developers to package and 
        > maintain all their dependencies properly.
    

What? With appropriate tooling, of course you can.

~~~
mercurial
I don't know any tooling that turns (recursively) a Maven pom file into a
Debian repository of Debian policy-abiding .debs, which are magically updated
when the pom file changes.

~~~
lmm
If you mean Debian policy-abiding in the sense of "signed by a Debian
developer in the Debian WoT" then no, but that's not something you could ever
do automatically. But for the rest, I've done all the individual pieces
before: it is trivial to generate a .deb from a maven pom and put it in a
debian repository, it's trivial to do some operation on all the dependencies
of a maven project, and it's trivial to hook something into a maven repository
to happen whenever a new artifact is uploaded. You absolutely could do this if
you wanted to, and it wouldn't be more than a few days' work.
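
Roughly, with one arbitrary choice of tools (fpm and reprepro here, purely as
an example):

    mvn dependency:copy-dependencies -DoutputDirectory=deps    # pull down everything the pom needs
    fpm -s dir -t deb -n myapp-deps -v 1.0 deps/=/usr/share/myapp/lib/
    reprepro -b /srv/apt includedeb stable myapp-deps_1.0_amd64.deb   # publish in a local apt repo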

(Of course it wouldn't provide any value, which is why no-one does it).

~~~
mercurial
If they are not public, that's obviously not the same thing, though you're
still going through a lot more complexity than "here is this .war, put it on
the server and reload the webapp".

------
zenlot
"Stack is the new term for "I have no idea what I'm actually using"." \- this
made my day!

~~~
skratlo
Same for "framework" which is: I have no idea what I'm doing

~~~
highmastdon
Same goes for 'abstraction', it hides the essence of what is happening.
Therefore every abstraction is evil.

~~~
mercurial
Hopefully this is sarcasm. Code without abstraction can also very efficiently
hide what is happening by having a disastrous signal-to-noise ratio, combined
with all the potential for errors you get when repeating the same pattern many
times.

~~~
Lawtonfogle
Code is abstraction. With no abstraction you have to build your systems with
lots of nand gates and a clock.

------
rsanders
I was doing sysadmin the "right way" a long, long time ago, and I don't see
much difference. Maybe the author regularly does full audits of the source
code of every package he downloads, and of course disassembles every
executable and library in the underlying OS, but most of us don't. There's no
wisdom or security to be gained from the act of running "make", much less
"make install".

~~~
rlpb
There's a huge difference.

> Maybe the author regularly does full audits of the source code of every
> package he downloads, and of course disassembles every executable and
> library in the underlying OS, but most of us don't.

What matters is that the source code _is_ auditable. It only takes one person
to investigate something suspicious, raise a flag and get it fixed.

This is certainly still true for Debian - not being able to build from source
is considered a release blocking bug.

~~~
kibibu
Assuming you:

\- trust your compiler and linker

\- trust your tar extractor / package manager / whatever

\- trust your editor

\- trust your http library (or whatever you used to download/distribute the
code)

~~~
rlpb
It comes down to trusting two key things. Trusting your initial distribution
download (which contains the package signing keys), and trusting the toolchain
(in a Reflections on Trusting Trust way).

But the deeper you go, the harder it is for malicious code to reside there. In
theory it's possible, but in practice I'd like someone to show me some code
somebody could have written into the toolchain a decade ago, without
hindsight, which could still exist today.

Whichever way, it's clearly far tougher for a malicious actor to compromise a
system by injecting something into a distribution ecosystem than it is to
inject a signed-by-unknown-reputation binary-only package into the Maven
ecosystem.

------
duggan
The contract between operations and dev (as concepts, not as people) is in
need of renewal.

To my mind, that was what "devops" was supposed to be, but it's been a bit of
a dogpile in the years since the term gained popularity.

Systems are opaque to most developers, and many developers wish to make their
software opaque to the system on which it runs. This is a failure on behalf of
our entire profession, not any one group.

Infrastructure software is in a bit of a renaissance period, but it's very
early days. Packaging software is a total mystery to most developers. I don't
even need to back that up with examples, most of us can recall the last time
we can across a well packaged piece of software with joy due to sheer rarity.
I'd be very surprised to find the average age of a Debian maintainer was
trending anything but upwards, and steeply.

Containers are being misused, but that's because the alternatives we've been
building for ourselves have not kept up with the strong user experience
narrative of web and mobile software.

We need to do better.

~~~
nvarsj
It would be nice to have a well known 'devops manifesto'. I google'd it and
came across this:
[https://sites.google.com/a/jezhumble.net/devops-manifesto/](https://sites.google.com/a/jezhumble.net/devops-manifesto/). Which
I think is actually pretty decent - the emphasis on cross functional product
teams, for instance.

In my mind, that is largely what devops is about - team ownership of the
entire product, which includes infrastructure. Instead of having a silo'd
'ops' team writing ansible scripts and doing deployment, this should be part
of the team (which could mean having an opsy guy on the team).

Anyways, as it pertains to containers, I think containers are more a practice
than a principle. It tends to happen naturally when you want reproducible
builds and continuous delivery. It's not really about making systems opaque to
software, imo, but rather making your product artifacts reproducible (if you
rely on running ./configure; make at deploy time, you never know what you'll
end up with since dependencies are dynamically determined).

~~~
kungfudevops
IMO here's the best "devops manifesto" out there:

[https://github.com/chef/devops-kungfu](https://github.com/chef/devops-kungfu)

[https://www.youtube.com/watch?v=_DEToXsgrPc](https://www.youtube.com/watch?v=_DEToXsgrPc)

------
dysinger
This 1 page poorly titled wrong rant is the #2 story on this site?

"Ever tried to security update a container?" lol. you are doing it wrong.

"Essentially, the Docker approach boils down to downloading an unsigned
binary, running it, and hoping it doesn't contain any backdoor into your
companies network." nope [https://blog.docker.com/2014/10/docker-1-3-signed-images-pro...](https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/)

"»Docker is the new 'curl | sudo bash'«" no it's not. most intelligent
companies are building their own images from scratch.

People that care about what's in their stack take the time to understand
what's in there & how to build things.

~~~
chrissnell
I think you're wrong. I think _most_ users are _not_ installing trusted builds
from their OS vendors. Piping curl to bash is incredibly common--many popular
software packagers are doing it [1].

About a year and a half ago, I was playing around with Docker and made a build
of memcached for my local environment and uploaded it to the registry [2] and
then forgot all about it. Fast-forward to me writing this post and checking on
it: 12 people have downloaded this! Who? I have no idea. It doesn't even have
a proper description, but people tried it out and presumably ran it. It wasn't
a malicious build but it certainly could have been. I'm sure that it would
have hundreds of downloads if I had taken the time to make a legit-sounding
description with b.s. promises of some special optimization or security
hardening.

The state of software packaging in 2015 is truly dreadful. We spent most of
the 2000's improving packaging technology to the point where we had safe,
reliable tools that were easy for most folks to use. Here in the 2010's,
software authors have rejected these toolsets in favor of bespoke, "kustom"
installation tools and hacks. I just don't get it. Have people not heard of
fpm [3]?
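
With fpm, turning a directory tree into a native package really is a one-liner
per target format (names and version here are just an example):

    fpm -s dir -t deb -n mytool -v 1.0.0 --prefix /opt/mytool ./build/
    fpm -s dir -t rpm -n mytool -v 1.0.0 --prefix /opt/mytool ./build/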

[1] [http://output.chrissnell.com/post/69023793377/stop-piping-cu...](http://output.chrissnell.com/post/69023793377/stop-piping-curl-1-to-sh-1)

[2]
[https://registry.hub.docker.com/u/chrissnell/memcached/](https://registry.hub.docker.com/u/chrissnell/memcached/)

[3] [https://github.com/jordansissel/fpm](https://github.com/jordansissel/fpm)

------
bshimmin
So much truth in this.

We've been doing some work with Elastic Beanstalk lately, and - while it
certainly does one or two things that are extremely clever and useful - in the
end it just feels like this bizarre mix of complete magic and incredibly
convoluted arcana. Everything feels very out of our control and locks us into
an ecosystem that considerably limits our choices and flexibility (unless we
invest the time in becoming experts in EB, which isn't really something we
have the time for). And, as the author of this post says, the
security ramifications, while orthogonal, are also deeply troubling.

------
cgb_

        This rant is about containers, prebuilt VMs, and the incredible mess they cause because their concept lacks notions of "trust" and "upgrades".
    

Prebuilt VMs? Sure, I wouldn't touch them except for evaluating a project, and
for commercial software you may not have a choice.

But docker containers at least usually provide a dockerfile that describes
exactly how a binary image is built. You just clone the source repo, audit the
few lines of build commands and then build the image for your own private
registry. It requires hardly any more trust than following the instructions in
a README or INSTALL. Just because fools are pulling down pre-built images and
running them in their datacentre doesn't mean that's the way _you_ should do
it. And the problem with 'old-school' sysadmins is they are often far too
quick to reject new practices, citing tired excuses based on misunderstandings
of the technologies.

    
    
        Ever tried to security update a container?
    

Yeah I have. It's easy if you have already built your 'stack' to scale
horizontally (which means you have at least 2 or more of everything in a HA or
LB config). You rebuild against a fully patched base-OS container, spin-up,
send some test load to it & validate, then bring into service. Repeat for rest
of nodes at that tier.
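
As a sketch (image names and the health check are illustrative):

    docker pull registry.internal.example/base:patched        # freshly patched base image
    docker build -t registry.internal.example/app:$(date +%Y%m%d) app/
    docker run -d --name app-canary -p 8081:8080 registry.internal.example/app:$(date +%Y%m%d)
    curl -f http://localhost:8081/health                      # send test load, validate
    # then swap it into the LB pool and repeat across the tier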

If you are trying to be an old-school sysadmin who expects to console or SSH
in and run 'yum upgrade' or 'apt-get upgrade' in your containers, then you are
doing containers wrong...

~~~
vacri
_then you are doing containers wrong..._

The old-school sysadmins I know scoff at Docker's idea of 'containers'. Linux
containers were already a thing, and don't need an entire copy of an OS ported
around with them. To them, containers are a way of enveloping a process to
limit it, not a way of distributing packaged software. They may or may not be
doing 'docker' right, but they certainly know what 'linux containers' are.

~~~
eropple
_> Linux containers were already a thing, and don't need an entire copy of an
OS ported around with them._

Neither do Docker containers. You can build off scratch and put the literal
bare minimum you need in it. I've done it a few different times. It's rarely
done because the time and effort almost never makes up for the complexity and
cost, but if your old-school sysadmins are scoffing it's on them.
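
For what it's worth, "off scratch" really is about this small (assuming
./myapp is already a statically linked binary):

    printf 'FROM scratch\nCOPY myapp /myapp\nENTRYPOINT ["/myapp"]\n' > Dockerfile
    docker build -t myapp:minimal .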

~~~
vacri
Response to exactly that idea from one of these guys: "Why do you need docker
to just build an executable?"

~~~
eropple
I don't. But it helps make a straightforward deployable of highly coupled
libraries and tools in a way that's more comprehensible to other people.

But I'll get off your lawn now.

~~~
vacri
Ain't my lawn; I consider myself a mid-range sysadmin. I stepped out of
support and into sysadmin land about 4 years ago. But I know some 'from-the-
birth-of-linux' guys, who live and breathe this stuff in a way I never will.
When I get home from staring at terminals, I want to watch movies and play
video games, not swear at something on a breadboard :)

------
eranation
As a Java / Hadoop / Spark / Scala fan, all I can say is, it's a little
embarrassing, not sure how the Java ecosystem around hadoop became so sloppy
(I witness it first hand on a daily basis). I wish more people who are
concerned with security / ease of build would turn into contributing to maven,
sbt, ivy and the hadoop project. Instead of hating the Java ecosystem, why not
join it and make it better? Hadoop is ubiquitous, maven (and ivy / sbt) are
the de facto dependency management and built tools for that ecosystem, and if
it's broken, (or alienating people who are used to just have make / rpm / deb
for anything) then those people should join and try to make it better.

Whether you like java / maven / ivy / sbt or not, good chances you'll end up
forced to work with Hadoop (Java) or Spark (Scala), both of which use maven /
sbt for dependency and build.

I say, it's all open source; if it's broken, and you know where it's broken, I
think the Hadoop / Java community will be happy to get suggestions / pull
requests to improve it.

~~~
fs111
In theory you are right; unfortunately, at least the hadoop "community" is
difficult to work with. Hundreds of JIRAs with patches sit in limbo for
months/years because none of the paid developers at Cloudera/Hortonworks
bothers to take a look. Also, the political things happening behind the scenes
are way more complex than you might think. It is frustrating...

------
fab13n
It seems rather easy and effective, for NSA-like agencies, to hide crude
exploits in complex projects. An unintended effect of Snowden's
whistleblowing is that it has become easier, because it has let them know that
they don't need plausible deniability anymore.

Until Snowden, they were very cautious not to be caught, because, you know,
what might happen if public opinion knew what a bunch of crooks they were?
Now, they know that public opinion doesn't really care, and that if they're
caught, they can mostly shrug it off, with politicians' complicity.

So, shoving a rather crude and detectable exploit into a messy product has
become practically doable. If I were in charge of distributing subsidies for
some three-letter agency, I'd pour more money into Docker, Maven etc. than
into TLS.

------
omnibrain
If you read German, I can recommend
[https://plus.google.com/+KristianK%C3%B6hntopp/posts/gPpHx5T...](https://plus.google.com/+KristianK%C3%B6hntopp/posts/gPpHx5Trec6)
and
[https://plus.google.com/+KristianK%C3%B6hntopp/posts/54v3MNX...](https://plus.google.com/+KristianK%C3%B6hntopp/posts/54v3MNX8ud7)

------
upofadown
Every once in a while someone figures out that we could entirely solve the
dependency problem by packaging all the dependencies with the application.
Everyone gets excited. After a while everyone gets unexcited when the problems
associated with this approach become obvious.

Docker is merely a more extreme example of the "package everything with the
application" idea...

~~~
parasubvert
I'll bite. What's the obvious problem with it? Or, "if it's good enough for
Google...."

Vendoring dependencies and static linking is quite popular in executables, not
just docker. Dynamic linking and shared libraries seem to be becoming a relic,
deservedly.

BTW, the extreme example of "package everything with the app" is the unikernel
movement.

------
tobz
An interesting point that I didn't see the author bring up is the concept of
how Docker images can be built in a layered fashion, and the potential for a
false sense of security.

For example, you start with some sort of base image -- say phusion/baseimage-
docker[1] -- and proceed to layer your application on top of it. You "trust"
Phusion. They do Phusion Passenger, it's a real piece of software you heard
of, and it's not some random person on the internet.

At some point, there's a bug, a problem, a security flaw, and you're waiting
on them to fix it... nothing, nothing. Maybe they get hacked and their base
image is now infected. I haven't bothered to look, but I'm guessing it would
be a trivial amount of work to start the process of culling the most popular
base images used by public Dockerflles, looking for the biggest trojan horse.

It seems like the whole model is ripe for pushing an understanding of what is
actually running on a machine -- soup to nuts -- to the wayside, and
establishing a non-existent trust in the building blocks you're using, lulling
people into a false sense of security about their containers. A lot of people
already believe that they're doing something much more secure by
running containers, and arguably, they are... except for all of the places
where malicious software can be added in, and the potential container breakout
techniques.

[1] [https://github.com/phusion/baseimage-docker](https://github.com/phusion/baseimage-docker)

~~~
andrewvc
FWIW you can easily recreate a base image by just copy/pasting the Dockerfile
for that image at the top of your own.

I did this for the Jruby images we base our stack on.

I've been doing both dev and ops work for nearly a decade. I feel for what the
guy is saying, but these aren't tech problems, they're process problems.

Relying on apt packages for everything makes using more recent features
ridiculously hard and slows up the works in pushing features out. I'll trade a
little security to be more nimble. I say that because as someone who's worn
the hats of operations, development, and co-founder, I realize that you can't
have it all. There simply isn't enough time and bandwidth in most companies.

~~~
tobz
Sure, and that's all reasonable stuff. I mostly posted this because while
encouraging people to use wildly insecure installation processes like 'curl
... | sudo bash' is terrible, it's easily recognized as being terrible. To me,
the Docker ethos is, perhaps, deceptively bad in terms of security. Deceptive
enough that it can lull people into a false sense of security, etc etc.

I mean, we'll see if it happens. My fears might be entirely unfounded, or
phusion/baseimage-docker might get trojaned. Who knows. :P

------
exelius
System administration is as important as ever. Docker and other containers
just simplify system administration across many different machines. The
standard Unix user land tools are excellent and very flexible, but they are
fucking god awful at configuration management. Docker solves the problem of
"how do I make sure I have the same versions and configurations of everything
on all 500 of my compute nodes without having to lock them down completely?"
This question is meaningless if your base system image sucks, so you still
need a proper sysadmin to build your Docker images.

Many places are rolling this type of sysadmin work up into DevOps. This scares
graybeard sysadmins, because they see DevOps automating them out of a job.
What they fail to see is that DevOps is a step up for them: it's an explicit
admission that system administration is as important as software development,
and needs to be integrated into the software development process and managed
through whatever management processes and tools the core dev team uses.

The ultimate driver behind this is a shift in the way technology organizations
are managed. A few years ago, you would have functional silos: development,
operations, product, etc. that would all contribute to one or more products.
Employees reported up through the functional lead, and incentives were doled
out based on cost effectiveness. This didn't work well. So what started
happening is that engineering executives began building product-focused silos
instead. A development manager is no longer in charge of just software
developers, but also QA, scalability and deployment. If the operations folks
fuck up the deployment, the development manager gets chewed out about it. So
the dev manager is going to bring as much of that under her control as she
can.

Docker/Maven/etc. are the abstraction layer between the teams that manage the
infrastructure (physical servers, VMWare pools, storage, network, etc) and the
teams that manage the applications. This is no excuse for bad sysadmin
practices; you still need good sysadmins in the DevOps role. But here's the
kicker: DevOps often pays more than system administration! And if you're a SME
in a very specific thing (say, Cassandra administration) you can be in a
support role across a number of different teams, making sure their DevOps
folks deploy Cassandra in a sane way.

(Yes, I realize all of this is centered on huge companies with massive
engineering organizations. Small organizations have always required sysadmins
to wear multiple hats, so none of this is new.)

~~~
KaiserPro
_Many places are rolling this type of sysadmin work up into DevOps. This
scares graybeard sysadmins, because they see DevOps automating them out of a
job._

Nope, not really. Just wait till you move to a new job and you inherit a
docker/rocket/etc system. You need to patch openssl/glibc/etc; however, half
the containers are built with an old build system that's been replaced. You've
got 15 containers based on fedora20, which is EOL, and one of your apps relies
on a bug in fedora21, which is also EOL.

Oh yeah, it's just you, you have no resources, and you're on call to fix it
when it fails (yeah devops is a nice way of saying unpaid overtime)

Oh and you need to replace two of three physical hosts, but you can't hot
migrate containers, and you lose quorum on your cluster if you take one of
the hosts down.

Look, it's as simple as this. I'm a sysadmin. I know, I know, you think I
can't code, you think I know nothing about programming. This is bollocks. Two
things: One, I've seen this all before. Containers? Yeah, that's just fancy
batch processing. Two, you know how whenever you log in to a new machine all
your files are there, and not only that, it's faster than your laptop? That's
me, making things fast. It's my job.

 _DevOps is a step up_ Not really, it seems to be a way of getting devs to do
out-of-hours support. Or allowing people with no experience of programming to
do programming, or no experience of infrastructure to do infrastructure. A
decent sysadmin does all of the "devops" things already. If they don't have a
build system and git/svn-controlled config management, then they aren't real
sysadmins, they are overreaching helpdesk monkeys.

 _fucking god awful at configuration management_

dunno what you've been using, but I can configure 5000 machines inside 15
minutes with 10 lines of code and one ssh command.
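
Something in this spirit, purely as an illustration (assumes key-based ssh and
passwordless sudo; hosts.txt and configure-node.sh are hypothetical names):

    # hosts.txt: one hostname per line; configure-node.sh: the ten lines of config
    while read -r host; do
        ssh -o BatchMode=yes "$host" 'sudo bash -s' < configure-node.sh &
    done < hosts.txt
    wait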

 _A few years ago, you would have functional silos_ Only in certain companies.
If you have politics, you'll get silos.

 _Docker/Maven/etc. are the abstraction layer between the teams_

can't use technology to overcome procedural problems. If your teams aren't
talking, your infrastructure is going to be shit. If your teams don't think
about others when they produce their products, then things will fall through
the cracks.

An example: say you want 10 VMs of X size. If you have a cohesive system, you
could email/phone/talk to a guy and you'll have some machines. If your
provisioning team has thought ahead, they'll have made an API that spins up
machines, ties them to your accounting code, and configures them to your
environment. That's not technology, that's just good practice.

~~~
exelius
Bad DevOps is bad. But bad DevOps is basically no worse than what people were
doing before: you'd just have a bunch of VMs running fedora20 with no way of
easily patching all of them at once. Except some of the VMs may be running
fedora23 because they were part of an expansion that happened 2 years after
the original set and the guy who deployed them couldn't find a fedora20 image.
And at least with a container, you can more easily use AWS for spare
capacity/redundancy while you migrate servers. DevOps doesn't fix every
sysadmin problem, but it gives you a lot more options that can be
developed/deployed in a small amount of time.

DevOps is bad when you take your worst developer and say "do sysadmin tasks
and still write application code". It works much better when you take an
experienced sysadmin and embed them into a dev team. Make them do code reviews
on deployment scripts with a developer, assign tasks within sprints, etc. Code
reviews aren't because you don't know how to code -- IMO the primary benefit
of code reviews is the education of the reviewer. Likewise, the sysadmin's
struggles become the developers' struggles, and the developers are more likely
to write applications that are easy to support if they have some role in
supporting them.

Every company over a certain size has politics. It's unavoidable. Maybe Google
doesn't -- I don't know. But not every company can be Google. You can't use
technology to overcome process problems, but you can and should use technology
as a part of a redesigned, better process. DevOps gives you more options, and
has a positive effect on the culture of a development org. It asks them to
think of portability and supportability as a concern.

------
lighthawk
> Update: it was pointed out that this started way before Docker

Yes, like in the 90's, at least, when people started using Java. Even prior to
Maven there were jars, and we didn't really know what was in them. And prior
to that, I didn't understand how every piece of software or hardware worked.

I was a big proponent of Gentoo when it came out because of building
everything from source, but the fact is: I don't have time to look through and
understand every line of code. Even compilers can and have injected malicious
behavior in the past. Firmware cannot even be trusted.

Some level of trust and reliance on others needs to be there. While it is true
that there will always be people that betray that trust, without the trust, we
would be hermits living alone off the land- which may not be so bad, but
that's another story.

------
pjmlp
> »Docker is the new 'curl | sudo bash'«.

Fully agree with this.

Maybe I am just another grey-bearded grumpy developer, but the new
generations that grew up with GNU/Linux instead of UNIX bash the security of
other OSes and then go running such commands all the time.

~~~
antocv
Sad really.

docker and any user in the group docker, or let's say, any user capable of
sending commands to the docker daemon running as root - is root on that
system.

docker run -v /:/f -w /f yourimage /bin/bash -c "echo root:and:so:on > /f/etc/shadow"

------
markbnj
I think the author is mixing up a few different topics. If you're going to
blame container frameworks for people sharing software in insecure ways you
might as well blame the fact that executables are portable between compatible
systems. Might as well blame the fact that there's a network while you're at
it. We run docker throughout our infrastructure, but it is a deployment and
dependency management technology, not a vector for infection. We run only our
own images, which are all built from source or validated binaries. So what do
insecure or unreliable practices have to do with containers, specifically?

~~~
xofer
The article is not an attack on Docker, but on the way it's being used by many.

~~~
markbnj
>> This rant is about containers, prebuilt VMs, and the incredible mess they
cause because their concept lacks notions of "trust" and "upgrades".

Oh, ok.

------
myth17
For people who are interested in learning more about the problem, this is a
really great paper:
[https://www.informatik.tu-darmstadt.de/fileadmin/user_upload...](https://www.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_TRUST/PubsPDF/BNPSS11.pdf)

------
DomreiRoam
"Maven, ivy and sbt are the go-to tools for having your system download
unsigned binary data from the internet and run it on your computer." You
should set up a maven repository (Nexus, Artifactory) for your organisation if
you want to have more control over binaries. It seems that Artifactory can
also host Docker repositories:
[https://www.jfrog.com/confluence/display/RTF/Docker+Reposito...](https://www.jfrog.com/confluence/display/RTF/Docker+Repositories)
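
Once a repository manager is running, a single mirror entry in
~/.m2/settings.xml (the URL below is hypothetical) forces every artifact
download through it, where you can cache, whitelist and audit binaries:

    # in settings.xml:
    #   <mirrors><mirror>
    #     <id>internal</id>
    #     <mirrorOf>*</mirrorOf>
    #     <url>https://nexus.internal.example/repository/maven-public/</url>
    #   </mirror></mirrors>
    mvn -s settings-internal.xml clean package   # resolve only via the internal repo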

~~~
rickette
Right, do folks really believe Maven, Ivy, Gradle, Sbt are tools you use in
production? These are developer tools for use on workstations and CI servers.
If you want to promote your stuff to other environments like production use
your own private repository (Nexus, etc).

~~~
caw
They may be if you have a team without much sysadmin experience. The way you
develop could be the way you deploy to production.

These are the same teams that have overprivileged accounts for the database or
sudo-enabled users running applications or chmod 777 all over the place.

Even things like Chef cookbooks have this going on. If you want to build from
source because it's not in your repository, then you're necessarily going to
need to drag in sbt or gradle (see
[https://github.com/hw-cookbooks/kafka/blob/develop/recipes/d...](https://github.com/hw-cookbooks/kafka/blob/develop/recipes/default.rb)
as an example). Sure you
could figure out the mirrors and download the correct binary from the website.
You could also use this recipe to compile everything and then package it up to
host yourself. (Both of these actions require writing custom recipes). Not
everyone has time to do this, and this magical recipe you found online works
great on the development server! Just add it to the production server and now
we've just used sbt in production on a software team.

------
patsplat
Is it a coincidence that all the technologies the OP complains about are Java
(Hadoop, Apache Bigtop, Maven, ivy, sbt,
HBaseGiraphFlumeCrunchPigHiveMahoutSolrSparkElasticsearch)?

~~~
derefr
I don't think it's a coincidence. The Java ecosystem is intentionally isolated
from the Unix ecosystem, because one of Java's goals was portability in an age
when Windows, Mac, and Linux were all very different operating systems with
very little in common. Java has its own Java-y build infrastructure, which
relies much less on the concept of "trusting the source", and much more on the
simple fact that the JVM is a sandbox that can be tuned to whatever security
requirements the sysadmin desires.

Running Java apps (especially Docker-ized Java apps) is less like installing a
Unix package (even if it's masquerading as doing so), and more like starting
an instance of some untrusted VM image on your (software-defined-)network. It
can use some of your computer's resources, but it has no permissions to touch
any of your data or services unless you grant them to it. It really is like an
app, or a web page.

------
swills
I completely agree with this except I think of it more as a problem of release
engineering rather than system administration.

The trouble is, the sysadmin's job is to deploy things. The developer's job is
to write code. Often release engineering isn't thought of at all, or if it is,
it's given to the least qualified or least suspecting folks without any
requirements from operations.

Developers aren't taught about release engineering or deployment in school at
all. In fact, it seems to me most university curricula do everything possible
to hide all that from students.

Compounding that is the developers' desire to get new code out conflicting with
the sysadmins' requirement to keep things stable in the face of limited QA
automation. This leads to the common conflict between dev and ops.

This is to me a large part of what has led to the DevOps movement. This gives
the developers information about the deployment and perhaps even access to it
or a version of it and/or a voice in deciding how things are deployed.

Hopefully we can standardize things widely enough that universities can teach
this without fear of focusing on useless technologies that will be discarded
in 3-5 years.

------
__mp
As an ex sysadmin I really like the container infrastructure. Manage the whole
configuration on the main machine with puppet and deploy the blackbox
applications (everything ruby and java related) with docker/rocket.

~~~
mrweasel
It's nice to have the option. Containers are awesome for many things/projects,
but sometimes you just want to run the damn application on a server of your
choice, without any container stuff.

I can't remember what the application was, but I've seen an application where
the only installation instructions were for Docker. That's just plain silly.

My concern with containers is that the wrong people will use them. There is a
ton of software out there that just barely runs and makes all kinds of
assumptions about its environment. I fear that rather than design better, more
correct software, these people/companies will start packing up their
development environments as containers (more or less) and just shipping those.
Of course that's no reason to discourage the use of containers, we just need
to be critical of what is inside them.

------
moonbug
Today I learned that the "curl | sudo" idiom is actually a thing people really
do. Truly, everything is awful.

------
parasubvert
If you want to have guaranteed runtime linkage built from trusted source, you
might want to give BOSH ([http://bosh.io](http://bosh.io)) a look for
config/release management - it insists on (or at least prefers) compiling all
dependencies from source, from trusted links, with signature checks. For
example with
Hadoop, here is the build script:

[https://github.com/cf-platform-eng/hadoop-boshrelease/tree/m...](https://github.com/cf-platform-eng/hadoop-boshrelease/tree/master/packages/hadoop)

Learning curve is a bit steep but it's another approach to this immutable
infrastructure trend that's built for large production, enables rolling canary
upgrades, etc.

------
fromtheoutside
And before "curl | sh" it was download from freshmeat and run "tar xfz; cd;
make install".

It's not really better. Fact is we are running huge and complicated frameworks
with lots of dependencies. These technologies are new and evolve fast. Distros
don't have enough volunteers to decouple this mess and thus fail to provide
stable packages. There is a good chance nobody wants an old version of hadoop
anyways.

Containers are a whole other problem. Always bothered me that no one cares
about building these images themselves. The documentation is there, you can
build your own docker/vagrant/... containers and vms. It's just nobody seems
to care anymore? Sometimes I don't even know where these images come from,
distro, community, ...?

------
octref
I think it's because people are getting worse at explaining things and writing
docs.

As a student many times I wanted to learn how things work, but most
tutorials/docs just ask you to type in a few magical lines without much
explanation. Maybe the authors think their audience won't understand anyway,
but I think it's the authors' ineptitude if they can't explain what their
programs do in an accessible way.

I really hope there could be more projects like i3[1] and flask[2].

1: [http://i3wm.org/docs/userguide.html](http://i3wm.org/docs/userguide.html)
2: [http://flask.pocoo.org/docs/0.10/](http://flask.pocoo.org/docs/0.10/)

------
SrslyJosh
It's not just system administration...

Minecraft is a meta-game about downloading unsigned JARs from the 'net and
running them with your own user account.

------
thebouv
I've often thought about just offering my services as a sys-admin to the
multitude of small startups that pop up locally. Many, many developers --
almost no sys-admining skills amongst them. Just fire up a server on AWS and
away you go.

------
devy
The 'curl | sudo bash' mention reminds me of OS X Homebrew. The one-liner
installation script is still published on the home page front and center
(though it's running from a trusted source, GitHub).

[http://brew.sh/](http://brew.sh/)

One might argue that the ease of a one-liner installer script is exactly what
made Homebrew gain popularity. And a dev machine is different from a
production environment in terms of installation packages. Still, I agree that
proper container management on the local network and perhaps new security
features from upstream container vendors would help the situation.

~~~
squar1sm
I agree, I think accessibility made it popular. Security and ease of use are
usually opposing forces.

The article has some interesting discussion points. I don't understand the
absolute fear of | bash installers. It's open source, read the script. That's
the argument people make for `./configure; make; make install` programs. I
think it's because it's new, or it's too easy.

But the article does have a point about trusted containers. But security isn't
a download or a product anyway. Security isn't even guaranteed.

------
NhanH
So, asking the obvious question: what's the solution to that?

~~~
prottmann
The obvious question is: what's the real problem with that?

A container is a container; as long as docker itself has no bugs, the
container can only harm the container's contents.

Most problems exist in the custom-created software in the container (e.g.
web services with bugs, backdoors, ...); this will be a problem for Docker,
VMs, real servers, whatever, too.

The real problem is the interoperability of different containers: if you link
all your data, without any audit, to another container, you can have a
problem, but this problem is not docker-specific.

~~~
Nursie
>> A container is a container; as long as docker itself has no bugs, the
container can only harm the container's contents.

Presumably a container has network access of some sort? Malicious code could
start probing and attacking anything exposed that way.

>> this will be a problem for Docker, VMs, Real-Servers, whatever too.

The implication is that you wouldn't get into this situation with a 'real
server' so easily, because you wouldn't just download an image and run it
without an update/patch strategy or much more idea of what's going on inside
it.

~~~
prottmann
But you assume that a container HAS full network access. A firewall must be
configured, but a firewall must be configured for a VM too. My point is that
there is not such a huge difference for production systems.

~~~
Nursie
>> But you assume that a container HAS full network access.

No, I'm presuming it has some sort of network access. A malicious container
could (for instance) still probe other containers for vulnerabilities, serve
malware, etc., without full network access.

>> A firewall must be configured, but a firewall must be configured for a VM
too. My point is that there is not such a huge difference for production
systems.

If you're downloading VM images from somewhere and running them without
checking what's in them you'll run into the same problem, sure.

The problem being pointed out here is that when applications are bundled
outside the purview of a packager like Debian, you:

      - don't have as much trust in the origin of the app
      - don't have an easy way to keep up on library patch levels etc. for
        security (see the sketch below)
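
A rough illustration of that second point, assuming a Debian-style host and a
hypothetical bundled image:

    # on a distro-managed host, one command patches a shared library
    # (say libssl) for every application that links against it
    apt-get update && apt-get upgrade

    # with bundled images, each image carries its own copy, so every
    # affected image has to be rebuilt or re-pulled (name is made up)
    docker pull example/webapp:latest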

------
halfcat
As a Windows/VMWare/Exchange/Cisco admin, this is all completely foreign to
me. Does this indirectly make a case for paying a vendor real money to manage
their product properly? All vendors we work with provide installers that
handle installing any dependencies. Updates are a few clicks.

Occasionally we run into a vendor who provides installation/upgrade
instructions that involve manually copying files and hand-editing config
files. We replace those vendors. Error-prone people should not be doing manual
file copying/editing or dependency checking, tasks that computers are orders
of magnitude more competent at. This is B2B stuff where businesses should
manage their product or risk getting sued out of business.

The current environment seems to be that using free or open-source products is
"free, with purchase of a team of consultants". Why not just pay the money to
a vendor to provide, and support, a real product? It seems backwards to call
this a sad state of sysadmin. This is like Boeing providing its leftover parts
and a 9000-page manual on 747 assembly, and people complaining about the "sad
state of mechanics". That's backwards. Buy a 747 from Boeing if that's what
you need.

------
pkrumins
Give me the command line and I'll build anything!

------
ExpiredLink
> _Maven, ivy and sbt are the go-to tools for having your system download
> unsigned binary data from the internet and run it on your computer._

Not Maven.

------
zupa-hu
I'm a bit puzzled. Let's say I decide to not download the binary but build it
from source. Unless I actually _read_ the source, I'm trusting the community
to have read it, which consists of other people thinking I have read it.

In my view, this is true for the OS itself. So unless I read everything, I'm
fucked. And I don't. Thus I'm fucked.

Do I miss something?

~~~
Gigablah
This is also pretty much what happened with OpenSSL. Which is why I'm amused
by the holier-than-thou attitudes in here.

------
vorg
> And then hope the gradle build doesn't throw a 200 line useless backtrace

This is more the fault of the language Gradle chose for its build
configuration language. Most build scripts are between 20 and 50 lines long,
but reading through those Groovy stack traces eliminates its supposed write-
once-read-many-times benefits.

Hopefully Gradleware will fix this problem for Gradle 3. They've already
enabled Gradle to be configured on the fly by Java code, and could be working
towards allowing any dynamic language to be a build language through an API.
Alternatively, they've just employed one of the ex-Groovy developers recently
made jobless by Pivotal pulling funding from Groovy and Grails last month --
they might get him to write a better lightweight DSL from scratch that parses
the existing syntax, but isn't weighed down by all of the present cruft.

------
jheriko
A lot of big projects are terrible to build.

Once upon a time, minimising dependencies was considered good practice. Now I
get a pasting if I write clean code without reusing someone else's library...
even if the suggestions don't solve my problem directly or at all, and come
complete with a sloppy 'no one-click build/deploy' configuration... the kind I
was embarrassed to produce on my standalone projects in my teenage bedroom
days.

Shame on developers everywhere for tolerating this mess. I (am lucky enough to
enjoy the freedom of choice that I) would leave a job if not allowed to start
fixing such a situation from day one.

That being said, good sysadmins and developers should work out these problems
properly instead of shortcutting through someone else's half-arsed effort via
Google.

------
smutticus
On the one hand it's nice that developers are using more libraries and writing
less from scratch. On the other hand dependencies are out of control. The one
thing that really bothers me is code that depends on a particular IDE. That
shit drives me up the wall.

------
failathon
Is this the sad rabbit hole reality of attempting to abstract every last
component?

------
patsplat
Regarding curl PACKAGE | sudo bash...

Just what do you think happens when you run `yum update` or Windows Update?

If you don't trust DNS or the network then you have serious challenges which
frankly aren't even solved by air gapping file transfers.
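
For reference, a rough sketch of the moving parts behind `yum update` (paths
and names are illustrative): the client pulls metadata and packages from the
mirrors configured under /etc/yum.repos.d/ and, with gpgcheck enabled,
verifies them against keys installed ahead of time.

    # import the trust anchor once (path is illustrative)
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-example

    # fetch metadata and updates from the configured mirrors, verify, install
    yum update

    # a single package can also be checked by hand
    rpm -K some-package.rpm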

------
quonn
There is some truth in this, yes. On the other hand, Maven (mentioned by the
author) clearly was very successful at abstracting away the tools we use. I
can remember how much time I wasted with build scripts and dependency
management and all that before. (And I still do on some other platforms.) The
problems only arise if the abstraction is not working well enough. This might
indeed be true for containers - there is perhaps too much complexity in there
that currently can't be properly encapsulated.

------
cyberpanther
Because Docker is good at containing crap, it is used to cover a multitude of
sins. Before using Docker, please simplify your install and upgrade processes.

------
msane
Containers and VMs are part of the solution to this problem, not a cause. Try
managing the same dependencies across N platforms rather than one container!

------
Fastidious
Fully agree with everything written there, with the exception of "apps." I
believe it is not a Microsoft term, but an Apple term. Is it not?

------
bayesianhorse
Incidentally I feel like my admin "skills" have never improved faster than
since I started working with docker.

Docker lets me iterate on system configuration faster than ever, and that
means learning the details and quirks of certain software faster. Then again,
I usually don't use prebuilt VMs and containers, but have to prepare them for
people who don't want to pay for good sysadmins.

------
tux
Nicely said, I thought I was the only one who noticed this ^_^ This is one of
the reasons why I tried Docker/Vagrant images a few times and said no thanks :)
I would rather spend my time installing everything on a separate server myself
than have an unknown set of packages or security holes. As a few articles on
HN have shown, these containers are not secure at all.

------
taude
I don't think any experienced organization is going to just download
containers off the internet to use on their servers. That is why there are
self-hosted registry applications that corporations buy to host their own
images, which they build to support their applications and which are vetted
through traditional corporate policy.
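
A minimal sketch of that flow, assuming a self-hosted registry reachable at
registry.internal.example:5000 (all names are made up):

    # build from a Dockerfile you control, tag for the internal registry, push
    docker build -t registry.internal.example:5000/acme/webapp:1.0 .
    docker push registry.internal.example:5000/acme/webapp:1.0

    # production hosts pull only from the internal registry
    docker pull registry.internal.example:5000/acme/webapp:1.0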

------
sorin-panca
This "working out of the box" phylosophy is the direct result of devs using
Windows and OSX platforms for creating those programs. They now think "Linux
and *BSD should be as easy to use as Mac.". Indeed, the majority of devs is
mediocre amateur sysadmins. They know next to nothing beyond their preffered
language.

------
api
70s system software is really showing its age. Containers are just a hack to
make its complexity sometimes easier to manage.

~~~
quotemstr
Emperor Joseph II: My dear young man, don't take it too hard. Your work is
ingenious. It's quality work. And there are simply too many notes, that's all.
Just cut a few and it will be perfect.

Mozart: Which few did you have in mind, Majesty?

------
jasonsync
Sysadmins got relegated to tech support so web devs could add sysadmin to
their workflow.

------
vbezhenar
Looks like a description of projects with bad build systems, not a problem
with e.g. Maven. Maven downloads binaries over HTTPS. You can always take
those libraries, rebuild them from source, and put them into your internal
repository.
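
A rough sketch of that, assuming an internal repository manager at
repo.internal.example (coordinates, URL and repository id are made up;
credentials for the repository would live in settings.xml):

    # build the library from vetted sources
    mvn clean package

    # publish the resulting artifact to the internal repository
    # instead of letting builds pull it from the public internet
    mvn deploy:deploy-file \
        -Dfile=target/libfoo-1.2.3.jar \
        -DgroupId=com.example -DartifactId=libfoo -Dversion=1.2.3 \
        -Dpackaging=jar \
        -Durl=https://repo.internal.example/releases \
        -DrepositoryId=internal-releases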

------
bmoresbest55
So I like this article and want to learn more about sysadmin/devops/whatever,
but where do I go? Is Docker bad? What is a good starting point? What are best
practices?

------
jasonwocky
Can someone tell me what realistic security problems this mode of operation
introduces that can't be mitigated/avoided with sensible network and backup
configurations?

~~~
darkstar999
Precompiled binaries from random sources are a major security concern.

------
gibsonje
I thought this post was overly cynical and full of generalizations. I don't
really understand what point is being made here.

"Everybody", "Nobody", "Nobody", "None of", "everything got Windows-ized":
every sentence is a broad generalization on top of cynicism, so it's hard to
find any value in the point being made.

------
reeboo
This is the price of devops.

------
martinp
Previous discussion:
[https://news.ycombinator.com/item?id=9190955](https://news.ycombinator.com/item?id=9190955)

------
dschiptsov
What is really ironic is that none of these "tools" solves any fundamental
problem of so-called version hell, and none of these containers is
fundamentally different from

    ./configure --prefix=/xxxx && make && make -s install

with or without following

    chroot /yyyy

The big "innovation" of having so-called "virtual env" (they call it
"reproducible [development] environment) for each "hello world" (a whole
python/ruby/java/etc installation with all packages and its dependencies in
your [home] project directory) solves no real problem, only pushes it to the
next guy (what they call devops).

Some idiots even advocating to have a whole snapshot of an OS attached to your
"hello world", and even to make it what they call "purely functional" or even
"monadic" (why not, of someone pays for that).

Unfortunately, there is no way to ignore the complexity of versions and
package dependencies, or to easily push it to "devops". Creating a zillion
"container images" with just your "reproducible development environment" or a
whole "OS snapshot" just _multiplies entities without necessity_.

A programmer _must_ be aware of which version of which API is implemented by
which version of the packages and libraries he is using, and must explicitly
assert and maintain these requirements, like the very few sane software
projects (git, nginx, redis, postgres) do.

btw, the GNU autotools (which give us ./configure) are a somewhat evolved
real-world solution - you have to explicitly check each version of each API
_both_ at compile (build) time, refusing to build in case of unsatisfied
dependencies, _and_ at install time (and the package manager must refuse to
install in case of a mismatch). This is the only way back to sanity, however
"painful" it is.

~~~
TeMPOraL
> _A programmer must be aware of which version of which API is implemented by
> which version of the packages and libraries he is using, and must explicitly
> assert and maintain these requirements, like the very few sane software
> projects (git, nginx, redis, postgres) do._

Except that when you're doing anything that looks like an actual end-user
application (as opposed to infrastructure), you end up using dozens of
libraries which themselves have dependencies, so suddenly you're supposed to
"explicitly assert and maintain" hundreds of different library versions,
_none_ of which is in any way relevant to the application you're building.

I myself see Docker containers as the only reasonable way of giving a service
application to people to deploy on their machines, because even the
programming runtime I need is _5 years out of date_ on Debian/Ubuntu, and
installing that stuff manually is a) a pain, and b) different on every
operating system.

~~~
dschiptsov
> as the only reasonable way for giving a service application to people to
> deploy on their machines

Take a look at how git or nginx compiles from source on any machine
imaginable.

There is absolutely no fundamental problem with _./configure; make; make
install_.

~~~
lmm
That would be the same git that's still basically unusable on Windows? And
have you ever tried to cross-compile it?

> There is absolutely no fundamental problem with ./configure; make; make
> install.

The fundamental problem is incompatible versions of dependencies. Arguably
it's in Linux's dynamic linker rather than a problem with configure/make. But
if you need to run something that depends on libfoo 2.3 and something else
that depends on libfoo 2.4 on the same machine, you need something like
Docker.
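
For instance (image and library names are hypothetical), each service simply
carries its own copy:

    # each container bundles the libfoo it was built against,
    # so both can run side by side on the same host
    docker run -d --name svc-old example/service-a:1.0   # built against libfoo 2.3
    docker run -d --name svc-new example/service-b:1.0   # built against libfoo 2.4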

------
mahouse
Gentoo and BSD ports exist for a reason.

------
michaelochurch
I don't have a strong opinion either way about Docker, but I understand the
OP's gripes.

 _Stack is the new term for "I have no idea what I'm actually using"._

This was great. It leaves me to wonder what "full-stack" means.

For one thing, we have a culture of trust inversion. I wrote about it in a
blog post about a month ago:
[https://michaelochurch.wordpress.com/2015/03/25/never-
invent...](https://michaelochurch.wordpress.com/2015/03/25/never-invent-here-
the-even-worse-sibling-of-not-invented-here/) . The "startup" brand (and it
_is_ a brand) has won and most companies trust in-house programmers less than
they trust off-the-shelf solutions. This tends to be a self-fulfilling
prophecy. Because few corporations will budget the time to do something well
(make it fast, make it secure, make it maintainable) it only makes sense to
use third-party software heavily and use one's own people to handle the glue
code, integration, and icky custom work. (That, of course, leads to talent
loss, and soon enough, when it comes to build vs. buy your only option _is_ to
buy, because your build-capable people are gone.) At some point, however, you
end up with a large amount of nearly-organic legacy complexity in your system
that no one really understands.

Although it's not limited to one language or culture, this is one of my main
beefs with Java culture. It has thoroughly given up on reading code. Don't get
me wrong: reading code (at least, typical code, not best-of-class code) is
difficult, unpleasant, and slow and, because of this, you invariably have to
trust a lot of code without manually auditing it. But I like having the idea
that I _can_. The cultures of C, OCaml, Haskell, and to a degree Python, all
still have this. People still read source code of the infrastructure that they
rely upon. But the Java culture is one that has given up on the concept of
reading code (except with an IDE that, one hopes, does enough of your thinking
for you to get you to the right spot for the bug you are fighting) and
understanding anything in its entirety is generally not done.

------
devmonster
The problem is you old sysadmins are so passé. Software has replaced you, and
you need to get over it. Developers are finally liberated to move at full
speed without hearing "NO"

~~~
lafar6502
There certainly are sysadmins who build their authority and power only on
having exclusive access to the root account.

------
antocv
So much truth spoken in the linked text. Thanks.

It has to be said. Damn the containers and windowization of Linux.

~~~
madez
Containers are very helpful for isolating closed-source programs. I don't
like running Steam within my normal Debian system.

------
morgante
Except that Docker explicitly allows and encourages signing of core
infrastructure containers.

~~~
TheDong
Not even close. Docker now has some terrible attempts at signing images on
their registry iirc (docker inc signs them for the docker client).

There is no option for me, as a user, to build and sign my own image with my
own pgp key afaik. My organization might already have a chain of trust, and
docker is asking me to ignore that and just trust their signatures (which also
only work on dockerhub as of docker 1.5... don't know about 1.6 because you
can't use docker for at least a month after a release, else security holes
galore).

Docker did nothing to encourage signing containers. At 1.0 they had no
capability to do any signing or verification whatsoever. It's being added as
an afterthought, and poorly.

If you look at the AppContainer specs, signatures (pgp-based) were built in
from the very beginning: it lets me create my own chain of trust (including
incorporating others' keys), sign my own images, trust someone else's
signature, does not trust the transport or storage medium, and has integration
with the clients.

If you want to convince me docker cares, you're going to have to give me
examples of where they didn't fuck up...

Tell me how I can use docker's tools to sign my own images, optionally trust
my friend Alice, and securely download images that she uploaded to her own
registry or dockerhub but signed with her gpg key _without me having to trust
docker inc_.

To my knowledge, all docker has right now is doing a 'tarsum' of images which
assumes the registry is trusted and, even given that, can be downgraded for
backwards compatibility reasons fairly trivially.

~~~
amouat
Docker didn't fuck up when they hired the Square guys.
[http://blog.docker.com/2015/03/secured-at-docker-diogo-
monic...](http://blog.docker.com/2015/03/secured-at-docker-diogo-monica-and-
nathan-mccauley/)

I agree that security and provenance are real issues in Docker. They are,
however, being worked on, and they will be solved. Presumably we will end up
with some sort of app-store-like framework with proper signatures and
verification.

Docker can't do everything at once. Give them a chance. The new version of the
registry is a major step forward in this regard.

In the meantime, what you can do is take Red Hat's advice. Rather than using
a registry to get your images, operate a download site which stores archives
of docker images that you can import with `docker load`. You can then also
store signatures and check them yourself.
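
Roughly (image name and key handling are placeholders):

    # publisher side: export the image and sign the archive
    docker save -o webapp-1.0.tar example/webapp:1.0
    gpg --detach-sign --armor webapp-1.0.tar

    # consumer side: verify against a key you already trust, then import
    gpg --verify webapp-1.0.tar.asc webapp-1.0.tar
    docker load -i webapp-1.0.tar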

~~~
TheDong
Cool, they hired some people, but I haven't noticed better security for it
yet.

Security is a real issue in docker and it is being worked on, but I don't
think "give them a chance" is a justifiable response. They're not focusing on
it strongly. They already should have focussed on it and didn't. Their entire
codebase was written without a security design in place, so there's likely
deep-seated refactoring that'll need to be done before any new security-
related features should be trusted.

They're working harder on monetizing and pushing docker as a production-ready
standard as far as I can tell... I can understand not doing security before
functionality, but it absolutely should be there before 1.0 or before you
encourage others to use your software.

Docker has already lost any chance of me trusting their security with their
lack of focus on it and I don't think it's excusable.

And if I'm doing what you say at the end, why the hell would I be using docker
anyways then? I can already turn a tarballed fs into a linux container without
docker (ty lxc); I thought the whole point of docker was sharing images and
building on them and ... and having massive security flaws. Right.

~~~
amouat
I agree that security should have been in place before they went 1.0.
However, if you look at the work on the new version of the registry
(docker/distribution on GitHub), they are taking things more seriously and
trying to get the basics right.

I find your last point a bit strange. We all know the Docker development
experience is a lot better than raw LXC. I'm saying you can (and probably
should) be more careful about provenance than the Docker Hub is. Note that
there are alternatives to the Hub with better provenance stories, e.g.:
[https://access.redhat.com/search/#/container-
images](https://access.redhat.com/search/#/container-images) (from
[https://securityblog.redhat.com/2014/12/18/before-you-
initia...](https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-
docker-pull/)) This might make things a bit more awkward than they were
before, but it's still not the same as raw LXC.

I feel your anger and I think it's understandable, but that doesn't mean
things won't get better.

------
confiscate
haha you're my new hero

YOU ONLY LIVE ONCE MAN! trust the (maven) system

