
Show HN: Dockerfiler: declarative management of images built from Dockerfiles - jbergknoff
https://github.com/jbergknoff/Dockerfiler
======
jbergknoff
Hi, I'm the author. Thanks for taking a look. This is basically a tool to help
manage a "Dockerfile" repo (along the lines of
[https://github.com/jessfraz/dockerfiles](https://github.com/jessfraz/dockerfiles)),
where you build any tools you want into images that you control. This can be
really useful for personal use or within a company.

Why build tools into Docker images? Love it or hate it, there are many senses
in which Docker is currently the best medium that we have for distributing and
running dev tools. Here's an article making that argument:
[https://jonathan.bergknoff.com/journal/run-more-stuff-in-docker/](https://jonathan.bergknoff.com/journal/run-more-stuff-in-docker/).
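
To make the pattern concrete (the image and tag below are just an example): instead of installing a tool on your machine, you run it out of an image, mounting in your working directory:

    # run terraform out of an image instead of a local install
    docker run --rm -it -v "$PWD":/workspace -w /workspace \
      hashicorp/terraform:0.12.29 plan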

~~~
tuananh
this description is a lot clearer to me than the one in the readme :)

~~~
jbergknoff
Thanks for that feedback! I'll update the readme.

------
fermienrico
We've made all this so complicated. Instead of the beautiful simplicity of the
UNIX philosophy (arguably), modern devops feels like a mishmash of ideas
plopped together, tied up with a bunch of YAML tape, and God forbid you ever
want to look inside the stack of mess. Containerization is a great idea, but
flawed in its interface.

I like what Jim Keller (the chip architect) says about complexity: that we
need to throw everything away and start from scratch. The interviewer asks how
often, and Jim responds that chip architectures are currently redone every 10
years, but it should be more like every 5[1].

Just like any evolutionary process, there comes a point of diminishing
returns, because mistakes cannot be corrected once many other things have been
piled on _top_ of them. It becomes difficult to walk anything back, so more
shit gets piled on top just to patch up the old mistakes. Like our recurrent
laryngeal nerve, which runs from the brain all the way down into the chest,
loops under the aorta, and comes back up to the voicebox[2]. It is even more
evident in a giraffe. A good architect wouldn't design anatomy like this. The
reason it is the way it is, is that evolution has no hindsight, and the
marginal cost of _undoing_ the nerve is higher than slightly increasing its
length. This is what we do in software. A good architect wouldn't design
software like this. Sorry for the digression, but I just feel so much pain
with Docker, Kubernetes, Terraform, and the whole load of AWS complexity. Holy
shit.

[1]
[https://www.youtube.com/watch?v=1CSeY10zbqo](https://www.youtube.com/watch?v=1CSeY10zbqo)

[2]
[https://en.wikipedia.org/wiki/Recurrent_laryngeal_nerve#/media/File:Recurrent_laryngeal_nerve.svg](https://en.wikipedia.org/wiki/Recurrent_laryngeal_nerve#/media/File:Recurrent_laryngeal_nerve.svg)

~~~
mixologic
Sure, it seems complex, but we're solving insanely complex problems that
didn't even remotely exist during the halcyon days of the "beautiful
simplicity" of the ol' Unix philosophy. That philosophy was born out of tools
for processing text files. Its closest proxy in the ops world might be
microservices, but if you compare the amount of labor it took to set up a
datacenter even as recently as the first dotcom era, it's _insane_ what we can
accomplish with a CloudFormation YAML file or a Terraform configuration file
in a similar amount of time. The reproducibility, the change management, all
of it is light years better than it used to be. Every one of these "YAML
messes" evolved out of drastically shittier shell scripts, duct tape, hope,
and weekend pagers blowing up.

It might seem complex and obtuse, but it's turtles all the way down, and
everything back to the original von Neumann architecture could definitely be
looked at again and redesigned. But otoh we're still doing weird shit like
dividing our day into 24 hours, 60 minutes, and 60 seconds for no reason other
than "it's always been that way", and if we had a chance to actually design a
sensible time system, we could eliminate a lot of the complexity that has
piled up on top of it.

But yeah, no way would I ever want to go back to the way things were before
all these layers existed.

~~~
pydry
Just as many tools existed during those "halcyon" days that didn't adhere to
the UNIX philosophy. They just didn't tend to survive.

The one thing that hasn't changed is the commercial (& ego) pressure to build
the "one tool to rule them all", which has been pretty much constant
throughout the years, set against the reality of trying to string all of these
things together.

Small, self-contained tools with good APIs have persisted and will stand the
test of time, while the monoliths always die eventually (maybe with the
exception of Excel, which has really managed to cling on).

I don't think Kubernetes or Docker really have a bright future, because they
both have the ambition to be "the one tool to rule them all", and they simply
can't be.

------
digitalsushi
My anxiety grows as layers are added to the Docker onion faster than I can
stay current. In the past two years my company adopted Kubernetes to manage
our already confusing Docker infrastructure, and then added Rancher on top of
that because no one was making sense of Kubernetes.

Meanwhile we're shipping on five-year-old containers that everyone is afraid
to update and no one remembers how to build. We're building a skyscraper on a
floating pier, trying to reach the moon before the tide changes.

I know this is the way, but I am having a hard time because it just doesn't
make sense. This doesn't feel like process; it feels like a compounding of
reactions. Is it just me? My company? Or is this a general feeling?

~~~
crb002
Maybe I am stupid, but isn't there a problem with Docker's build design?
Shouldn't you get, by default, both a Docker image and an artifacts.zip of the
binaries it downloaded, so you can pin them? The cache isn't just for speed;
it allows for reproducible builds.
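
In the meantime, the closest you can get is pinning inputs by hand, e.g.
resolving a base image tag to an immutable digest and building from that
(digest elided below):

    # resolve the tag to a content-addressed digest...
    docker pull alpine:3.12
    docker image inspect --format '{{index .RepoDigests 0}}' alpine:3.12
    # ...then pin it in the Dockerfile:
    #   FROM alpine@sha256:<digest-from-above>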

~~~
crb002
Also, Docker should have a reproducible flag that creates bit-for-bit
reproducible images. Timestamps and local system info would have to be nerfed
to some default or fed in statically from a config file.
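
No such flag exists as far as I know, but you can at least test for
reproducibility today: build twice with the timestamp fed in statically and
compare image IDs (the SOURCE_DATE_EPOCH name below follows the
reproducible-builds.org convention; the Dockerfile has to honor the arg for
this to mean anything):

    # build twice with a pinned timestamp and compare image IDs
    ID1=$(docker build -q --build-arg SOURCE_DATE_EPOCH=0 .)
    ID2=$(docker build -q --no-cache --build-arg SOURCE_DATE_EPOCH=0 .)
    [ "$ID1" = "$ID2" ] && echo reproducible || echo not-reproducible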

~~~
joombaga
You can already make reproducible images with e.g. NixOS. I don't know if
there's a good way to tackle the problem at the Docker level without
sacrificing container-OS agnosticism.

~~~
pknopf
My OS is a Docker image, booted on bare metal.

[https://godarch.com/](https://godarch.com/)

------
rsa25519
See also Nix, which can declaratively generate Dockerfiles (example on home
page) :-)

[https://nixos.org](https://nixos.org)

~~~
TheDong
> declaratively generate Dockerfiles

This is inaccurate, assuming you are referring to the
'dockerTools.buildLayeredImage' function in the nixpkgs repository.

That builds docker _images_ declaratively, not dockerfiles.

------
leetrout
So this is for me to easily manage all my linters, formatters and such in one
image from a group of upstream, published images, right?

~~~
pacifika
Looks like it. When developer tooling causes conflicts to install globally and
tricky to configure with the project (say phpunit) then this can all be taken
care of by building it within a docker container. However if the linting also
dependent on a specific version of a programming language to test its code now
you are no longer testing the production environment.
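
The usual pattern (registry path and tag hypothetical) is to alias the tool to
a docker run against the project directory:

    # run phpunit out of an image instead of a global install
    alias phpunit='docker run --rm -v "$PWD":/app -w /app registry.example.com/phpunit:9'
    phpunit --testsuite unit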

------
techntoke
From the article:

> Docker is an excellent means of distributing those sorts of tools.

No, package managers are an excellent means of distributing and managing
installed tools. Docker is an excellent way to package tools, but its
distribution and management story is terrible. There isn't even a command to
simply show which images are "outdated" without having to repull them all,
which can take 10+ minutes for 30 images.
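
The closest workaround I know of is comparing digests without pulling, e.g.
with skopeo (assuming skopeo and jq are installed; the image name is just an
example):

    # is local alpine:3.12 behind the registry? compare digests, no pull needed
    local_digest=$(docker image inspect --format '{{index .RepoDigests 0}}' alpine:3.12 | cut -d@ -f2)
    remote_digest=$(skopeo inspect docker://docker.io/library/alpine:3.12 | jq -r .Digest)
    [ "$local_digest" = "$remote_digest" ] && echo up-to-date || echo outdated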

------
pacifika
I guess we are fixing our lack of a compatibility story by encasing our tools
in fixed environments, then writing new tooling when those environments are no
longer usable.

That seems like a short-term view of things.

------
aasasd
Pretty sure the name is ungoogleable.

------
techntoke
That is what I use Skaffold for.

------
gavinray
I think I may be a bit slow -- I don't get it =/

~~~
joombaga
You write declarative manifests that include source repos/tags, destination
repos/tags, and build arguments, and this thing figures out what commands are
needed to get your container registry into the desired state. It outputs those
commands; you'd then pass them to an environment that has the access rights to
make the necessary changes.
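
For instance (manifest shape and output paraphrased, not the tool's actual
format): you declare that upstream hadolint/hadolint:v1.18.0 should exist in
your registry as registry.example.com/hadolint:v1.18.0, and the reconciliation
would print something like:

    docker pull hadolint/hadolint:v1.18.0
    docker tag hadolint/hadolint:v1.18.0 registry.example.com/hadolint:v1.18.0
    docker push registry.example.com/hadolint:v1.18.0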

~~~
gavinray
Ohhhh.

So, per the example in the repo: you have some set of tools that you may want
to run in CI/CD, or that you want every developer to have the same version of
(linters, deployment tools, test runners, etc.), and this will figure out the
optimal way of pulling them so they're available?

Thanks, makes sense now, I think.

~~~
joombaga
Yep! Not only pulling them, but also building them and pushing them if needed.

