
Deck-build – A powerful and tiny bash framework to build custom Docker images - d3ck
https://github.com/flexos-io/doc/wiki/deck_build
======
tln
I took over a Dockerfile that called out to ansible for all the heavy lifting,
presumably because that's what the authors knew and liked.

Translating to individual RUN lines removed hundreds of lines of code and
vastly sped up the build. The result should also be more familiar to future
Dockerfile devs, who can see the entire build in one file and don't need to
learn an extra tool.

Wrt this project, how well does it use Docker's build cache? And is it really
better than an idiomatic Dockerfile when you account for future developers?

~~~
d3ck
Build cache is supported ([https://bit.ly/2S8Z3oi](https://bit.ly/2S8Z3oi)),
it's the job of the user to structure his code wisely (same as with docker RUN
commands). And: No, it's not "better", it's a different approach. Like
most/any other community solution it will not replace the standard Dockerfile
process :)

~~~
tln
Thanks for the link

------
jacques_chester
This combines two of my least-favourite things.

If anything, Dockerfiles are already too powerful[^]. It's trivial to zoom
into an exciting world of asset opacity and rotting bits.

 _Images_ are very useful, though. Insofar as they are liberated from the sins
of Dockerfiles, there's a bright future ahead.

I don't see bash as such a liberator. Swapping one mess for a different mess
still leaves me in a state of resentful squalor.

Disclosure: I work on Cloud Native Buildpacks
([https://buildpacks.io](https://buildpacks.io)), so I have a horse in this
race. We're getting near to our first beta release.

[^] The original, pre-dang-editing title was something like "Dockerfiles not
powerful enough?"

~~~
heavenlyblue
>> If anything, Dockerfiles are already too powerful

How? Why doesn't Docker let me arbitrarily copy files, create layers, and
execute shell commands from the command line instead, as opposed to writing a
Dockerfile?

~~~
dvtrn
Can't you via `docker exec` or do I misunderstand the ask?

~~~
deathanatos
You can't `docker exec` unless the container is running, so you first need to
`docker run` something. (Like an infinite sleep loop: `bash -c 'while true; do
sleep 60; done'`.)

You can create images via just `docker`, but it's a wee bit hard to do, IMO.
IIRC, the docker command that captures the resulting image will by default
also change the entrypoint to whatever the `run` was, which isn't helpful.
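
A rough sketch of that workflow with plain docker commands (image, package,
and tag names here are invented for the example; `docker commit --change` can
override the entrypoint the commit would otherwise inherit):

```shell
# keep a container alive so we can exec into it
docker run -d --name builder debian:bookworm-slim \
    bash -c 'while true; do sleep 60; done'

# do the "build" by hand, one exec at a time
docker exec builder apt-get update
docker exec builder apt-get install -y curl

# snapshot the container as an image, replacing the sleep-loop
# command that would otherwise be baked in
docker commit --change 'CMD ["bash"]' builder myimage:manual
docker rm -f builder
```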

~~~
heavenlyblue
>> You can create images via just `docker`, but it's a wee bit hard to do,
IMO.

Also, if you do that, you can't edit the ENTRYPOINT without adding another
layer, because docker can't edit the relevant JSON in the .tar file directly.

As in: the "POST /commit" API documents the functionality to do so, but it
never works; that parameter is simply ignored.

------
tdurden
You can already run whatever commands you want when creating a Docker image,
including your own shell scripts. It isn't clear to me why one would want to
use this.
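
For instance, a plain Dockerfile can already delegate all the heavy lifting to
an ordinary shell script (file names here are illustrative):

```dockerfile
FROM debian:bookworm-slim
# the actual build logic lives in a normal, reusable shell script
COPY setup.sh /tmp/setup.sh
RUN bash /tmp/setup.sh && rm /tmp/setup.sh
```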

~~~
d3ck
You are right, deck-build doesn't have any magic regarding the build process
(this is part of the concept). But:

1.) It bundles a lot of useful functions
([https://bit.ly/2N9mAEu](https://bit.ly/2N9mAEu)) in the "kit": Have a look
at this gist: [https://bit.ly/2Nby0HZ](https://bit.ly/2Nby0HZ) - Four lines
to create a container with two users (foo and root) including their own python
packages installed in their home directories (PIP_USER=1).

2.) It's very easy to expand the kit and the plan/artefact concept helps to
structure and reuse code ([https://bit.ly/2BEIcnH](https://bit.ly/2BEIcnH))
stored on your local disk or in repositories.

~~~
dvtrn
Why would I use this tool for 1) instead of Make, or just throw RUN useradd -m
foo into my Dockerfile?

~~~
d3ck
Yes, you are both right, of course you can RUN "useradd ...". But we have
often found that we need to run more complex processes during docker builds
(with if...else conditions etc.), resulting in complex Dockerfile RUNs.
Bundling and reusing this code in plans/artefacts is very useful. In the end
we use deck-build to create our images; of course it's not intended to be a
substitute for the great (Docker Hub) image concept.
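
As a sketch of the kind of branching that outgrows a one-line RUN, a small
reusable bash function (targets and package lists invented for the example)
might pick build packages per target:

```shell
# choose_packages: decide which packages a build target needs.
# Keeping this in a sourced script avoids one giant Dockerfile RUN.
choose_packages() {
  case "${1:-prod}" in
    dev)  echo "gcc make gdb python3-pip" ;;  # extra tooling for dev images
    prod) echo "python3" ;;                   # lean production image
    *)    echo "unknown target: $1" >&2; return 1 ;;
  esac
}
```

A Dockerfile RUN (or a deck-build plan) can then call `choose_packages
"$TARGET"` instead of inlining the whole case statement.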

~~~
dvtrn
_But we have often made the experience that we need to run more complex
processes during docker builds (with if...else conditions etc.) resulting in
complex Dockerfile RUNs_

That's fair, I've come across those types of scenarios as well, but trying to
solve them during docker build seems like you're giving yourself, and
inheritors of your docker image a lot of unnecessary extra work. Shouldn't
those processes be solved independently of the docker daemon, and the results
fed to Docker when it's ready for them?

~~~
d3ck
I am not sure if I understand your comment correctly, but we all know that
there is a very wide range of image requirements. Some of them need to be
addressed during builds (e.g. sometimes you want to distinguish between dev
and production setups), others will be handled during container setup.

You are right regarding inheritors. Creating a Docker Hub image has different
requirements than creating images for your private environment.
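
One stock-Dockerfile way to make that dev/production distinction is a
build-arg (package names are illustrative):

```dockerfile
FROM debian:bookworm-slim
ARG BUILD_ENV=production
# extra debugging tools only in dev images
RUN if [ "$BUILD_ENV" = "dev" ]; then \
      apt-get update && apt-get install -y gdb strace; \
    fi
```

built with `docker build --build-arg BUILD_ENV=dev -t app:dev .` for the
development variant.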

~~~
dvtrn
I guess I'm not entirely understanding how the scenarios you've brought up to
me and others aren't sufficiently solved by the current Dockerfile
implementation, along with other command-line tools that I can use immediately
or retrieve trivially. A 'framework' for bash + docker feels a bit anti-
pattern-y, or at least the benefits just aren't obvious for the way my team
uses Docker presently.

That said, if what you've put together has solved problems for your team,
maybe it will for others, so don't let my incredulity spoil anyone else's
curiosity who reads these comments later.

~~~
d3ck
In the meantime I have realized that the term "framework" is very
missleading... ;) But you're pointing out a crucial point. IMO, the need of
many Show-HN solutions is often not fully understandable in the first moment.
But when processes and requirements change over time, sometimes these
solutions are remembered again and can be a starting point for own ideas.
That's one of the reasons why I read HN :) Thank you for your constructive
comments!

~~~
dvtrn
_when processes and requirements change over time, these solutions are
sometimes remembered and can be a starting point for one's own ideas._

You have a good point, here. Thanks for the responses and helping me
understand your work a bit better!

------
Confiks
There are three aspects of Docker that aren't too bad in themselves (and have
obvious or logical reasons), but taken together they lead to tight coupling
and code duplication:

- There is no composition of images, only inheritance

- Shell scripts can't define any environment variables that survive beyond
the script itself

- Docker doesn't have any way to take or template values from other
Dockerfiles

That means you might be able to separate RUN logic out into shell scripts
(like the method deck-build proposes), so that you can compose multiple tools
into the same image. Still, any ENV variables your tools might need (if only a
simple $PATH) have to be manually copied into the composing Dockerfile and
maintained when changes occur.
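
A sketch of that duplication (paths and file names invented for the example):
even if `install.sh` exports the tool's PATH, the export dies with the RUN
shell, so the composing Dockerfile has to restate the value:

```dockerfile
FROM debian:bookworm-slim
COPY install.sh /tmp/install.sh
# install.sh puts the tool under /opt/mytool and exports
# PATH=/opt/mytool/bin:$PATH -- that export ends with this RUN's shell
RUN bash /tmp/install.sh
# ...so the value must be duplicated here by hand and kept in sync
ENV PATH=/opt/mytool/bin:$PATH
```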

~~~
d3ck
Thx for your constructive summary. Not sure what you mean by "tools might
need", but deck-build itself only needs some well-defined ENVs
([https://bit.ly/2TSTImv](https://bit.ly/2TSTImv)). Regarding shell scripts
used in RUN commands: why not COPY a conf file that every script sources?

------
peterwwillis
If you find yourself using complicated custom methods with Dockerfiles, you
should probably be building Linux packages, publishing to a local repo, and
installing them from the Dockerfile. You gain the idempotent, immutable,
versioned, system integrated, dependency-mapped, cryptographically verifiable,
remotely distributed, cacheable benefits, and you don't have to adopt any new
software or systems.

Your organization can follow any workflow/lifecycle it wants to build and
publish the packages. Once they are published, developers can just install and
use them with no learning curve whatsoever.

~~~
makapuf
And to create those packages, I've found fpm[1] a joy to use. [1]
[https://github.com/jordansissel/fpm](https://github.com/jordansissel/fpm)
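
A minimal fpm invocation along those lines (tool name, version, and paths
invented for the example):

```shell
# package a directory tree into a .deb; apt can then install it
# inside any Dockerfile like a regular package
fpm -s dir -t deb -n mytool -v 1.0.0 --prefix /usr/local mytool/
```

and in the Dockerfile: `COPY mytool_1.0.0_amd64.deb /tmp/` followed by
`RUN apt-get install -y /tmp/mytool_1.0.0_amd64.deb`.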

------
seifertm
This reminds me a bit of the container image builder "Kubler"
([https://github.com/edannenberg/kubler](https://github.com/edannenberg/kubler))
which is also configured with Bash scripts.

Just creating a cross-reference for the interested here :)

~~~
d3ck
Thank you, I didn't know and will have a look :)

------
j0057
Check out buildah, a tool that builds container images using real shell
commands, with no big fat Docker daemon necessary. Instead of a COPY
command, you can just use cp.
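
Roughly what that looks like (base image, package, and paths are
illustrative):

```shell
# buildah drives the image build with ordinary shell commands, no daemon
ctr=$(buildah from alpine:3.19)
buildah run "$ctr" -- apk add --no-cache curl
buildah copy "$ctr" ./app /opt/app        # plain copy instead of COPY
buildah config --entrypoint '["/opt/app/run.sh"]' "$ctr"
buildah commit "$ctr" myimage:latest
```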

------
marmaduke
Why not just use buildah?

~~~
warmwaffles
Came here to mention this. I loved rkt for the fact that I didn't need a
special syntax for a file. Instead, just a shell script that built the image I
needed. Honestly when it comes to deployments, it was much better for us to
just get everything into one big layer and move on.

------
zallarak
I will be sad if I ever need to spend significant amounts of time configuring
docker, or any deployment system.

