
See my sibling comment to your post: the problem is exactly that it is just a mounted filesystem on which you can run programs. A container should not be described by a Dockerfile, which is basically a glorified shell script with a caching layer on top (and no concept of cache invalidation), but by a dependency file for a proper package manager.
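The cache-invalidation complaint is easy to demonstrate. In this sketch of a typical Dockerfile (image name and packages are illustrative), the package-installing layer is cached by the literal text of the instruction, not by the state it produces:

```dockerfile
# Illustrative Dockerfile sketch; image and package names are examples.
FROM ubuntu:22.04

# Docker caches this layer keyed on the instruction text alone. Once built,
# the layer is reused on every rebuild, even after the upstream package
# index has changed -- there is no invalidation based on actual package state.
RUN apt-get update && apt-get install -y curl

COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```

A real package manager with a lockfile would instead record exact versions and know when they have drifted; the Dockerfile cache cannot.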

Docker is my go-to example for worse-is-worse because of that. They have solved only the easy problems and, as a result, gotten a phenomenally approachable UI (anyone who has used the console to install dependencies can write their own Dockerfile). But in the process they have occupied the niche in which a better packaging solution could have evolved, and grabbed all the mindshare with an enormous marketing effort (aided by an easy-to-use product).

A Dockerfile does not describe a container. A Dockerfile describes how to build a container image. Running that image creates a container.

You can create images in other ways than Docker or Dockerfiles (e.g. orca-build, img). Other programs can run container images (e.g. runc, containerd).

For example, Google's Blaze/Bazel build system can directly output a container image (and upload it to a registry), and you can then run that with runc on any platform without touching Docker or a Dockerfile once.
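As a rough illustration, the open-source rules_docker rules for Bazel let a BUILD file declare an image and a registry push; all target, base-image, and registry names below are hypothetical:

```python
# BUILD file sketch (Starlark); assumes rules_docker is set up in WORKSPACE.
load("@io_bazel_rules_docker//container:container.bzl", "container_image", "container_push")

container_image(
    name = "app_image",
    base = "@ubuntu_base//image",     # hypothetical base-image target
    files = [":app_binary"],          # hypothetical binary built by Bazel
    entrypoint = ["/app_binary"],
)

container_push(
    name = "push_app",
    image = ":app_image",
    format = "Docker",
    registry = "gcr.io",
    repository = "my-project/app",    # hypothetical registry path
)
```

`bazel run //:push_app` would then build and upload the image with no Docker daemon involved.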

I disagree. You seem to prefer a strongly docker-centric idea of package management. I think that by drawing the line where they did, they made it possible for anyone to use whatever package management scheme made sense for a given application. There are already so many alternatives for driving the file system to a particular state that I really fail to see how docker taking an opinionated position would have helped.

I think we have very different ideas about the problem docker is the solution for.

From the docker.com website ( https://www.docker.com/get-started ):

> Building and deploying new applications is faster with containers. Docker containers wrap up software and its dependencies into a standardized unit for software development that includes everything it needs to run: code, runtime, system tools and libraries. This guarantees that your application will always run the same and makes collaboration as simple as sharing a container image.

Basically I understand that as "write it on my machine, deploy it anywhere". "Everything it needs to run" are the dependencies in my lingo. So for me, all of this is dependency management. I have never asked for a way to drive a file system to a particular state, in the same way that I don't particularly care how a 'node_modules' folder is structured, as long as I can `require` whatever I want inside my programs.

(My point is muddied by the other task docker fulfills for me: software configuration, e.g. creating a directory structure with particular access rights here, writing an nginx config file there. Ideally I would reduce the accidental complexity involved in that configuration and define the rest declaratively. I don't care where exactly my program is stored or how exactly it is called; I just want to run it with much reduced privileges, and the way I know how to achieve that is to create a custom user and run my program under that user.)
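The reduced-privileges pattern described above looks roughly like this in a Dockerfile; user, group, and path names are illustrative:

```dockerfile
# Sketch of the "custom user" pattern; names and IDs are examples.
FROM debian:bookworm-slim

# Create an unprivileged group and user to run the program under.
RUN groupadd --gid 1001 app && \
    useradd --uid 1001 --gid app --create-home app

COPY --chown=app:app myprogram /home/app/myprogram

# Drop root: everything from here on runs as the unprivileged user.
USER app
CMD ["/home/app/myprogram"]
```

The imperative steps (groupadd, useradd, chown) are exactly the accidental complexity the comment is lamenting; a declarative system would state only "run this program as a low-privilege user".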

