
We're using Docker for development, but we still have to take the leap into production. The whole build/push/pull part is rather confusing. I tried Docker Hub (or Docker Cloud builds, as it's now called?), but the build itself takes forever... what are people using these days?

Also, for development machines, how do you sync things between developers? I can commit a Dockerfile change, but unless I explicitly tell Docker Compose to rebuild my images and containers, it will happily stick to the old version. I have to keep nagging our (3) developers to do this from time to time... what am I doing wrong? Sorry if these are dumb questions, but we're still stuck on the basics, it seems.



If you're still struggling with the build workflow, it's probably not yet the right time to take that leap.

It's not rocket science, of course. You build an image somewhere (your local machine, a CI server, anywhere), push it to a registry, and when you want to run the image, you pull it from the registry and run it. ("docker run" will, by default, automatically pull when you ask it to run something.)
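
As a minimal sketch of that cycle (the registry and image name here are made up; substitute your own):

    # on the build machine (laptop, CI server, ...)
    docker build -t registry.example.com/myapp:1.0.0 .
    docker push registry.example.com/myapp:1.0.0

    # on whatever machine runs it; "docker run" pulls the image
    # automatically if it isn't already present locally
    docker run -d --name myapp registry.example.com/myapp:1.0.0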

I don't quite understand what your Compose problem is. Is the Compose file referencing images published to, say, Docker Hub? If so, the image obviously has to be built and published beforehand. However, it's also possible to point Compose at local checkouts and run "docker-compose up --build", e.g.:

    version: '3.2'
    services:
      mainApp:
        build:
          context: .
      service1:
        build:
          context: ../service1
      service2:
        build:
          context: ../service2
and so on.

There's a whole ecosystem of tools built around Docker for building, testing, deploying, and orchestrating applications. Kubernetes is one. If you're having issues with the Docker basics, however, I wouldn't consider any of these systems quite yet. You should, though, consider automating your building and testing with a CI (continuous integration) system rather than making your devs build and test on their local machines.

As with anything, to actually use Docker in production you'll need an ops person/team that knows how to run it. That could range from something as simple as a manual "docker run" or "docker-compose up" to something much more complex such as Kubernetes. This is the complicated part.


The problem I was referring to with docker-compose:

Let's say I update my Dockerfile, changing `FROM ruby:2.3.4` to `FROM ruby:2.5.1`, and commit the Dockerfile change, merge it to master, etc.

Our developers have to remember to manually run "docker-compose up --build", or to remove their old containers and create new ones, which would get the images rebuilt... I couldn't find something that would warn them when they're running off of stale images, or, better, simply rebuild automatically when the Dockerfile changes.

Part of the benefit of Docker is creating a repeatable environment, with all sub-components, on all dev machines. Isn't it?

Maybe our devs should only pull remote images and never build them, but then wouldn't I have the same problem that docker-compose won't force or remind the developers to pull unless they explicitly tell it to? And also, isn't this detaching the development process around the Dockerfiles/builds themselves from the rest of the dev process??


If you run with "docker-compose up --build", it should automatically build. This requires that any app you want to work on references the local Dockerfile, not a published one, the same way as in my paste. I.e. "build: ./myapp" or whatever.

Edit the code, then restart Compose, and repeat. It will build each time. If you want to save time and you have some containers that don't change, you can "pin" those containers to published images — e.g., the main app is in "./myapp", but it depends on two apps "foo:08adcef" and "bar:eed2a94", which don't get built every time. This speeds up development.
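
Concretely, a rough sketch of such a Compose file, reusing the made-up "foo"/"bar" tags from above:

    version: '3.2'
    services:
      mainApp:
        build:
          context: .          # rebuilt on every "up --build"
      foo:
        image: foo:08adcef    # pinned to a published image, never rebuilt locally
      bar:
        image: bar:eed2a94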

Building on every change sounds like a nightmare, though. It's more convenient to use a file-watching system such as nodemon and map the whole app to a volume. Here's a blog article about it that also shows how you'd use Compose with multiple containers that use a local Dockerfile instead of a published one: https://medium.com/lucjuggery/docker-in-development-with-nod....
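
A minimal sketch of that volume approach, assuming a Node app whose image works out of /usr/src/app and has nodemon available (adjust paths and the command to your stack):

    version: '3.2'
    services:
      mainApp:
        build:
          context: .
        # mount the source into the container so edits show up without a rebuild
        volumes:
          - .:/usr/src/app
        # nodemon restarts the process whenever files change
        command: npx nodemon server.js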


We're not building every time. But sometimes, like in the example above, we do need to build. The problem, however, is that this becomes a fairly manual process. If a developer forgets to do it, they will keep running with an older base image, so all the consistency benefits across developers are gone.

In any case, thanks for your suggestions. I think it was a misconception on my part about how docker-compose should behave.


So to me it's starting to sound like "developers forgetting" is your problem, not Docker or Compose.

The solution I've used at the multiple companies I've started is to maintain a developer-oriented toolchain that encodes best practices. You tell the devs to clone the toolchain locally, and you build in a simple self-update mechanism so it always pulls the latest version. Then you provide a single tool (e.g. "devtool"), with subcommands, for whatever you want to script.

For example, "devtool run" could run the app, calling "docker-compose up --build" behind the scenes. This ensures that they always build every time and never forget the flag.

If you have other common patterns that have multiple complicated steps or require "standardized" behaviour, bake them into the tool: "devtool deploy", "devtool create-site", "devtool lint", etc.

We've got tons of subcommands like this. One of them is "preflight", which runs a series of checks to make sure the local development environment is set up correctly (Docker version, kubectl version, whether Docker registry auth works, SSH config, etc.) and fixes issues where it can (e.g. if the Google Cloud SDK isn't installed, it can install it). It's a good pattern that also simplifies onboarding of new developers.
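
As a rough sketch of the idea (the layout and checks here are hypothetical, not our actual tool):

    #!/usr/bin/env bash
    # devtool - minimal sketch of a developer wrapper with self-update
    set -euo pipefail

    TOOL_DIR="$(cd "$(dirname "$0")" && pwd)"

    # simple self-update: fast-forward the toolchain checkout before doing anything
    git -C "$TOOL_DIR" pull --ff-only --quiet || echo "warning: self-update failed" >&2

    case "${1:-}" in
      run)
        # always rebuild, so nobody runs against a stale image
        docker-compose up --build
        ;;
      preflight)
        # environment sanity checks; extend with registry auth, SSH config, etc.
        docker version >/dev/null || { echo "Docker daemon not reachable"; exit 1; }
        command -v kubectl >/dev/null || { echo "kubectl not installed"; exit 1; }
        echo "preflight OK"
        ;;
      *)
        echo "usage: devtool {run|preflight}" >&2
        exit 1
        ;;
    esac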


That's a great suggestion! Thanks. We're doing parts of it, but I just need to expand it to work with docker-compose. As I mentioned, I probably had the wrong preconceptions about it "figuring out" when components were stale... I guess a few simple bash scripts can work wonders to make it more intelligent :)


We build a microservices-based tool, hosted as containers in AWS, and have a very developer-friendly workflow. My team's workflow might not work well for yours, YMMV, etc, but here's how we do it:

- When we make a PR, we mark it as #PATCH#, #MINOR#, or #MAJOR#.

- Once all tests pass and a PR is merged, CI uses that tag to auto-bump our app version (e.g. `ui:2.39.4`, or `backend:2.104.9`) and update the Changelog. [0]

- CI then updates the Dockerfile, builds a new image, and pushes that new image to our private repo (as well as to our private ECR in AWS).

- CI then updates the repo that represents our cloud solution to use the newest version of the app.

- CI then deploys that solution to our testing site, so that we can run E2E testing on APIs or the UI, and verify that bugs have been fixed.

- We can then manually release the last-known-good deployment to production.

The two main keys to all of this are that our apps all have extensive tests, so we can trust that a PR is not going to break things, and that our CI handles all the inconvenient version-bumping and the generation and publication of build artifacts. The best part is, we no longer have five people getting merge conflicts when we go to update versions of the app, as CI does it for us _after_ things are merged.
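
Stripped down, the post-merge CI job amounts to something like the following. All names, URLs, and file paths here are placeholders, and the version bump itself is handled by pr-bumper [0]:

    # runs in CI after a PR is merged; registry login omitted
    VERSION="$(cat .version)"   # e.g. 2.39.4, written by the version-bump step
    IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/ui:${VERSION}"

    docker build -t "$IMAGE" .
    docker push "$IMAGE"

    # point the repo that represents the cloud solution at the new image,
    # which triggers a deploy to the testing site
    git clone git@example.com:ourorg/cloud-solution.git
    cd cloud-solution
    sed -i "s|image: .*/ui:.*|image: ${IMAGE}|" deployment.yaml
    git commit -am "Bump ui to ${VERSION}"
    git push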

0: We use pr-bumper (https://github.com/ciena-blueplanet/pr-bumper), a tool written by my coworkers, for our JS apps and libraries, and a similar Python tool for our non-JS apps.


My first recommendation would be to separate in your head the Docker development environment from the Docker production environment. They can be very different, and that is OK.

For production, you want the Docker image to be built when PRs are merged to master (or whatever your flow is). Google Container Builder makes that very easy: you can set up a trigger to build an image and push it to the registry when there are changes in git (code merged to a branch, a tag pushed, etc.). Then you need to automate getting that deployed, hopefully to Kubernetes, but that is a different issue.
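
A minimal Container Builder config (cloudbuild.yaml) for such a trigger is just a docker build step plus the image to push; "myapp" here is a placeholder:

    steps:
      - name: 'gcr.io/cloud-builders/docker'
        args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
    images:
      - 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'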


> They can be very different, and that is OK.

This feels odd to me. Isn't development/production parity one of the major selling points of Docker?





