Working on Multiple Web Projects with Docker Compose and Traefik (georgek.github.io)
173 points by globular-toast on Oct 3, 2023 | 69 comments



The article mentions:

> What if that compose.yaml file is checked in as part of the project? Does the whole team have to agree on a set of port numbers to use for each project?

That's only an issue if you choose to hard-code values. You can use environment variables instead.

You can change `- "8000:80"` to `- "${DOCKER_WEB_PORT_FORWARD:-127.0.0.1:8000}:${PORT:-80}"` and now any developer can customize the forwarded port however they see fit in a git ignored `.env` file. This is what I've done in all of my example Docker web apps at: https://github.com/nickjj?tab=repositories&q=docker-*-exampl...
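For example, a minimal sketch (the service name and image are placeholders):

  services:
    web:
      image: nginx
      ports:
        - "${DOCKER_WEB_PORT_FORWARD:-127.0.0.1:8000}:${PORT:-80}"

A developer who wants something else just puts DOCKER_WEB_PORT_FORWARD=127.0.0.1:8001 into their git ignored `.env` file.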

No Traefik or override file is needed, at least not for allowing a user to customize the forwarded port.

I like the override file and used it for years but I stopped using it entirely about 6 months ago. It's too much of a headache to commit a `docker-compose.override.yml.example` file to version control and then have folks copy that to a git ignored `docker-compose.override.yml` file. You end up with serious config drift, especially if you have a team with a few developers. It's been a source of so many "oh yeah, I forgot to update my real file" type of issues.

Between environment variables and Docker Compose profiles[0] you can have a single committed `docker-compose.yml` file that is usable in all environments for all users.

[0]: https://nickjanetakis.com/blog/docker-tip-94-docker-compose-...
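A minimal sketch of the profiles half (service and image names are made up):

  services:
    web:
      ports:
        - "${DOCKER_WEB_PORT_FORWARD:-127.0.0.1:8000}:${PORT:-80}"
    mailhog:
      image: mailhog/mailhog
      profiles: ["dev"]

Plain `docker compose up` skips mailhog; `docker compose --profile dev up` starts it, so one committed file covers both cases.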


By using the !reset tag I've tried to make the override file completely optional, so no example override file is needed.
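For anyone who hasn't read the article, the mechanism is roughly this (a sketch, not the article's exact files):

  # compose.yaml (committed)
  services:
    web:
      ports:
        - "8000:80"

  # compose.override.yaml (git ignored, optional)
  services:
    web:
      ports: !reset []
      labels:
        - traefik.http.routers.web.rule=Host(`web.traefik.me`)

Developers without an override file still get the plain port mapping.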

I did also consider using environment variables instead of the override file. It's not a bad idea and perhaps gets around some of the limitations with !reset.

I still think using Traefik to get names instead of a load of different port numbers is cool, though, especially if you want to share these links with others on your network. For that I think you would need the override file. Or you could just commit the labels to the main file, as they wouldn't cause any harm, but then you've got a load of noise that people might not care about (and what if they have their own label-based tool of choice?)

Thanks for the link to your work, it looks very useful.


Note that it is no longer called docker-compose.yml; the files are now just called compose.yml and compose.override.yml. Another worthy mention is profiles: you can have a dev profile in there that is configurable with environment variables.


The comment you are replying to mentions profiles (and links to their own blog about it). Do you mean the same feature or something different?


I used Traefik a lot, but man, those labels get tedious. I still don't get all the middleware stuff. I switched to Caddy, and a Caddyfile feels like a huge improvement: far fewer lines for the same results. No routers, no middleware, just define a port mapping to the container:port.

And once you're on a real server you get HTTPS for free, no extra config.


I have had a great experience using this: https://github.com/lucaslorentz/caddy-docker-proxy

It combines caddy with docker-compose labels, making it super easy to spin up new projects that can immediately be exposed.
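From its README, the labels look roughly like this (domain is a placeholder):

  services:
    whoami:
      image: traefik/whoami
      labels:
        caddy: whoami.example.com
        caddy.reverse_proxy: "{{upstreams 80}}"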


Being able to dynamically configure Traefik routes from Docker Compose labels is the whole point. It is a very useful feature. In most cases I get a full overview in a single compose file, and I do not have to configure or restart the HTTP proxy separately; `docker compose up -d` does everything.
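For anyone who hasn't used it, the labels look roughly like this (names are placeholders):

  services:
    myapp:
      image: traefik/whoami
      labels:
        - traefik.enable=true
        - traefik.http.routers.myapp.rule=Host(`myapp.traefik.me`)
        - traefik.http.services.myapp.loadbalancer.server.port=80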


Perhaps what traefik does with the labels is great, but then it's probably just too detailed for my needs.


I personally use a file provider for the dynamic Traefik configs (YAML files) loaded from a bind mount in the same folder as my compose file. Auto-reload on changes, and it makes it clear what I'm routing to and from, with proper indentation for my router, service, and middleware fields. And since everything is on the same network, I can just use the container name as the hostname -- the DNS entries are created automatically.
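A dynamic config file for the file provider looks roughly like this (names are placeholders):

  http:
    routers:
      myapp:
        rule: Host(`myapp.example.com`)
        service: myapp
    services:
      myapp:
        loadBalancer:
          servers:
            - url: http://myapp:8000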


Same frustration. I just ended up writing a set of shorthand macros in my compose.yml labels, plus a little script that expands them into their long-winded Traefik label equivalents in the compose.override.yml.

Most of the time I just have to set the container port and a host, and the rest gets expanded, but I have macros to add the tedious middleware lines too.


I do something similar, but even managing a separate configuration just for the reverse proxy gets tiring. I have plans to move to something Kubernetes-based and use an ingress controller to automatically set up everything based on a deployment chart, but I never get around to it...


Seriously, don't put Traefik in front of your local dev if you don't explicitly have to. It is way too much complexity and introduces an added layer of noise to debug.

Also, does localhost subdomain resolution work on all OSes out of the box now? IIRC this was an Apple-exclusive feature in the past.


Note that the article uses http://traefik.me/, one of those sites that resolves all subdomains to localhost (like lvh.me, http://readme.localtest.me/, or http://local.gd/), so you don't need any "localhost subdomain resolution".


I thought Let's Encrypt wouldn't allow SSL certificates whose private keys are made public. It defeats the purpose.

Edit: indeed, it says this:

> You warrant to ISRG and the public-at-large that You have taken, and You agree that at all times You will take, all appropriate, reasonable, and necessary steps to assure control of, secure, properly protect, and keep secret and confidential the Private Key corresponding to the Public Key in Your Certificate (and any associated activation data or device, e.g. password or token).

https://letsencrypt.org/documents/LE-SA-v1.3-September-21-20... via https://letsencrypt.org/repository/


If you click what looks like the link to the private key, it shows a warning encouraging you not to revoke the key with Let's Encrypt. I suppose a *.traefik.me cert is as untrustworthy as any other domain you don't know the owner of, so it doesn't really matter that anyone can obtain a key for it; anyone can already get a key for any untrustworthy domain name they could come up with via Let's Encrypt.

So it's probably only a problem for people who use traefik.me: they might be tricked into thinking they're visiting their own locally hosted site and instead be man-in-the-middled. Though that's a fairly specific attack, coming from an APT or as a means of escalation for someone who has already gained access through other means.


A phishing site at https://coinbase-support-172-67-207-118.traefik.me/ would be similar to some phishing domains I've seen, and it would have the lock icon. With the lock I'm sure it could trick some people.


> localhost subdomain resolution

Works fine on current Ubuntu, both with the systemd stub nameserver and even if I replace it with a real one in /etc/resolv.conf.


I normally use nginx and it works just fine tbh; easiest way to emulate an ingress service imo (I deploy to k8s afterwards).


I am presently developing a project that runs five back-end servers and two front-end servers. I ended up solving the same port issues with Traefik, like the author. But it must be said that Traefik's configuration is a nightmare to get working the way you need it to. Also, SSL certificates are still a humongous pain in the ass to get going on a local machine, even in 2023. And you cannot ignore them locally because of those maddening CORS rules that only bring pain and suffering to developers and literally nothing to the consumer.

I do not use docker compose because each server is in active development so I cannot compile it into an image, and I have to spin them all up manually. But I will use docker compose after I am done, for the front-end developers, so they can keep developing the UI locally.


> I do not use docker compose because each server is in active development so I cannot compile it into an image

That's something I've seen said many times by many people, but I don't understand it. Your code can be mounted from the host into the container; there's no need to rebuild images with each change. Inside such a "development" container some form of watcher/auto-rebuild can run and recompile & restart your program after changes are made to the code. Isn't that standard practice?
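Something like this, roughly (the Go image is just an example; swap `go run .` for a watcher like air or reflex to get auto-restart):

  services:
    api:
      image: golang:1.22
      working_dir: /src
      volumes:
        - ./api:/src   # code is mounted from the host, no image rebuild needed
      command: go run .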


By definition containers are static. Of course I could use them, but it would essentially only add one step to the build process and I would gain absolutely nothing from it. And I have no reason to use containers only for the sake of using them.


> I would gain absolutely nothing from it

What about isolation? Of runtime, ports, dependencies?


I already have that. Plus, you might have missed that I am talking about the development environment.


> I do not use docker compose because each server is in active development so I cannot compile it into an image

I think a lot of people here can't see how that follows. Docker is literally just the same thing as what's running on the server, except in a chroot-like environment. And it gives you a ton of benefits like isolation, portability, infrastructure as code, a layer of security, etc.


> I do not use docker compose because each server is in active development so I cannot compile it into an image

I'm curious to know: are you actively developing all 5+2 codebases, or could you theoretically run half of them in Docker and the other half (the ones you're actively changing) on the host?


It is one code base with multiple servers in it: 5x Go + 2x Quasar. Mostly two Go servers are actively developed, while the remaining three are quite static with sporadic changes. The UI servers are in active development with hot reload.


Aha nice to know thanks!


I run some (20+) services using docker compose on my home server, and Traefik is great.

Cloudflare manages my domain and it allows Traefik to get letsencrypt certificates even for internal services not exposed to the outside world.

I also have multiple Traefik entrypoints for internal and external services, and a cloudflared tunnel container set up to manage access to the public resources.

Then on the home router level I set/override DNS entries for internal services so they would connect directly to Traefik, instead of going through Cloudflare.

Incredibly, these Cloudflare services cost exactly $0 for now.

But I do not use compose overrides; I don't really see the benefits.


I recently realized the value of Traefik; I'd been avoiding learning about it for years because I'm a skeptic about new stuff.

But the first thing that struck me was "ok, cool auto-discovery", and then "wait, it needs access to all my containers?".

Maybe I'm from the old school, but we used to separate services into their own service users, so that if one service falls it can't take others with it as easily.

Now we just accept that all services run under the same user because we use containerization. Well, I'm still separating them, so I really can't take advantage of the amazing auto-discovery.


Seems like Docker doesn't support fine-grained permissions out of the box, but it should be possible to write a proxy for the Docker unix socket that limits what information gets shared and what commands are available.

This would allow the use of Traefik and similar services which depend on reading/writing labels while severely minimizing the attack surface.

EDIT: Found this on Github https://github.com/Tecnativa/docker-socket-proxy
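From its README, usage looks roughly like this (untested sketch; the Traefik version tag is arbitrary):

  services:
    docker-proxy:
      image: tecnativa/docker-socket-proxy
      environment:
        CONTAINERS: 1   # expose only the read-only containers API
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
    traefik:
      image: traefik:v2.10
      command:
        - --providers.docker.endpoint=tcp://docker-proxy:2375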


I thought that the Cloudflare tunnel was the very expensive pay-per-GB "Argo" product. Really nice to know that they offer a simple and free reverse proxy, thanks!


They made it free a couple years ago and rebranded as Cloudflare Tunnel. Currently an excellent free option though technically you're not supposed to use it for anything other than basic HTML sites.


I recently set it up with a variety of apps (frontend/backend) behind it. Where did you see it should only be a basic html app? Don't recall reading that in their documentation.


I just did a bit of research. They updated[0] their ToS a while back. At the time I asked[1] if people thought this change would apply to tunnels. Based on the answer to that I would say it's no longer prohibited, which is great.

See also this interaction I had with Cloudflare's CEO a couple years ago on this topic: https://news.ycombinator.com/item?id=30285554

[0]: https://blog.cloudflare.com/updated-tos/

[1]: https://news.ycombinator.com/item?id=35963143


Just looked it up, and the Argo thing still exists and the pricing looks the same; couldn't find anything related to basic HTML sites on Tunnel, though.


"it allows Traefik to get letsencrypt certificates even for internal services not exposed to the outside world"

But something has to be exposed to get these certs to the internal services?


You don't have to expose any ports. It uses ACME DNS challenge, which is supported by Cloudflare and other large nameserver providers. https://doc.traefik.io/traefik/https/acme/#dnschallenge
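In Traefik's static configuration that looks roughly like this (email is a placeholder; the cloudflare provider reads an API token such as CF_DNS_API_TOKEN from the environment):

  certificatesResolvers:
    letsencrypt:
      acme:
        email: you@example.com
        storage: /letsencrypt/acme.json
        dnsChallenge:
          provider: cloudflare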


You can also use your home router to do this if you run something like OPNsense (there is also pfSense, but that uses a much older FreeBSD kernel that does not support newer NICs).


As per the article I guess the benefit is specifically for managing compose files across teams through git.


You can set this up with even less customization. Here's my snippet for doing so [0]. This way, all containers that are part of compose projects get assigned a subdomain by the default rule (service.compose-stack.lvh.me). Note that we also expose the Traefik admin interface on `lvh.me`, which is perhaps a little more convenient than using `traefik.me`.

However, even with my tweaks, the overall solution is still limited. Because it's not on "localhost", the browser considers it an "insecure context" unless you also set up local HTTPS.

[0] https://github.com/CGamesPlay/dotfiles/blob/13659d19ca899cea...
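For reference, a default rule along those lines looks roughly like this (my own untested sketch, not the linked snippet):

  command:
    - --providers.docker.defaultRule=Host(`{{ index .Labels "com.docker.compose.service" }}.{{ index .Labels "com.docker.compose.project" }}.lvh.me`)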


That's really nice, and combined with the environment variable solutions suggested by others it could remove the need for the override file entirely. I also like the longer timeout setting, which is suitable for dev.

In theory traefik.me can be set up with HTTPS but I haven't tried it yet.


Looks like we're all solving the same problems. I've used jwilder/nginx-proxy for the same purpose.

Let's Encrypt certificates, reverse proxy and all this stuff. Took a bit, but at the end of the day I'd only have to specify "testing.example.org" in the docker-compose file, run docker-compose up, and everything would be routed correctly. I could check out a different branch and bring up "staging.example.org" within 10 seconds. Sadly, it only worked at home, where this real domain was hooked up to my private router.
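For reference, nginx-proxy keys off a VIRTUAL_HOST environment variable, roughly like this (image name is a placeholder):

  services:
    web:
      image: myapp:latest
      environment:
        - VIRTUAL_HOST=testing.example.org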

Even then, though, it was 10 times better than port routing, localhost development certificates and all the other stuff you need just to test something that uses the camera on a mobile device.


Compose overrides are quite cool but can get very involved.

An easier way is to make the port range dynamic by adding a prefix variable in .env/example.env. Once configured, the whole local dev environment binds to ports in the prefix range, e.g. 342xx.

Experience shows that local devs will need that env file anyway, and adding this config step to the readme is quite effective.


Could you elaborate a little more here please?


Well, the idea is that your port mappings in the docker compose look roughly like this:

  ports:
    - "${PORT_PREFIX}01:80"

This means that devs can drive the port range that the project binds to by editing their .env file.


I like this a lot. It works well alongside setting the compose project name.

However I'd suggest at least specifying a default value so developers don't need to mess with version-controlled .env files to customize their local setups.
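i.e. something like:

  ports:
    - "${PORT_PREFIX:-342}01:80"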


> However I'd suggest at least specifying a default value so developers don't need to mess with version-controlled .env

That's a good point. Most of my projects tend to ship an `example.env` which documents each env var and introduces a sane default.

When introducing the principle to a project, sometimes a dev will complain about config drift and the like, but experience in practice shows that these files tend to change a bit when introduced and then rarely at all, so it really is not that big a deal.


Edit: see TheK's answer, which is virtually identical.


I've been using Yggdrasil to achieve something similar. Basically, my docker compose file includes a service for Yggdrasil that is configured to join the host's Yggdrasil network listening at host.docker.internal. The service uses socat to forward ports from each of the other services. The end result is that each docker-compose.yml gets its own IPv6 address, and all the ports can be kept the same. No need for Let's Encrypt, unless maybe you want the network to be exposed publicly.

It just so happens that I wrote a gist recently that explains how to do this.

https://gist.github.com/Ravenstine/707180ef29e9d37a8f816e019...


This seems like an improvement over my current solution in that it can keep multiple projects open simultaneously and route to each of them, but does add more complexity to the setup.

I'm using Dnsmasq (https://thekelleys.org.uk/dnsmasq/doc.html) to map anything at .lo to the currently running project, like so:

  brew install dnsmasq
  # answer *.lo with localhost (IPv4 and IPv6)
  sh -c 'echo "address=/.lo/127.0.0.1\naddress=/.lo/::1\n" > /usr/local/etc/dnsmasq.conf'
  # tell macOS to use the local dnsmasq for the .lo TLD
  sudo mkdir -p /etc/resolver
  sudo sh -c 'echo "nameserver 127.0.0.1\n" > /etc/resolver/lo'
  sudo brew services start dnsmasq
Would love to expand on that to route to specific projects, but since it's working "well enough" I probably won't touch that for the foreseeable future.


I use Nginx as a reverse proxy, and each service runs on the same internal port. There is a way to configure Nginx natively to dynamically route to the container with the same name. If I need multiple services up locally for development, I bring up Nginx there too, and each service is mapped to a domain that ends with .test, which I have added to local DNS (in my case, /etc/hosts). I find it's better anyway to run development behind a reverse proxy, to catch errors that would otherwise only appear in prod.

The main thing I want to improve is to not use one big compose file for all services; it would be cleaner to have one per service and just deploy them to the same network. But I haven't figured out the best way to auto-deploy each service's compose file to the server (as the current auto-deploy only updates container images).


Can you please elaborate on how to "dynamically route to the container with the same name" with nginx?


Something like this:

  location ~ ^/([a-z0-9_-]+)/ {
    resolver 127.0.0.11;  # Docker's embedded DNS; needed because proxy_pass uses a variable
    proxy_pass http://internal-$1:8000;
  }
We pick up the service name from the URL and use it to select where to proxy_pass to. So /service1 would route to the Docker container named internal-service1. We can reach it by name only as long as Nginx is also running in Docker and on the same network.


My work environment is now similar, using Traefik, Docker Compose and dnsmasq, with easy-to-remember, customized FQDNs such as api.myapp, www.myapp, db.myapp, … I had way too many servers to start for each project, with ports to remember that kept conflicting. The setup is described here: https://www.adaltas.com/en/2022/11/17/traefik-docker-dnsmasq...


I've got one compose file that has like five web services in it. And I had a heck of a time getting a config that allows each contained thing to see the others - and also be visible outside. For now I'm using fixed IPs in the compose so it's all visible. The network can be overridden via env, but I wish I knew an easier way to make containers visible to the outside and to each other. Otherwise, in each service I'd have to keep two references to the other services: the internal name and the external name.


>And I had a heck of a time getting a config that allows each contained thing to see the others

Use a utility container to learn some of Docker's networking options. The network-multitool[1] container is a good one I use.

So for example, create a few containers in a manifest/compose file with different network settings and try pinging, telnetting, etc. from one container to another.

This container can also be used to help debug workload containers that are having connectivity issues. If you need more tooling, just create an image with this container as a base.

[1]https://github.com/wbitt/Network-MultiTool
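For the internal-vs-external name problem specifically, one option (a sketch with hypothetical names) is a network alias, so other containers can reach a service by the same hostname the host uses:

  services:
    api:
      networks:
        default:
          aliases:
            - api.myapp.test   # same name works inside and outside the stack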


I use the extra_hosts parameter for this particular scenario.
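Roughly like this, I assume (hypothetical hostname; host-gateway resolves to the Docker host):

  services:
    web:
      extra_hosts:
        - "api.myapp.test:host-gateway"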


If Traefik is not your thing, I'm happily using https://github.com/nginx-proxy/nginx-proxy and sslip.io for local Docker Compose development.

And then even plain nginx under that to proxy to non-Docker services...

(And IPv6 for really short URLs: example.com.--1.sslip.io etc.)


This had been the bane of my existence for quite some time.

Instead of local dev, I'd just spin up a DO server and develop things there. It's not great -- it's not as fast as my Macs, and I have to maintain two sets of dev environments. But once I get things working on DO it serves as my staging/prod env.


A colleague of mine has a project that's meant to simplify this stack for Drupal sites. But I think it could be repurposed for other platforms fairly easily.

https://source.cknweb.it/open/shuttle


For some reason the idea got new life. Still remember https://lando.dev/ ?


Yeah I use lando. Shuttle is pretty similar in theory, but more lightweight. Lando is a bit of a beast.


Nit: the justified text on mobile makes this a strain to read.

One source: https://www.powermapper.com/products/sortsite/rules/accwcag2...


I use https://ddev.com for almost all of my web project development, which basically automates all of this. Per-project databases, web containers, plugins, etc, and it’s now using Traefik as its router.


It advertises itself as being for PHP development.

Is it flexible enough to handle any HTTP app? Does it support websockets, or maybe arbitrary TCP connections (such as JDBC Postgres or an ActiveMQ broker)?


Uuuhm, what if you just bind to port 0 and let the OS assign you a random port?


I believe this is the core of what Coolify[0] does.

[0] https://coolify.io/


At the agency I work at we have been using https://lando.dev for some time with a lot of success. It is an abstraction over Docker, which isn't ideal, but it allows us to quickly get going with projects and switch between them with no downtime.


+1 for Lando. We used to rely on a bunch of shell scripts based on Docker4Drupal but it was such a PITA to maintain. Lando provides essentially the same customizability with a fraction of the hassle of our previous solution.



