
HTTPS-Portal: Automated HTTPS server powered by Nginx, Let’s Encrypt and Docker - ducaale
https://github.com/SteveLTN/https-portal
======
snorremd
Træfik works pretty well as an automated https proxy as well:
[https://traefik.io/](https://traefik.io/) It is still missing a caching
feature though, so it might not be a good fit for everyone. It has a Docker
backend which works with Docker labels (much the same way the https-portal
project uses environment vars).
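For comparison, here's a minimal sketch of what the label-based configuration looks like, assuming Traefik 1.x label syntax and a made-up service and domain:

```yaml
# docker-compose.yml fragment (hypothetical service; Traefik 1.x labels)
version: "2"
services:
  whoami:
    image: emilevauge/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:whoami.example.com"
      - "traefik.port=80"
```

Traefik watches the Docker socket and picks the container up without a restart.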

~~~
mathnmusic
Is it possible for me to dynamically configure Traefik for non-Docker
environments? I would like to run a Traefik instance which acts as an https
proxy for arbitrary domains, hosts and ports - without having to restart at
all.

~~~
tylerjl
As long as you're using one of the dynamic backends that Traefik hooks into,
such as Consul for example, you can do this pretty easily.

I do this exact thing with Traefik, Consul, and various services that don't
fall under the dynamically-discovered umbrella the way they might under k8s or
Nomad. I write a value to Consul with the desired Host: match and a backend
server to route requests to, and Traefik will handle routing to any backend
server it can reach, without restarts or interrupting service.
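For the curious, with Traefik's Consul KV backend the writes look roughly like this (Traefik 1.x key layout; the service name and addresses are made up):

```shell
# Announce a backend server and a Host: rule via Consul's KV store
consul kv put traefik/backends/myapp/servers/server1/url http://10.0.0.5:8080
consul kv put traefik/frontends/myapp/backend myapp
consul kv put traefik/frontends/myapp/routes/main/rule "Host:myapp.example.com"
```

Traefik watches the KV prefix and starts routing requests for myapp.example.com to the backend without a restart.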

------
tjbiddle
docker-compose is not meant to be used in production.

Compose _files_ can be; however, they're used in conjunction with Docker Swarm
- and when that's done, certain features are made available while others are
not. Networks would be used in this case rather than the `link` directive.
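As a sketch, a version 3 Compose file for `docker stack deploy` would put both services on a shared network instead of using `link` (service and network names here are illustrative):

```yaml
# docker-compose.yml for Swarm; "web" reaches "db" by service name over "backend"
version: "3"
services:
  web:
    image: my-app        # hypothetical image
    networks:
      - backend
  db:
    image: postgres
    networks:
      - backend
networks:
  backend:
```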

~~~
ripdog
Why shouldn't I? If I was just going to use this to host a single website,
what's wrong with running docker-compose?

~~~
bfirsh
Nothing at all, as long as you don’t mind the unreliability of a single
server. (I am the author of Compose.)

Honestly though, I would probably use Heroku for deploying little apps like
this. The Compose CLI’s sweet spot is development environments.

Edit: There are even official docs for this
[https://docs.docker.com/compose/production/](https://docs.docker.com/compose/production/)

~~~
Already__Taken
I have always been confused by examples of things that are deployed with
docker-compose. And I'm sat here with baby's first CoreOS cluster wondering
how to mesh the two.

------
rapnie
This is a great project, and exactly what I was looking for. Now doing manual
cert renewal and looked into using jwilder's docker-gen to automate, but that
was an involved process. This brings it all together. Thank you!

~~~
dvfjsdhgfv
I'm sorry, I really don't get it. What do you need Docker for? Let's Encrypt
and Nginx give you practically full cert automation. Maybe there's some
crucial bit of information I'm missing here?

~~~
sureaboutthis
I don't get it either even if one uses Docker. I use jails on FreeBSD but,
otherwise, I have the same set up that does the same thing by just configuring
what comes with all that so I don't understand the point of this.

~~~
aaaaaaaaaab
Can you have a self-contained pre-built FreeBSD jail that you can fire up
without doing additional setup? Like a prebuilt Docker container with all the
necessary stuff already installed.

~~~
sureaboutthis
I'm a professional. I don't need someone else to configure my system tools for
me.

~~~
ric2b
Professional doesn't mean you have to do everything yourself, this saves time
so you can do the important stuff that can't be automated.

~~~
sureaboutthis
You only do it once. Then, with this tool, you'd still have to do everything
it needs too; adjust as necessary. When you do your own config, you know
what's going on and how to fix and adjust it. And it's one less tool you need
to learn and maintain.

------
ssijak
Or use Caddy [https://caddyserver.com/docs/automatic-https](https://caddyserver.com/docs/automatic-https)
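For reference, automatic HTTPS in Caddy takes nothing more than a site address in the Caddyfile (Caddy 1.x syntax; the domain and upstream port are placeholders):

```
example.com {
    proxy / localhost:8080
}
```

Caddy then obtains and renews the Let's Encrypt certificate for example.com on its own.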

~~~
graup
Looks cool, but it's not free for commercial use. Instead of paying
$25/instance/mo I can set it up myself with the https-portal docker config.

~~~
bpizzi
It's not free if you use the binaries built/downloaded from caddyserver.com.
It's still free if you build the binaries yourself from source [0].

[0] [https://caddyserver.com/products/licenses](https://caddyserver.com/products/licenses)

~~~
morpheuskafka
And there is a freely licensed Docker Hub image that builds from source:
[https://github.com/abiosoft/caddy-docker](https://github.com/abiosoft/caddy-docker)

------
TheGrumpyBrit
I've been using [https://github.com/jwilder/nginx-proxy](https://github.com/jwilder/nginx-proxy)
and [https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion](https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion)
to achieve this for a year or two now. I prefer it because the domain
configuration is set on the backend site, rather than on the proxy image
itself, which means you don't have to worry about cleaning anything up when
you remove an image.
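Concretely, the backend container announces its own domain through the companion's documented environment variables (`VIRTUAL_HOST`, `LETSENCRYPT_HOST`); the service name, image and domain below are made up:

```yaml
# docker-compose.yml fragment: the proxied site carries its own vhost config
version: "2"
services:
  mysite:
    image: my-backend-image        # hypothetical
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=admin@example.com
```

Remove the container and its vhost and certificate config go with it; nothing to clean up on the proxy side.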

~~~
dexterbt1
Same with me; they're my go-to dockerized automated https reverse proxy setup.
Takes a few minutes to set up.

Recently though, I grew tired of handling 2 or more Docker images, then I
discovered [https://github.com/Valian/docker-nginx-auto-ssl](https://github.com/Valian/docker-nginx-auto-ssl).
One image, set up in a single command, without the volume-sharing complexity
of jwilder/nginx-proxy. One caveat though is the larger CPU overhead (from
Lua) when handling high volume or high reqs/sec.

------
fulafel
Why are reverse proxies (in memory-unsafe languages, no less) so popular? They
are unnecessary in most cases, and they add more complexity and hinder
transparency more than the alternatives.

~~~
BjoernKW
They're popular because, for better or worse, Microservices are popular.

If you want to serve Microservices from a common host name as if they were a
single application, e.g. for public users of an API, you need some sort of
mapping between internal and external URLs.

Service discovery is another approach but that's probably only applicable to
internal service usage.
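That URL mapping is typically just a few `location` blocks in the proxy, e.g. in nginx (the host name, paths and upstreams are invented):

```nginx
# One public host name fronting two internal services
server {
    listen 443 ssl;
    server_name api.example.com;

    location /users/  { proxy_pass http://users-service:8000/; }
    location /orders/ { proxy_pass http://orders-service:8000/; }
}
```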

~~~
fulafel
How about just returning the service URL along with the API auth token? That
would enable load balancing and failover too.

~~~
BjoernKW
Yes, but that'd require a more complex process on the client.

At the least you'd have to send an additional request: e.g. instead of just
calling

[https://someserviceprovider.com/serviceA](https://someserviceprovider.com/serviceA)

you'd first have to call a token endpoint, which for example would return

{ "url": "https://someserviceprovider.com:8081", "auth_token": "..." }

and only then could you call the actual service under
[https://someserviceprovider.com:8081](https://someserviceprovider.com:8081)

------
DanielDent
This seems like it might be a new "hello world" for devops-inclined people.

I've authored a similar Docker image with fewer features:
[https://github.com/DanielDent/docker-nginx-ssl-proxy](https://github.com/DanielDent/docker-nginx-ssl-proxy)

(Although lately I've been finding my cookie-or-IP-or-HTTP-basic auth feature
extremely useful in development, which this doesn't seem to have, judging from
the README.)

It hasn't been updated in a while, but I've also got an automatic,
service-discovery-based version of this for Rancher 1.6:

[https://github.com/DanielDent/rancher-nginx-active-lb](https://github.com/DanielDent/rancher-nginx-active-lb)

------
rawoke083600
I don't get Docker... I get it for things at scale... but the everyday stuff?
Maybe I'm just old!

~~~
nothrabannosir
Example: this company had part of their site built by some abstruse static
site generator with a million dependencies, written in a language that no one
at the company had installed by default, nor knew (I think it was Ruby). We
put it all in Docker, and the README changed from 10 lines of "install x, y, z
version foo to /random" to just "run docker build, then docker run." Most of
the people working on those docs didn't need to know about Ruby or gems or
lock files or any of that.

A year later I overheard someone say "thank god that thing was dockerized,
because I was absolutely not looking forward to installing all those
dependencies just for a typo fix."

That’s one of the areas where docker shines: app delivery.

~~~
Fredej
I'm also still not on the Docker wagon and would like to know what I'm missing
out on. Most of my stuff is Python. Why is Docker better than virtualenv with
a requirements.txt file?

From this description it seems to solve the same issue.

I could imagine something like environment variables, but on the other hand
that's something I've learned not to keep in version control, and putting it
in docker would be exactly that, no?

~~~
anyzen
> Why is docker better than virtualenv with a requirements.txt file?

It's not; it solves a different issue. If you only have Python as a
dependency, a requirements.txt is fine (well, the user needs to install the
correct versions of Python, pip and virtualenv / pipenv, but that's doable).
But as soon as your app is actually composed of nginx / Apache, Python, some
background process in Rust, bash scripts for cron jobs, ... then you have a
problem with app delivery, which Docker solves nicely. Just package everything
in a Dockerfile and distribute the image. Bonus point: you can now test it
locally, with the same installation.
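As a sketch of that delivery story, everything ships as one image (the base image, packages and entrypoint here are assumptions, not a recipe from the thread):

```dockerfile
# Hypothetical all-in-one app image: Python app plus its non-Python pieces
FROM python:3-slim
RUN apt-get update && apt-get install -y nginx cron \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app/
# start.sh (assumed to exist in the repo) would launch nginx, cron and the app
CMD ["/app/start.sh"]
```

The person deploying runs `docker build` and `docker run` and never touches pip or apt directly.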

I jumped on the Docker wagon very early for this exact reason. I don't care
about hype, but it does solve these kinds of problems.

------
enriquto
What is the difference between "automated", "fully automated", and nothing?
Aren't all algorithms automatic?

------
tvaughan
[https://gitlab.com/tvaughan/docker-flask-starterkit](https://gitlab.com/tvaughan/docker-flask-starterkit)

