Using an appropriate base image (debian for dev, alpine for production).
No you don't. It's nonsensical to use a "convenient" development environment and then make the production application work in something more bare-bones/stripped-down/different. I've never used Alpine Linux, but even I know there are a LOT of subtle differences between it and Debian. Nontrivial stuff that matters, like the C library implementation (GNU libc vs. musl) and the init system (systemd vs. sysv init). You don't want that, because it opens up a whole multiverse of potential surprises, all of them bad, that you'll be unprepared to debug.
I agree on the base image point. Although if we have Python apps in the image, it should not matter (both 2-alpine3.7 and 3-alpine3.8 work ok).
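For a pure-Python dependency set, switching bases really is just a matter of changing the FROM line; a minimal sketch, assuming nothing in requirements.txt needs to compile C extensions:

FROM python:3-alpine3.8
# the rest of the Dockerfile stays the same regardless of the base tag
COPY src/requirements.txt ./
RUN pip install -r requirements.txt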
Systemd vs sysv init is not an issue in Docker containers, since an init system is (mostly) not used there. If you do need an init process, you are probably using tini (included with recent Docker) or supervisord (usually used for older/legacy C/C++ apps).
I can attest to that.
We once had a Python app running through gunicorn that segfaulted on Alpine while it worked fine with a CentOS image.
Process reaping and signal forwarding are solved by using tini. You can run it from inside the container by installing it and changing the entrypoint/cmd, or you can enable it in your docker run/docker create command by adding "--init".
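A minimal sketch of both options; the tini Debian package, the "myapp.wsgi:application" module path and the "myimage" name are all assumptions here:

# option 1 (Dockerfile): install tini in the image and make it PID 1
RUN apt-get update && apt-get install -y tini
ENTRYPOINT ["tini", "--"]
CMD ["gunicorn", "myapp.wsgi:application"]

# option 2 (run time): skip the install and let Docker inject its bundled tini
docker run --init myimage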
I haven't seen the Phusion base images in production in almost a year. Usually, as an organization gets more experience building images, it moves to better solutions than the Phusion images.
tldr
Don't use Phusion base images. If you need ssh, logrotate, etc., use LXD or a VM.
If you are not concerned about the size of your production image, and your modules don't compile under Alpine, you can stick with the same Debian base image for both.
Or you can use Alpine for dev as well, if you are not too concerned about debugging.
I would say it boils down to use cases and priorities.
Especially these two, with Alpine's different C library underneath everything. Perhaps dev on a Debian base and production on one of those single-binary-only containers.
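Not quite a single-binary image, but a multi-stage build gets you much of the same effect while keeping the Debian toolchain around for compilation; a rough sketch, where the python:3.7 tags and the src/ layout are assumptions:

# build stage: compile wheels on the full Debian-based image
FROM python:3.7 AS build
COPY src/requirements.txt ./
RUN pip wheel --wheel-dir=/wheels -r requirements.txt

# runtime stage: install only the prebuilt wheels into the slim image
FROM python:3.7-slim
COPY --from=build /wheels /wheels
COPY src/requirements.txt ./
RUN pip install --no-index --find-links=/wheels -r requirements.txt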
Just curious, why do you need gunicorn for auto-restarting the server inside a Docker container if you can just restart the container? It's a pretty lightweight operation for config changes, imo.
Yes, you can restart the container too, but it will not be real-time; sometimes restarting through the Docker daemon takes longer, and you end up waiting just to check your update.
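Worth noting that for a pure config change you don't even need a full restart, since gunicorn reloads its configuration and gracefully replaces its workers on SIGHUP; a sketch, with "myapp" as a hypothetical container name:

# full restart: stops the container and re-runs its entrypoint
docker restart myapp
# config-only reload: gunicorn re-reads its config and swaps workers in place
docker kill --signal=HUP myapp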
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
This has always confounded me, as we always put `-e .` in our requirements.txt. Has this fallen out of favour? It doesn't appear the author's application needs to be installed the way most Pyramid or Click applications do.
I haven't really looked at Pyramid in years (I do mostly Django). Why does it need `-e .`? In the projects I've worked on, that would be a really weird thing to do.
`-e` in a requirements.txt is always a bit of a hack. If you really have to do it, it's highly recommended to pin it to a particular sha1 or tag. But I guess my bias is from spending a lot of time and effort trying to get python projects to be as close as possible to reproducible.
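For the pinning, pip's VCS syntax takes a commit hash or tag directly; a sketch with a hypothetical repo:

# requirements.txt: editable install pinned to an exact commit
-e git+https://github.com/example/mypackage.git@9f2c1a7#egg=mypackage
# or pinned to a tag
-e git+https://github.com/example/mypackage.git@v1.2.0#egg=mypackage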
Apart from gunicorn being easy to configure, I just happened to use gunicorn first. I'm sure the same Dockerfile can be applied with uWSGI. Are there any benefits to using uWSGI over gunicorn? Open to exploring.
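For reference, the gunicorn-specific part of such a Dockerfile really is just the CMD line, so swapping servers is cheap; a sketch, with the module path and port being assumptions:

# bind to all interfaces so the published port is reachable from outside the container
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "myapp.wsgi:application"]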