

Docker orchestration - chrisbarra
http://chrisbarra.me/posts/docker-orchestration.html

======
mikewhy
> It is fucki*ng hard to create a postgres database without using psql or
> connect directly to the database.

Well, ideally you would have a script to create the DB and possibly migrate it.
E.g. this will create a Postgres database from `DATABASE_URL`:

    
    
        import dj_database_url
        import psycopg2
        from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT

        # parse DATABASE_URL into connection settings
        config = dj_database_url.config()

        # connect to the maintenance database; CREATE DATABASE can't run
        # inside a transaction, hence autocommit
        db_conn = psycopg2.connect(host=config['HOST'],
            port=config['PORT'],
            user=config['USER'],
            password=config['PASSWORD'],
            dbname='postgres')
        db_conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

        db_cur = db_conn.cursor()
        db_cur.execute('CREATE DATABASE "{}"'.format(config['NAME']))

        db_cur.close()
        db_conn.close()
    

Then it's simply a matter of `fig up` + `fig run web python manage.py
createdb`.
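For reference, `dj_database_url` is essentially a thin wrapper over stdlib URL
parsing. A minimal sketch of what it extracts from a `postgres://` URL (the
function name and example URL here are illustrative, not the library's API):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Sketch: break a postgres:// URL into the parts psycopg2 needs."""
    parts = urlparse(url)
    return {
        'HOST': parts.hostname,
        'PORT': parts.port,
        'USER': parts.username,
        'PASSWORD': parts.password,
        'NAME': parts.path.lstrip('/'),  # path component minus leading slash
    }

config = parse_database_url('postgres://user:secret@db:5432/myapp')
```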

~~~
kdomanski
Not even that is necessary.

> The best solution probably is to create a new database during image
> creation, but in the official Postgres image this option is still not
> available.

The official image's README
([https://registry.hub.docker.com/_/postgres/](https://registry.hub.docker.com/_/postgres/))
says otherwise - both a user and a database will be created with name
$POSTGRES_USER (if set).
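E.g. a fig.yml along these lines (service name and credentials are
illustrative) would get you a ready-made user and database, per that README:

```yaml
db:
  image: postgres
  environment:
    POSTGRES_USER: myapp
    POSTGRES_PASSWORD: secret
```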

Furthermore, one can run postgres in single-user mode to prepare any initial
db image on top of the official one.

------
falcolas
As a point of comparison, an approximation (i.e. untested, probably with some
syntax errors) of the corresponding Ansible tasks:

    
    
        - apt: name={{ item }}
          with_items:
            - postgresql
            - nginx
        - pip: name={{ item }}
          with_items:
            - flask
            - peewee
        - copy: src=nginx/static dest=/www/ recurse=yes
        - copy: src=nginx/sites_enabled dest=/etc/nginx/ recurse=yes
        - service: name=postgresql state=started enabled=yes
        - service: name=nginx state=started enabled=yes
        - postgresql_user: name=foo password=oof priv=table1,table2
        - file: path=/usr/src/app state=directory
        - copy: src=app.py dest=/usr/src/app/app.py
        - copy: src=templates dest=/usr/src/app/ recurse=yes
        - copy: src=python_app.conf dest=/etc/init/
        - service: name=python_app state=started enabled=yes
    

Implementation note: this uses an init file for the python app, which will
depend upon what OS and init system you're running it under. It's also missing
handlers for restarting services when config files change.
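A sketch of what those handlers might look like (handler names and file layout
are illustrative):

```yaml
# e.g. handlers/main.yml
- name: restart nginx
  service: name=nginx state=restarted

- name: restart postgresql
  service: name=postgresql state=restarted
```

The copy tasks for config files above would then gain a matching
`notify: restart nginx` / `notify: restart postgresql` line.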

~~~
altcognito
They seem fairly similar -- this wouldn't provide container separation between
your pg/nginx/python layers, right?

~~~
falcolas
No, though to my mind the separation is not really required for this trivial
of an install.

I would want the separation if I was running untrusted code, or if I was
running custom software which required non-standard library dependencies. Then
I'd want to split off that code, not necessarily every process.

Nginx and PostgreSQL in particular are written well enough to compartmentalize
themselves within their own processes, and don't typically require additional
isolation.

------
kylek
The post mentions losing database data when doing a "rm". Couldn't you set a
volume parameter on the postgres container (to /var/lib/postgres or wherever
data is stored) to prevent that? Or am I missing something?

~~~
rackninja
Yes, you could, since docker-compose (aka fig) supports volumes. Or just run
the database on another machine/remote service and connect to it.
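A minimal sketch of the volume approach in fig.yml (the container path is the
official postgres image's default data directory; the service name and host
path are illustrative):

```yaml
db:
  image: postgres
  volumes:
    - ./pgdata:/var/lib/postgresql/data
```

With a host directory mounted there, removing the container no longer
discards the data.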

------
berbc
It's a bit of a basic overview of docker-compose. I would have liked to see
how to handle more sophisticated patterns such as the data volume container,
and how to manage database initialization, both in development and in
production.

