Brief clarification: some of those patterns were a consequence of us having to run Python 2.5 and 2.6 on older boxes - it's a survival technique, if you will - and having to vendor some dependencies to make it easier to deploy stuff under some other interesting constraints the article won't get into.
On the whole, these patterns make it easier for other people to not break your code when they want to add, say, another set of REST endpoints (which was my main occupation at the time).
You might also want to read my short piece on SSE: http://taoofmac.com/space/blog/2014/11/16/1940
And yeah, I've been meaning to change the blog layout. Hardly any time for it these days, honestly.
Was wondering, do you pip freeze > requirements.txt? I don't see the file in there, but I assume that's what you mean since you mention that none of the env folders make it into version control.
On a semi-related note, I used to have tons of projects like this a year ago, and having moved most of our Python code to AWS Lambda, our repos have shrunk to maybe 25% of their original size. Granted, it's not fabulous if you need to run a JS app on top of your Bottle framework, but we used to have a lot of small services here and there, and the provisioning and deployment were so much work.
My (barely existent) OCD is going haywire over this article as I read it...
1. Why store dependencies locally? Seems a bit wasteful and unnecessary with tools like pip/virtualenv. (OP does mention virtualenv, but also "Include ALL the dependencies locally" so I'm not sure what to make of that)
2. You've chosen to separate your code into MVC-inspired directories instead of by function. Flask/Django have ways (apps, blueprints) of making application subcomponents modular. It seems like a single controllers directory and a single templates directory, etc. could get cluttered pretty quickly. What will you do as projects grow?
pip install -r requirements.txt --no-index --find-links=/path/to/wheelhouse
You can learn a bit more about wheel in this very good article:
If your application can't be installed at the system level, I would really recommend this for handling deployment, regardless of whether you have full access or not. Deploying tarballs with configuration management tools like Puppet is hackish at best; RPMs/DEBs really are the best way to roll your software out.
EDIT: As an alternative to virtualenv (which is actually a PITA, since --relocatable is always broken), buildout works great.
For example, I use PyEnv instead of virtualenv/venv because PyEnv is written in bash and has a much better level of isolation than virtualenv or venv. It's simple bash scripting, and the only system dependencies it has depend on which features you choose to use. If you want to build Python from source you'll need compilers, libs, etc., but other than that sort of thing, it's zero dependencies.
Edit: PyEnv also lets me compress an entire Python environment and reuse it somewhere else, provided I'm using the same OS and system libraries, so I can pre-build compilation steps and cut down on system package dependencies in production environments.
Pushing out a new release is as simple as running `koji build f23 git+ssh://my.git.server/my/project.git`, and within 15 minutes it's been published to my internal yum repository and puppet is installing the newest version on all my servers. How is managing pyenv and dealing with fabric or whatever other tool of choice any easier than this?
Because it's all in one language, Python. And when stuff errors out in Python, you have a traceback and (mostly) know exactly what went wrong.
Now when we build our Debian package (debian/rules):
dh_virtualenv -p myproj --use-system-packages
So no offline development if you need to use npm. I can download e.g. Grunt from GitHub directly, but installing the local package immediately tries to download dependencies from npm.
This is a good question, and I think it depends on your toolchain and also the nature of the code you are working with. Not every project you work on will be open source, and its dependencies might be inaccessible from the public containers/VMs where you deploy. So it's much easier to bundle the whole application into one package and push that to the destination, e.g. if you want to deploy your app to Heroku/Cloud Foundry/Bluemix.
I use dokku/CloudFoundry these days and just toss in dependencies to requirements.txt, but at the time I often had to deploy stuff on boxes with _zero_ outside access.
Alas, there is a downside: The JSON spec does not allow comments, which are often important for configuration files. (Though given how much that simplifies parsing, it may have been the right decision.)
So you either omit comments, or end up trying to shove placeholder key/value pairs into the closest spot you can to whatever you want to comment on.
Hackish, but beats having to bother with YAML parsers and their regular RCE vulnerabilities.
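For anyone who hasn't seen the placeholder-key hack in practice, here's a sketch using only the stdlib `json` module. The `_comment` key name and the config values are made up for illustration; the point is that "comments" are just ordinary data the application deliberately ignores:

```python
import json

# JSON has no comment syntax, so "comments" ride along as throwaway
# key/value pairs that the application never reads.
raw = """
{
    "_comment": "retry_count is capped at 10 by the worker pool",
    "retry_count": 5,
    "db": {
        "_comment": "use the read replica in production",
        "host": "localhost",
        "port": 5432
    }
}
"""

def strip_comments(obj):
    """Recursively drop placeholder '_comment' keys so the rest of
    the code sees a clean config dict."""
    if isinstance(obj, dict):
        return {k: strip_comments(v) for k, v in obj.items()
                if k != "_comment"}
    if isinstance(obj, list):
        return [strip_comments(v) for v in obj]
    return obj

config = strip_comments(json.loads(raw))
print(config)  # {'retry_count': 5, 'db': {'host': 'localhost', 'port': 5432}}
```

Stripping the keys after parsing is optional; leaving them in works too, as long as nothing iterates over the dict expecting only real settings.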
Would be nice to hear some discussion there, instead of the column layout.
I haven't used Peewee in a project yet, but I've poked around with it. It's really nice, so thank you.
Not using the ORM (I like writing SQL for some reason) but the connection pooling on Postgres is nice and I'm really enjoying working with it. Thanks for your work.
Celery is a task queue. Pika could definitely be used to build a task queue (similarly to how Celery is built on top of Kombu), but I can't think of many good reasons why that would pay off. Unless your use case was not a task queue...
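To make "task queue" concrete: here's a toy, in-process sketch using only the stdlib. A real Celery or Pika setup would put a broker like RabbitMQ between producer and workers, survive process restarts, and distribute across machines; none of which this has. Everything here (worker count, the sentinel shutdown scheme) is illustrative, not how either library works internally:

```python
import queue
import threading

tasks = queue.Queue()        # producers put work here
results = []                 # workers append outputs here
lock = threading.Lock()      # protects the shared results list

def worker():
    # Pull tasks until the producer signals shutdown with None.
    while True:
        item = tasks.get()
        if item is None:
            tasks.task_done()
            break
        func, args = item
        out = func(*args)
        with lock:
            results.append(out)
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

# Enqueue some work, then one shutdown sentinel per worker.
for n in range(10):
    tasks.put((pow, (n, 2)))
for _ in workers:
    tasks.put(None)

tasks.join()
for w in workers:
    w.join()

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Celery's value-add is everything around this core loop: retries, scheduling, result backends, and monitoring, which is exactly what you end up rebuilding if you roll your own on Pika.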
My Pika/RabbitMQ stuff was much easier to build, unit test and debug (i.e. it actually works). There's every chance that I'm just hopeless when it comes to using Celery, but the fact that I got the Pika version working suggests maybe these were real issues? Dunno, it might all work flawlessly for you, in which case it's a great tool.