

Getting started with Django, Pipeline, and S3 - rigid_airship
http://blog.iambob.me/the-super-stupid-idiots-guide-to-getting-started-with-django-pipeline-and-s3/

======
tmarthal
Note: there is a great Django template at
[https://github.com/tommikaikkonen/django-twoscoops-heroku-s3/](https://github.com/tommikaikkonen/django-twoscoops-heroku-s3/)
that will automate the creation and uploading of the S3 assets. It uses
`collectstatic` to upload the static resources so they can be hosted from S3
(which seems to be the more typical way to do this). This template also uses
Heroku to host the dynamic portion of the app.

You can start your new project with the following command (broken across
lines here for readability, but it should be executed as a single command):

    django-admin.py startproject \
      --template=https://github.com/tommikaikkonen/django-twoscoops-heroku-s3/archive/develop.zip \
      --extension=py,rst,dotfile,rb \
      --name=Procfile,Gemfile,base.html,404.html \
      PROJECT_NAME

Follow the rest of the README and/or buy the rest of the Two Scoops book
([http://twoscoopspress.org/](http://twoscoopspress.org/)) if you're using
Django for any sort of large project (and want a standard/sane project
layout).

~~~
tommikaikkonen
Author of that repo here, I'm glad you found it useful! I've used it as a
starting point for purely personal projects, so beware: it does not have the
polish of a well-maintained Django package. But it's been convenient for
getting new projects up and running quickly on Heroku and S3.

If you decide to use it, you might want to look at the changes to the
original Django Twoscoops Project ([https://github.com/twoscoops/django-
twoscoops-project](https://github.com/twoscoops/django-twoscoops-project))
over the last year or so, as they haven't been pulled into my repo.

------
z1g1
Hopefully the author will see this, but please make sure to add a note about
adding settings.py to the .gitignore file. If your AWS creds are committed to
GitHub, anyone can gain access to your AWS account.
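The usual fix is to never put the keys in settings.py at all. A minimal sketch, assuming the standard django-storages setting names and that the keys live in environment variables:

```python
import os

# Read AWS credentials from the environment instead of hard-coding them,
# so settings.py can stay in version control with nothing secret in it.
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY", "")
```

You'd then export the two variables in your shell (or Heroku config vars) before running the app.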

~~~
rigid_airship
Fixing this now, thanks for the note!

------
lukasm
Flask anyone? :)

~~~
tsumnia
Give me a few minutes and I'll write something up. I actually just finished
Flask-to-S3 on a project I'm working on. Next stop is PostgreSQL to store the
URLs.

EDIT:
[https://gist.github.com/tsumnia/9986374](https://gist.github.com/tsumnia/9986374)

I still need to add proper logging to my application, but for now I print to
the console if there is an error.

~~~
lukasm
Coolio. Instead of uploading to S3 through Flask, you can do a direct upload:
[https://coderwall.com/p/56a9ja](https://coderwall.com/p/56a9ja)
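The direct-upload approach has the server sign an upload policy, and the browser then POSTs the file straight to S3. A minimal, stdlib-only sketch of just the signing step (signature v2, as used in guides of this era; the policy contents are whatever conditions you want to allow):

```python
import base64
import hmac
import json
from hashlib import sha1

def sign_s3_policy(policy_document, aws_secret_key):
    """Sign an S3 POST upload policy (signature v2) for browser-direct uploads.

    Returns the base64-encoded policy and its HMAC-SHA1 signature, which the
    browser submits as form fields alongside the file.
    """
    policy = base64.b64encode(json.dumps(policy_document).encode("utf-8"))
    signature = base64.b64encode(
        hmac.new(aws_secret_key.encode("utf-8"), policy, sha1).digest()
    )
    return policy, signature
```

Flask would render these two values into the upload form, so the file bytes never pass through your dyno.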

~~~
tsumnia
Thanks for this! I'm actually using Heroku as well, so I might make the
switch.

------
danso
I'm a big fan of "idiot" guides to ops/stack-setups, partially because it's
interesting -- and helpful -- to see what authors assume about the
"idiot"...and it's a real art to write something that can appeal to such a
wide variance of non-experts.

That said, let me ask the first idiot-question: is this a precursor to a more
complicated stack? Because what's the use-case for setting up Django with an
asset-pipeline to S3? I mean, what is the _Django instance itself_ running on?
(as I imagine setting up Django on EC2 is generally the harder thing to do)

~~~
herge
Serving static files from nginx still has overhead on the server and can also
be slower than S3. You also avoid a whole raft of problems (backups, running
out of disk space, sharing static files among servers, etc.) by storing files
on S3.
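For context, a sketch of the Django settings that send static files to S3 (assumes django-storages with the boto backend; the bucket name here is a made-up placeholder):

```python
# Serve collected static files from an S3 bucket via django-storages.
STATICFILES_STORAGE = "storages.backends.s3boto.S3BotoStorage"
AWS_STORAGE_BUCKET_NAME = "my-bucket"  # hypothetical bucket name
STATIC_URL = "https://%s.s3.amazonaws.com/" % AWS_STORAGE_BUCKET_NAME
```

With that in place, `collectstatic` pushes to the bucket and templates build asset URLs from `STATIC_URL`.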

~~~
danso
Sure...but _why Django_? If I'm missing something, how do you generate the
static pages from the Django app? I see that django-pipeline compiles and
generates the _asset_ files, but what about `mydjangoapp.com/books/1`,
`mydjangoapp.com/author/smith-jon` and so forth, i.e. the data-pages?

edit: I guess I'm assuming too much...the OP assumes you already have a
Django app online and running, and this process moves over the asset files?

~~~
rigid_airship
Yeah, absolutely right: I'm just using the pipeline for my static files (i.e.
styles, images, JS), not for data pages. One more caveat I forgot to put in
the post but that should be mentioned is the collectstatic command. If you
have a huge collection of static assets, collectstatic is _painfully_ slow
copying files directly to the S3 bucket. In my case I don't have many assets,
so it's not a big issue, but if anyone has ideas for making collectstatic
faster, I'd love to hear them.

~~~
infecto
Check out something like this:

[https://github.com/antonagestam/collectfast](https://github.com/antonagestam/collectfast)

The problem is that collectstatic compares modified dates, which means the
entire static pipeline gets uploaded to S3 every time. Packages like
collectfast use a hash to look for differences and upload only the changed
files. It works reasonably well. We've had a few issues along the way that
we've had to work through, but it's a start.
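For reference, a sketch of the settings its README asked for around this time (treat the exact names as an assumption and check the current docs):

```python
# collectfast wraps the collectstatic command, so it must be listed
# before django.contrib.staticfiles in INSTALLED_APPS.
INSTALLED_APPS = (
    "collectfast",
    "django.contrib.staticfiles",
)

# Fetch the bucket's metadata in one batch so hashes can be compared
# locally instead of hitting S3 once per file.
AWS_PRELOAD_METADATA = True
```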

~~~
rigid_airship
That looks awesome; definitely going to give it a try. Thanks!

