kurt$ heroku run bash
Running bash attached to terminal... up, run.2
~ $ python -V
Simple Flask app:
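A minimal sketch of what such an app might look like (the route and response text are illustrative, not from the thread):

```python
# Minimal Flask app sketch; names and content are illustrative.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Heroku!'

if __name__ == '__main__':
    app.run()
```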
/tmp $ cd redis-2.2.8/
/tmp/redis-2.2.8 $ make
/tmp/redis-2.2.8 $ cd src
/tmp/redis-2.2.8/src $ ./redis-server
 31 May 15:36:30 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
 31 May 15:36:30 * Server started, Redis version 2.2.8
 31 May 15:36:30 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
 31 May 15:36:30 * The server is now ready to accept connections on port 6379
 31 May 15:36:30 - 0 clients connected (0 slaves), 790480 bytes in use
 31 May 15:36:35 - 0 clients connected (0 slaves), 790480 bytes in use
 31 May 15:36:40 - 0 clients connected (0 slaves), 790480 bytes in use
"Run and scale any type of app."
~ $ go
gobject-query godefs godoc gofix gofmt goinstall gomake
gopack gopprof gotest gotry gotype govet goyacc
~ $ erl
Since it's fairly easy to composite multiple WSGI apps together, one could run a few different Python WSGI-based apps under a single multi-threaded process to take full advantage of the free quota. That would work with Pylons, Pyramid, Flask, and other Python web frameworks that don't rely on a single set of module-global settings (unlike Django).
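The compositing idea can be sketched with plain WSGI, no framework required. This is a hypothetical path-prefix dispatcher (app names and mount points are made up); in practice the mounted apps could be Flask or Pyramid application objects:

```python
# Two stand-in WSGI apps; in practice these could be Flask/Pyramid apps.
def app_a(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from app A']

def app_b(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from app B']

def dispatch_by_prefix(mounts, default):
    """Return a WSGI app that routes requests by path prefix."""
    def application(environ, start_response):
        path = environ.get('PATH_INFO', '')
        for prefix, app in mounts.items():
            if path.startswith(prefix):
                # Shift the matched prefix from PATH_INFO to SCRIPT_NAME,
                # per WSGI convention, so the inner app sees relative paths.
                environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + prefix
                environ['PATH_INFO'] = path[len(prefix):]
                return app(environ, start_response)
        return default(environ, start_response)
    return application

# One process, several apps sharing it.
application = dispatch_by_prefix({'/a': app_a, '/b': app_b}, default=app_a)
```

Werkzeug ships the same idea as `DispatcherMiddleware`, if you'd rather not hand-roll it.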
Add a Procfile and package.json per the docs:
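For reference, the two files would look roughly like this (contents are illustrative, assuming a Node web process):

```
# Procfile: declares the process types Heroku should run
web: node server.js

# package.json: minimal manifest so the Node buildpack is detected
{ "name": "example-app", "version": "0.0.1" }
```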
And it just works:
This means any process type, cron jobs for example, runs for free, as long as only one process is run per app; and as far as I know, you can run as many apps as you want. This sounds like a very nice deal. Am I missing something?
Of course, you can't share multiple apps behind one domain, so if the one dyno isn't keeping up you'll need to scale up and pay for it.
OTOH, if you don't need to worry about any extra latency (maybe?) you could probably get away with a rails+node app like https://github.com/maccman/holla running free by having one app just run the web interface, and point the backend at another app running node.
As soon as you go outside of one tech stack it's cheaper to just get a small VPS to start with, but heroku definitely has its perks.
Will the old configuration style for http caching continue to be supported indefinitely?
I love Varnish, but it's completely inflexible in Heroku so it's always been a mixed blessing. Sticking Google Analytics on your page, for instance, creates cookies for users that mean they will no longer get cached content. It's something that I can work around with dedicated Varnish, but Heroku can't really fix for me.
We are running production apps with Varnish caching + Google Analytics without issue.
Set your cookies with JS or on uncached pages (such as POST logins) and you'll be fine.
You can set cookies on uncached pages in ruby and read them without issue in JS on Varnish cached pages. The page will still be cached even if the user has a session cookie.
Basically: don't set or get cookies in ruby on cached pages and you'll be fine.
"The new HTTP stack does not include Varnish (a reverse proxy cache) because it is not compatible with the advanced HTTP features described above. If you wish to use HTTP caching headers, rack-cache and the memcache add-on offer features and performance comparable to Varnish, but with the additional benefit of giving you complete control over authentication and page expiration."
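The quoted advice boils down to emitting standard HTTP caching headers and letting the cache in front (rack-cache, Varnish, a CDN) honor them. A hypothetical WSGI handler illustrating the idea:

```python
def cached_page(environ, start_response):
    # 'public, max-age=60' marks the response cacheable by shared caches
    # for 60 seconds; any cache honoring HTTP semantics can serve it
    # without hitting the app. The handler and content are illustrative.
    start_response('200 OK', [
        ('Content-Type', 'text/html'),
        ('Cache-Control', 'public, max-age=60'),
    ])
    return [b'<h1>cacheable page</h1>']
```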
(Begs the question though: What are we doing serving pages >1MB?)
Has anybody published a complete SELinux or SMACK policy to use LXC with untrusted users? Last I checked LXC wasn't fully ready yet.
"- Full isolation of processes for security and performance"
Have there been security-related incidents before with their slugs? Is this to address safer multi-tenancy?
In any case, I guess you won't / can't say what you're using instead of those two? ;)