

New Dyno Networking Model - friism
https://blog.heroku.com/archives/2013/5/2/new_dyno_networking_model

======
sehrope
A nice follow-up to this would be NATing external access from all dynos (web
and worker) for an application through a single unique IP per app. That way
external resources (e.g. your app's database) can be whitelisted by IP. One of
the reasons the major Postgres bug last month was so bad for DBaaS providers
(like Heroku Postgres) was that most leave inbound access totally open. That's
great when you're first getting started, since you don't need to explicitly
configure firewall settings, but it's terrible for production.
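
The whitelisting idea reduces to a membership check against a small set of
allowed source addresses. A minimal sketch (the egress IP below is a
placeholder for the app's single NATed address, not a real Heroku address):

```python
# Minimal sketch of inbound IP whitelisting: accept connections only
# from the app's known NATed egress IP(s) instead of 0.0.0.0/0.
# 203.0.113.10 and 198.51.100.7 are documentation-range placeholders.
import ipaddress

ALLOWED = [ipaddress.ip_network("203.0.113.10/32")]

def is_allowed(client_ip: str) -> bool:
    # True if the client's address falls inside any whitelisted network.
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("203.0.113.10"))  # True  (the app's NATed egress IP)
print(is_allowed("198.51.100.7"))  # False (everyone else)
```

In practice you'd express the same rule in the database's firewall or
host-based access config rather than application code, but the check is
the same.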

I'm not sure how economical it is to provide this for the free tier, though I
think a paid option would be viable. AWS charges $3.60/IP/month, so it's not
_that_ expensive, and I'm sure there are plenty of folks who would pay $10,
$20, or even $50/month for a unique outbound address.
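
For reference, that per-IP figure lines up with AWS's commonly cited
$0.005/hour Elastic IP charge (an assumption on my part, not stated in the
article):

```python
# Back-of-the-envelope check of the $3.60/IP/month figure, assuming a
# $0.005/hour Elastic IP rate and a 30-day month.
hourly_rate = 0.005
monthly_cost = hourly_rate * 24 * 30
print(round(monthly_cost, 2))  # 3.6
```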

~~~
cachvico
Funnily enough this happened with Heroku and Facebook just a couple of days
ago - a bunch of apps got blocked from using the Graph API until Facebook
removed the banned AWS IPs.

<https://addons.heroku.com/proximo> does what's needed, but at a serious price
for high volume.

~~~
sehrope
Wow, that is pretty steep pricing. The lowest tier that seems usable is the
$75/mo one. Then again, if you're locked into Heroku, even $1,250/mo isn't
_that_ much for peace of mind and added security. I'm sure whoever is paying
for it is happy it's available.

This was one of the reasons we handled server deployments ourselves for our
startup. The cloud version of our app runs in production on AWS in a VPC so
all outbound traffic is NATed through a single public IP. With reserved
instances it costs about $22/mo for the NAT gateway and setup was pretty
straightforward.

~~~
agwa
> all outbound traffic is NATed through a single public IP. With reserved
> instances it costs about $22/mo for the NAT gateway and setup was pretty
> straightforward.

I've considered doing this myself so I'm curious: are you getting good network
performance through the NAT gateway? What instance type are you using?

~~~
sehrope
No issues to speak of so far. Our app databases are in the VPC itself, so our
own traffic does not get routed through the NAT gateway, but I haven't heard
of any issues from users either.

At the moment we're using an m1.small for the NAT itself, though if you have
perf issues you can bump that up as well. I would guess a c1.medium would be
more appropriate, though as I said we haven't had any issues so haven't
considered changing anything yet.

Here's a speed test from an m1.small through the NAT (27.5 MB/s):

        $ curl -o /dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  500M  100  500M    0     0  27.5M      0  0:00:18  0:00:18 --:--:-- 29.2M

Here's a speed test from an m1.small vanilla EC2 instance outside the NAT (38.2
MB/s):

        $ curl -o /dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  500M  100  500M    0     0  38.2M      0  0:00:13  0:00:13 --:--:-- 34.1M

~~~
agwa
This is great info - thanks! :-)

------
rdl
A bit puzzled why people would use the local IP address for inter-process
communication on a single host, rather than unix domain sockets.

~~~
fabiokung
Not all services support unix domain sockets. It's one more option on the
table, but I agree that domain sockets should be favored in most cases.
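
To make the contrast concrete, here's a minimal sketch (not tied to Heroku's
stack) of the same echo round-trip over each transport, a unix domain socket
vs. loopback TCP:

```python
# Echo round-trip over the two local-IPC options discussed here:
# a unix domain socket (addressed by filesystem path) and loopback TCP
# (addressed by IP:port, the fallback for services that don't support
# unix domain sockets).
import os
import socket
import tempfile
import threading

def echo_once(server_sock):
    # Accept a single connection and echo one message back.
    conn, _ = server_sock.accept()
    conn.sendall(conn.recv(16))
    conn.close()

def round_trip(server_sock, connect_addr, family):
    server_sock.listen(1)
    t = threading.Thread(target=echo_once, args=(server_sock,))
    t.start()
    client = socket.socket(family, socket.SOCK_STREAM)
    client.connect(connect_addr)
    client.sendall(b"ping")
    reply = client.recv(16)
    client.close()
    t.join()
    server_sock.close()
    return reply

# Unix domain socket: never touches the IP layer at all.
path = os.path.join(tempfile.mkdtemp(), "ipc.sock")
unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_srv.bind(path)
print(round_trip(unix_srv, path, socket.AF_UNIX))  # b'ping'

# Loopback TCP: same pattern, via 127.0.0.1 and an ephemeral port.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
print(round_trip(tcp_srv, tcp_srv.getsockname(), socket.AF_INET))  # b'ping'
```

The code is identical apart from the address family, which is why per-dyno
local IPs are a convenient lowest common denominator even where domain
sockets would be preferable.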

------
zimbatm
This seems to go contrary to the idea of managed processes. Wasn't Heroku
supposed to abstract away the idea of a host altogether?

~~~
fabiokung
It is still abstracted; nothing changes from the point of view of most
applications.

It just opens up more possibilities and increases the isolation.

~~~
zimbatm
From the article:

> A dyno should let you run any application or set of processes that you would
> run on your local machine or on an old-school server.

As soon as you allow multiple processes per dyno, the abstraction becomes less
clear. It means that a dyno is now more like a small VPS, and I have to know
which process is on which dyno for communication.

~~~
eli
You don't have to run multiple processes if you don't want to. You shouldn't
have to change the way you do anything if you're happy with your app now.

------
exogen
You used to be able to peek at how many other dynos you were sharing an
instance with by running

        netstat -l | grep lxc | wc -l

I sampled the resulting number a bunch of times and usually got 100 ± 25. I'm
guessing this isn't possible anymore – not that it was useful, just
interesting.

~~~
tlrobinson
Back in the day you could just do "ps -x" and see every process running on the
machine :)

