Hacker News

I wonder why Instagram wasn't using VPC in the first place. I've been using AWS for a startup for a few years now and I had our instances running in VPC from about the second month onward.

It's been one of the best architecture decisions I've ever made. At this point we use only one public IP address. (If direct access to a machine is needed, you can connect via a VPN running on the one bastion host with the public IP, which gives your machine access to the private IP addresses of instances running inside the VPC.)
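For anyone wanting to replicate this, the bastion pattern above can be sketched with an SSH client config like the following (the hostnames, IPs, and key path are made up for illustration):

```
# ~/.ssh/config -- reach private VPC instances through the single public bastion
Host bastion
    HostName 203.0.113.10            # the one public (Elastic) IP
    User ec2-user
    IdentityFile ~/.ssh/bastion.pem

Host 10.0.*.*                        # private VPC address space
    User ec2-user
    ProxyJump bastion                # tunnel through the bastion host
```

With that in place, `ssh 10.0.1.25` transparently hops through the bastion, so no private instance ever needs a public address.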

All the machines in our cluster are protected inside the VPC's private address space, with external access limited to ELBs exposing public service endpoints like the API and website. I can't think of any good reason not to use VPC from the start. Giving private machines public IP addresses sounds like a recipe for disaster if you ever accidentally miss a port in your security rules.

Mike from IG here. VPC was barely a thing when we got on AWS (2010) and at the time not the default. I would definitely have done VPC from day 1 in hindsight, though.

Hindsight is 20/20.

I think you guys did an exceptional job tackling a really difficult problem. I've been in the same position, migrating from EC2 to datacenters, and we determined that EC2 -> VPC -> datacenters is really the only way. Neti solves it surprisingly well.

Going forward, I hope that acquired companies opened their AWS accounts late enough that Amazon forced them onto VPC.

We're comparatively small - 20-30 servers max - and we need to get into VPC for a new cluster that requires static internal IPs. (Reboot an EC2-Classic instance and you may get a different 10.x address.)
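For anyone hitting the same limitation: in a VPC you can pin the private IP at launch time. A rough sketch with the AWS CLI (the AMI and subnet IDs are placeholders):

```shell
# Launch into a VPC subnet with a fixed private IP, unlike EC2-Classic
# where the 10.x address can change across stop/start.
# ami-12345678 and subnet-0abc1234 are placeholder IDs.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --subnet-id subnet-0abc1234 \
    --private-ip-address 10.0.1.50 \
    --instance-type t2.micro
```

The instance keeps 10.0.1.50 for its lifetime within the subnet.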

In any case, the migration is daunting even at our size, especially since our devops team size is 1. I do wish they'd had VPC when we started.

You could also just attach EIPs and use those, right?

In an incredibly late reply - EIPs are public-facing; I need internal IPs for the fastest possible LAN routing.

If you assume they had no pressing need for VPC-specific functionality, you can get similar security by locking your security group(s) down so public service ports only accept traffic from the ELB, and having one instance in a separate security group with SSH/VPN allowed (from specific IPs) as a jump box/VPN. Spending weeks of multiple teams' engineering time moving to VPC without a pressing need would make little business sense to me.
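A rough sketch of that setup with the AWS CLI (the group IDs and office IP are placeholders, not real values):

```shell
# App servers: accept HTTP only from the load balancer's security group,
# never directly from the internet. sg-app / sg-elb / sg-bastion are
# placeholder group IDs.
aws ec2 authorize-security-group-ingress \
    --group-id sg-app \
    --protocol tcp --port 80 \
    --source-group sg-elb

# Jump box: SSH allowed only from one specific office IP.
aws ec2 authorize-security-group-ingress \
    --group-id sg-bastion \
    --protocol tcp --port 22 \
    --cidr 198.51.100.7/32
```

Referencing the ELB's security group as the source (rather than a CIDR) is what keeps the public ports closed to everything except the load balancer.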

Agreed. This is the route I use and it works fine. I can see how it could quickly get out of hand with a lot of security groups, and I would love some sort of security group inheritance, but for ~100 instances it is not that hard to keep public access limited to the ELB.
