
Do I Need a VPC? - forrestbrazeal
https://info.acloud.guru/resources/do-i-really-need-a-vpc
======
whalesalad
This post really rubs me the wrong way.

"Do the extra security controls of a custom VPC outweigh the increased
risk+complexity of configuring them for this app?" I do not agree with this
way of thinking.

For tiny little hobby projects, sure. For something that is production-grade
and user-facing I would certainly assemble that inside of a private network.

It's not that hard. If you are building production tech you are likely doing
it within the confines of a configuration tool like Terraform anyway -- so do
it right.

~~~
time0ut
I agree we should always strive for defense in depth. However, this only
recently became practical for Lambda-backed web services, due to the extreme
cold starts a custom VPC used to introduce. Now that AWS has largely fixed
that, I don't know of a good argument against it.

------
NikolaeVarius
Reads like an argument to move away from EC2 and into managed services

> Forrest Brazeal is an AWS Serverless Hero and enterprise architect who has
> led cloud adoption initiatives for companies ranging from startups to the
> Fortune 50.

Makes sense

~~~
unethical_ban
Ugh.

There are a load of serverless resources in AWS. Lambda is a place to run code
without containers, VMs, etc. There is no IP address for a Lambda - it "just
runs". Access is governed by the IAM of the AWS account or through an API
gateway.

It is quite possible that you don't need a VPC at all. I'm having a lot of
trouble understanding putting a service running "above" legacy networking into
a private network.

~~~
nunez
You can run Lambdas in a VPC now. It used to be a lot slower because Lambda
had to create ENIs in your VPC to bridge it to the Lambda service's own
network, but that latency has been reduced significantly.
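
For what it's worth, attaching a function to a VPC is now just a
`vpc_config` block in Terraform. A sketch, with made-up names and IDs (the
role, subnet, and security group resources are assumed to exist elsewhere):

```hcl
resource "aws_lambda_function" "api" {
  function_name = "my-api"          # hypothetical name
  runtime       = "python3.12"
  handler       = "app.handler"
  filename      = "lambda.zip"
  role          = aws_iam_role.lambda_exec.arn

  # Attaching the function to private subnets makes the Lambda service
  # create ENIs there. Since the 2019 networking improvements these ENIs
  # are shared across execution environments, so the old per-invocation
  # cold-start penalty is largely gone.
  vpc_config {
    subnet_ids         = [aws_subnet.private.id]
    security_group_ids = [aws_security_group.lambda.id]
  }
}
```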

------
watermelon0
Wait, isn't VPC more or less required unless you have a really old account,
that still has EC2-Classic support?

You either use custom or the default VPC, but in both cases, you are using a
VPC. You can manage them in the same way, and if you want, you can also delete
the default VPC.

*There are resources that don't necessarily use any of your VPCs (e.g. Lambda, Lightsail), but the vast majority of the services require a VPC.

~~~
kawsper
I thought Lightsail was using your default VPC?

~~~
watermelon0
I haven't used it yet, but looking at the docs it seems that it uses Amazon
VPC, but you have an option to peer it with the default VPC.

------
eximius
I mean, don't you _automatically_ have one even if you don't define one? At
least on AWS? In that sense, explicit configuration rather than relying on AWS
defaults that can change underneath you has value.

~~~
kempbellt
You are correct. In AWS, there is a default one provided for you.

The main benefit of creating your own is that you can define your base CIDR
block to be something that makes more sense to you, with subnets separated
out by use case in a more logical and memorable way.

As an example:

A root CIDR block of 10.0.0.0/16 will give you 65,536 addresses - more than
enough for many projects.

You can create a "public" subnet (10.0.1.0/24 - ~250 usable addresses, since
AWS reserves five per subnet) and route all traffic in this subnet directly
to an internet gateway. Things you want to put in here include load balancers
and your NAT gateway (which will be used by private subnets).

Then create a "private" subnet (10.0.2.0/24) and route all traffic in this
subnet to your NAT gateway (which lives in the public subnet). Resources in
this subnet will have all internet traffic routed through the NAT gateway,
preventing direct access from the internet, just like your home router is a
buffer for your PC.

You can get creative here. Adding a subnet 10.0.16.0/20 will give you ~4096
addresses, which is useful for Lambdas that need to scale horizontally (each
one consumes a private IP address). Also, if your VPC-attached Lambdas need
internet access, to call an external API for example, you _must_ route their
traffic through a NAT gateway.

It may seem like a lot to set up, but once you have it up and running, you
save yourself a lot of headache in the long run. You can quickly identify
network configuration errors by knowing the local IP of a resource. "Why
can't I ssh directly into my EC2 instance? Oh, because its IP is 10.0.2.45,
meaning it's in my private subnet."
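
The layout above boils down to just a handful of Terraform resources. A
sketch, not a drop-in config - resource names are invented, and things like
AZ placement and tags are omitted:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Public subnet: default route goes straight to the internet gateway.
resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# The NAT gateway lives in the public subnet, with its own Elastic IP.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  subnet_id     = aws_subnet.public.id
  allocation_id = aws_eip.nat.id
}

# Private subnet: default route goes to the NAT gateway instead, so
# resources here can reach out but can't be reached directly.
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}
```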

------
danial
If you are hand-managing VPCs then there is every chance that this additional
layer of complexity will lead to mistakes. You could argue that the increase
in cognitive load of managing them can offset the benefits of an additional
defense-in-depth control. New developers joining the team are likely to make
mistakes and this is the sort of thing that doesn't get caught in code reviews
either.

However, VPC configurations are an essential defense-in-depth control that
can be programmatically managed. AWS-managed VPCs are certainly not
hand-managed.

While maintaining CloudFormation or Terraform templates is still a pain, the
good news is that it is becoming increasingly easier via frameworks like the
AWS CDK. These allow your deployment code to programmatically generate the
infrastructure and VPC configuration, which decreases the likelihood of
configuration mistakes and increases the chances of such mistakes being
caught during code reviews.

~~~
braindongle
Yes. Also, if you're not serverless but are containerized, ECS/Fargate has a
simple workflow through the console that in turn runs Cloudformation and sets
things up (VPC, gateway, load balancer, security groups...) with sensible
defaults. You do still need to learn how to lock things down, inbound/outbound
rules especially. For pros, the console is simply not the way, but for your
first Spiffy Dockerized App, this is great.

Also, the new Amazon-managed firewall rules for web-apps are killer for app
developers who are not security pros![0]

Lest this sound like Fanboyism, our long-term strategy is Firebase, calling
AWS APIs when necessary :)

[0] https://aws.amazon.com/blogs/aws/announcing-aws-managed-rules-for-aws-waf/

------
kempbellt
If you're doing anything in the cloud, yes, it is a very good idea to
configure one for your project. It really isn't too complicated to set one up,
and there are a lot of good articles to walk you through it.

DevOps pro tip: Take the time to script out your VPC configuration, and then
don't think about it again. You'll thank yourself later for having a more
secure, scalable solution that you can reuse for various projects or at new
jobs.

~~~
vandahm
At work, we have a Terraform module that receives a few input parameters and
generates a VPC that complies with all of our corporate security guidelines.
The module isn't complicated, and it ensures that we get it right every time.

We do this for every commonly-used AWS resource -- S3 buckets, SQS queues,
etc. It eliminates most of the risk of misconfiguration, allows our
infrastructure changes to be code-reviewed and unit tested, and eliminates a
lot of the fiddly crap work associated with setting up something on AWS.

I do some volunteer IT work for the nonprofit makerspace in my town, and I
obviously don't have access to my employer's toolchain when I have to work in
the makerspace's AWS account. It really is a night-and-day difference.
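
From the caller's side, a module like that collapses all the VPC plumbing
into a few inputs. Roughly (the module path and variable names here are
placeholders, not a real registry module):

```hcl
module "service_vpc" {
  source = "./modules/company-vpc"  # hypothetical internal module

  # The module fans these out into subnets, route tables, NAT gateways,
  # flow logs, and whatever else corporate policy requires.
  name       = "makerspace"
  cidr_block = "10.42.0.0/16"
  az_count   = 2
}
```

Because every team calls the same module, a policy change lands in one
reviewed place instead of in every project's hand-rolled VPC.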

------
vandahm
To me, this reads like an argument for a serverless architecture and not a
discussion about VPCs themselves. But that's not always on the table. Even if
a serverless architecture is better for some use case, if you have a
traditional application that already works, it's inconceivable that setting up
and monitoring the VPC is more effort than a massive rewrite to go serverless.

