
Serverless Containers with Google Cloud Run - mikkelbd
https://thecloud.christmas/2019/7
======
paulddraper
For those like me looking for a comparison point:

> In some sense you can compare it to AWS Fargate and Azure Container
> Instances. But the difference here is that Cloud Run will actually
> automatically scale to zero, and you only pay for resources during a
> request.

This seems really cool.

On the one hand, I appreciate the simplicity of serverless "Cloud CGI", as it
were. But on the other hand, I have been confused about why these offerings
always tie you to a particular language and runtime.

Why can't I just run a binary in response to requests, i.e., real cloud CGI?

This seems made for me.

~~~
Can_Not
You should be able to put any binary into a Docker container for Cloud Run.
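Right — as far as I understand, Cloud Run's only contract is that the container serves HTTP on the port given in the $PORT environment variable; any language or static binary that does that can be packaged up. A minimal sketch in Python (the handler and response text are just illustrative):

```python
# A minimal sketch of a Cloud Run-compatible service: listen for HTTP
# on the port Cloud Run injects via $PORT (defaulting to 8080 locally).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Cloud Run\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    # Cloud Run sets PORT; fall back to 8080 for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()

if __name__ == "__main__":
    main()
```

The same shape works for a Go or Rust static binary; the platform only cares about the HTTP-on-$PORT contract, not the runtime inside the image.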

~~~
paulddraper
That was my point; Cloud Run seems suited for my needs.

------
mikkelbd
As part of an effort to share knowledge by writing daily articles in our 12
different advent calendars at
[https://bekk.christmas](https://bekk.christmas), I contributed this article
to [https://thecloud.christmas](https://thecloud.christmas). It's just a short
introduction to Cloud Run, how it compares to alternatives such as FaaS and
Kubernetes, and some info on how to build and deploy to Cloud Run. If you want
to learn more about Cloud Run there are several links to both the Cloud Run
announcement blog posts from Google and the official documentation. Hope it
can inspire someone to check it out for their serverless journey. I have been
using it for a while for smaller services that integrate with both Cloud
Datastore and Cloud Pub/Sub, and I think it works really well :)
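For reference, the build-and-deploy flow can be sketched in two gcloud commands (the project ID, service name, and region below are placeholders, assuming the fully managed platform):

```shell
# Build the container image with Cloud Build and push it to the registry.
# PROJECT_ID and my-service are placeholders.
gcloud builds submit --tag gcr.io/PROJECT_ID/my-service

# Deploy the image to fully managed Cloud Run.
gcloud run deploy my-service \
  --image gcr.io/PROJECT_ID/my-service \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

See the official docs linked in the article for the full set of flags (authentication, memory limits, concurrency, etc.).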

------
casperc
It would depend on how long it takes a service to respond when scaled to zero.
Otherwise it seems to be twice the cost of AWS Fargate, if you need to have it
running all the time to get reasonable response times.

~~~
crashedsnow
It doesn't quite work that way. The instance doesn't disappear after the
request is served; it hangs around for a while in case there's another
request, but you only pay while a request is active. In practice, cold starts
are a single-digit percentage of requests for most common workloads. Also,
FWIW, it's not just scale to zero: scaling up any number of instances will hit
cold starts. The difference is that this happens automatically, versus fixed
cluster sizes, which either have to be pre-provisioned for peak or tend toward
much slower scale-up times.

Disclosure: I work for GCP

~~~
unraveller
These averages, always trotted out to alleviate concern, tell me that
serverless container cold starts will never be solved.

No platform will sweat a 10x improvement to an environment's start time when
it only affects <10% of executions that tend toward commodity prices.

Individual end users must occasionally endure the waking of successive
microservices, and devs must risk a bad first impression.

------
auspex
The biggest issue I saw when evaluating Google Cloud Run (for containers) is
that a container needs to have an external IP address in order to access a
managed database.

I hope they address this soon!

~~~
crashedsnow
Curious as to why having an external IP/URL is a problem. If you're using
almost any cloud service that has an API for administration (e.g. an API to
tear down a VM), is that really different from a public endpoint secured with
platform-managed authentication (which Cloud Run provides)? Is it because you
need firewall rules?

~~~
leg100
Many organizations insist on making everything private, i.e. running on an
RFC1918 IP address, within the corporate 'perimeter', cloud included.

True, a cloud has an API, which tends to be public rather than private, and
that doesn't play well with the above approach.

There are some band-aids for this, such as Google Cloud's VPC Service
Controls, which restrict which clients can access the Cloud API, providing a
second layer of defence on top of IAM.

Personally I find this approach retrograde, because it assigns an element of
trust to entities within the perimeter, whereas the BeyondCorp zero-trust
approach does not, and plays well with the way public clouds have been
designed (public endpoints).

------
netcyrax
What about permanent storage? A separate managed database is needed, I guess?
(Just like the rest of the serverless solutions.)

~~~
mikkelbd
Yes, that is correct. That is one of the constraints of stateless, share-
nothing processes. This is also described in one of the principles of "The
Twelve-Factor App":
[https://12factor.net/processes](https://12factor.net/processes). All
shared/permanent state must be stored in a backing service available to all
the processes.

Depending on the kind of permanent storage needed, you could use one or more
of the managed databases offered by Google Cloud. Here is an overview:
[https://cloud.google.com/products/databases/](https://cloud.google.com/products/databases/).
You could also provision an open source database yourself, either in Compute
Engine or Kubernetes Engine, e.g. MongoDB. There are a lot of these on the
Marketplace, again for example MongoDB:
[https://console.cloud.google.com/marketplace/details/click-t...](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/mongodb).
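To illustrate the share-nothing principle: state kept in a container instance's memory diverges across instances and vanishes on scale-to-zero, while state in a shared backing service gives every instance the same view. A small hypothetical sketch, using sqlite as a stand-in for a managed database:

```python
# Sketch: why in-process state fails when instances scale out, and the
# 12-factor fix of keeping state in a shared backing service.
import sqlite3

class InMemoryCounter:
    """Anti-pattern: state lives in the process. Each container instance
    sees only its own count, and it is lost on scale-to-zero."""
    def __init__(self):
        self.count = 0

    def hit(self):
        self.count += 1
        return self.count

class SharedCounter:
    """12-factor style: state lives in a backing service shared by all
    instances (sqlite here stands in for a managed database)."""
    def __init__(self, db_path):
        self.db_path = db_path
        with sqlite3.connect(db_path) as db:
            db.execute("CREATE TABLE IF NOT EXISTS hits (n INTEGER)")

    def hit(self):
        with sqlite3.connect(self.db_path) as db:
            db.execute("INSERT INTO hits VALUES (1)")
            (n,) = db.execute("SELECT COUNT(*) FROM hits").fetchone()
            return n
```

With two `InMemoryCounter` "instances", two requests routed to different instances each report a count of 1; two `SharedCounter` instances pointed at the same database report 1 and then 2, as expected.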

