
Is Serverless Computing Any Different from Heroku, OpenShift, and Other PaaSes? - osipov
https://medium.com/@osipov/composable-architecture-patterns-for-serverless-computing-applications-part-4-ca363f2581ab#.tcv0ltwno
======
moondev
Correct me if I'm wrong, but for "serverless" to work, doesn't there still
need to be a "long-running" daemon somewhere that runs the containers on
demand? Who pays for that? Is it just extra capacity that AWS takes the
hit on for the cost of the service? And if so, can this extra capacity burst
if needed when millions of requests come in at the drop of a hat?

~~~
osipov
You're right, the "long running daemon" is the serverless platform, i.e. the
process that hosts the serverless applications (microflows). In case of public
cloud providers like AWS, there is ample spare capacity, so hosting (and
charging for) microflows in addition to other workloads makes economic sense.
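To make the idea concrete, here is a minimal sketch (all names hypothetical, not any provider's actual API) of that "long-running daemon": one resident process holds a registry of functions and invokes them only when a request arrives, so each function consumes resources only while it runs.

```python
# Hypothetical sketch of a serverless host: the daemon process stays
# resident; the registered functions run only on demand.

registry = {}

def register(name):
    """Decorator that adds a function to the platform's registry."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@register("greet")
def greet(event):
    # A "microflow": stateless, invoked per request, billed per run.
    return f"Hello, {event['user']}!"

def dispatch(name, event):
    """Called by the daemon's request loop for each incoming event."""
    return registry[name](event)

print(dispatch("greet", {"user": "moondev"}))  # → Hello, moondev!
```

The daemon itself is the platform's overhead; its cost is amortized across every tenant's functions, which is why the per-invocation billing model can still make economic sense.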

With respect to the extra burst capacity -- in theory there should be excess
capacity available; in practice it varies depending on your cloud provider.

A lack of excess capacity is usually a financial problem, not an engineering
one. The cloud provider may offer SLAs that ensure a customer gets
credit in case some requests aren't serviced as expected.

~~~
moondev
That makes sense; it just seems wasteful on AWS's end to have thousands of
machines running "for free," waiting on requests that may or may not come.
Provisioning more capacity also can't be instant. For a large-scale enterprise
system it seems like it could be unpredictable.

