
Jaws: The Serverless Framework - gamesbrainiac
https://github.com/jaws-framework/JAWS/blob/master/README.md
======
ac360
Of interest:

The real-life scenario below portrays an administrative API layer that runs
in EC2, and how its cost compares to the raw compute cost in Lambda. It
purposefully excludes anything to do with requests (CloudFront+ELB for EC2
and API Gateway for Lambda).

EC2 scenario: 2 c3.large instances, general purpose 15 GB SSD (need two for
HA, spanning AZs), 1-yr all-upfront reserved pricing on both instances

Lambda scenario: 1024 MB memory size. Workload running on EC2: 16,000
requests a day to an endpoint, 200 ms average = 3,200,000 ms total

3,200,000 ms / 100 ms = 32,000 segments of 100 ms

EC2 cost: $1,084/yr / 365 = $2.97/day

Lambda cost: 32,000 * $0.000001667 = $0.053344/day
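The arithmetic above can be sketched directly (prices as quoted in this comment; verify against current AWS pricing):

```javascript
// Sketch of the cost comparison above, using the figures from the comment.
const requestsPerDay = 16000;
const avgMs = 200;                        // average duration per request
const totalMs = requestsPerDay * avgMs;   // 3,200,000 ms of compute per day
const segments = totalMs / 100;           // Lambda bills in 100 ms segments
const lambdaPricePer100Ms = 0.000001667;  // 1 GB memory tier, as quoted
const lambdaPerDay = segments * lambdaPricePer100Ms;

const ec2PerYear = 1084;                  // 2x c3.large, 1-yr all-upfront reserved
const ec2PerDay = ec2PerYear / 365;

console.log(lambdaPerDay.toFixed(6)); // 0.053344
console.log(ec2PerDay.toFixed(2));    // 2.97
```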

The cost difference is so large because this workload is not a very high EC2
utilization scenario. We need extra servers for multi AZ HA and to handle
bursts. Lambda handles both of these for you.

This also does not include the huge cost savings from having no server
management (security patches, etc.), which you still do with EC2 or Docker
containers.

~~~
falcolas
Using two c3.large instances to serve 16,000 requests a day (~11 requests a
minute) is akin to using a muscle car to commute a quarter mile - you're
vastly over-provisioned for such a simple workload.

This kind of workload could easily be handled by a pair of t2.micro instances.
This would cut your costs for dedicated servers to around $0.40 a day.
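A rough sanity check of that figure, assuming an effective 1-yr reserved rate of about $0.0085/hr per t2.micro (an assumed number, not stated in the comment):

```javascript
// Back-of-envelope check of the ~$0.40/day claim. The hourly rate here is an
// assumption based on 2015-era 1-yr reserved t2.micro pricing; verify against
// current AWS pricing before relying on it.
const hourlyRate = 0.0085;
const instances = 2;                            // one per AZ for HA
const perDay = hourlyRate * instances * 24;
console.log(perDay.toFixed(2));                 // ~0.41
```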

Plus, you could then take advantage of persistent in-process memory caches and
warm services, helping to speed up your responses even further.

Using tools like Lambda come with a number of advantages, but you give up a
lot of control at the same time. Perhaps this isn't a problem for most side
projects, but for business critical software I'd be very leery of handing that
much control over to someone as uncommunicative as Amazon.

~~~
ac360
"The cost difference is so large because this workload is not a very high EC2
utilization scenario. We need extra servers for multi AZ HA and to handle
bursts."

~~~
falcolas
The $0.40 cost per day I calculated includes multi-AZ HA, and your bursts
would have to be fairly significant to overwhelm two t2 instances which are
accumulating CPU credits 95% of the time - say around 600 requests per minute
for a few hours.

Of course, this is all conjecture since I can't actually view your workloads,
but we've been performing this exercise for our own services, and the t2
series of servers is remarkably capable, especially given their cost. The
original administrators thought (or were led to believe) that we really
needed the raw horsepower that c*.large+ instances offer... and they didn't in
most cases.

Routers, CRUD API wrappers, even some disk-persistence applications all
qualify as "non-CPU intensive".

------
pkkp
A neat idea, but the resultant vendor lock-in here worries me. I've heard
horror stories about the amount of effort required to move away from PaaS
platforms like Heroku (I believe Genius is one such tale) due to architecture-
specific components like jobs, but this seems to take that reliance to a whole
new, all-inclusive level.

This might be neat for a quick weekend or hackathon project where you just
want to Get Shit Done, but I can't imagine anyone ever committing fully to the
platform and having no second thoughts.

An open architecture built on this sort of idea would be nifty, but tools like
Docker have already reduced the sysadmin burden for a lot of simple projects
to something not too far from this, at least from an ease-of-use perspective.

~~~
alexbilbie
Lambda functions are just small Node scripts wrapped up in a Docker container
and then executed on a custom scheduler. I don't think it would be too
difficult to port to another platform.

~~~
lsaferite
And Java classes.

------
datalist
I am slightly confused.

Don't "serverless" and "using Amazon Web Services" contradict each other?

~~~
ac360
By "Serverless" we mean the developer does not have to think about servers.
They exist, but Amazon manages them.

Instead, the developer deals only with Lambda, an event-driven compute
resource. You upload your code and it runs when triggered, scaling
horizontally and massively, out of the box.

The workflow rocks because you get endpoint/function-level isolation, not just
application isolation. Every piece of logic runs in its own container. Best of
all, you only get charged when that code is run (!!!).

------
ac360
JAWS Slides From Re:Invent:

[http://www.slideshare.net/AmazonWebServices/dvo209-jaws-a-sc...](http://www.slideshare.net/AmazonWebServices/dvo209-jaws-a-scalable-serverless-framework)

~~~
speps
Slideshare is horrible, can you please put those slides on the GitHub repo or
a dedicated repo? Thanks.

