
Going Serverless with AWS Lambda and API Gateway - drewhenson
http://blog.ryankelly.us/2016/08/07/going-serverless-with-aws-lambda-and-api-gateway.html
======
spullara
I recently threw together a benchmark to measure latency with different
requests per second and request concurrency. It bypasses API gateway and just
calls the lambda directly. The p50 is very close to the minimum latency I can
see to US-WEST-2.

[https://github.com/spullara/lambdabenchmark](https://github.com/spullara/lambdabenchmark)
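For anyone reproducing this, the percentile math is simple enough to sketch. Here's a minimal latency harness in Python, with the actual Lambda invocation stubbed out by a placeholder callable (a real run would pass in, e.g., a boto3 `invoke` call instead of the sleep):

```python
import time
import statistics

def measure_latency(invoke, n=100):
    """Call `invoke` n times and return the p50 (median) latency in ms.

    `invoke` stands in for a direct Lambda invocation (e.g. a boto3
    lambda-client call); here it is any zero-argument callable.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        invoke()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Stub standing in for the network call: a fixed 1 ms "round trip".
p50 = measure_latency(lambda: time.sleep(0.001), n=20)
```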

------
cweagans
Lambda seems like a neat concept, but is it really worthwhile? Conceptually,
an infinitely scalable backend for whatever thing I'm working on sounds great,
but in practice, there are some downsides. For instance, your client request
hits the API Gateway, which then invokes whichever Lambda function is supposed
to handle it, at which point a container is started to run your code. By this
point, you've added some overhead to every single request, and it's impossible
to get rid of it because that's how Lambda is designed. Am I off base here?
Does it really work like that? If so, what's the performance hit that you've
seen compared to running e.g. ExpressJS directly on a VM or bare metal?

~~~
justinhj
The idea is that containers are started to support the load of your
application. If the load on your application is so low that you have zero
containers running most of the time, you're not the target audience for this
kind of service.

~~~
bni
If you have zero containers running most of the time, AWS Lambda is attractive
because it costs $0 when there is no activity, unlike instances, which cost
money all the time, even when idling.
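The pricing model is easy to sketch as back-of-the-envelope arithmetic. The rates below are the commonly cited per-request and per-GB-second figures, used here only for illustration; check AWS's current pricing page (and free tier) before relying on them:

```python
# Illustrative Lambda cost model: a per-request charge plus a charge per
# GB-second of execution time. Rates are examples, not authoritative.
PRICE_PER_REQUEST = 0.20 / 1_000_000    # $0.20 per million requests
PRICE_PER_GB_SECOND = 0.00001667

def monthly_cost(requests, avg_duration_ms, memory_mb):
    # Billed duration scales with both run time and configured memory.
    gb_seconds = requests * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

idle = monthly_cost(0, 200, 128)          # no traffic at all
busy = monthly_cost(1_000_000, 200, 128)  # 1M requests/month at 128 MB
```

Zero traffic really does come out to $0, which is the whole appeal versus an always-on instance.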

------
_Marak_
For anyone not interested in vendor lock-in to Amazon for Lambda and API
Gateway, I just released an open-source multi-language microservice backend
today.

[https://github.com/Stackvana/stack](https://github.com/Stackvana/stack)

Cheers!

~~~
lbotos
Is it feature-compatible with Lambda? Like can I take a Lambda and run it via
this stack? As a Lambda user the reality is, sure, at the moment there is a
bit of vendor lock-in, but I don't think at all about hardware, which is the
point.

~~~
_Marak_
This specific tool, `stack`, has a minimal feature set for taking a function
in any language and putting it on an HTTP endpoint. Nothing else is included
(by design).

It supports the same style of coding microservices as Lambda, except `stack`
also provides actual HTTP streaming interfaces.

Check out the echo examples here:
[https://github.com/Stackvana/stack/tree/master/examples/serv...](https://github.com/Stackvana/stack/tree/master/examples/services/echo)

An additional interface over `stack` could be used to support any cloud
function style (including Lambda's).

------
gingerlime
> There seems to be some issue applying privileges that I can't seem to figure
> out. Your lambda function will fail to execute because API Gateway does not
> have permissions to execute your function.

This stuff is definitely confusing. I think I "reverse-engineered" it by
applying things via the web interface and then looking at the resulting
policy when I created Gimel[0] - an AWS Lambda A/B testing backend. Take a
look here[1] for the policy. I think that's what you're looking for, but I
can't say I'm confident it is... (it's confusing).

There are some more general-purpose tools to help you deploy to Lambda.
Kappa[2] and Apex[3] are worth looking into.

[0] [https://github.com/Alephbet/gimel](https://github.com/Alephbet/gimel)

[1]
[https://github.com/Alephbet/gimel/blob/master/gimel/deploy.p...](https://github.com/Alephbet/gimel/blob/master/gimel/deploy.py#L42-L60)

[2] [https://github.com/garnaat/kappa](https://github.com/garnaat/kappa)

[3] [https://github.com/apex/apex](https://github.com/apex/apex)
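For reference, the missing piece is usually a resource-based policy statement on the function itself that lets API Gateway invoke it. A hedged sketch with boto3's `add_permission` call; the region, account ID, API ID, and function name below are placeholders, not real values:

```python
# Grant API Gateway permission to invoke a Lambda function. The ARN
# components here are hypothetical placeholders.
region, account_id = "us-east-1", "123456789012"
api_id, function_name = "abcdef1234", "my-function"

permission = {
    "FunctionName": function_name,
    "StatementId": "apigateway-invoke",
    "Action": "lambda:InvokeFunction",
    "Principal": "apigateway.amazonaws.com",
    # Restrict the grant to this specific API (any stage, method, path):
    "SourceArn": f"arn:aws:execute-api:{region}:{account_id}:{api_id}/*",
}

# With real credentials this would be applied as:
#   import boto3
#   boto3.client("lambda", region_name=region).add_permission(**permission)
```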

------
fragola
I did the Alexa Skill tutorial[1] and hosted the skill on AWS Lambda. It was
really easy, fun, and I recommend it to anyone who wants to mess around with
Lambda.

[1] [https://developer.amazon.com/public/community/post/TxDJWS16K...](https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour)

------
the_duke
aka "Going towards total vendor lock-in by basing your system on a
proprietary, non-portable infrastructure service"

~~~
hehheh
Migration between infrastructures is always painful, so I don't think vendor
lock-in is that big a deal, as long as you keep your APIs small and their I/O
well documented (and well designed).

~~~
kuschku
Migration between an OVH VPS, a local server, or a Hetzner root server is not
really painful.

I choose a preinstalled distro, deploy my script to automatically handle
dependencies and set up the container infrastructure, and pull the containers.
In about 3 minutes I have a server set up, able to run my containers, and
connected to a central server so it can be provisioned with containers to run.

The entire infrastructure can obviously be moved to another hoster in days,
because the hoster is just providing a machine.

------
manishsharan
In case someone is looking to move their Java REST server to Lambda, here are
a few lessons I learned: a) Fragment your REST WAR into smaller microservice
WARs. The load time of a big REST application will impact your callers; small
microservices boot up on Lambda much faster and respond more quickly. b) If
you use Spring dependency injection, you may want to rethink and
tweak/optimize it. I had a database tier with over 20 Hibernate/EJB beans,
and the load time was horrendous. I had to eliminate all beans not needed by
the microservice.
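The cold-start lesson generalizes: initialize only what the function needs, and cache it across warm invocations. A sketch of that pattern (shown in Python for brevity; in the Java/Spring setup above the equivalent is trimming the context and initializing heavy beans lazily). All names here are illustrative:

```python
# Lazy-init pattern for Lambda handlers: heavy dependencies are created
# once per container, on first use, and reused on warm invocations.
_db = None

def get_db():
    global _db
    if _db is None:
        _db = connect_db()   # expensive: paid only on the cold start
    return _db

def connect_db():
    # Placeholder for a real connection (e.g. a database client here,
    # or a Hibernate SessionFactory in the parent's Java setup).
    return {"connected": True}

def handler(event, context):
    db = get_db()
    return {"statusCode": 200, "db_connected": db["connected"]}

response = handler({}, None)
```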

~~~
minalecs
You're right, don't make a big war, but think of it as micro functions.

Also if you're going to use spring check out this guide.
[https://cloud.google.com/appengine/articles/spring_optimizat...](https://cloud.google.com/appengine/articles/spring_optimization)

its relevant to aws lambda as well.

------
adamors
Are serverless and container based architectures really that popular in the
real world, or is it yet another HN bubble? I'm still happy using
virtual/cloud servers for anything/everything.

~~~
mintplant
Serverless is pretty big in the PHP world, but people there tend to call it
"shared hosting".

~~~
runako
Are there shared PHP hosts that bill by the request? TFA indicates that is a
key difference of serverless, and why it's not shared hosting. The billing
model matters to some users as much as the implementation.

Serverless is just shared hosting like S3 is just FTP.

~~~
kuschku
There used to be quite a few PHP hosts that billed by the request, so, yes?

~~~
cweagans
I've been writing PHP for almost a decade and I've never come across any PHP
hosts that bill by the request. Not explicitly, anyway -- there were always
those hosts where if you got over some threshold of traffic and it caused a
certain amount of load on the servers, they'd cut you off with no
warning/notice. Those hosts suck. Never seen anything like Lambda for PHP,
though. Care to share?

~~~
kuschku
Well, there was a host – they shut down years ago, though – which did it.

And there were always the kind of hosts with no quota and no set fee, where
you'd pay based on how much you wanted to in a given month, but you'd be
expected – honor system – to pay more the more you used, with a rough formula
for how much was expected of you (and you'd never be terminated).

~~~
runako
With business models like those, it's not surprising they are not still
around.

~~~
kuschku
Oh, the honor model one is still around. The other one, which actually billed
you, isn’t.

Then again, here a lot of things are honor model or volunteer based, and
society still works.

------
datenwolf
How in the world is this considered something new? It was in fact the default
mode of operation for small- to mid-scale webhosting that rendered pages
dynamically in the dotcom-bubble era (late 1990s).

You'd upload your HTTP-attached executables to cgi-bin or similar, register
other stuff in crontab, and usually also got access to MySQL or PostgreSQL.
Some hosters were actually fancy and deployed your stuff on a MOSIX cluster;
but simply load balancing toward the httpd machines, with your application
talking to a dedicated DB machine, worked as well.
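For anyone who missed that era, the cgi-bin model being described really was this simple: the web server spawns a process per request and relays whatever it writes to stdout back to the client. A minimal sketch:

```python
#!/usr/bin/env python3
# Minimal CGI-style script: executed once per request by the web server.
# It emits HTTP headers, a blank line, then the body -- the whole contract.

def respond(name="world"):
    return "Content-Type: text/plain\r\n\r\nHello, %s!" % name

if __name__ == "__main__":
    print(respond())
```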

~~~
subway
Load balancing? Heh, that's a good one.

My experience with hosting in the boom consisted of mostly first ip-based,
then name-based vhosted Apache, maybe a per-vhost, but often per-system cgi-
bin, and if you're lucky, mod_php.

Generally, if you were in a shared hosting environment, your DB and webapp
lived on the same physical server (probably a Cobalt RaQ or Netra T1, but
maybe a Sun Ultra AXi in an ATX case on a bread rack). Rarely was there any
form of multi-host redundancy for a given application. If something broke,
you were down while the hosting company swapped disks into a new chassis, and
down even longer when the disks died and you needed to reproduce the app
config/state changes that had happened in the 2 days since the last server
backup.

So yeah. This is a slightly different model.

~~~
datenwolf
> My experience with hosting in the boom consisted of mostly first ip-based,
> then name-based vhosted Apache, maybe a per-vhost, but often per-system cgi-
> bin, and if you're lucky, mod_php.

At the core of it this is not so different.

> So yeah. This is a slightly different model.

No, not really. What's changed is our understanding of how to run this kind
of infrastructure. And we now have a little more sophistication in how we
organize process runtime environments (sandboxes, cgroups on Linux, jails on
FreeBSD). But if you tear down all the PR and get past the nice decoration,
it boils down to glorified shell access.

Back in the boom it sucked because nobody knew then how to do these things
properly.

------
cheez
I did this. It's a pain in the butt. Worth it. But a pain in the butt.

