
AWS Lambda as a back end for a single-page app - arange
http://lg.io/2015/05/16/the-future-is-now-and-its-using-aws-lambda.html
======
d0m
So the backend becomes a repository of small lambda modules. From a testing
perspective, that's pretty cool. I agree that it could be the next big thing
as new libraries leverage it.

Basically, there's no more "Platform as a Service"; there is just "your code"
that gets executed whenever it's needed. You can upload a new module without
touching the other modules.

Something worth exploring in my next hackathon : )

~~~
tlrobinson
It's really just a more fine-grained PaaS, not significantly different from a
bunch of tiny Heroku apps (the ~1 second cold start time is significantly
better than Heroku's, though).

------
Zaheer
I've used [http://www.Webscript.io](http://www.Webscript.io) in the past,
which is essentially just like AWS Lambda, to make static sites 'dynamic' with
no backend.

------
al2o3cr
"the future is now, and it's down because somebody used the secret key to
drain the poster's bank account"

To be sure, the author specifies that the IAM role being exposed here is only
allowed to invoke the function. That's great for the security of the other
resources on the account, but still allows a reasonably determined attacker to
run up a Bill of Unusual Size quite rapidly.

For instance, the rate limiter currently kicks in at 1000 TPS. Assuming the
smallest memory size (128MB) and requests <100ms, that's a worst-case spend of
roughly $18/day per Lambda function. Not the wallet-melting consequences of,
say, accidentally posting AWS root credentials but not great either. Multiply
that by the number of endpoints you'd likely want in a single-page app, and it
gets expensive.
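
For reference, the ~$18/day figure can be reproduced from the 2015 price
sheet; a quick back-of-the-envelope sketch, taking $0.000000208 per 100ms as
the published 128MB-tier rate at the time:

```javascript
// Worst-case daily compute spend at the throttle limit, per the figures
// above: 1000 TPS, 128MB memory tier, every request under 100ms (so one
// billing unit per request).
const tps = 1000;
const requestsPerDay = tps * 24 * 60 * 60;   // 86,400,000 requests/day

const computePer100ms = 0.000000208;         // 128MB tier, 2015 pricing
const computeCost = requestsPerDay * computePer100ms;

console.log(computeCost.toFixed(2));         // ≈ 17.97 dollars/day
```

Note that this covers only the compute charge; the $0.20-per-million request
fee mentioned elsewhere in the thread would add roughly another $17/day at
that request rate.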

~~~
kitbrennan
But the same is true of any backend. Once you know the endpoint being used by
the frontend, you can blast the backend with requests and one of two things
will happen:

* You will take down the site (a DoS attack).

* Or the victim has auto-scaling and you rack up their AWS charges.

This is hardly a unique problem to Lambda.

~~~
rattray
The biggest difference is ease of rate limiting. With Lambda, I imagine the
best you could do would be to check the IP in a Redis cache at the beginning
of each request (if the SDK even includes that info) to minimize the damage.
But there would be no way to fully stop an attacker without turning off the
service entirely.

If you run your own webserver, I think you can stop stuff like that more
efficiently and without the expense, e.g. at the nginx level.
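
The kind of per-IP check described here could be sketched roughly as follows.
A plain in-memory Map stands in for Redis, and all names (`allowRequest`,
`event.sourceIp`, the limits) are illustrative, not part of any real Lambda
API; real Lambda containers don't share memory, so in practice the counter
would have to live in something like ElastiCache:

```javascript
// Sliding-window rate check, keyed by client IP. A Map stands in for
// Redis here; the limits are arbitrary, for illustration only.
const WINDOW_MS = 60 * 1000;   // 1-minute window
const MAX_HITS = 30;           // allowed requests per IP per window
const hits = new Map();        // ip -> array of request timestamps

function allowRequest(ip, now = Date.now()) {
  // Keep only timestamps still inside the window, then check the count.
  const recent = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  hits.set(ip, recent);
  if (recent.length >= MAX_HITS) return false;   // over the limit
  recent.push(now);
  return true;
}

// Schematic handler: bail out early for rate-limited callers so the
// expensive work (and most of the billed time) is skipped.
function handler(event) {
  const ip = (event && event.sourceIp) || "unknown";
  if (!allowRequest(ip)) return { error: "rate limited" };
  return { ok: true };
}
```

As the comment says, this only limits the damage per invocation; the
invocations themselves are still billed, which is the part you can't stop
from inside the function.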

------
nothrabannosir
The real question here is still: why doesn't Amazon offer a GET interface to
Lambda? It's so, so, so close. So almost. They offer POST (through a
workaround with S3), so why not GET?

That is the real destination. With Lambda serving GETs, we could remove the
"for a single-page app" from the title. AWS is so close to fulfilling the
promise of its cloud: let developers worry about code.

~~~
justincormack
It is still a bit slow. Jitsu [1], which boots unikernels, is fast enough to
do per-request image booting, but Amazon seems to have a ~1s cold start, which
is just a bit slow.

[1] [https://github.com/mirage/jitsu](https://github.com/mirage/jitsu)

~~~
fwefwefwef
Ah, so there is a 1s cold start. I couldn't understand how they managed to get
a container (which I imagine they are using?) running "within milliseconds". I
guess the millisecond claim only holds for concurrent requests then?

~~~
justincormack
Seems so from the measurements I have seen, yes. Concurrent, or close enough
together that the container hasn't been shut down yet.

------
xur17
What's the typical latency for a request when you use lambda, such as for this
example?

~~~
paulsmith
My anecdotal experience is that it's roughly 1 second for a "cold" request,
and on the order of tens of milliseconds for repeated requests thereafter.

------
S4M
Naive question: how does his example differ from a server responding to HTTP
requests that takes a JSON _event_ as an argument and returns a JSON version
of:

    
    
        "the value was: "+ event['key1']
    

I suppose AWS wins in terms of setup and ease of deployment. Anything else?

~~~
fru2013
It differs in that you don't have to worry about scaling up that HTTP server,
or pay for the machine hosting it. With Lambda, you only pay per request; the
current rate is $0.20 per 1 million requests.

~~~
pyvek
Is there also any limit on the machine resources that a single request can
use? I'm curious about cases in which requests might need a non-trivial
amount of computational power.

~~~
zwily
You pay for request time too, measured in 100ms increments. And you pay more
for more CPU/RAM.

------
benologist
I don't think I'd trust building directly on it, given the 100 req/sec limit
and whatever latency it introduces, but it seems like it would complement
Heroku dynos really nicely, farming out processor-intensive stuff like bcrypt.

~~~
chisleu
I think it is 100 concurrent requests per account. That is pretty low IMO. I
think it was intended for things that are CPU-intensive and short-lived. If it
needs to pull from a data store and process, it likely isn't a good fit for
Lambda to begin with, right?

I agree, it is only really useful to use like this for certain scenarios.

~~~
ranman
All of those limits typically exist to protect users from being overcharged if
they fat-finger something. Most limits like that can be lifted with a support
ticket. I'm not sure if this is one of them, but I think it's likely.

~~~
jeffbarr
You are correct; that limit can be raised. Just ask!

~~~
rattray
Could the limit also be lowered, for damage control?

------
crdoconnor
Does nobody else look at this and worry about the vendor lock in implications?

If your app is dependent upon a technology like this, it's much harder to move
to a different platform than if you just used, say, EC2.

~~~
mryan
I don't worry about lock in in this particular case. At the platform level
there is an element of implicit lock in because nobody else offers this exact
service.

However, the application level can be easily moved to another platform. You
could host the handler function code on an EC2 instance or bare-metal server.
Little would change in the app code, except the API endpoint to which requests
are sent.

So IMHO there is no lock in in the traditional sense, but there is a switching
cost if you want to move to another platform.

~~~
crdoconnor
>At the platform level there is an element of implicit lock in because nobody
else offers this exact service.

That's kinda what I meant (although there seem to be some similar variants
with totally different APIs).

Anything similar will use a different API, as well. Even if you could move to
an almost-identical service, there will be not-insignificant switching costs
as you reconfigure everything with a different API and reimplement the glue
code.

>So IMHO there is no lock in in the traditional sense

Lock-in doesn't mean that it's impossible to move to a different platform, it
just means that there's a high cost. To me, it seems like the cost of moving
this to some other platform is quite a bit higher than the cost of moving
something from, say, EC2.

Actually, it seems like this is Amazon's real business strategy with things
like this. They want you to use all of their different services like this one:
SQS, SES, Elastic Beanstalk, their hosted databases. Individually, the cost of
moving away from each of them is not _that_ high, but add them all together
and it becomes immense.

~~~
mryan
> Even if you could move to an almost-identical service, there will be not-
> insignificant switching costs as you reconfigure everything with a different
> API and reimplement the glue code.

That's true. I think we're essentially on the same page here.

I guess I'm still using the old definition of lock in. Poor example: a
proprietary Microsoft format that can only be understood by MS software. There
is literally no alternative to using MS software if you want to access files
created using this format.

Does "high switching cost" == "lock in"? I think it is a grey area. I would
not consider myself locked in to Lambda if I had made the decision to host my
app there, although I do see there is a high switching cost.

However, if I used RDS Postgres and Amazon decided to prevent me from creating
database dumps to migrate to a self-hosted Postgres, then I would be very
unhappy because that would be an arbitrary restriction with no purpose except
to keep me on AWS.

> To me, it seems like the cost of moving this to some other platform is quite
> a bit higher than the cost of moving something from, say, EC2.

I agree completely with this. Let's say this was implemented on EC2 instances
running Ubuntu, and configured with something like Salt or Puppet. The cost of
moving to a Digital Ocean box would be negligible.

------
jelz
Some time ago, when Lambda was in its early preview, I created awsletter [1],
a newsletter system without any old-style backend components: only the AWS SDK
for the browser, S3, and Lambda. Pretty much the same idea: use "invoke
function" calls to implement backend actions.

[1] [https://github.com/jelz/awsletter/](https://github.com/jelz/awsletter/)

------
codewithcheese
How does the pricing compare for, say, X tasks that run for 1 second each,
versus what you could expect from 1 hour on a medium EC2 instance?

EDIT: I see it offers "The Lambda free tier includes 1M free requests per
month and 400,000 GB-seconds of compute time per month." What is a GB-second?
;)

~~~
ranman
Re: pricing -- good question, I'm going to test that.

Re: gigabyte seconds: It's how much memory (RAM) you're using:

128mb RAM (lowest possible value) for 8s == 1 GB/s

1gb RAM for 1s == 1 GB/s

I think it's a pretty clever way to (somewhat) directly correlate compute cost
with power (energy). If you remember what John McCarthy said when he imagined
that computing would be a public utility one day
([http://www.technologyreview.com/news/425623/the-cloud-
impera...](http://www.technologyreview.com/news/425623/the-cloud-imperative/))
I think this is about as close to that as we're going to get for a little
while.

~~~
CMCDragonkai
Doesn't this terminology come from power utilities, where a kilowatt-hour is
kW.h, as in kilowatt × hour, not kilowatt / hour? So why is a GB-second
written as GB / second and not GB × second?

~~~
ranman
It is a GB x second -- I've just seen it written more commonly as GB/s. I
think we need a better abbrv for it... gBs?

~~~
CMCDragonkai
Then it should be GB.s or GBs.

GB/s gives a different meaning. For example, I could use 600 GB in the first
second, then 400 GB in the second second. Altogether that's 1000 GB.s, at an
average of 500 GB/s for 2 seconds.

This is not perfect because kilowatt is a rate so kWh is a quantity. But GB is
already a quantity. So a GB.s doesn't make sense either. It would need to be
an equivalent memory rate usage × time in order to get memory quantity. Like
avg GB/s × s.
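
Under that reading (memory quantity × time), the billing arithmetic from the
free-tier quote upthread works out as follows; the figures are the ones
quoted in this thread:

```javascript
// GB-seconds as a quantity: memory allocated (GB) x execution time (s).
const gbSeconds = (memoryMb, seconds) => (memoryMb / 1024) * seconds;

console.log(gbSeconds(128, 8));   // 128MB for 8s -> 1
console.log(gbSeconds(1024, 1));  // 1GB for 1s   -> 1

// The 400,000 GB-second free tier, spent entirely at the 128MB size,
// buys 3.2 million seconds of compute per month.
console.log(400000 / gbSeconds(128, 1));  // 3200000
```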

~~~
jsjohnst
> This is not perfect because kilowatt is a rate so kWh is a quantity.

You're a bit mixed up there. Kilowatt is a quantity (unit of energy), kWh is
the rate.

------
teddyknox
This makes me think that Amazon should build a service that acts as a
virtualized "pre-fork worker model" server (a la [g]unicorn) and transparently
scales WSGI/Rack/etc application processes across machines.

------
zimbatm
How hard is it to pull in extra dependencies for the script?

~~~
bshimmin
There are a few answers in the FAQ around this (none specifically addressing
the "easy" part):
[http://aws.amazon.com/lambda/faqs/](http://aws.amazon.com/lambda/faqs/)

My own experiences of working with Amazon services suggest that this will
probably be great for some use-cases and very complex and unpleasant for
others, and will probably be updated every six or twelve months in a fairly
drastic way which will make it hard to find helpful documentation. Maybe I'm
just feeling cynical today, though.

~~~
ranman
I think AWS is betting big on Lambda (I work there), so I think you'll see a
lot of continued innovation around the service. They're pretty intense about
consuming customer feedback and iterating on it, so if you have an idea or
desire that you'd like Lambda to incorporate, I'd definitely shoot them an
email/ticket -- it's not a black hole.

------
akhatri_aus
AWS has a couple of issues with Lambda. If you use a binary, it's very
difficult to talk to the process. No port binding means it's nearly impossible
to talk to processes without prohibitive changes (named pipes are a solution,
but they require extensive code changes to existing apps).

------
iceburg
It is certainly an amazing product. It would be interesting to know how Amazon
designed it, especially for security. Does anyone know if they just put all
requests in a queue and spin up containerized environments for each request?

------
tsxxst
Is it possible to have the same thing but over websockets? As Lambda can also
listen to DynamoDB, it would be quite interesting to have the ability to
forward these kinds of events to browser clients.

------
JDDunn9
How would this use differ from Amazon's elastic beanstalk?

~~~
jiballer
With Beanstalk, you're paying for multiples of server instances at all times,
even if nothing is hitting your servers. With Lambda, you pay only for the
execution time of your function, which accrues only when a request is made.

------
codewithcheese
I like the idea of Lambda. I have some linux binaries that I would like to run
on demand. I wonder if this is possible.

~~~
adregan
You should be able to do that as described here:
[https://aws.amazon.com/blogs/compute/running-executables-
in-...](https://aws.amazon.com/blogs/compute/running-executables-in-aws-
lambda/)

Note that there are some limits regarding the size of the zip you upload [1].

1:
[http://docs.aws.amazon.com/lambda/latest/dg/limits.html](http://docs.aws.amazon.com/lambda/latest/dg/limits.html)

~~~
codewithcheese
Thanks for the links! If my executable relies on .so files in a lib directory,
is it possible to set env variables to point to a local path?

~~~
hendzen
Why would you need to do that? Just set DT_RUNPATH when you link the binary
and use the $ORIGIN variable to set a relative path.

~~~
girvo
Is there anywhere that has decent documentation on how to do that? I'm not
very good with compiling and linking C, and have been trying to work out some
distribution issues with a project I was working on in a compile-to-C
language, where it relies on a dynamically linked shared library that I want
to distribute along with it.

~~~
hendzen
I would read section 3.9 of Ulrich Drepper's "How to Write Shared Libraries"
[0].

[0] -
[http://www.akkadia.org/drepper/dsohowto.pdf](http://www.akkadia.org/drepper/dsohowto.pdf)

------
Maarten88
> The future is now, and it's using AWS Lambda

How is this different from what Azure Mobile Services and many others have
been offering for the past 2 years? To me it seems the author is proclaiming
Platform as a Service to be entirely new (look, no spinning up AWS instances!)
while it has been around for quite some time; it's just that Amazon has
recently started getting into it more seriously.

~~~
integraton
Lambda is a focused, specialized service for running short-lived processes
triggered by events. It is one of the many services provided by AWS that have
formed the foundation for many companies and other PaaS providers for years.
Azure Mobile App Service, by contrast, is a packaged and branded collection of
services (including data storage and push notifications) similar to backend
service providers like Parse and pre-pivot Urban Airship (at least one of
which was built on top of AWS). It tries to offer services comparable to a
subset of other AWS offerings, including SNS (push notifications) and RDS or
DynamoDB (databases as a service), among others.

It's also quite strange to spotlight Azure as if it's doing something at all
remarkable considering the "Mobile Backend as a Service" market has existed
for years and actually seems to be on the decline, at least as a standalone
segment.

~~~
Maarten88
I mentioned Azure Mobile Services because, as it's Node-based, it can be used
to implement exactly the same application as the OP's, line for line.

A static webpage calls into a JavaScript-based service; you can register a
script to run in that service without any infrastructure hassle because,
well, it's a service.

You are right that there are many comparable Mobile Services that can do many
other things and might be branded differently, but this just adds to my point:
I don't see what's new here, other than that the OP has seen the light on
static webapps combined with platform services.

------
mardurhack
Can this be called RPC? If not, what is the difference? Amaz(on)ing service
anyway!

------
wnevets
I like lambda a lot. I just have to find more use cases for it.

------
jameshush
Hey, just a heads up: you posted your AWS secret key. I'd take that down and
change your key ASAP. I had my keys compromised before, and someone racked up
a $5k AWS bill mining bitcoins.

~~~
netcraft
He mentions in the article that he created that account specifically to only
have the ability to call that one Lambda function, which I didn't realize you
could restrict to that level.
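
For reference, that kind of restriction is a plain IAM policy granting only
the invoke action on one function's ARN; a sketch, where the region, account
ID, and function name below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:myFunction"
    }
  ]
}
```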

~~~
idunno246
I like this one better because it demos using Google auth to get temporary
AWS credentials:

[https://github.com/cloudnative/lambda-
chat](https://github.com/cloudnative/lambda-chat)

~~~
mayli
The last message is "Peter Sankauskas Hello". I tried to send some messages,
and it says "Message sent to SNS successfully", but these messages never show
up on the page.

~~~
pas256
I'll be talking about this at Gluecon this week. You are right, it is a little
broken right now.

