
From Express.js to AWS Lambda: Migrating Existing Node Apps to Serverless - slobodan_
https://hackernoon.com/from-express-js-to-aws-lambda-migrating-existing-node-js-applications-to-serverless-7473041ecc56
======
foob
I'm kind of surprised that the article doesn't mention _aws-serverless-
express_ [1], a node module provided by Amazon which makes it fairly trivial
to expose an existing Express app as a Lambda function via AWS API Gateway.
The Lambda handler module basically just consists of the following code.

    
    
        const awsServerlessExpress = require('aws-serverless-express');
        const app = require('./app');
        const server = awsServerlessExpress.createServer(app);
    
        exports.handler = (event, context) => (
          awsServerlessExpress.proxy(server, event, context)
        );
    

It's really convenient to be able to develop simple APIs using Express, and
then to expose them via Lambda functions using _aws-serverless-express_. After
you've done it once or twice, it's really easy to throw together single-
purpose APIs and deploy them in a matter of minutes. For example, I was
recently frustrated with the fact that CircleCI's API for accessing build
artifacts is perpetually broken, and I wrote a little microservice to expose
the same functionality [2]. It's definitely awesome to be able to deploy one-
off projects like that without needing to worry about any sort of server
maintenance.

[1] - [https://github.com/awslabs/aws-serverless-express](https://github.com/awslabs/aws-serverless-express)

[2] - [https://intoli.com/blog/circleci-artifacts/](https://intoli.com/blog/circleci-artifacts/)

~~~
Meegul
How's the latency when using this method? I'd imagine that if your site sees
low usage, startup times would be quite bad. At least on the order of seconds
rather than milliseconds.

~~~
foob
It depends a little bit on whether there's a warm container or not, but I
think that the overhead is much closer to ~100 milliseconds than to multiple
seconds. The CircleCI artifacts API that I linked to takes less than one
second to return an artifact, and there's a lot more going on there than a
single request/response pair. The endpoint makes an external request to
CircleCI's API, does some processing on the response, returns a 301 redirect,
and then the redirect is followed by the client and another request is made to
actually download the file from S3. You can test this directly by running

    
    
        time curl -L 'https://circleci.intoli.com/artifacts/intoli/exodus/coverage-report/total-coverage.json'
    
    

which outputs something like the following (the timing will vary a bit
obviously).

    
    
        { "coverage": "92.43%" }
    
        real	0m0.798s
        user	0m0.046s
        sys	0m0.010s

~~~
audiolion
I just ping my functions every 4 minutes to keep them warm. I'm still within
the free tier doing that.

------
fovc
For people who run their apps like this, do you find the complexity greater
or less than running e.g. Express + nginx in richer containers?

This looks very cool, but seems ripe for some ugly emergent behavior to
appear.

~~~
nzoschke
After the initial “cost” of figuring out how to configure the services, I’m
finding the complexity far less than containers.

Many concerns are removed from my application code and build and deploy
systems by Lambda and API Gateway.

You just throw some zip files in S3 and you have a web service.

No more image registry, container orchestration, load balancers, AMIs,
autoscaling, etc.

You also get to remove code for things like CORS, auth, polling, etc. from
your app.

I have a bunch of writing on how FaaS makes things easier here:

[https://github.com/nzoschke/gofaas](https://github.com/nzoschke/gofaas)

------
diegorbaquero
No async/await in Lambda yet; it's really sad. Serverless on AWS is stuck in
2015.

~~~
Can_Not
Most JS developers are already used to using webpack/Babel. Babel's website
has some quick guides to starting a new project with many options/templates.

Here is a project to enhance the Lambda experience with webpack: [https://github.com/serverless-heaven/serverless-webpack/blob/master/README.md](https://github.com/serverless-heaven/serverless-webpack/blob/master/README.md)

In production you would probably want to minify and micro-optimize anyway, so
why not pick one of the standard build pipeline tools and get async/await as
well?

A last option is Spotinst. If you use their serverless service, you get
native access to Node 8. They load balance across AWS EC2, GCE, and Azure.

~~~
joshribakoff
Minifying and transpiling are different things. Minifying server-side code
could make sense to get you within the file size limits of AWS. One downside
to transpiling to older versions of ECMAScript is that it slows down your code
and complicates debugging. Newer JavaScript runtimes optimize for newer
JavaScript features.

------
k__
Is pumping data really the right use of Lambda?

I read that it should be used for data transformation and computation, not
transportation.

