
New for AWS Lambda: Use Any Programming Language and Share Common Components - abd12
https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-programming-language-and-share-common-components/
======
talawahdotnet

      We are making these open source runtimes available soon:
      C++[1]
      Rust[2]
    
      We are also working with our partners to provide more open source runtimes:
      Erlang (Alert Logic)
      Elixir (Alert Logic)
      Cobol (Blu Age)
      N|Solid (NodeSource)
      PHP (Stackery)
    

There is also native Ruby support.[3]

1\. [https://aws.amazon.com/blogs/compute/introducing-the-c-lambd...](https://aws.amazon.com/blogs/compute/introducing-the-c-lambda-runtime/)

2\. [https://aws.amazon.com/blogs/opensource/rust-runtime-for-aws...](https://aws.amazon.com/blogs/opensource/rust-runtime-for-aws-lambda/)

3\. [https://aws.amazon.com/blogs/compute/announcing-ruby-support...](https://aws.amazon.com/blogs/compute/announcing-ruby-support-for-aws-lambda/)

~~~
wpietri
This is great, but I'm ever so slightly disappointed that they called it the
Lambda Runtime API instead of Common Gateway Interface. [1] It's not quite the
same, but it definitely gives me an everything-old-is-new-again feeling.

[https://en.wikipedia.org/wiki/Common_Gateway_Interface](https://en.wikipedia.org/wiki/Common_Gateway_Interface)

~~~
fuball63
Interesting you mention CGI; I built [https://bigcgi.com](https://bigcgi.com)
([https://github.com/bmsauer/bigcgi](https://github.com/bmsauer/bigcgi))
primarily to address the issue of "any language" in FaaS environments.

Seeing this takes away one of my biggest comparative advantages, but I'm happy
to see that at least I was not wrong in thinking this feature would be
important for developers.

CGI still has the advantage of being an open standard, locally testable, and
not vendor specific.

------
talawahtech
Several related Lambda announcements around structure/reuse as well:

1\. Lambda Layers - Reusable components that can be shared across lambda
functions (covered in the linked article)

2\. AWS (Serverless) Toolkits for PyCharm, IntelliJ & VS Code -
[https://aws.amazon.com/blogs/aws/new-aws-toolkits-for-pychar...](https://aws.amazon.com/blogs/aws/new-aws-toolkits-for-pycharm-intellij-preview-and-visual-studio-code-preview/)

3\. Nested Applications Using the AWS Serverless Application Repository -
[https://aws.amazon.com/about-aws/whats-new/2018/11/sam-suppo...](https://aws.amazon.com/about-aws/whats-new/2018/11/sam-supports-nested-applications-using-serverless-application-repository/)

------
mncharity
Has anyone seen discussion of the impact of serverless on programming-language
_design_?

It relaxes constraints that have historically restricted the shape of viable
languages: massively-parallel deterministic compilation (like Stanford's gg,
which just got simpler to implement); parallel, distributed, incremental,
shared type checking (like Facebook's Hack); and language-community-wide
sharing of compilation artifacts (sort of like build images, TypeScript's type
definition repos, or Coq's proof repos).

"That would be a nice language feature, but we don't know how to compile it
efficiently, so you can't have it" has been a crippling refrain for decades.
"[D]on't know how to either parallelize or centrally cache it" is a much lower
bar. At least for open source.

This involves not just compiler tech in isolation, but also community
organization for fine-grain code sharing. Any suggestions on things to look
at?

------
yebyen
Way to go, Ruby support!!!! I am irrationally excited about this. If I wanted
to do serverless Ruby up until now, my nearest options were some community-
supported thing with "traveling ruby" on AWS, Kubeless, or Project Riff:

[https://www.serverless-ruby.org/](https://www.serverless-ruby.org/)

We've been waiting! I thought this would never happen. Eating major crow
today, as I've assumed for a long time that AWS's lack of Ruby support in
Lambda was an intentional omission and sign that they are not our friends.

I will change my tone starting today!

(edit: Or Google Cloud Functions, or probably some other major ones I've
missed...)

~~~
jon-wood
I had a play with it this evening implementing a basic webhook handler and
it’s super smooth - I hooked a Ruby Lambda function up to API Gateway and
everything just works. I suspect you could very easily create some sort of
Rack shim on top as well, effectively giving near free, infinitely scalable,
Ruby app servers (assuming you can keep the start time down).

~~~
schappim
I just did the same thing, but things fall apart when you need to use gems w/
native extensions (mysql). I'll need to investigate the native runtime route.

~~~
yebyen
That's interesting... gems w/ native extension, I'm not sure if that was on my
test case list, but it should be. Thx...

------
Tehnix
I'm personally very hyped for using Haskell natively on Lambda! In the keynote
he mentions the partner sponsored runtimes, and actually said "Scala and
Haskell, you'll just have to bring your own!" (as in, community effort).

~~~
samstave
For the uninitiated, can you please provide some example use cases where you
would use Lambda for Haskell for great good?

~~~
Tehnix
Honestly, for everything we are currently running on Lambda. At the moment we
are using the Node.js runtime with TypeScript.

We use Lambda both for data processing on incoming IoT events and for API
interactions with our users, along with services to send mails and SMS, etc.

\---

I would just want the robustness and correctness that I'm able to ensure with
Haskell, compared to TypeScript, and especially to avoid the bundling quirks
with Webpack and NPM.

An example of an easy thing to let through is missing an _await_ on a call
(might be for logging or something else), which means it'll go out-of-band,
and can then easily not be run during the execution, if you have disabled
waiting for the event loop to be done (which is necessary in many cases for
global setup optimizations).

The call might then finish when the lambda function is thawed, but it might
also never run, because the function gets collected.

Now, admittedly, this is not a _big_ issue in any way, but it's the _death by
a thousand papercuts_ that I'd like to avoid.
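The same papercut exists in Python's asyncio, where a forgotten await means the coroutine object is created but its body never actually runs. A minimal sketch (the handler and log names are illustrative, not from the original setup):

```python
import asyncio

log = []

async def send_log(msg):
    # side effect we expect on every invocation
    log.append(msg)

async def buggy_handler(event):
    send_log("processed")  # BUG: coroutine created but never awaited
    return "ok"

async def fixed_handler(event):
    await send_log("processed")  # side effect actually runs
    return "ok"

asyncio.run(buggy_handler({}))  # warns: coroutine 'send_log' was never awaited
asyncio.run(fixed_handler({}))
print(log)  # only one entry, from the fixed handler
```

Python at least emits a RuntimeWarning when the orphaned coroutine is collected; the Node.js case is quieter, since the un-awaited promise may or may not complete before the function is frozen.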

------
mcintyre1994
Layers sounds like a great solution to sharing code/libraries. If anyone at
AWS is here, will there be a way to pull them down for local testing? At the
minute it's trivial because you're forced to pull all your dependencies into a
single place so you can zip them, and you can test them at that point - but
will you still have to do that if you want to test locally with layers?

~~~
munns
hi! Yes, there will be support in the SAM CLI: when you do local testing
against a function that references a Layer, it will pull the Layer down for
you. - Chris, Serverless @ AWS

~~~
jacques_chester
Hi Chris -- it would be interesting to compare notes on Cloud Native
Buildpacks, which seems to have overlapping mission with Lambda Layers. Could
you come find us at [https://slack.buildpacks.io](https://slack.buildpacks.io)
?

------
matchagaucho
Awesome. Our "Distributed Monolith" problems are now solved.

We have so many Lambdas that share common Java JAR libs.

Lambda Layers appears to solve our reuse and deployment headaches.

~~~
teej
Lambda Layers fixes so many problems!

* We had to hack around Lambda zip size limits. Now we can deploy fat dependencies to Layers and ditch the hacks.

* We can drastically speed up build and packaging time locally by keeping fat dependencies in Layers.

* We use a shared in-house library to wrap all code that lives in a Lambda. Updating the version of this required us to deploy every single Lambda that used it. Now it can live in one place.

* We can eliminate repeated deployment pain for Python code with C dependencies by deploying the manylinux wheel to Layers. Now devs can go back to just packaging up pure Python and not worry about cross-platform problems.

And probably loads more I'm not thinking of.
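For the Python case the layout matters: layers are extracted under /opt, and the Python runtime picks dependencies up from /opt/python, so the zip has to place everything under a python/ prefix. A rough sketch of that packaging step (paths and names are illustrative):

```python
import os
import zipfile

def package_layer(dep_dir, zip_path):
    """Zip dep_dir so its contents land under python/ in the archive,
    which Lambda extracts to /opt/python for the Python runtime."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(dep_dir):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, dep_dir)
                # prefix every entry so it unpacks to /opt/python/...
                zf.write(full, os.path.join("python", rel))
```

Building dep_dir from manylinux wheels (pip install with a target directory) is what lets the layer carry the C extensions while the function packages stay pure Python.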

~~~
spullara
Lambda zip size limits are still strongly in force. All your layers and your
lambda code together must still be < 50MB zipped / 250MB unzipped.

~~~
freedomben
Are you sure? I've been seeing conflicting information regarding that, but
can't find the authoritative answer from Amazon

~~~
teej
spullara is right, I am wrong -

> The overall, uncompressed size of function and layers is subject to the
> usual unzipped deployment package size limit.

[https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-...](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-programming-language-and-share-common-components/)

------
dacm
We've been packaging pandas in a lambda which is used to perform some
calculations, but being a 50 MB zip file it has cold starts of about 6-8
seconds. We're lucky that the service has little use, so our workaround is a
lambda warmer which runs every 5 minutes and invokes N pandas lambdas. I'd be
very interested to know if Layers has some feature to avoid this kind of
issue.
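A warmer along those lines can be sketched with boto3's async invoke; the function name and payload shape here are illustrative assumptions, not from the original setup:

```python
import json

def warm_payloads(n):
    # distinct payloads so each concurrent invocation can be told apart,
    # and so the target handler can short-circuit on {"warmer": true}
    return [json.dumps({"warmer": True, "index": i}) for i in range(n)]

def warm(function_name, n):
    import boto3  # assumed available in the Lambda execution environment
    client = boto3.client("lambda")
    for payload in warm_payloads(n):
        # InvocationType="Event" fires asynchronously, so the warmer
        # doesn't sit waiting for each pandas lambda to finish
        client.invoke(FunctionName=function_name,
                      InvocationType="Event",
                      Payload=payload)
```

The usual caveat applies: firing N near-simultaneous events nudges Lambda toward keeping N containers warm, but there is no hard guarantee about which containers serve real traffic.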

~~~
paddy_m
How did you get the zip down to 50MB? I was under the impression that
pandas+numpy was closer to 300MB and bumped up against AWS size limits. I was
considering building some hacked-together thing with S3.

I came to this thread specifically to find out about numpy and pandas on
lambda.

~~~
richstoner
We've been running a stripped down version of numpy + scipy + matplotlib in
lambda. We'd build the environment in a docker container with Amazon linux,
manually remove unneeded shared objects and then rezip the local environment
before uploading to s3.

A similar method is described here: [https://serverlesscode.com/post/deploy-scikitlearn-on-lamba/](https://serverlesscode.com/post/deploy-scikitlearn-on-lamba/)

Layers should make this entire process easier.

------
Touche
Note that it's long been possible to use any language with Lambda through a
shim. In the early days that was using Node.js to call out to your binary.
That meant you had an extra bit of startup cost (Node + your binary). Once Go
was supported that no longer mattered much since Go binaries start almost
instantly.

Of course an official method is nice here.

~~~
staticassertion
I haven't even been using a shim to run my Rust binaries. Just statically link
them, and use: [https://github.com/srijs/rust-aws-lambda](https://github.com/srijs/rust-aws-lambda)

TBH, I'm very happy to see native support, but it was already super, super
easy to use Rust on lambdas without a shim layer.

------
gt5050
I was really hoping they would increase the deployment package size. Currently
it is at 250MB unzipped, including all layers.

~~~
munns
Hi! This is super common feedback and something the team is definitely
thinking about! What would you want to see it increased to? (Chris Munns from
Serverless @ AWS)

~~~
znagengast
This would be amazing! A lot of ML use cases are largely unfeasible in Lambda
on Python without serious pruning. The latest version of TensorFlow is 150MB
uncompressed; add numpy, pandas, etc. to that and it adds up fast. I think 1 GB
uncompressed would be pretty reasonable given the current state of ML tools,
personally.

~~~
munns
Roger that! Thanks!

~~~
dodobirdlord
As a thought, could Lambda (perhaps in cooperation with AWS SageMaker?) offer
a Lambda execution environment atop the AWS Deep Learning AMI? This would
solve a lot of problems for a lot of people.

[https://aws.amazon.com/machine-learning/amis/](https://aws.amazon.com/machine-learning/amis/)

------
Myrmornis
What does a test suite look like for an application structured using lambda
functions?

------
zkirill
Noob question but is it possible/advisable to somehow (re)use prepared
statements in Lambda?

"Prepared statements only last for the duration of the current database
session. When the session ends, the prepared statement is forgotten, so it
must be recreated before being used again. This also means that a single
prepared statement cannot be used by multiple simultaneous database
clients..."[1]

1\. [https://www.postgresql.org/docs/current/sql-prepare.html](https://www.postgresql.org/docs/current/sql-prepare.html)

~~~
laurencerowe
I don't see why not. Presumably you would connect to the database and prepare
the statement when the Lambda function starts up, then execute the prepared
statement from the per-request handler.
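That split can be sketched in Python; psycopg2 and the connection details are assumptions for illustration, not from the thread. The connection and PREPARE run once per container at cold start, and warm invocations only pay for the EXECUTE round-trip:

```python
def prepare_sql(name, query):
    # PostgreSQL session-scoped prepared statement ($1, $2... placeholders)
    return "PREPARE {} AS {}".format(name, query)

def execute_sql(name, n_params):
    # %s placeholders are filled in by the DB driver at execute time
    return "EXECUTE {}({})".format(name, ", ".join(["%s"] * n_params))

# Cold start: module-level code runs once per container and is then
# reused across warm invocations (connection, session, prepared plan).
# import psycopg2  # assumed dependency, bundled with the function
# conn = psycopg2.connect(host=..., dbname=..., user=..., password=...)
# cur = conn.cursor()
# cur.execute(prepare_sql("get_user", "SELECT * FROM users WHERE id = $1"))

def handler(event, context):
    # Warm invocation: only EXECUTE remains.
    # cur.execute(execute_sql("get_user", 1), (event["user_id"],))
    # return cur.fetchone()
    pass
```

Since the prepared statement lives in the database session, it dies with the connection; each container holds its own session, which is exactly the "cannot be shared by simultaneous clients" constraint from the quoted docs, handled per-container.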

------
Shorel
Now, if I can also link the MySQL C++ connector libraries, I could run some of
my code "natively" in Lambda.

I have used C++ in Lambda before; it is quite cumbersome and it still has the
performance hit of going through Node.js.

~~~
marcomagdy
You certainly can link MySQL connector libraries. Or any library for that
matter that you normally link in a C++ application. The lambda-cpp-runtime
does not limit you.

Check out some of the examples in the GitHub repository.

~~~
Shorel
Checking the examples is the first thing I did. No example uses MySQL
connector.

There are only example CMake calls to find_library, which will or will not
find this library based on things that are not documented.

AWS Lambda still uses Amazon Linux AMI as the base for running all code. MySQL
C++ connector library is not available to be easily installed by yum for this
particular OS.

In particular, the library is distributed as an RPM, from Oracle website,
which requires you to be logged in to download it, and then manually
installed.

Also, an appropriate CMake find script has to be available to CMake.

Therefore, this is still a valid question, and AWS support needs to clarify.

For the previous Lambda version, I had to add the mysqlcppconn.so file to the
uploaded zip and use something like this to ensure the executable runs:

"/lib64/ld-linux-x86-64.so.2 --library-path "+process.cwd()
+"/node_modules/.bin/lib "+process.cwd() +"/node_modules/.bin/helloworld "

------
tekno45
So will this allow me to run powershell 5.0?

I have O365 scripts i need to run, but i only see support for PS6+

~~~
joeyaiello
PM for PowerShell here, no ETAs right now, but we're working closely with some
O365 teams (starting with Exchange Online) to bring their modules up to
compatibility with PS 6.x.

------
ngngngng
Forgive the noob question, but why is it necessary to have a custom runtime
when using a binary from a compiled language? It seems to me that golang
support should just mean binary support, and then c++ and rust would be able
to comply already, no?

~~~
archgoon
As of yesterday, Lambda only provided a way to specify a function handler. The
input to that function, and the response value from that function, need to be
deserialized and serialized (which implicitly requires constructing a runtime
in which the concept of a function exists). Previously, a runtime for each
supported language handled deserializing the wire protocol into a runtime
object, invoking the specified function, routing stdout and stderr to
CloudWatch, serializing the function's return value, and transforming runtime
exceptions into response exceptions.

The lambda team is now letting you write that runtime, and presumably
providing more documentation on the responsibilities of the runtime.

Check out the example C++ and Rust runtimes to understand why each language
needed its own custom runtime.

[https://github.com/awslabs/aws-lambda-cpp](https://github.com/awslabs/aws-lambda-cpp)

[https://github.com/awslabs/aws-lambda-rust-runtime](https://github.com/awslabs/aws-lambda-rust-runtime)
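The core of such a runtime is a small loop against the Runtime API's HTTP endpoints. A stripped-down Python sketch of that loop (the endpoint paths follow the announced Runtime API; handler loading and error reporting are omitted, and the handler here is a stand-in):

```python
import json
import os
import urllib.request

def handler(event):
    # stand-in for the user's function; a real runtime would load it
    # from the _HANDLER environment variable
    return {"echo": event}

def run_loop():
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    base = "http://{}/2018-06-01/runtime/invocation".format(api)
    while True:
        # 1. Long-poll for the next invocation; this is where the
        #    execution environment sits "frozen" between events.
        with urllib.request.urlopen(base + "/next") as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.load(resp)
        # 2. Invoke the handler and serialize its result.
        body = json.dumps(handler(event)).encode("utf-8")
        # 3. POST the response back for this specific request id.
        req = urllib.request.Request(base + "/" + request_id + "/response",
                                     data=body)
        urllib.request.urlopen(req)
```

Everything language-specific (how "a function" is represented, how exceptions map to error responses) lives around this loop, which is why each language ends up with its own runtime.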

------
watty
We had to abandon Lambda due to cold starts. Any news if that's resolved?

~~~
otterley
Cold start performance is improving significantly as the Lambda fleets migrate
to Firecracker technology: [https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...](https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/)

~~~
hn_throwaway_99
Anyone know if there have been any improvements to cold start times for
Lambdas in a VPC? That was the absolute death knell for us. If you're using
Lambdas as a service backend for mobile/web apps, it's extremely common those
Lambdas will be talking to a DB, and any decent security practice would
require that DB to be in a VPC. Cold starts for Lambdas in a VPC could be on
the order of 8-10 _seconds_: [https://medium.freecodecamp.org/lambda-vpc-cold-starts-a-lat...](https://medium.freecodecamp.org/lambda-vpc-cold-starts-a-latency-killer-5408323278dd)

~~~
staticassertion
Agreed! I'm much more concerned with VPC performance - I don't have a single
lambda outside of a VPC. Firecracker is extremely cool, and I'm very glad to
see the improved perf at the VM level, but that's not my bottleneck.

Thankfully, in my case, I have a very steady flow of data so I don't expect
too many cold starts.

~~~
Tehnix
I know that they are actively looking at it.

One thing though: do your lambdas need both public and private access?
Otherwise you can place them in a private-only subnet, since the slow part is
the ENI for the NAT gateway.

~~~
staticassertion
They all need to access S3, which I believe requires public.

------
maxharris
[https://nodesource.com/products/nsolid-aws-lambda](https://nodesource.com/products/nsolid-aws-lambda)

------
matte_black
Anyone come up with solutions for Lambda functions to effectively use database
connection pools without the use of a dedicated server?

~~~
pmohan6
Can't wait for that. Although I like DynamoDB, I'd love to connect to a
Postgres RDS deployment from Lambda.

~~~
matte_black
Agreed, this really is the last thing holding me back from going all out on
Lambda.

------
ziont
any improvements on cold start? this is a deal breaker for me. also doesn't
seem cheaper than running a $5/month DO

~~~
zild3d
most serverless frameworks take care of cold starts, no? zappa by default will
just keep your functions warm

~~~
ozten
I found this to be the best update on the cold start problem.
[https://mikhail.io/2018/08/serverless-cold-start-war/](https://mikhail.io/2018/08/serverless-cold-start-war/)

~~~
ziont
skimmed it and seems like < 2~3s is the expectation from AWS Lambda.

So I will go with AWS Lambda. But it seems like having a lot of dependencies
increases the lag; I wonder if there's a way to put the source on a diet.

------
mtnGoat
I am still waiting for proper PHP support and the ability for Lambdas to use a
VPC to connect to RDS servers; leaving my DBs wide open is kind of annoying.
They say it's possible, but I've had 4 engineers try and no one can get it to
work.

These are issues that Azure has already solved, which makes me wonder how much
longer I will stay with AWS.

~~~
illumin8
Lambda has had VPC support for over 2.5 years now:
[https://aws.amazon.com/blogs/aws/new-access-resources-in-a-v...](https://aws.amazon.com/blogs/aws/new-access-resources-in-a-vpc-from-your-lambda-functions/)

~~~
mtnGoat
yes, it has VPC support, but for some reason as soon as we turn off public
access for MySQL in RDS, Lambdas can no longer connect.

------
funruly
Nice! Though we've been playing with this for Azure Functions over the last
few months.

~~~
funruly
[https://github.com/Azure/azure-functions-host/wiki/Language-...](https://github.com/Azure/azure-functions-host/wiki/Language-Extensibility)

------
wnevets
Layers sound really nice.

------
mychael
Missed opportunity to just support one runtime to rule them all: Docker
containers.

~~~
otterley
I suspect cold-start performance of arbitrary Docker containers would be
intolerable for most customers, in light of the sizes and number of layers of
many of the images seen in the wild. Most people aren't building "FROM
scratch" images yet, if they ever will. With single binaries, or scripting
languages with well-defined runtime environments, it is far easier to meet
customer-demanded SLOs when building a serverless platform.

(Disclaimer: I work for AWS, but this is strictly my personal
opinion/observation.)

~~~
jacques_chester
This is in part what Cloud Native Buildpacks is working to solve. Can you come
find us ([https://buildpacks.io](https://buildpacks.io))? I think it would be
really helpful to have CNB supported in/as Layers.

