
Announcing Go Support for AWS Lambda - jitl
https://aws.amazon.com/blogs/compute/announcing-go-support-for-aws-lambda/
======
ramses0
What's the current state of the art for "I know I use AWS for my 9999-server
production instance, but how can I test the majority of my services offline on
an airplane w/o spending $199.99/mo to keep a fleet of test AWS
servers/services at the ready"?

Last I looked, the "fake AWS" tools, specifically for Lambda, supported Python
but not JS/Node, which didn't match my use case. Go is now on the radar as a
usable programming language for this purpose, but what are the chances of also
being able to use it with local test deployments?

~~~
cachesking
Unit tests are all you should really need. Mock the AWS services and make sure
to assert, where you can, the arguments being sent to those services and you
should have enough coverage to get the job done.
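One way to sketch this in Go, assuming the handler reaches AWS only through a hand-rolled interface (the `Uploader` type, `SaveReport`, and the bucket/key names here are all hypothetical, not part of any SDK):

```go
package main

import "fmt"

// Uploader is a hypothetical seam over the AWS call your code makes;
// in real code it would wrap something like s3.PutObject.
type Uploader interface {
	Upload(bucket, key string, body []byte) error
}

// mockUploader records the arguments it was called with so a test
// can assert on them, as the parent comment suggests.
type mockUploader struct {
	bucket, key string
	body        []byte
}

func (m *mockUploader) Upload(bucket, key string, body []byte) error {
	m.bucket, m.key, m.body = bucket, key, body
	return nil
}

// SaveReport is the unit under test: it depends only on the
// interface, so tests never touch real AWS.
func SaveReport(u Uploader, name string, data []byte) error {
	return u.Upload("reports", "out/"+name, data)
}

func main() {
	m := &mockUploader{}
	if err := SaveReport(m, "jan.csv", []byte("a,b")); err != nil {
		panic(err)
	}
	fmt.Println(m.bucket, m.key) // the arguments the "service" received
}
```

The interface is the whole trick: production code injects the real SDK client, tests inject the recorder and assert on the captured arguments.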

------
Corrado
I guess I'm confused as to how the support of one compiled language differs
from any other compiled language; couldn't you just replace "Go" with "Rust"
at this point? I understand how Lambda would have to support scripting
languages (NodeJS, Python, etc.) with built-in runtimes and such, but if you
are going to compile down to a binary object anyway, why does it matter what
language the binary object was written in?

Does it have to do with supporting libraries being pre-installed on the Lambda
image or something? Couldn't they build a Lambda image that handles any
language that can interface to common "C" headers?

~~~
owaislone
The answer lies in understanding how Lambda actually works. Lambda does not
start a new copy of your program every time the function is called. The real
entry point to the program is somewhere else, hidden from customers. That
program starts up and then waits for requests like a web server. On every
request, it invokes your lambda function. It literally invokes the function; it
does not spin up a new instance of the program. The program Lambda spawns keeps
running until no new requests come in for a certain (undocumented) amount of
time. This is also why the first invocation of a lambda function after a long
idle period is slower.

There are also other things to take care of: online code editor support, making
sure everything works and there are no bugs, support for compiling programs
edited directly in the browser, etc. Lambda support also means that the
language has access to its execution context through an API, and there is no
better way to provide that than to implement it in the same language and just
pass an object to your handler.

Lambda functions in reality are not one-off CLI commands that boot and
terminate quickly. They actually spin up a long-running service, and the same
instance handles multiple requests until it is killed off for being inactive
too long. Lambda support means AWS implementing a long-running server that
integrates with your code, handles things like errors, exceptions, and logging,
and provides you with an API to the execution context.
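The long-running model described above can be sketched very loosely (the real runtime's entry point, wire protocol, and types are hidden; the channel-based loop here is purely illustrative) as one resident process repeatedly calling the same handler:

```go
package main

import "fmt"

// handler is the customer's code: a plain function, not a program
// with its own main loop.
func handler(event string) string {
	return "processed: " + event
}

// runtimeLoop sketches what the hidden Lambda entry point does: one
// long-lived process pulls invocations off a queue and calls the same
// handler each time. Nothing is re-spawned between requests.
func runtimeLoop(events <-chan string, results chan<- string) {
	for ev := range events { // stays resident until the platform kills it
		results <- handler(ev)
	}
	close(results)
}

func main() {
	events := make(chan string, 3)
	results := make(chan string, 3)
	for _, ev := range []string{"a", "b", "c"} {
		events <- ev
	}
	close(events)
	runtimeLoop(events, results)
	for r := range results {
		fmt.Println(r) // three invocations, one process
	}
}
```

Cold start corresponds to creating the process before the first iteration; every later event reuses it.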

~~~
oelmekki
Thanks for the explanation, it's somewhat clearer to me now. There's one thing
I still don't understand: doesn't this mean it's basically just a small server
app with a standardized API (as in, the API must be implemented in a certain
way that allows calling its endpoints as if we were calling functions
directly)? What is the specificity of lambda? (Besides the pricing model, which
charges per request rather than for app execution time.)

~~~
IanCal
Yes, in a way. You can also use it to run code on data in S3 or other aws
services.

For example, I need to munge some data somewhat regularly, which is stored as
JSON on S3. I can trigger it, have about a thousand processes running within a
few seconds, and my large chunk of JSON is processed very quickly. I don't need
to maintain any systems for this to work, and it's cheap enough that it's not
worth worrying about.

Though S3 Select might supersede a bunch of my use cases.

~~~
oelmekki
Oh, I see: it's not just that it's a simple app with payment per endpoint
access, it's that it can spawn tons of instances of this app very quickly.

Am I correct to assume the ideal use case for that is to replace what is
typically implemented as background workers for web services?

~~~
IanCal
I've not used it much outside my own area, but yes, I think that's a very good
use case, particularly if there's a "process some files" part. The classic
example is thumbnails for images: you can point a very simple bit of
ImageMagick-calling code at a bucket and say "any time a file is dropped in
here, create a thumbnail and put it over here", and it'll scale when you have a
bunch of images dropped in and cost nothing when nothing is happening.

It's not perfect for all use cases, although some people do seem to enjoy
running as much stuff in Lambda as possible, but when it matches what you need
it can be a surprisingly simple solution to quite annoying problems.
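A rough sketch of that thumbnail pattern in Go, with a simplified stand-in for the S3 event record (the real payload in aws-lambda-go's `events.S3Event` is much richer, and the ImageMagick call is only hinted at in a comment):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// S3Record is a simplified stand-in for one record of the real
// S3 event payload delivered to a triggered function.
type S3Record struct {
	Bucket, Key string
}

// HandleUpload is the shape of a thumbnailing handler: for each newly
// dropped image, derive an output key; real code would shell out to
// ImageMagick and write the result to a destination bucket.
func HandleUpload(records []S3Record) []string {
	var thumbs []string
	for _, r := range records {
		if !strings.HasSuffix(r.Key, ".jpg") {
			continue // only thumbnail images
		}
		// e.g. exec.Command("convert", in, "-thumbnail", "128x128", out)
		thumbs = append(thumbs, "thumbs/"+path.Base(r.Key))
	}
	return thumbs
}

func main() {
	out := HandleUpload([]S3Record{{"photos", "cats/a.jpg"}, {"photos", "notes.txt"}})
	fmt.Println(out) // only the image produced a thumbnail key
}
```

The scaling behavior described above comes for free: each dropped file (or batch) becomes one invocation of this handler on the platform's resident processes.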

------
drej
First Python 3, now Go, fantastic. Plus that X-Ray support looks great.

The only thing that surprised me is that the Body struct field is a string, not
an io.Reader. Can we not stream large bodies through API Gateway into a Lambda?
Does it always read the whole thing before passing it on to my Lambda? Or
what's the reason behind the string? I haven't tried this yet, so I'm curious.
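For binary payloads, the string body is paired with a base64 flag. A minimal sketch, using a simplified two-field stand-in for the real API Gateway proxy event struct (which lives in aws-lambda-go's events package and carries many more fields):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// Request mirrors the two relevant fields of the API Gateway proxy
// event; this is a simplified stand-in, not the real type.
type Request struct {
	Body            string
	IsBase64Encoded bool
}

// DecodeBody shows why a string body still works for binary payloads:
// API Gateway base64-encodes them and the handler decodes. There is
// no streaming; the whole body is buffered before the handler runs.
func DecodeBody(r Request) ([]byte, error) {
	if r.IsBase64Encoded {
		return base64.StdEncoding.DecodeString(r.Body)
	}
	return []byte(r.Body), nil
}

func main() {
	b, _ := DecodeBody(Request{Body: "aGVsbG8=", IsBase64Encoded: true})
	fmt.Println(string(b)) // hello
}
```

The fully-buffered body is also consistent with the hard payload size limits mentioned in the reply below this pattern of question: there is simply never a body too large to hold in one string.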

~~~
alexcasalboni
The maximum payload is 6MB for synchronous invocations and 128KB for
asynchronous invocations.

More info here:
[https://docs.aws.amazon.com/lambda/latest/dg/limits.html](https://docs.aws.amazon.com/lambda/latest/dg/limits.html)

------
leifg
I'm still wondering why Lambda doesn't simply support Docker containers. That
would put an end to all these requests like "Please support
$my_favorite_language on Lambda".

~~~
otterley
Because it’s a function execution system, not an application execution system.
If you need the latter, you can use ECS or K8S.

~~~
cle
Fargate is probably the closest AWS offering.

------
dhbradshaw
Go seems like a great fit for this application.

Would love to see stats comparing cold start times etc. for go vs.
java/python/js. One would guess that it would be faster, but measurement
trumps guessing.

~~~
mathw
Absolutely. I've done some work with Lambda, and the cold start times for Java
and C# were horrendous. Node and Python were pretty fast. I would expect Go to
be more the latter than the former, with the added advantage of running really
fast as well (and not being JavaScript), so my advice to my former colleagues
on implementation language for a Lambda-based solution would be different if
that pans out...

~~~
chronid
In my case, when I gave Java a bigger allocation of RAM I got far better
results (it usually got even faster than the equivalent Python/NodeJS code).

At the time at least (I don't know about right now, it's been a long while
since I touched a lambda function) it seemed that a bigger RAM share meant more
CPU, which was not written anywhere in the documentation. And JVM startup times
with crappy CPUs are indeed horrible. :)

------
pixie_
For Go and/or C# Lambda support, would it be worth even running the garbage
collector, or would it be better to just allocate a block of memory and clean
it up when the function ends?

Side note: I think that should be an option for web servers as well, for
languages with managed memory. Light, isolated, non-threaded API endpoints
shouldn't be interrupted by garbage collection.

~~~
jitl
We do this during request execution in our Ruby services. We tell the GC to
pre-allocate a few GBs of memory, and then explicitly suspend GC until the end
of the request. Then we do a single GC run.

~~~
wickawic
Very interesting! Have you experienced improvements from this approach?

~~~
jitl
Absolutely. It's a critical part of our runtime configuration, but we're
hardly the first people to think of this approach.

I remember reading some discussion in an HN thread about FinTech developers
who run Java with 100GB+ heaps and no garbage collector, and then reboot the
application after the markets close. I can't find that specific thread, but I
did find this which has a few nuggets on the same subject:
[https://news.ycombinator.com/item?id=6131786](https://news.ycombinator.com/item?id=6131786)

~~~
jnordwick
While that does happen sometimes, where I've worked it is more common to write
in a garbage-less style (basically never allocating, or pooling everything),
since JIT time matters at startup, especially for certain classes of
strategies.

I've seen applications run and never GC after initial startup until they get
bounced for an update. Another problem with the big heap is that you still have
TLAB issues if you keep allocating.

There is a new-ish PoC collector called the null collector that literally does
nothing, not even, I believe, instrumentation of write barriers.

We always see idiomatic programs benched against each other, but I'd really
like to see high-performance pure Java against high-performance pure Go and
others. Some of the low-latency Java tricks we use I'm not sure can even be
copied in other high-level languages (not including C/C++).

~~~
wging
do you mean Epsilon?
[https://bugs.openjdk.java.net/browse/JDK-8174901](https://bugs.openjdk.java.net/browse/JDK-8174901)

~~~
jnordwick
That's it. I was looking all over for it. Thanks.

------
schappim
I wish they'd do the same for Ruby... hopefully Ruby is next!

------
srebalaji
What about Ruby.. ?

------
baybal2
An interesting service, this Lambda. Amazon wants to bill people for basically
hosting xinetd scripts. I wonder how much money they make from it.

~~~
DenisM
They make a lot of money, and I am only too happy to pay for what they give
me. Countless hours, or months, of my life saved.

~~~
iMuzz
What kind of things do you run on Lambda?

------
jrs235
Dupe:
[https://news.ycombinator.com/item?id=16153780](https://news.ycombinator.com/item?id=16153780)

~~~
jitl
The link there pointed to a Github repo with an almost entirely empty README,
so I posted the official blog post with examples and a full walkthrough.

