
Stateful Experiments on AWS Lambda - cmeiklejohn
http://christophermeiklejohn.com/lasp/2018/03/10/serverless.html
======
teraflop
The article's conclusion -- that Lambda is cheaper than EC2 instances for this
use case -- is completely wrong. The author only counted the per-request
overhead, and neglected to add the actual cost of the GB-hours consumed. If
each container uses 512MB of memory, then keeping one request running at a
time for an entire month costs about $22. For comparison, a t2.nano instance
with the same amount of memory costs $4/month.
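
For concreteness, here's the arithmetic, assuming Lambda's published duration rate of $0.00001667 per GB-second and the t2.nano's us-east-1 on-demand rate of $0.0058/hour:

```python
# One 512 MB Lambda container kept busy around the clock for a 30-day month,
# at the published duration rate of $0.00001667 per GB-second.
GB_SECOND_RATE = 0.00001667
SECONDS_PER_MONTH = 30 * 24 * 3600          # 2,592,000 seconds

lambda_cost = 0.5 * SECONDS_PER_MONTH * GB_SECOND_RATE
print(f"Lambda: ${lambda_cost:.2f}/month")  # Lambda: $21.60/month

# The same 512 MB of memory as a t2.nano, at $0.0058/hour on-demand.
ec2_cost = 0.0058 * 30 * 24
print(f"t2.nano: ${ec2_cost:.2f}/month")    # t2.nano: $4.18/month
```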

Lambda is a value-added service _on top of_ EC2. It only makes financial sense
to use it when you _don't_ want something running constantly, or otherwise
have a way to take advantage of the extremely fine granularity in billing. (Or
if you're willing to pay a premium to have Amazon manage your process
lifecycles for you.)

~~~
cmeiklejohn
Good point.

But most of the post was meant as a joke, to see what I could push Lambda to do.

~~~
sitkack
Someone already published a paper that pushed those boundaries
[https://cs.nyu.edu/~anirudh/excamera.pdf](https://cs.nyu.edu/~anirudh/excamera.pdf)

------
alien_
Great article, and an interesting use of Lambda, thanks for sharing!

To answer your final question: I would give spot instances a try. I wrote a
spot instance automation tool, which you can check out at autospotting.org.
The latest developments from AWS on the spot market are real game changers; I
think most workloads can now safely run on spot, and my AutoSpotting tool
makes it a breeze to migrate from on-demand AutoScaling groups while keeping
them a bit more reliable than the native AutoScaling integration for spot.

As of a few months ago the pricing is much more stable than before: I've
rarely seen terminations, even over the maximum three months of history, for
instances that used to go bust multiple times a day. You also now pay for them
on a per-second basis, and you can hibernate the last one to keep the state of
the group while everything is down.

So my approach for this would be to have an AutoScaling group of the smallest
spot instances that can run your app. Scale them to N nodes right before your
experiment, then when you're done scale down to a single node, which you
detach and hibernate with API calls and use as the data seed next time.

Next time you re-attach the seed to the empty group, scale out to N once
again, and run your test. So you only pay for the length of your test, on a
per-second basis.

You can also keep the seed as an on demand node outside of the spot group and
have it run from the free tier if you still have some time left, or just
hibernate it as well.
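
The cycle above can be driven with a handful of API calls. Here's a hypothetical boto3 sketch (the group name, instance id, and N would be yours; the seed instance must have been launched with hibernation enabled; boto3 is imported lazily so the sketch reads without AWS configured):

```python
def scale_out(asg_name, n):
    """Scale the spot AutoScaling group to N nodes before the experiment."""
    import boto3  # imported lazily; requires AWS credentials to actually run
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=asg_name, DesiredCapacity=n)

def park_seed(asg_name, instance_id):
    """Keep one node as the data seed: detach it from the group, then
    hibernate it so its state survives while everything else is down."""
    import boto3
    boto3.client("autoscaling").detach_instances(
        InstanceIds=[instance_id],
        AutoScalingGroupName=asg_name,
        ShouldDecrementDesiredCapacity=True)
    boto3.client("ec2").stop_instances(
        InstanceIds=[instance_id], Hibernate=True)

def resume_seed(asg_name, instance_id, n):
    """Next run: wake the seed, re-attach it, and scale back out to N."""
    import boto3
    boto3.client("ec2").start_instances(InstanceIds=[instance_id])
    boto3.client("autoscaling").attach_instances(
        InstanceIds=[instance_id], AutoScalingGroupName=asg_name)
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=asg_name, DesiredCapacity=n)
```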

------
mncharity
Keith Winstein (Stanford) et al's gg [1] is also fun. Sort of `make -j1000`
for 10 cents. Create a deterministic-compilation model of a C build task,
upload the source files, briefly run a lot of lambdas, download the resulting
executable. (Though it's more general than that.)

For folks long despairing that our programming environments have been stuck in
a rut for decades, we're about to be hit by both the opportunity to reimagine
our compilation tooling, and the need to rewrite the world again (as for
phones) for VR/AR. If only programming language and type systems research
hadn't been underfunded for decades, we'd be golden.

[1] [https://github.com/StanfordSNR/gg](https://github.com/StanfordSNR/gg) ;
video of talk demo:
[https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s](https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s)
; some slides (page 24):
[http://www.serverlesscomputing.org/wosc2/presentations/s2-wo...](http://www.serverlesscomputing.org/wosc2/presentations/s2-wosc-slides.pdf)

------
ghayes
I've found out-of-the-box distributed Erlang difficult to run in environments
with a lot of instance churn (e.g. containerized deployments on Kubernetes),
so much so that I generally opt not to connect my nodes for Erlang message
passing. Does anyone here have experience running Lasp in Kubernetes? Is Lasp
effective in monitoring and adjusting to new or dead nodes?

~~~
di4na
Lasp uses Partisan for its distribution, not the default Erlang distribution.

Partisan was built exactly to deal with a high level of churn, specifically
for edge computing over spotty networks. So yes, it does.

[https://github.com/lasp-lang/partisan](https://github.com/lasp-lang/partisan)

------
jcora
On mobile the characters on your site are literally like a millimeter wide;
you should fix that.

~~~
cmeiklejohn
Thanks for the feedback.

~~~
jcora
Np mate :) I recommend Ghost btw!

