
Serverless Computing in 2018 - emforce
https://medium.com/@elliot_f/how-serverless-computing-will-change-the-world-in-2018-7818fc06b447
======
jknoepfler
I'm personally betting against serverless computing in the enterprise, except
for very specific uses.

In general, serverless computing tasks cost 100-1000× more per CPU cycle than
a counterpart running on a cloud server.

That means that for tasks that are running regularly, they are a huge loser.

They have unpredictable latency, and you lose control of things like logging
to disk (you get locked into CloudWatch or whatever, which means yet more
money and tooling).

The notion of scale seems promising at first, but I would be very skeptical
before I accepted the claim that someone can scale Lambda volume from 10 to 10
thousand executions per minute without setting off alarms, dropping traffic,
and probably having an engineer at AWS Lambda get paged.

So what are Lambdas good for? I'd argue that they make good glue for
deployments or batch processes that happen infrequently, don't require
predictable performance, and for which having a dedicated server is a little
wasteful. I'm skeptical even of those. Remember, a micro instance is in the
free tier.

I have had success building prototypes with AWS Lambda, so there's that aspect
too.

I'd be interested to hear a counterpoint from someone who uses serverless
compute in a large enterprise setting in place of a traditional server stack.

~~~
illumin8
> In general, serverless computing tasks cost 100-1000× more per CPU cycle
> than a counterpart running on a cloud server.

Citation needed. With Serverless you have to do zero system administration,
patching, maintenance, security hardening, etc.

In the early 1900s, cars required tons of maintenance. You had to hand crank
the engine to start it, and if you went on a long trip, you needed to bring
along a mechanic, as your car was likely to break down. I know it's tempting
for engineers like ourselves to believe we can do system administration better
than Amazon, but it's just not true. Do you think automobile owners long for
the days when they had to hand crank their engine to start it, and do
maintenance just to travel 100 miles? In a decade or two, the idea of doing
manual system administration, load balancing, scaling, etc, will be just as
archaic.

~~~
013a
Nearly every trucking firm I've worked with has their own in-house shop, with
mechanics and everything. Turns out, outsourcing the maintenance of vehicles
that experience tens of thousands of miles every quarter is more expensive
than just paying your own people to do it. Same with municipal fleets. Same
with taxi fleets.

There are many variables that enter into a cost-benefit calculation when
thinking about this. But what it really comes down to is Scale. Projects on
the low end of scaling will find it cheaper to outsource everything they can.
But at higher scale? Think about it like this: Instead of building their own
data centers, does it make sense for Amazon or Google to just colo space in
other people's data centers to build AWS or GCP? Of course it doesn't.

So, those are the bounds of our "scale scale". At some point it becomes
cheaper to move off managed products and onto VMs in the cloud. At some point
after that, it becomes cheaper to deploy your own metal. The challenge
companies need to solve is determining where they're at, and cost-optimizing
parts where they can save money. A great example is Netflix; they're nearly
totally on AWS, but they've cost-optimized their CDN significantly to save
money on bandwidth by getting out of AWS.

QoS is another consideration. Deploying your own metal is hard, and the
quality of the service you provide might degrade. But don't marginalize the
other end of that scale; there are QoS downsides to going fully managed like
with Lambda. It's another variable to consider.

Lambda -> EC2 is the same kind of cost-optimization movement, just further
down in scaling. At some point Lambda _will_ become too expensive. Then,
you'll probably want to move to something like managed Kubernetes/ECS, which
has almost no administrative overhead compared to just EC2 instances but still
makes scaling relatively easy.

~~~
illumin8
I agree that you need to make the right decision depending on what scale you
are at. The reality is that most companies are at the scale of a small shop,
and it would be better (total cost of ownership) to use Amazon's highly
automated services.

However, every sysadmin I've met (and I used to be one) would like to think
they are managing Walmart's trucking fleet, when the reality is they are more
like Joe's local furniture delivery.

------
throwaway13337
I hear a lot of evangelism but no success stories aside from one-off use cases
that almost seem designed to benefit from the autoscaling.

To me, this is marketing and second-tier marketing by consultants hoping to
sell you their services.

Serverless makes it harder to test locally, harder to move between hosts,
often much slower (cold starts), and forces you onto the network to do
things like persist to disk, run queues, etc. Of course, all these services
are provided by the megacorp in charge of your servers, so the lock-in goes
deeper.

The only ones that should be considering serverless are companies that need to
scale up and down operations at a moment's notice. That is, if you're huge or
doing a lot of short-burst big data processing. Otherwise, it's all Kool-Aid.

Enjoy the open standards we have. Let's work on improving and evangelizing
those instead of megacorp lock-ins.

~~~
RhodesianHunter
My team (~20 people within a Fortune 50) could be considered a success story.

We use Lambda for ETL-type work, health checks, web scraping, and
infrastructure automation.

> Serverless makes it harder to test locally, harder to move between hosts

Most of our code is written in Java. We have a main method, which calls a
static method. The only difference between running locally and running in
Lambda is that Lambda invokes the static method directly, while we call the
main method locally.

This also pretty much eliminates lock-in since all of the code behind the
static method could without much trouble be moved behind an API endpoint
running in a more traditional environment.
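
The pattern is simple enough to sketch. Here is a minimal version in Python
(for brevity; the team above uses Java, and every name here is illustrative,
not their actual code): one shared function holds the business logic, a
Lambda handler wraps it for deployment, and a main entry point wraps it for
local runs.

```python
# One codebase, two entry points: Lambda calls handler(), local runs use main().
# All names are illustrative, not the commenter's actual code.

def do_work(records):
    """The actual business logic, shared by both entry points."""
    return [r.upper() for r in records]

def handler(event, context):
    # Entry point when deployed to AWS Lambda.
    return do_work(event.get("records", []))

def main():
    # Entry point when running locally or behind a traditional server.
    print(do_work(["local", "run"]))

if __name__ == "__main__":
    main()
```

Since `do_work` knows nothing about Lambda, moving it behind a conventional
API endpoint later is mechanical, which is the lock-in point made above.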

> The only ones that should be considering serverless are companies that need
> to scale up and down operations at a moment's notice. That is if you're huge
> or doing a lot of short-burst big data processing. Otherwise, it's all
> Kool-Aid.

Respectfully disagree, as first hand experience has taught me otherwise.

Just like any tool, Lambda is extremely effective when used for the things it
excels at.

~~~
xstartup
Assuming most companies start their ETL jobs at midnight UTC, isn't
serverless more expensive when everyone needs it at the same time?

~~~
RhodesianHunter
You're charged by memory utilization and compute time.

The number of others using it at the same time has no bearing on price.
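
To make the billing model concrete, here is a back-of-envelope sketch in
Python. The per-request and per-GB-second rates are AWS's published us-east-1
figures from around the time of this thread (2018); treat them as
illustrative and check the current price list before relying on them.

```python
# Lambda bills per invocation plus per GB-second of configured memory.
# Other tenants' concurrent usage does not change your bill.
# Rates are circa-2018 us-east-1 figures; verify against current pricing.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # dollars per invocation
PRICE_PER_GB_SECOND = 0.00001667       # dollars per GB-second

def lambda_cost(invocations, memory_gb, avg_seconds):
    compute = invocations * memory_gb * avg_seconds * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_REQUEST
    return compute + requests

# A nightly ETL run: 1,000 invocations at 1 GB for 60 s each.
print(round(lambda_cost(1_000, 1.0, 60.0), 2))  # about a dollar per night
```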

------
iamleppert
Wow, there's so much misinformation and outright lies in the comments here.

You do not have to "notify" anyone at AWS to scale your Lambda up to your
account's invocation or spending limits.

Also, you can run whatever code you want, native libraries, etc. If you need a
cache you can use the disk or connect to some in-memory data store. You can
log to wherever you like, or throw your logs away.

Local development is easy and there are several frameworks that abstract away
AWS' wart-like configuration API.

The limitations with lambda mostly have to do with the fact you're basically
doing distributed processing with finite resources, and under an ephemeral
model. Many libraries aren't written in that way, so you'll need to do some
first principles work here and there but I'd argue the resultant architecture,
independent of AWS, is worth it. You can share databases, S3 files, whatever.

Generally speaking, Lambda is good for workloads that aren't always at 100%
(basically almost every kind of long-tail application in computing). The cost
per CPU cycle is made up when you consider this, and there's nothing to stop
people from setting up Lambda-style processing on their own clusters, machines,
or bare metal if you think you can optimize for utilization better/cheaper
than Lambda/AWS.
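
That utilization argument can be made concrete with a toy model. The VM price
below is an assumed figure for illustration, not a quoted AWS price; the
Lambda rate is the circa-2018 published $/GB-second. The point is the shape
of the crossover, not the exact numbers.

```python
# Toy break-even model: an always-on VM bills every hour, busy or idle;
# Lambda bills only for busy seconds. Below some utilization threshold,
# Lambda is cheaper. INSTANCE_PER_HOUR is an assumption, not a price quote.

INSTANCE_PER_HOUR = 0.05           # assumed always-on VM, dollars/hour
LAMBDA_PER_GB_SECOND = 0.00001667  # circa-2018 Lambda compute rate

def monthly_cost_vm(hours=730.0):
    return INSTANCE_PER_HOUR * hours

def monthly_cost_lambda(busy_fraction, memory_gb=1.0, hours=730.0):
    busy_seconds = busy_fraction * hours * 3600
    return busy_seconds * memory_gb * LAMBDA_PER_GB_SECOND

def break_even_utilization():
    # Smallest utilization (in 5% steps) at which the VM becomes cheaper.
    for pct in range(0, 101, 5):
        if monthly_cost_lambda(pct / 100) >= monthly_cost_vm():
            return pct
    return None

print(break_even_utilization())  # 85 (percent), under these assumptions
```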

If you're in the 1 or 2% of applications where your system is latency
sensitive or always at 100%, you either need to consider other architectures
or ways of solving your problem. If that's not possible, lambda might not be
for you.

If you're someone that doesn't like new technology and is resistant to change,
by all means continue to spend your time provisioning and tending to your
flock of instances. You're just going to pass that cost on to your customer or
user anyway.

~~~
thezilch
You will absolutely need to notify AWS to scale to "world changing" levels or
take on "massive surges" without spewing errors. There are hard limits that
require filling out a web form and waiting for your ticket to be processed.
And Lambda is backed by EC2, which itself has functional limits (AWS caps how
quickly you can add concurrency per minute) and physical ones (AWS has finite
compute, and will ask for advance notice of your "massive" compute needs; you
can't just get infinite cloud off the shelf).

------
ridruejo
I see serverless as the new Visual Basic (in a good way!). It allows you to
glue together disparate cloud APIs and services (the equivalent of OCX
controls in VB) and create useful applications. Though we are still in the
early days, serverless platforms will continue to get easier to use and lower
the barrier to entry to cloud application development for a whole new group of
developers. We built our own open source serverless solution on top of
Kubernetes, called Kubeless: [http://kubeless.io](http://kubeless.io)

~~~
jacques_chester
I had occasion to look at Kubeless and Fission early in 2017. At the time I
recommended we look more closely at Fission, but I am more partial to the
Kubeless view that startup time is a matter to push down to Kubernetes.

The decision made was to implement another FaaS instead (Project Riff). It
resembles Kubeless in a lot of respects. And now I'm assigned to work on it.
Funny how things turn out.

~~~
ridruejo
Thanks, I was not aware of Riff. It does indeed seem very similar to Kubeless
in its use of Kubernetes for dealing with the low-level mechanics of scaling,
its usage of CRDs, etc. I would encourage you to revisit Kubeless, as a lot of
progress has been made since early 2017 and the project is gaining momentum
with contributions from SAP, BlackRock, Microsoft and others. Here's Seb's
presentation from the last Kubecon:
[https://www.youtube.com/watch?v=8P-aXKylCVs&index=94&list=PL...](https://www.youtube.com/watch?v=8P-aXKylCVs&index=94&list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb)

~~~
jacques_chester
I'll draw attention to the similarities when I'm back at work. Convergent
evolution is often informative.

------
CSDude
We are using AWS Lambda in production heavily. It is basically an unhealthy,
abusive love-hate relationship. It works great for event triggering
(CloudWatch events, SNS topics, Kinesis, etc.) when you combine them. However,
the lack of ability to monitor it properly and the inflexible parts are a
burden. I wish we were simply allowed to pass Docker images, with AWS handling
request/input passing and provisioning properly.

~~~
whoisjuan
I think where Serverless wins against managed containers is that you can
start creating microservices that solve problems very rapidly. Need a service
to manipulate an image, extract data from a file, make some rare business
logic calculation, or simply keep your storage clean? Lambda + API Gateway,
and an hour later you have a fully managed endpoint that can do exactly those
things in isolation.
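
A minimal sketch of that pattern, assuming API Gateway's Lambda proxy
integration (the toy "business logic" and all names here are made up for
illustration):

```python
import json

# One small, isolated job behind an API Gateway proxy endpoint.
# The proxy integration hands the handler a dict carrying the query string
# and expects back a dict with statusCode/headers/body.

def handler(event, context):
    params = event.get("queryStringParameters") or {}
    width = float(params.get("width", 0))
    height = float(params.get("height", 0))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"area": width * height}),
    }
```

Wiring this function to a single API Gateway route is the whole deployment;
there is no server, load balancer, or process supervisor to set up.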

------
yegle
A quote from my colleague: despite the serverless world we provide to our
customers, people still want SSH access to their app.

Also a quote from an unknown source: there's no such thing as the cloud; it's
just someone else's servers.

~~~
illumin8
Who cares if they can SSH to their app? That's like saying "no matter how nice
my natural gas furnace is, I still want to shovel coal." The market will speak
- people don't like SSHing into servers any more than they like shoveling coal
into their furnace.

------
api
Coding everything to run on proprietary mainframes (sorry, cloud services) is
going to "change the world." Sigh.

~~~
Agebor
In 5-10 years, you'll click a button to move all your functions to a
decentralised computing fog.

------
hobaak
sorry, not much substance to the content.

~~~
tzahola
>How Serverless Computing _Will Change the World_ in 2018

You shouldn’t have expected much in the first place.

------
rcarmo
I just don’t “feel” this article, and I use Azure Functions heavily.

My money’s on Istio (or something like it) and managed Kubernetes, since they
can provide the right degree of insight/monitoring and control (any runtime,
as long as you can pack it into a container) and don’t restrict you to any
platform.

~~~
emforce
I would be interested in getting more feedback on the article from you. What
is it you don't feel, and what could I improve upon? Cheers, Elliot

~~~
rcarmo
I think the Unity analogy doesn’t make sense (I use Unity on occasion for
hobby stuff) and that the article is too shallow. It lacks concrete use cases
and measurable quantities.

~~~
emforce
I believe Unity has ultimately changed the way game developers work on indie
titles, though. It's undoubtedly had a huge effect on all levels of indie
development due to its ease of use and the fact that it handles a hell of a
lot of the complexity for you. Serverless does something similar.

But I appreciate the feedback. It was somewhat hastily written as I was
preparing for the New Year celebrations, haha, but I will take this feedback
on board for my next articles!

------
empath75
In my experience, serverless, at least in aws, is primarily useful as
automated glue connecting various aws services together.

I think as a platform to build your core application on, it creates a lot of
vendor lock in for very little benefit that couldn’t be achieved using a
container scheduler like kubernetes.

------
schappim
Does anyone know of a major service that natively supports Ruby?

I have previously used Lambda with Traveling Ruby and Iron.io.

------
_Marak_
I've had good success this year running and hosting serverless workloads on my
own hardware, both locally and in the cloud, using my own application server:
[https://github.com/stackvana/microcule](https://github.com/stackvana/microcule)

------
jacques_chester
I've recently been allocated to work on a FaaS platform[0]. A few months back
a colleague and I researched and wrote about FaaSes more generally.

I think FaaSes have value, but I am still on the fence about their changing
"everything".

To start with ...

> _When it comes to services such as AWS Lambda, when you expose an endpoint,
> you don’t necessarily have to worry about massive surges in traffic._

You sorta still do. The latency of a cold start is still higher than that of a
warmed-up chunk of code. Can Node start, interpret code, perform logic and
respond in less than 100ms? It certainly can.

Meanwhile, a warmed up JVM can perform the same logic in nanoseconds.

Lambda and most other designs I've seen retain warm copies. Either loaded in
advance (I saw an AWS slide deck saying that a large part of their secret
sauce is load prediction) or kept around once it has launched.

But you will always have a bimodal distribution, probably quite wide, because
FaaSes typically scale down to zero.

Hugging the curve is never free. Either you accept idle capacity (enough to
cover a start of a surge while you bring additional capacity online) or you
accept lumpy latency.

I suspect FaaSes will shine in situations which are sensitive to billing cost
but not latency. Where latency is non-negotiable, you will either see folks
sticking to services or some mix of services that are augmented by functions
built from the same codebase.

> _These Serverless cloud providers constantly monitor and manage the
> underlying fleet of servers running your code._

PaaSes do this too. It's one of the main selling points.

> _Thankfully, every endpoint you set up can use a different language
> runtime._

This isn't really true. FaaSes currently provide a closed list of supported
languages. Implementations vary, even within a single platform.

What's missing here is an open extension point. I have internally argued, and
I will be continuing to argue, that we already have one: buildpacks. The
concept is well-understood, the tooling and testing is well in hand and we
have a great deal of expertise in this area.

In particular, it means we don't need to reinvent the wheel. Identifying that
code is meant to be mounted in a FaaS is no different in kind from detecting
that it is intended to be run as a Rails app or a vanilla Rack app.

What's missing from the discussion is what is missing from FaaSes: tools for
composing larger systems. Most efforts (e.g. IBM's composer[1] tool or
Lambda's step functions) look at creating state machines wrapped around functions. I
think this is the wrong approach. Composition should be declarative, mediated
through state, with no logic outside of the functions. My model here is how
Concourse defines tasks. I've been making this argument too and I hope we'll
bring to FaaS users the kind of lego-brick composability that Concourse users
enjoy.

[0] [https://pivotal.io/platform/pivotal-function-service](https://pivotal.io/platform/pivotal-function-service), based on [https://projectriff.io/](https://projectriff.io/)

[1] [https://github.com/ibm-functions/composer](https://github.com/ibm-functions/composer)

~~~
emforce
Hi Jacques, I really appreciate your comment!

1\. Yeah, there are always going to be situations where low latency is a must.

2\. PaaSes most certainly do. I'm a huge advocate for Cloud Foundry usage
within my place of work and help onboard people for just this reason. FaaS
will simply provide one extra layer of abstraction, so that developers won't
necessarily have to deal with larger frameworks in situations where that
doesn't make sense.

3\. This is a really interesting point of view and I'm inclined to agree with
you; there does need to be extensibility that some of these platforms don't
currently offer.

P.s. I'm very much looking forward to working with Pivotal's function service
offering once it is made available to us!

------
pwaai
Serverless is the new Blockchain: hype overrides all arguments against its
inefficiencies.

------
macawfish
i wanna see "serverless computing" on p2p mesh networks!

~~~
braderhart
Thank you! Isn't this what serverless should really be about... federated and
distributed cloud platforms? Not another buzzword from whatever cloud
provider, which essentially means more proprietary systems that benefit their
profits and not technology as a whole.

------
thezilch
> you don’t necessarily have to worry about massive surges in traffic. The
> underlying system will automatically handle things such as load-balancing
> and the provisioning of appropriate infrastructure in order to meet any
> massive surges in traffic.

This is wrong. Most cloud providers, AWS included, will _require_ a heads-up
for even small surges. There are also caps on capacity; in fact, you need to
contact them to lift said caps.

Is it solving the market of oversaturated shared VPSes that run Node, Python,
or Go? Call me when you have PHP.

~~~
risaacs99
This is just not true. A brand-new account on AWS comes with a default Lambda
limit of 1000 concurrent invocations, with up to 2 cores and 3 GB per
invocation. That is a massive amount of computing power. The load balancer
(typically API Gateway in front of Lambda for a serverless app) will scale
very quickly to any request volume, and no one needs to be notified ahead of
time.

~~~
always_good
AWS's Elastic Load Balancer took minutes to scale up, such that it was pretty
useless; god forbid you're getting DoSed.

I have a hard time believing that their other products don't have the same
issue when auto-scaling.

~~~
thezilch
Lambda is backed by EC2 and absolutely has EC2's limits. Just like scaling an
ELB takes minutes to serve "massive surges" (AWS recommends you pre-warm ELBs
or contact them to do so), the same is true of Lambda.

~~~
risaacs99
AWS no longer recommends warming up the newer ALBs for the vast majority of
customers and scenarios. Of course Lambda isn't magic and has an initial start
penalty. There are ways to mitigate it, and as long as you are actively
handling requests your functions will be reused. It's not magic, but it's
capable of keeping up with normal traffic surges. If you have a special
circumstance where you expect a large surge on a schedule (say, thousands of
clients checking in at the stroke of midnight), you would have to arrange to
be ready for that, and there are a lot of ways to do it.

API Gateway isn't ALB or ELB, and there isn't any recommendation or
expectation that customers warm anything up.

I never said "massive surge" but I stand by the idea that 2000 cores and 6
terabytes of memory is massive for most workloads. Of course, that's not the
limit and you can request more if you need it.

Lambda isn't for all workloads, but AWS (and everyone else) is always changing
and improving. You have to check in occasionally to see if old assumptions
(e.g. about ELB) still hold.

------
MrBuddyCasino
I hope serverless matures. DevOps is the bane of my (and many of my
colleagues') existence.

------
johansch
Well, Google App Engine (both the standard/classic and flexible kinds) has
been and still is pretty awesome for people who don't want to worry too much
about SRE stuff but still not pay through the nose.

AWS Lambda just seemed like too much bureaucratic busywork for no good reason
(ugh API Gateway...), last time I tried it out.

I guess it maybe appeals to "enterprise"-type developers? Maybe it's this way
because Werner Vogels is German _and_ from an academic background? :-) I'm
being adventurous, because I write this knowing that even referring to
national characteristics results in instant downvotes here.

Just give me an endpoint where I can respond to HTTP requests and parse the
URL path/parameters myself, thank you. That way I can keep it as simple or as
complex as is needed, per situation. I just don't want to bother with your
insanely over-engineered API for defining custom API requests.

~~~
michael_l_650
Yeah, I don't use AWS much; how is this different from GAE? Hasn't GAE
existed for a while already? I feel like it's always had its uses, and it's
very nice! But the revolution didn't happen five years ago, so I don't see why
it would happen now.

~~~
johansch
The difference is that Lambda is a) way more annoying to use, and b) gets way
more coverage, because developers are locked into the AWS ecosystem and often
don't pay the bills.

