
Serverless: I'm a Big Kid Now - jetheredge
https://www.simplethread.com/serverless-im-a-big-kid-now/
======
djhaskin987
> Nobody wants to manage servers. Managing servers is a nasty side effect of
> wanting to execute code.

Actually, I am setting up a serverless app right now: 4-5 lambdas, S3 buckets,
RDS, IAM roles, and 6 weeks (easily) spent getting everything into CFTs and
Ansible so that I can deploy this relatively small app.

You know how I would replace all that stuff? 1 single VM. (Alright, _maybe_ 2,
1 for the database.)

A server _buys_ you ease of deployment. It really does. Code needs an
environment. Lambdas punt on dependency management and environment. You have
to build the environment for the lambdas using IAM roles, RDS, s3 -- out of
toothpicks. All that stuff (security, storage) is _already there_ for you in a
VM.

I was talking with my brother-in-law about his serverless deployment of ~23
microservices he had carved out of a Java Spring app. A small change anywhere
in the code meant he had to redeploy all of it, not because of how the code
was structured but because that was the only way the build tools allowed him
to do it. More problematically, the dependencies had to be built into every
single lambda, which made the build longer. (No, Lambda layers didn't work for
us. Way too complicated to figure out.)

Honestly, after going through tons and tons of pain to deploy these lambdas in
both cases, it's just way easier to deploy on a VM. Maybe not for developers
who can't do anything unless it's done in their IDE, but from a broader DevOps
perspective, there are hidden costs.

Yes, it's easy to develop for lambda. But WOW, it's _awful_ in deployment in
comparison to just putting things on VMs.

~~~
untog
I don’t dispute that serverless deployments are a mess. But I look at it the
other way: they make you deal with scaling up front, whether you need it or
not. RDS, S3, all that... it scales to huge volumes. Your single VM will not.
You _can_ set up auto scaling with multiple VMs but by that point you’re going
to be dealing with a lot of the issues you have with serverless deployments.

Now, could a lot of serverless deployments do just fine on a single VM because
they never get enough traffic? ...probably. To my mind that's the real problem
with serverless: it's a premature optimisation that simply isn't needed in a
lot of cases.

~~~
nthj
Right, I’m a big fan of the monolith, but that doesn’t preclude anything you
just described. Absolutely use RDS. Absolutely use S3. Even on a low traffic
app.

But what I see a lot of is engineering teams making sweeping consistency
judgements. RDS and S3 are great as services, everything should be a
(micro)service. Our calendar feels amazing as a React SPA—everything should be
built in React.

No. 90% of a product is just CRUD to support the 10% of magic that customers
really value. Ship the 90% as a monolithic CRUD app and spend those cycles on
that 10%.

~~~
corytheboyd
This, right here. As a less experienced engineer, I once made the unfortunate
call to go all-in on a JavaScript SPA to replace simple server-rendered Rails
views, because it was “correct” and we should have “modern” code.

It’s been many years since then, and all this really did was bloat the client
with metric tons of business logic, punch inconsistent holes through the API
layer, and give the entire web product abysmal load times (normal was
somewhere in the 5-second range, IIRC).

I think it’s understated how much damage the young technical leaders typically
employed by early stage startups can really do. I can tell you this because I
was exactly this person, and I got to see what following hype without doing
actual mature research leads to.

------
the_af
Isn't the article's vision of things like Kubernetes (and similar) a bit too
idyllic?

I've heard talks by experienced K8s adopters (I want to say "Kubernetes
failure stories", but googling comes up with similar hits but not the specific
talk I was thinking of) where they mention that when K8s goes bad, you get all
the negatives of knowing the "traditional" tech stack _plus_ all the failure
modes of K8s itself. They argue that it's _more_ stuff you have to know about,
not less; and when trouble hits, it can get very complicated to understand
why.

(Not picking specifically on K8s; this applies to similar
orchestrators/container tech).

~~~
jeffbee
Some of these stories (I just Googled for "kubernetes failure stories") have
little to do with K8s. This one[1] is just whining that the author's app is
slow when it runs out of CPU. That's more of an "I don't know what I'm doing"
story, isn't it? Lifting the CPU rate limits in a container is not exactly a
cost-free magic wand.

1: [https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718](https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718)

~~~
yjftsjthsd-h
> That's more of a "I don't know what I'm doing" story, isn't it?

Sure, but k8s is big enough that "very few people know what they're doing" is
a valid argument. It's loosely like git: the technology is solid, but it's
extremely complex and hard to get a grasp on.

~~~
jeffbee
I don't think the ergonomics of k8s are anywhere near as bad as those of git.
I think the perception of k8s as complex and confusing is promulgated by
people who already don't understand how their process starts and runs on any
given Linux box without container orchestration. Naturally these people are
overwhelmed by adding k8s on top.

The article to which I linked describes itself as "a wild ride of discovery"
but all that has been discovered is basic aspects of the Linux process
scheduler. The author could have skipped the wild ride if they had understood
Linux first.

~~~
p_l
Add to this the conspicuous lack of a good bottom-up description of how things
mesh together, and suddenly, instead of facing a _complex system built out of
simple parts_, people face a _complex system built out of magic_. It's a
credit to k8s that you can trudge forward even in the second case, but I'm too
divorced from being a "k8s newbie" to reliably judge the current "newbie
introductions" :/

------
sudhirj
One pain point I have is that functions-as-a-service systems like Lambda have
a special event format. This is sorted out by tools like Up, which install a
small adapter and let you run your normal HTTP server in Lambda:
[https://apex.sh/docs/up/](https://apex.sh/docs/up/)
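
A rough sketch of what such an adapter does, assuming the API Gateway v1 proxy event shape (field names like `httpMethod` come from that format; `handle_http` is a made-up stand-in for your normal server code, not any real API):

```python
def handle_http(method, path, body):
    """A 'normal' HTTP handler, oblivious to Lambda."""
    if method == "GET" and path == "/hello":
        return 200, "hello world"
    return 404, "not found"

def lambda_handler(event, context=None):
    """Adapter: unwrap the Lambda event dict, call the normal handler,
    and re-wrap the reply in the shape the platform expects."""
    status, body = handle_http(
        event.get("httpMethod", "GET"),
        event.get("path", "/"),
        event.get("body"),
    )
    return {"statusCode": status, "body": body}
```

Real adapters also have to translate headers, query strings, and binary bodies, but the principle is the same: the event format is contained at one boundary instead of leaking into every handler.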

Besides this, the other service I like is Fly
([https://fly.io/](https://fly.io/)), which lets me submit a container and
runs it all over the world where necessary. Google Cloud Run does something
similar, but it's region-locked. Fly is like a global lambda for containers.

Between these tools and services, I’ve stopped caring about servers
completely. Never need to install a package or use ssh again. I either submit
just the app executable to lambda using Up (for light HTTP APIs) or send a
container to Fly (for heavy servers, TCP, web sockets) and that’s it.

~~~
spyspy
You can now deploy apps on Cloud Run to different regions and use Google's
HTTPS Load Balancer to route traffic to them via a serverless network
endpoint group.

~~~
pier25
Cloud Run is a no-go for apps that need custom domains, as it has a 50-domain
limit which cannot be increased.

[https://cloud.google.com/run/quotas](https://cloud.google.com/run/quotas)

------
pier25
> Serverless container services such as Heroku, Netlify, AWS ECS/EKS Fargate,
> Google Kubernetes Engine, and Azure Kubernetes Service

IMO, Heroku and the others don't fit the serverless paradigm.

To me serverless is about automatic scaling of performance and cost.

With Heroku you need to provision capacity in advance, and you pay for it
whether you use it or not. Heroku doesn't scale automatically either, you do
that manually.

~~~
mping
Heroku does have autoscaling based on average response time. You set a min and
max dyno count and off you go. It's as simple as it sounds, but for
run-of-the-mill, follow-the-daytime apps it works.

~~~
pier25
As long as you're already provisioning a pro dyno which starts at $250.

~~~
mping
Didn't know, thanks. I was on an enterprise account.

------
awinter-py
> Serverless container services such as ... Google Kubernetes Engine ... You
> don’t have to worry about running the cluster that hosts your control
> servers, node servers

You don't _have_ to worry about it, you _get_ to worry about it.

At least from a cost perspective, GKE isn't serverless. I also always end up
with 2 node pools because the default one is misconfigured, and kube runs so
much crap of its own that the first node barely fits any app containers.

Wouldn't call GKE a win for 'not managing nodes'.

------
jeffbee
Serverless: so grown up that Google App Engine launched 12 years ago and hosts
several huge, valuable services. The question of whether serverless is ready
answered itself years ago. The remaining questions for prospective adopters
are whether it meets your requirements, and whether you're mentally ready to
adopt it (because cramming legacy concepts into serverless never works).

~~~
detaro
And in reverse, it can be surprisingly tricky to convince serverless advocates
that it's not so new and that GAE fits their definitions.

~~~
spyspy
To most people, serverless==lambda. That's where most of the conversation
lives and dies.

~~~
detaro
I'd be a lot less grumpy at them if they talked about "Lambda" or even "FaaS"
development then :D

------
thinkingkong
The developer experience (DX) will more or less get solved by over-the-top
vendors. Serverless.com and Begin.com are doing a great job of this so far. As
for the concerns about proprietariness: sure. But you're already locked in
once you start using S3, or once you rely on any other technology decision
like which database you use. It's not currently free-as-in-freedom to use some
of these early serverless solutions, but I'm also having a hard time seeing a
world where we get globally distributed, fault-tolerant, reliable, secure
compute capacity at this cost that isn't running in someone's basement.

~~~
kohtatsu
Backblaze B2 is now S3 API compatible: [https://www.backblaze.com/blog/aws-to-backblaze-migration/](https://www.backblaze.com/blog/aws-to-backblaze-migration/)

------
jermier
> Security – The operating system installed in a container is usually short-
> lived, very minimal, and sometimes read-only. It therefore provides a much
> smaller attack surface than a typical general purpose and long-lived server
> environment.

Is this true? I always thought things like Docker were massively insecure
because they don't respond to the threat landscape that well, since images are
kind of 'frozen in time' and kept that way for years at a time without any
critical security updates.

~~~
cnorthwood
If you're deploying your own application, you should probably build your image
from a known, maintained base image rather than from a community-supported
one, and then periodically rebuild it; it's like how you'd have to redeploy
your app if there were a security issue in one of your dependencies. I
wouldn't recommend using any of the public Docker images outside of local dev
environments.

------
beilabs
> I think I’m starting to develop a decent radar for which trends are going to
> have a lasting impact and which ones are going to fizzle out.

For those who want fast insight into which technologies are being adopted or
dropped across the software industry, definitely check out the Thoughtworks
Technology Radar. It's regularly updated.

[https://www.thoughtworks.com/radar](https://www.thoughtworks.com/radar)

------
GekkePrutser
I absolutely do like managing servers. It keeps me fully aware of where my
data is at any given time, and of what is handling it. In a Europe with GDPR,
that has become a lot more important. At the same time, lack of such
visibility has been a big factor in recent data leaks. Those leaks happened
mainly because of implementation issues, but those organisations didn't
_choose_ to make their data available in unsecured S3 buckets. It happened
because they didn't have enough visibility.

There are also many reasons given that don't really make sense.

> Your containers are cattle, not pets. If your container crashes, a new one
> is automatically fired up.

If my container crashes, I want to find out why so I can prevent it from
happening again. One of the issues with these serverless technologies is that
they make this kind of debugging harder. I don't want code to randomly crash
and just getting restarted to be the solution. It means there is something
wrong. To be fair he does mention this later on in the article.

> Serverless functions force you to write your code in a stateless way

So what? If I run my own server, I can also write it statelessly if that makes
sense to me. Being forced in a direction is not a positive.
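
For concreteness, a minimal sketch of the distinction the article is getting at (plain Python; the `store` dict stands in for an external store like Redis or DynamoDB, and both handler names are illustrative):

```python
# Stateful style: a module-level counter. On a long-lived server this
# accumulates across requests; on a serverless platform every cold start
# resets it, so the count is silently wrong.
request_count = 0

def stateful_handler(event):
    global request_count
    request_count += 1
    return {"count": request_count}

# Stateless style: the state lives in an external store that is passed
# in, so any instance (including a freshly started one) gives the same
# result. This is what "forced to write stateless" amounts to.
def stateless_handler(event, store):
    store["count"] = store.get("count", 0) + 1
    return {"count": store["count"]}
```

Nothing stops you from writing the second style on your own VM, which is the commenter's point: the constraint, not the style, is what serverless adds.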

I think the scaling is very interesting. But it's not something that's
required for many applications. That doesn't mean everything should be
serverless; it just means there's a good option to choose for use cases that
can really benefit from it.

Pushing serverless for everything is like putting blockchain into everything
because it's a buzzword right now.

~~~
nostrebored
Disposable infrastructure is a fundamental piece of distributed systems.
Stateful infrastructure is rarely disposable.

------
sushshshsh
Is it really that expensive to keep a server running 24/7, vis-à-vis the
development complexity of lambdas?

It depends on your workload. And on your trust level in Amazon, of course.

~~~
k__
No it isn't expensive to keep a server running 24/7.

It's expensive to keep that server maintained.

~~~
bigphishy
Updating a typical Linux server can seem daunting, but it is not complex. For
example, to update a CentOS system:

    # record the currently installed kernel versions
    rpm -qa | grep -i kernel
    # apply all updates, then reboot into the new kernel
    yum -y update && reboot
    # after the system comes back online, confirm a newer kernel is present
    rpm -qa | grep -i kernel

Other utilities, like ukuu for Debian or Ubuntu systems, make updating the
kernel a breeze.

If you're worried about specific packages, sure, that can be daunting, so
avoid updating software you're concerned about breaking (and keep an eye on
security alerts for that software, which you should be doing anyway whether it
runs on a VM, in a container, or serverless... none are immune to
vulnerabilities!) and do not install unnecessary software.

------
castillar76
> Nobody wants to manage servers. Managing servers is a nasty side effect of
> wanting to execute code.

I'll take issue with this one. I'm fully aware I'm weird, but I _do_ actually
enjoy managing servers. With the effort spent managing servers, I buy the
ability to actually create the environment in which my code runs, and to
interact with and improve it over time, rather than relying on someone else's
idea of what's good for my code. Moreover, I get the ability to dictate the
security environment in which my code runs, instead of hoping my cloud
provider is doing it properly enough for me (and not busy snooping on what I'm
doing in the process).

I think what the poster means is " _I_ don't want to manage servers, I want to
write code." Which: fine. The world needs people who focus on writing good
code. But I'm sick and tired of this attitude that coding is somehow more
noble or better than the important work of building and maintaining the
environment in which the code runs. You don't like it? Awesome, don't do it.
But stop spitting on people who do, and who legitimately enjoy doing it.

------
k__
Is Heroku severless now? (i.e. on-demand pricing, no capacity planning, etc.)

------
r0rshrk
I feel we need to move beyond the "serverless" nomenclature. Although, what
would be better? Remote Lambdas ?

~~~
k__
Lambda is a "Function as a Service" product and FaaS is (usually) serverless
in nature.

S3, DynamoDB, AppSync, API Gateway, and Fargate are also serverless, but they
are not FaaS products.

------
furstenheim
One thing that bothers me about serverless functions is that there is
virtually no background concurrency. Once you return, you have no assurance
that your code will keep running.

If you want to perform actions that might be throttled (CloudWatch calls, for
example) and are not critical for the response, you cannot have a singleton
processing them after you return, because they might never be processed.
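
A sketch of the workaround this forces, assuming a platform that may freeze the instance the moment the handler returns (`send_metrics` is a hypothetical sink, not any real API):

```python
# Non-critical work (metrics, audit logs) buffered during the request.
buffered_metrics = []

def record(name, value):
    buffered_metrics.append((name, value))

def handler(event, send_metrics):
    record("invocations", 1)
    result = {"ok": True}
    # Flush *before* returning: a background thread draining this buffer
    # after the return would be frozen along with the instance, so the
    # flush has to happen synchronously, on the request's critical path.
    flushed = list(buffered_metrics)
    buffered_metrics.clear()
    send_metrics(flushed)
    return result
```

The cost is exactly the commenter's complaint: the throttled, non-critical call now adds latency to every response instead of being handled off to the side.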

~~~
redisman
How could it be orchestrated by the cloud if there's no clear end to your
workload?

~~~
furstenheim
Of course; I'm not saying the opposite. But you're sacrificing development
options in the process.

------
atlgator
Anyone who mentions vendor lock-in with cloud has never worked in IT before
cloud.

