
Things I’ve learned using serverless - rmason
https://read.acloud.guru/six-months-of-serverless-lessons-learned-f6da86a73526
======
alpb
I read this blog post assuming I was actually going to learn a thing or two in
terms of best practices in the serverless paradigm, some ops/observability
tricks and such.

It turned out to be a complete AWS advertisement as well as hand-waving at a
bunch of other blog posts without any good explanations. What makes me curious
is: don't these people ever read Hacker News and know what good technical blog
posts look like (unless they're just a bunch of paid evangelists writing blogs
with catchy titles)?

~~~
reificator
acloud.guru is a name I recognize from some (halfway decent, to be fair) Udemy
classes I took a few years ago. That likely explains what you're seeing.

------
sergiotapia
It seems like you've traded in a bunch of open source solutions for a walled
garden of AWS and Amazon tools.

~~~
super_trooper
The trade-off is cost. The article even mentions they drove down operational
cost by at least 70%. You can still run whatever open source library you need
in the lambda (though you still have to ask whether it's worth the extra
bytes), but yeah, you are betting big on AWS. GCE serverless is way behind
right now.

~~~
jacques_chester
Nobody ever shows any love for Azure in these discussions.

Disclosure: I work on Project Riff at Pivotal.

~~~
yebyen
OK, I'll do it, although this one requires that you have a live Kubernetes
cluster to run your functions on.

I haven't heard much about it other than that it is more friendly open code
from the lovely people that brought us Deis and Helm:

[https://github.com/Azure/brigade](https://github.com/Azure/brigade)

Hey, I bet you've heard of this, it sounds like Riff is absolutely in the same
space :D

I think for most small enterprises today it's not too much to ask that you
have a Kubernetes cluster with autoscaling provisioned somewhere. In 2018
you're not serious if you don't have at least that (or something comparable;
though I've heard "the war is over" and agree that people should just get
comfortable already with the idea of K8s if they haven't yet).

There are enough managed offerings today that don't charge anything for
masters, where you can simply push a button and get a cluster that is properly
configured, and push another button to tear it down when you're done, or call
an API and get the same effect.

I know that's not really "serverless" now, and it's all about the cost of
running computers in the cloud on a 24/7 basis, so tell me if you've heard
this one before...

I've never succeeded in standing up a Kubernetes cluster with ASG for workers
that will scale all the way down to zero when demand for worker nodes
evaporates for a long enough period of time (10-30 mins?). Admittedly I've
never spent that much time trying either... I am privileged to have some
real physical computers plugged into the wall that I don't have to turn off,
so I guess I just don't have to think that way.

There's just not any technical reason that won't work though, is there? You'll
need the master(s) to hang around, so it's possible to notice Pending pods and
scale back up when the demand returns, right?

(So why am I not seeing this capability advertised or demoed by any managed
Kubernetes provider offerings? Is it really just the simple economic answer
that, given the pricing model of no-cost masters, they don't make any money
off you during a period when you aren't running any worker nodes?)

~~~
jacques_chester
> _Hey, I bet you've heard of this, it sounds like Riff is absolutely in the
> same space :D_

I have, and I admire a lot of the work the Deis folks have been doing at
Microsoft. I have different opinions about what the future looks like, but I
could be wrong. And I'm not the only member of the riff team.

In terms of "scale to zero" for workers, I think what your "two whys" need is
_containers_ on-demand, not _workers_ on-demand. That need is going to be met
by the various virtual kubelet efforts underway. Azure has actually been out
front on this, with AWS Fargate coming hot on their heels. I expect that as
GKE matures it will get there too.

As we move towards "five whys", it turns out that we are essentially
retreading the path that Cloud Foundry took years ago (and Heroku before
that): focus on making it easy to run code.

Containers are in themselves an almost-irrelevant implementation detail that
99% of devs should never have to care about, in the same way that most of us
don't think about malloc any more.

I call this the Onsi Haiku Test, after the `cf push` haiku that Onsi Fakhouri
gave at a conference a few years back:

    Here is my source code.
    Run it on the cloud for me.
    I do not care how.

And coming into riff from the Cloud Foundry universe, one of my personal
agenda items is that riff should pass the Onsi Haiku Test with flying colours.

~~~
yebyen
I would love to hear more of this kind of talk.

I'd really like to get you in a room with a couple of architects and
technology leadership at my office. (No, seriously; maybe a Zoom room.)

I'm on the Kubernetes train, but they are mostly still pinning their hopes on
Fargate, having never made this leap, and I have the feeling that I never
would have gotten into the k8s world without the kind of help I got from Deis.

> Containers are in themselves an almost-irrelevant implementation detail 99%
> of devs should never have to care about

Couldn't agree more. Deis made this easy for me before it was on Kubernetes
(CoreOS and Fleet), and when I was finally convinced to leave that stack
behind, Deis made it easy for me again to do the same on Kubernetes. I'm the
biggest fan of Deis anywhere.

(I've felt the loss of the Deis Workflow maintainers so badly that I'm
personally working on the team forking Deis! But the bus-factor risk is way
too high for my place of work, which is a university; they want something they
can understand and that they can support, or pay a vendor to support, if I am
not around anymore. That won't stop me, but it also means I need to keep an
ear to the ground for something we can use to start doing CI/CD here.)

The technical leaders in my place of work have already made the leap to AWS,
but are just testing the waters of e.g. the spot market and serverless
(Lambda) to try to get the cost and reliability benefits to start to
materialize, and they would really like to skip containers altogether and
start building everything for Lambda. I know enough to say "whoa there,
Icarus, that's no way to reach Lift-and-Shift", and I'm pretty sure from my
experience that you should start lower (but still with some higher abstraction
than plain old Docker containers, and also not Compose or Fargate.)

So I'm in a pickle because Deis is no longer offering support for end users,
otherwise that's probably what I'd still be recommending.

I've been looking at possible replacements like Cloud Foundry (and Convox, and
Empire) but your haiku hits me right in the feels and is the really important
message I need to deliver. I am developing an application right now and I need
the kind of devops machinery and support that is appropriate for that kind of
effort in 2018.

(and I definitely don't want to be embroiled in an exploratory project to
implement containers for the whole organization some time in the next 5 years,
at least not before we can get something out the door for our customers across
campus...)

I just don't think we do enough software development to justify spending on
something like PCF but I'm not the one who would need to be convinced, either!

~~~
jacques_chester
If you're using buildpacks, Cloud Foundry is the place to be. I obviously feel
like PCF is the bee's knees, but there are OSS alternatives.

You can run OSS Cloud Foundry (now called Cloud Foundry Application Runtime or
CFAR) using BOSH and cf-deployment. You can also run Kubernetes with the same
operator tools if you use CF Container Runtime (CFCR), for people who need
that capability.

SUSE sponsors an OSS GUI called Stratos.

For CI/CD, I am alllll about Concourse. Automation-as-a-Service is a secret
gamechanger.

My work email is in my profile if you'd like me to hop on a call with anyone.

------
jacques_chester
> _But RDMS systems are just another monolith — failing to scale well and they
> don’t support the idea of organically evolving agile systems._

RDBMSes could handle billions of complex queries per day in the 1990s (i.e.
_last century_) and ship with an _entire language_ designed to let you safely,
incrementally evolve your data model.

MySQL and ORMs are not the limits of that universe.
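For the curious, that "entire language" is just SQL DDL/DML. A tiny, self-contained sketch of an incremental, transactional schema change using Python's stdlib sqlite3 (the table and column names are made up for illustration):

```python
import sqlite3

# Start from an existing schema with live data in it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Evolve the schema in place: existing rows survive, and the DDL plus the
# backfill run inside one transaction (rolled back together on failure).
with conn:
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT DEFAULT ''")
    conn.execute("UPDATE users SET email = name || '@example.com' WHERE email = ''")

row = conn.execute("SELECT name, email FROM users").fetchone()
# row is now ('alice', 'alice@example.com')
```

The same shape scales up to real migrations: each step is a small, reversible DDL change plus a backfill, applied in order.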

~~~
gdulli
20 years in this field has taught me that (1) we move on to new technologies
more often because we don't understand the current ones than because the
current ones are flawed, (2) we fail to weigh the costs, risks, and setbacks
of moving to new technologies, and (3) we don't realize that we're conserving
overall complexity and flawedness, just moving it around.

~~~
keymone
(4) new tech is sexy, old tech is all cranky old guys, like 30+, yuk.

~~~
dsparkman
The problem is that these young whippersnappers don't even realize that there
is nothing "new" about this tech stack. You can recreate the new sexy with
30-year-old tech. You are talking about a load balancer that redirects
requests to individual CGI scripts based on the URL. They have just given up
knowing how to set up and configure physical servers.

~~~
laurentl
> They have just given up knowing how to set up and configure physical servers.

Or they know how complex and error-prone it can be, and decided to spend their
time on other things.

It's good to know how that stuff works: how to configure an LB, install nginx,
rack a server... the way being able to do long division by hand is good to
know. But when you're crunching numbers all day, it's easier to use a
calculator.

~~~
dsparkman
More like learning to use a slide rule :) You still have to learn how to set
up a load balancer (API Gateway), a firewall (IAM, API Gateway), server config
(CloudFormation, API Gateway, S3, etc.) and so on. And those are
vendor-specific. Move to Azure or GC and you have a whole new set of
"serverless" servers to learn to configure. About the only thing you have
really given up is knowing where your machines are physically.

~~~
scarface74
You've also given up having to buy machines, predict resources, over-provision
to meet peak demand, and handle server maintenance for databases, caching, web
servers, etc.

If I want to load test something for a day, I can spin up 20 EC2 instances
with a script and spin them down afterwards. Then I can see where my
bottlenecks are, provision instances, load balancers, increased disk IOPS,
etc. as appropriate, and tear down everything I don't need.

~~~
dsparkman
Apples to apples: your 20 EC2 instances are just 20 VPSes at any VPS provider,
located geographically wherever you want to deploy them. Also with a script.
You still have not gained anything from your vendor lock-in. IaaS has been
around since the '90s.

~~~
scarface74
And what about the load balancers, the database instances, the queuing system,
the global CDN, the caching servers, etc.? I could script my own autoscaling
strategy that integrates with metrics from the running instances, but why
would I, when I can click a few buttons and get autoscaling based on
CloudWatch metrics, the size of the SQS queue, CPU usage, etc.?

But as far as "vendor lock-in" goes, it's like developers wrapping database
access up in a repository pattern just in case they want to change databases.
In the real world, hardly anyone takes on massive infrastructure changes to
save a few dollars.

On the other hand, there are frameworks like Serverless and Terraform that
build infrastructure in a cloud-vendor-neutral way.

~~~
dsparkman
Again, each piece you have named can be done with "older" tech, which was the
original point of this thread. Every few years the tech industry reinvents the
same tech and a new generation of developers thinks manna has fallen from
heaven, when in truth it is the same as the last round with new buzzwords
attached.

~~~
scarface74
Yes, it can be done, but how efficiently? I couldn't call up the netops guys
and have them buy and provision all of the resources I needed to test
scalability within the time it takes me to set up a CloudFormation script.

In 2008 we had racks of servers we were leasing and that were sitting idle
most of the time just so we could stress test our Windows Mobile apps.

I've been developing professionally for 20 years and 10 years before that as a
hobbyist. I know what a pain it is to get hardware for what you need when your
company has to manage all of its own infrastructure.

Just setting up EC2 instances and installing software on them doesn't reduce
the pain by much. Sure, you're cutting down on your capex, but you still end
up babysitting servers and doing the "undifferentiated heavy lifting". I would
much rather stand up a bunch of RDS instances.

As far as serverless goes, why manage servers at all when you can either
create a Lambda function for the lightweight stuff or deploy Docker images
with Fargate? That's just one less thing to manage, and you can concentrate on
development.

~~~
dsparkman
I am not disagreeing with you that it is easier than deploying your own
infrastructure... But, back to my original point, Lambda functions are not
anything new. They are simply an HTTP app that is "typically" responding to a
single route. The API Gateway is simply a configured proxy routing the
"public" routes to your various "functions".

All the parts are easily replaced or scaled however you see fit. Your function
can be in any language that can respond to http on any platform you want. You
can put whatever proxy you want in front to define your routes. You can get as
simple or complicated as you want.

Serverless is not serverless; you are just abstracted away from it.

[EDIT] I would add that personally I would spin you up a Flynn cluster on
Digital Ocean :)

~~~
scarface74
With serverless, you automatically get scaling for each endpoint
individually, not just the entire app. If for some reason you get an
unexpected ratio of GET requests to POST requests, just the GET lambda will
scale. If I tried to do the same with EC2 instances behind an ELB, I wouldn't
get the same level of granularity.

And lambdas aren't just about responding to HTTP requests; they are also used
to respond to messages, CloudWatch events, files being written to S3, etc. I
would hate to have to stand up servers for that. Even if you don't want to get
"locked in" to Lambda, why not serverless Docker?

------
nickjj
After reading that I'm so happy that I develop traditional Flask and Rails
applications with server side templates and tiny bits of JS thrown in when
necessary.

I'm all for moving forward and using new stuff if it makes my life better, but
from the looks of it, Serverless is still many years away before discussions
like that can even take place.

~~~
rmason
> Serverless is still many years away

I strongly disagree. Yes, it is early, and some of the tools, notably
debugging and logs, aren't anywhere near the level they need to be.

I'm developing an app with serverless, and this article really resonated with
me and my struggles. I think once Aurora Serverless launches, allowing
developers who need a relational database to easily move to the platform, you
will see rapid growth in serverless.

Why? Because it makes so much sense. Why worry about managing servers or
scaling? Why continually write the same boilerplate glue code over and over
again?

I know that I'd rather write a configuration file wiring together
best-of-breed components than write the code myself. Don't get me wrong, I
like writing code, but I'd rather concentrate on the business logic.

~~~
electricEmu
> I'm developing an app with serverless

Lambda? If so, what benefit do you see over running AWS Container Service?

I ask because I've tried both. Serverless frameworks (AWS Lambda/AzFunc) were
horrific. I picked up Docker as an answer and never looked back.

Others in my company are abandoning serverless after seeing our success. It
turns out that being able to easily run things locally, very similar to
production, is very important. We have no problem concentrating on the
business logic AND keeping flexibility.

~~~
Touche
It really depends on what you are doing, I guess. It sounds like you enjoy the
dev aspect of Docker, which tells me you are doing more than just running a
function.

------
wheaties
Whoa, whoa, whoa. If you're building a serverless app, you don't start with
Flask. You start with the lambda and plain old Python. Seriously, what he
wrote basically says he tried to build a construct which Lambda isn't meant
to directly support and then had all sorts of problems.

Stop trying to write a full server and then map it to lambda. Start with
lambda and map it to your service. There, done. That's all you need to know.

~~~
edem
Yep, the whole article reads like a strawman against Python. This mentality of
using "cool" frameworks and joining the "cool" JavaScript kids (isn't that an
oxymoron?) reeks of the hipsteresque mentality of the whole JavaScript
community... just wait until the next `left_pad`.

------
nategri
> "And now we no longer worry about Python version 2 or 3 (is it ever
> upgrading?)"

I just threw out my back cringing.

~~~
jnwatson
Yep. He uses a 2017 version of JavaScript but complains about a 9-year-old
version of Python.

------
languagehacker
Some of these observations are okay, but some of them border on dangerous or
not fully considered. Python is an absolutely fine tool in your tool belt for
serverless. I use it _along with_ JavaScript all the time, depending on which
has better libraries or makes more sense for a given requirement. Quite
honestly, the best part about serverless is that you can generally pick and
choose which tool is right for the job, down to the language, in a far more
compositional manner than on more traditional distributed SOA platforms.

Dynamo is pretty good, but its value starts to dwindle when you want to be
able to do local development, possibly even without a network connection. And
most of the traditional ways of interacting with the data layer aren't really
available. So, for instance, you're not going to be able to use an ORM for a
simple application with Dynamo, which means writing a lot of your stuff from
scratch.

So, given that pragmatically you probably still want a database, you're going
to run into a position where you can't possibly be 100% "serverless". A
persistent database connection is a good thing, and one where you can control
the number of connections is an absolute requirement at scale. Even if you can
tweak your lambdas just right to accommodate your DB's maximum number of
connections, you're needlessly assuming the cost of opening one of those
connections on each invocation of your lambda.

My recommendation is to use serverless where it's really well suited, which is
for distributed, event-driven processing. Your data backend becomes an RPC
that works with the top of the funnel to map and distribute well-populated
messages through your system. For this, I use protocol buffers and
base64-encode their serialized bytes into an SNS topic. Depending on message
size, your mileage may vary here.

You can still use some of the more clever AWS offerings to reduce your
dependence on some fixed, running server. For instance, Fargate may make it
possible for you to run a persistent RPC server for managing read and write
requests to your RPC, which is maintaining a well-optimized connection pool
with your database.

I agree with using JWT for authentication. I agreed with it when stateless
authentication pre-dated the service offerings that made serverless a possible
paradigm. Serverless generally requires stateless, but you can still reap the
benefits from doing the same thing with servers.

Hosting a static SPA in S3 is, I think, one of the less challenging arguments
in this blog post, and has been a good practice for going on five years now.
Vue isn't necessarily part of it, and marrying the framework with the choice
of hosting muddies the waters on what's good advice and what's just an
opinion.

All in all, introducing serverless technologies to your platform is a great
way to significantly reduce your infrastructure costs. It comes at a similar
price as building any other SOA: an increase in the cost of maintenance, as
debuggability becomes more difficult and the network becomes more complex. So
it's important to think critically about what parts you should take and what
parts you should leave or defer when it comes to your architecture and your
business requirements.

------
sp527
> But RDMS systems are just another monolith — failing to scale well and they
> don’t support the idea of organically evolving agile systems.

What the hell does this mean? Completely unsubstantiated nonsense.

Hardly anyone needs to scale Postgres past 1B records and 50K QPS, which can
be achieved on a relatively affordable pair of synchronously-replicated boxes.

This guy clearly doesn't know anything beyond year 1 basics and the post reads
like a fatal overdose of Kool-Aid.

------
Finnucane
What this seems to be saying is that serverless is great if you are doing
JavaScript-heavy SPAs, otherwise not so much. Serverless is good if you don't
actually need to talk to the server.

~~~
cle
I didn't get that from this article at all. It didn't really seem to say
anything about backend stuff.

I build huge serverless backend applications, and IMO it's been fantastic.
There is a learning curve, because your application's execution environment is
pretty different from traditional applications, but it has allowed us to build
a remarkably complex and scalable application very quickly. And it's been
pretty maintainable too.

~~~
wojcikstefan
Can you elaborate on what your application does, what’s a basic flow of a
request, and finally, what are the high-level steps you take to ship a new
feature or API endpoint?

~~~
cle
I can't elaborate on what the application does, but I can talk about high-
level architecture and development.

Basically we have an API that performs asynchronous data analysis and
processing. Our fronting service receives a request, writes some metadata, and
places the request in a "queue", which is picked up by a backend poller that
starts a workflow execution, which orchestrates the fulfillment of the
request. This is all serverless (AWS tech...API Gateway, Lambda, Step
Functions, S3, DynamoDB, CloudFormation, CloudWatch, etc.). Serverless makes
deployments much easier, since we can version our Lambdas and State Machines
using CloudFormation, and have many different versions running at the same
time (not fun if you're managing your own hardware!). We have a CD pipeline
that builds code changes, deploys them to a test account, and runs integration
tests. We use CloudWatch Alarms to monitor production and alert us of any
issues.
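This is obviously not the poster's actual code, but the flow described (front door persists metadata and enqueues; a poller drains the queue and starts a workflow) can be sketched generically, with plain Python objects standing in for DynamoDB, SQS, and Step Functions:

```python
import json
import uuid


def accept_request(body: dict, metadata_store: dict, queue: list) -> str:
    """Fronting service: persist metadata, enqueue the work, return an id."""
    request_id = str(uuid.uuid4())
    metadata_store[request_id] = {"status": "PENDING", "input": body}
    queue.append(json.dumps({"request_id": request_id}))
    return request_id


def poll_and_start(metadata_store: dict, queue: list, start_workflow) -> None:
    """Backend poller: drain the queue and kick off a workflow per request."""
    while queue:
        msg = json.loads(queue.pop(0))
        request_id = msg["request_id"]
        metadata_store[request_id]["status"] = "RUNNING"
        start_workflow(request_id)  # e.g. a Step Functions StartExecution call


store, q, started = {}, [], []
rid = accept_request({"job": "analyze"}, store, q)
poll_and_start(store, q, started.append)
assert store[rid]["status"] == "RUNNING" and started == [rid]
```

Swapping the dict for DynamoDB, the list for SQS, and the callback for a Step Functions client turns the sketch into the serverless shape described above, without changing the control flow.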

We have some development scripts for pushing code changes with CloudFormation
for testing during development. We use that to develop, then once we check the
code in, it works its way through our pipeline and into production.

------
tomc1985
Two paragraphs in and I want to slap this guy

"That’s quaint" ... ugh. _You're_ quaint.

~~~
slig
There's more a few paragraphs down:

> And now we no longer worry about Python version 2 or 3 (is it ever
> upgrading?)

~~~
flanbiscuit
As if Node doesn't have multiple versions that a nvm* is exists and widely
used (and I really like node)

*Node version manager

~~~
BaronVonSteuben
It's much different, tho. Node is backwards compatible; Python 2 -> 3 is
definitely not. And Python 3 isn't exactly new...

~~~
jnwatson
Node’s policy is that major versions may introduce backwards compatibility,
and Node 4.0 did introduce some.

Just like Python.

~~~
paulryanrogers
Did you mean incompatibility?

~~~
jnwatson
Yes indeed

------
callumjones
> old-time request-response style of a website with a session managed by the
> server

Breaking news: your app still does this. You just moved the responsibility of
request/response elsewhere.

------
bklyn11201
The same blog has a great post on cold-start times showing Python as the clear
winner:

[https://read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76](https://read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76)

Cold starts are a real issue and while warming via pings can mitigate the
issue, you will still run into cold starts when demand scales up.

Java with Spring is really difficult with AWS Lambda because of the slow cold
starts. Five seconds for a cold start is unacceptable for many applications.

------
fapjacks
"AWS Certified Technologist" == "AWS Lock-In Specialist"

------
nzoschke
These lessons match my experience building
[https://github.com/nzoschke/gofaas](https://github.com/nzoschke/gofaas)

Except I opted for Golang and the Serverless Application Model (SAM).

Go lets you ditch even more stuff by cross-compiling binaries.

And SAM is a framework built by AWS that vastly simplifies the config files.

------
odammit
Lesson 7: when you reach webscale(TM) it gets expensive AF

~~~
mattbillenstein
Do you have some example numbers here?

~~~
mali9
[https://servers.lol/](https://servers.lol/) is one resource, at a very high
level, to see whether EC2 or Lambda is a good fit for the use case you are
looking at.

The site gives a cost estimate and an application score with comments
(latency, burstiness, function execution time).

~~~
odammit
That is an awesome resource, my friend.

------
daxfohl
It's weird, with all the time and money and brainpower invested over the last
10 years, I still find Heroku to be the lowest maintenance.

~~~
fbonetti
Agreed. Serverless is not free, or even cheap for that matter. It’s an
entirely new skillset that comes with a million new things to learn and worry
about. No thanks.

~~~
daxfohl
Well it's a thing. But the optimization problem needs to be stated clearly.
And I think it _is_ the solution to _some_ optimization problems. Some
analysis around that would be interesting.

------
baus
The article mentions they used Auth0 and Cognito. I spent quite a bit of time
researching Cognito
([https://github.com/baus/cognito-strap](https://github.com/baus/cognito-strap)),
but I never figured out how to recognize which user is logged in when using
federated identities. I found the docs to be misleading or wrong in many
cases.

I'm curious if anyone is actually using Cognito in production. It feels like
an alpha product to me.

------
evrydayhustling
That was interesting. Two questions:

1) Is there a problem with python, or a problem with flask? Isn't this what
chalice is for?
[https://github.com/aws/chalice](https://github.com/aws/chalice)

2) How are you dealing with cold starts?

I learned stuff from this post, but I would have learned more with some
background about the workload etc so I could reason about what generalizes and
what doesn't.

~~~
Flozzin
Not the author, but you can set up CloudWatch to hit your lambdas at defined
intervals. I set up my lambdas that are accessed through API Gateway to check
for a special header. If the header is there with the correct value, the
function just returns. Most keep-alive checks are in the 10-20ms range, and
since Lambda charges in increments of 100ms, it's the lowest possible tier for
getting charged.
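A minimal sketch of that header check as a Python handler. The header name and shared value are made up; the real ones would be whatever your CloudWatch ping is configured to send:

```python
import json

WARMUP_HEADER = "x-keep-warm"        # hypothetical header name
WARMUP_VALUE = "some-shared-secret"  # hypothetical shared value


def handler(event, context=None):
    """API Gateway-style Lambda handler that short-circuits warm-up pings."""
    headers = event.get("headers") or {}
    if headers.get(WARMUP_HEADER) == WARMUP_VALUE:
        # Return immediately: the ping only exists to keep a container warm,
        # so this path stays in the cheapest 100 ms billing increment.
        return {"statusCode": 200, "body": json.dumps({"warmed": True})}
    # ... real request handling would go here ...
    return {"statusCode": 200, "body": json.dumps({"hello": "world"})}
```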

~~~
evrydayhustling
We have done this. The problem is that concurrent requests have to warm a new
instance, so if your concurrent workload increases, newcomers face cold
starts. Worth noting that we are more worried about the user experience of
slow responses than about price.

Edit: forgot to say thanks for the suggestion! Also, here is a related
article:
[https://hackernoon.com/im-afraid-you-re-thinking-about-aws-lambda-cold-starts-all-wrong-7d907f278a4f](https://hackernoon.com/im-afraid-you-re-thinking-about-aws-lambda-cold-starts-all-wrong-7d907f278a4f)

~~~
Flozzin
That's an interesting problem. We don't get many concurrent requests, and if
we do, well, it's not a huge deal.

How many instances do you want running? You could set up a separate keep-alive
path that sends another request to the lambda, with a variable tracking how
deep into the keep-alive 'recursion' you are, and break out once you're deep
enough. Does that make sense? Super weird and just off the top of my head.

edit: this isn't a good solution either, because if you have a lambda kicking
off 4 other lambdas because you want 5 running, and someone makes a request,
well, then you still haven't warmed up that 6th, and your 5 lambdas are busy
running the keep-warm code...

~~~
evrydayhustling
If I understand your suggestion right, it's to heartbeat concurrently to force
more warm instances. We have played with that, but spikes are spikes - the
most interesting ones defy expectations. As with many apps, the conditions
that make us spike make performance more important, not less.

Just found the same author as OP with a clever solution here:
[https://read.acloud.guru/cold-starting-lambdas-2c663055589e](https://read.acloud.guru/cold-starting-lambdas-2c663055589e)

Having the app pre-warm instances on a per-user basis is super cool -- for
user-driven workloads like web servers. To make matters worse, we are serving
an API that takes hits from third-party streams -- so our concurrency is based
on their client behavior, not something we can easily link to a session scope,
like users. Tricky!

~~~
Flozzin
Yes. That's what I mean. The per-user basis does sound interesting.

Sometimes, though, you can't force a square peg into a round hole. I dislike
server maintenance, but Docker is a decent alternative to lambdas if you can
absorb the extra cost.

~~~
evrydayhustling
Agreed, I think that's the state of the art: if variable concurrency is
important, manage your own spare capacity. But I expect AWS and other
providers will some day let us pay for reserved capacity without managing it,
and I can't wait.

~~~
jnwatson
Yeah, it seems like a lot of resources are wasted on useless pings.

~~~
Flozzin
It does. But if your endpoint is so inactive that it sits idle most of the
time, having it on a server/EC2 instance means you are paying 24/7 for it to
sit there not doing anything. You could argue that paying to keep the lambda
warm isn't much different from paying for a server that sits idle half the
time.

------
kolanos
If you're curious what cost savings AWS Lambda may provide, here's a handy
calculator: [https://servers.lol](https://servers.lol)

------
mooreds
That was really great. It would have benefited from discussing the type of
apps they were building (estimated traffic, etc).

I'm curious whether the 70-90% savings include dev time as well?

------
mataug
Lesson 8: Serverless isn't the solution for everything / everyone.

------
NicoJuicy
Any comments on the statement that Azure Functions is extremely cheap to use?
(Work-related, and new to Azure outside of the App Service and VM parts.)

Also looking for any gotchas; someone mentioned stability/compatibility issues
in another post.

------
jbg_
There's no such thing as serverless, just someone else's server.

~~~
FigmentEngine
There's no such thing as the Internet, just someone else's network.

------
mali9
In my opinion, serverless is not there yet for large-scale, latency-sensitive
use cases (where the latency cannot be hidden by UI tricks). The startup time
of the Lambda runtime (cold container, then the language runtime) is too high
for web use cases where a tail latency of multiple seconds cannot be
tolerated.

Lambda serverless is really good if you have low RPS and want to pay for use
because of that low RPS (prototypes, small production apps, cron jobs,
regularly scheduled events, and compute-intensive jobs like image processing).

------
jtchang
I'm surprised there is no mention of the downsides of JWT. It's good if you
need to scale infinitely, but a total pain if you ever have to invalidate
specific tokens.
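The usual workaround is to give each token a unique `jti` claim and consult a shared denylist on every request, which of course reintroduces the per-request lookup that stateless JWTs were supposed to avoid. A minimal sketch (the in-memory set is for illustration only; in production the denylist would live in something like Redis or DynamoDB with a TTL matching token expiry):

```python
import time

# Per-token revocation for JWTs via a shared denylist keyed on the "jti"
# (JWT ID) claim. "claims" here is the already-verified payload dict;
# signature verification is assumed to have happened elsewhere.

revoked_jtis = set()

def revoke(claims):
    """Invalidate one specific token by recording its unique ID."""
    revoked_jtis.add(claims["jti"])

def is_valid(claims, now=None):
    """A token is valid if it is unexpired and not revoked."""
    now = time.time() if now is None else now
    if claims["exp"] <= now:
        return False
    return claims["jti"] not in revoked_jtis
```

The pain jtchang describes is that every service verifying tokens now needs access to this denylist, so "scale infinitely" quietly becomes "scale as far as your revocation store does".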

------
edem
There are scores of languages that compile to JavaScript, so it is definitely
_not_ the only option. With the advent of WASM this will likely get better, so
we can finally ditch this abomination.

------
jetako
Can't believe I've gone this long (2 years) without knowing about the `sls
logs` command. It's life-changing.
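For anyone else who missed it, typical usage looks like this (`hello` is a placeholder for a function defined in your `serverless.yml`):

```shell
# Fetch recent CloudWatch logs for one function
sls logs -f hello

# Stream logs live, like tail -f
sls logs -f hello --tail

# Only show logs from the last 5 minutes
sls logs -f hello --startTime 5m
```

It saves the round trip of digging through log groups and streams in the CloudWatch console.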

------
digitalpacman
Why does this guy think that JWT tokens protect against CSRF? They're
unrelated. This scares me. I want to know what projects he works on so I can
avoid them. Not knowing that something is insecure is one thing, but
confidently asserting that it's secure is another.

------
whalesalad
Classic post where you jump from one tech to another and completely shit all
over the stuff you were previously using. There’s hardly a real, tangible
difference between Express and Flask. They both do routing and turn requests
into responses. When folks make naive, blanket statements like the ones in
articles like this, it’s impossible to respect any of it.

I feel bad for this team and the future they’re going to face with the poor
decision making at the top.

~~~
dang
I'm sure you have a point, but making it in the form of a snarky dismissal
breaks the site guidelines:
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html).
Any good effect from being right is drowned out by the bad effect of being a
jerk.

Maybe someone who makes poorer technical choices than you doesn't deserve
respect—that seems dubious, but we can argue about it. The community you're
posting to, however, certainly does, and by posting like this you're not only
disrespecting it but destroying it.

HN is a large, diffuse online community, so the bonds here are inevitably
weak. Agitating snark acts as a solvent on those bonds, making the community
less cohesive. This is exactly the opposite direction to the one we need. All
the default forces already point that way; please don't make them worse.

~~~
whalesalad
I’ve been a member of this community a very long time. When content like this
makes it to the homepage, it suggests the community thinks it’s good. Part of
my responsibility as a member of this community is to try and point out when
that is wrong. This post is full of misleading information.

I never said I didn’t respect the person. I said I didn’t respect the
information in the post. I really can’t imagine how my remarks suggest that
I’m a jerk.

Until I got flagged this was the top comment on the thread. The community
seems to agree with my remarks.

I stand by them.

~~~
dang
The issue isn't the corrective information in your post (sentences 2 and 3).
That's great. The issue is the snark and name-calling (and borderline personal
attack) in the rest, which is what I referred to as being a jerk. I'm not
saying and don't think that _you're_ a jerk; it's an unintentional side
effect, but one we all need to guard against. As you know, HN is trying to be
a forum where people post civilly and thoughtfully.

You can't judge this by upvotes. Indignation and snark often get heavily
upvoted. That's a bug in the voting system, not an indication of comment
quality. HN can't live by upvotes alone.

------
shruubi
My god, this article reads like the author has just joined a cult and is about
two more claims of serverless being the high-exalted away from drinking the
Kool-Aid.

I mean, it's wonderful that you went to a conference and were able to take
away some interesting lessons to apply to your product, but coming back and
dropping your entire platform to rebuild in a new language just to use the
fancy new things you learnt is utterly insane.

But hey, Kubecon has just finished, so I look forward to the upcoming article
"How I rebuilt everything around Containers", should be thrilling to hear
about how you tore up your product and rebuilt it a third time based upon some
cool conference talks.

~~~
dang
Please see
[https://news.ycombinator.com/item?id=17007019](https://news.ycombinator.com/item?id=17007019)
and
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
and don't post like this here, regardless of how right you are and how
ignorant someone else may be.

Comments like these damage HN far more than a weak article.

