
The hidden costs of serverless - dfirment
https://read.acloud.guru/the-hidden-costs-of-serverless-6ced7844780b
======
jeswin
I wish people were upfront about conflicts of interest when writing articles
like this. The author is the CEO of Spotinst, which is listed in the last
table as 2x-10x cheaper than any of the other FaaS options. The article is
basically marketing copy devoid of technical details.

~~~
keithwhor
This isn't a conflict of interest. It's developer marketing. A Cloud Guru, the
publisher, is a company that sells developer education / training - should
they have disclaimers explaining that they make money by selling education
before every blog post or comic they put out? Amiram is providing important
contextual advice for developers, PMs and executives, and has the added
benefit of _having a real product in market that will solve these problems for
you_.

Most of HN is thinly veiled marketing supporting special interests. I think a
huge part of the _value_ of HN is lowering the barrier to entry to build great
companies by providing a platform to announce and promote your product / team
/ etc. as well as educate people with quality content. We celebrate "Launch
HN" and funding events, and are going to criticize a founder for building
something awesome and talking about it? That seems a bit silly to me.

~~~
TomBombadildoze
> This isn't a conflict of interest. It's developer marketing.

> The article is basically marketing copy devoid of technical details.

So since you agree with OP that it's marketing, _perhaps_ it's not a stretch
to suggest that it's a bit shady for the CEO of Super Cheap Cloud Product to
author a marketing piece decrying the hidden costs of OMG So Expensive Cloud
Products as though it's just helpful advice?

> I think a huge part of the value of HN is lowering the barrier to entry to
> build great companies by providing a platform to announce and promote your
> product / team / etc. as well as educate people with quality content. We
> celebrate "Launch HN" and funding events, and are going to criticize a
> founder for building something awesome and talking about it?

Apples and oranges. There's a big difference between "check out this thing I
made" and "these products all have somewhat opaque problems _by the way I'm
CEO of a competitor_".

~~~
keithwhor
Developer marketing != valueless. Every single $B dev co you know has invested
millions of dollars in developer outreach and content. It seems the issue is
that there's no "technical details" as if a post titled "the hidden costs of
serverless" would be about anything but _costs_. Perhaps the original response
expected "costs" in terms of technical debt - in which case, that actually is
covered as well, making the criticism a little underwhelming.

If you don't agree with the article, that's one thing. Can you pick out
anything in the post that's counterfactual? Look - I'm the CEO of a cloud
product that, in some sense, could be considered competitive with Amiram. I've
made _plenty_ of upvoted HN comments and Medium posts talking _directly_ about
our company and what we do - hell, there's one posted here if you scroll down.
I take a great deal of pride in it, and I'm very enthusiastic about the future
of the space. It's for that reason I consider the criticism a little unfair.

If the criticism is simply, "be more direct with your marketing and assert
yourself and your company" -- that's totally fine. But dismissing an entire
article because it's _actively providing advice and context_ because the CEO
runs a company in the space (wouldn't that put him in the best position to
understand the market?) is... well, like I said, kind of silly.

If the above is, in fact, the actual criticism - perhaps we should ask A Cloud
Guru if they'd be more willing to allow promotional posts to feature partners
and neat products. A big problem with content marketing (generally) is the
people who control the flow of content often ask for pieces to appear as
neutral as possible, where a huge chunk of motivated content producers are
actually trying to push a product or service (platform, consultancy,
themselves and their career, their open source, you name it).

~~~
nicodjimenez
Agreed. Ultimately we all want the best software to win. Which means we
sometimes have to hear people out, regardless of whether there's self interest
involved on the author's part.

~~~
onion2k
I don't think we all want the best software to win. If you're the CEO of a
software company then you want your software to win _even if you know it's
not the best_. You might choose to write articles about how the other software
is actually worse because there are lots of hidden costs. If you did that then
you'd need to make it very obvious that you have a partisan opinion though,
otherwise it'd look quite dishonest.

------
candiodari
The hidden cost of all cloud-based software: even the cheapest cloud (seems to
be Google at the moment) is 5x or more the price of an equivalent dedicated
server (2x for "compute", >20x for bandwidth, so 5x is somewhat average; and
that's compared to "normal" dedi providers. OK to use Leaseweb, Hetzner and
OVH? Make it 10x more expensive).

Since VPSes exist, cloud doesn't even make sense for the smallest websites
anymore. Ironically, VPSes predate and are even the basis of cloud systems.

And there's the intangible technical debt that cloud imparts. Whichever cloud
you pick, in the future you'll change. That's how the world works. Switching
dedicated service providers is a complicated ops problem. Switching cloud
providers is a full rewrite of everything AND a complicated ops problem. Just
being on the cloud, by itself, forces you to take on serious technical debt.

I don't understand, even after using cloud at work, what those cloud systems
provide that can't be done better, cheaper, and with less reliance on a single
organisation, on dedicated service providers.

I mean Google's hypervisor is good (very good), but it still imparts
significant costs compared to bare metal.

~~~
cdoxsey
They provide many things:

- managed systems like S3, DynamoDB, etc., which are not trivial to run
yourself

- on-demand and flexible instance types which allow you to try something
without committing

- spot instances, autoscaling groups, access to job scheduling like
Kubernetes. At scale (thousands of nodes) technologies like this are extremely
important

- redundant and reliable service. It also gives you someone to complain to
and blame (they provide SLAs)

- powerful management tooling and APIs that are well documented and widely
understood (you can hire an engineer who has used AWS)

- security: audit trails, integrated single sign-on, private networks, VPNs,
role-based access keys

I could keep going. Also keep in mind that reserved pricing (or Google's
sustained use discount) lowers the cost.

~~~
candiodari
> managed systems like S3, DynamoDB, etc., which are not trivial to run
> yourself

How many organisations need a database to scale like that? Maybe 1000
worldwide, and at least 900 of them are forbidden by law to run them on clouds
(e.g. banks).

Also, cloud is essentially outsourcing server management. If you're of a size
that needs these sorts of databases, you need a big IT department, cloud or no
cloud. At that point, cloud is just vastly more expensive without any monetary
advantage.

And of course, if you use any of these, the lock-in is ridiculous. Also known
as "they have you by the b....", and if one thing is damned sure, the result
will not be saving money, complexity or effort. Or savings of any kind
whatsoever.

> on-demand and flexible instance types which allow you to try something
> without committing

And ever more management systems that let management "control spending",
"control access", ... and so on, designed to prevent that. The problem with
the old system was management and processes getting in the way of
experimenting, and the only advantage cloud has is that it doesn't have decent
support for such processes yet.

Every vendor is racing to build them in, and it's getting worse week by week.
It won't be long before experimenting is impossible in cloud just like it is
in owned datacenter and dedicated setups.

The problem with experiments is management in every company I've ever
consulted for, and cloud doesn't change a thing, aside from, at the moment,
management incompetence getting in the way of their usual sabotage. That will
stop once current engineers that know cloud start getting a few promos and is
already happening.

> spot instances, autoscaling groups, access to job scheduling like
> Kubernetes. At scale (thousands of nodes) technologies like this are
> extremely important

There is no shortage of systems supporting this sort of load on dedicated
machines, servers, ...

> redundant and reliable service. It also gives you someone to complain to
> and blame (they provide SLAs)

Have you read those SLAs? They'll return 10% or so of what you normally pay
(not in cash of course, in vouchers) if their service is out for 33% of the
time or so. That's not an SLA, that's just laughable. If they were out for 5%
of the total time, I'd be so pissed I'd start a chargeback.

As for redundancy and reliability, there is no shortage of vendors with
similar reliability track records, and it is well documented how to make
server setups that provide reliability.

> powerful management tooling and APIs that are well documented and widely
> understood (you can hire an engineer who has used AWS)

And dedicated servers are Linux servers. You can hire engineers, I believe,
who have used those.

> security: audit trails, integrated single sign-on, private networks, VPNs,
> role-based access keys

Yep, these controls are part of what's preventing that experimenting advantage
that currently exists. In reality, all of this relies on well designed
policies at the customer side, and I've yet to see that done properly for more
than a single team in any company.

All of these are of course critically dependent on Google's network, tapped by
the NSA as documented in the famous leaked "SSL added and removed here" slide,
and implicated in numerous privacy breaches; or on Amazon, famous for
"somehow" finding out how the businesses hosted on its cloud work and taking
their customers away (apparently more than 100 court cases against Amazon are
currently going on); or on Microsoft, famous for so many things, including
pushing through centralization in Skype for the admitted reason of gaining the
ability to spy on its customers. I'm sure their ethics in the cloud hosting
department are much better.

And please don't even mention those "encrypted storage" assurances. They're
not even worth being called bullshit. If I get to write the code that you use
to encrypt/decrypt, I can obviously trivially break any key you use. If you
don't believe me, I'll give you a web browser to download, and I assure you it
encrypts all your passwords; then just log into your web banking a few times
and see what happens (note: web browsers inserting "extra" transactions during
web banking are a common thing to find in security work).

So secure ... yeah right.

Sure, it can work. It won't, of course.

> I could keep going.

Yeah you could mention that cloud's real advantage is that engineers believe
that if they get a web-based non-multiplayer version of minesweeper
implemented on kubernetes, they'll get a $200k+ GOOG/FB job. Not true, of
course, but nobody wants to tell em that. In fact, implementing a linked list
in C on a DOS 1.0 C compiler will help them a LOT more.

~~~
paulie_a
Minor point, but AWS can be HIPAA compliant. If it's capable of that, bank
software will be perfectly fine for compliance (although incredibly terrible
for other reasons, but I digress).

~~~
Johnny555
Indeed, Capital One is already moving to the cloud:

[https://aws.amazon.com/solutions/case-studies/capital-one/](https://aws.amazon.com/solutions/case-studies/capital-one/)

[https://www.computerworld.com/article/3145622/cloud-computin...](https://www.computerworld.com/article/3145622/cloud-computing/capital-one-rides-the-cloud-to-tech-company-transformation.html)

And they are even investing in cloud companies:

[https://www.geekwire.com/2017/trying-snowflake-computings-ne...](https://www.geekwire.com/2017/trying-snowflake-computings-new-product-wall-street-capitalone-invests-company/)

------
soulnothing
My current work project is trying to switch to a primary serverless
architecture. I'm trying to fight it but know I'm going to lose.

I don't have any problem with "serverless". Years ago, while at a hosting
company, I wrote an nginx handler that routed to .py files and ran them inside
an LXC container, fronted via HAProxy and documented automatically with
Sphinx.

My problem is the hidden costs. Sure, Lambda requests are low cost. But what
about API Gateway, Dynamo/RDS access? Then consider I'm writing a basic CRUD
app: low-performance, low request rate, noncritical. There is this additional
complexity that just isn't necessary. Every day at my job I'm thinking to
myself it doesn't need to be this difficult.

My usual route is Django, Django admin, and Django REST framework, with
HAProxy in front for load balancing. Dead simple and easy to work with. It's
passé, sure, but KISS. The alternative here is an API and a frontend: several
more projects to maintain, plus tuning the projects to meet Lambda size
requirements/standards. The return on investment is minimal to me. If you
can't automate managing a fleet of containers or VPS instances, then why are
you here?

The vendor lock-in is insane to me. You're pretty much at the whim of the
provider. They raise the prices, what are you going to do? You either go with
the price increase or do a rewrite.

As I was sitting in these architecture meetings on moving to Lambda, I was
hearing about several new packages/repos, new CI/CD, and configuration to
maintain. During the meeting I sketched out a POC using EC2, a load balancer,
RDS and Django to do what they were saying with half the code. But nope, gotta
be Lambda.

~~~
ceejayoz
> They raise the prices, what're you going to do.

In the 11 year history of AWS, I don't think they've ever increased prices for
anything. I'm pretty comfortable with their ability to avoid it in the future.

~~~
hueving
An 11 year history is hardly worth betting what could be the future of your
company on. Especially for something just slightly more convenient than what
can be built in-house.

~~~
vageli
> An 11 year history is hardly worth betting what could be the future of your
> company on. Especially for something just slightly more convenient than what
> can be built in-house.

So you can build a globally distributed, redundant, eventually consistent
object store with high availability, easily and conveniently, and without
allocating developer time that could be better used building... oh, I don't
know, the actual business logic? Not to mention ongoing maintenance and its
expense.

It's ridiculous to suggest that the entirety of the AWS offering is something
that is just slightly more convenient than what could be built in house.

~~~
toomuchtodo
S3 is not globally distributed. Each region is fully independent of the
others. If Virginia is nuked, you lose anything in us-east-1. I’ve confirmed
this with AWS staff because of a prior HN discussion :)

The ease with which Amazon lets you architect incredibly globally redundant
applications is unmatched. But your business probably can’t afford to
implement at that scale, Amazon or otherwise (geographically load-balanced
DNS, multiple primary databases across regions and their associated followers,
replicating S3 objects from all primary-region buckets to secondary regions,
instances for each application running multi-AZ in multiple regions, etc.).

~~~
DarronWyke
Of course, if an S3 region gets nuked, it can have repercussions in other
zones too. Remember the S3 outage?

~~~
toomuchtodo
Coming from a tech/infra/devops background, I remember all AWS outages
painfully ;)

------
bluepeter
> Cold starts. This isn’t the time to dive in deep, but it’s a main reason why
> some companies decided against going Serverless.

Cold starts as an issue? I mean, all you have to do is create a CloudWatch
event (or cron, if you want an explicit server involved) to fire the Lambda
function every few minutes and you're all good. Sure, if you get heavy
traffic, I suppose you could have multiple containers running simultaneously?
But that's also easy enough to handle with CloudWatch events.
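A minimal sketch of what such a keep-warm handler might look like. The
`{"warmer": true}` payload is an invented convention here, not an AWS one: it
assumes the CloudWatch Events rule is configured with that JSON as its
constant input, so the function can tell a keep-alive ping from real traffic.

```python
# Hypothetical keep-warm pattern: a scheduled CloudWatch Events rule
# (e.g. rate(5 minutes)) invokes the function with a constant payload
# {"warmer": true}; the handler short-circuits on it so the container
# stays warm without doing any real work.

def handler(event, context=None):
    # Short-circuit on the assumed keep-alive payload.
    if isinstance(event, dict) and event.get("warmer"):
        return {"warmed": True}

    # ... normal request handling would go here ...
    return {"statusCode": 200, "body": "real work"}
```

The schedule expression and the constant-input payload live entirely on the
CloudWatch Events side; nothing else in the function changes. As noted below,
this only keeps one container warm, so concurrent traffic can still hit cold
starts.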

~~~
mslate
Wow, sounds easy--almost as if it should be that way out of the box...

~~~
dasil003
The economics don't work out if they keep them all hot all the time.

~~~
hliyan
Why can't they give a simple Boolean setting for a slightly higher price:
"always up"?

~~~
bluepeter
It's probably not as easy as that. Lambda functions run in a container. If you
have a spike in traffic, you may have 10 different containers handling
requests. So should they keep 1 container always up, or 3, or 10? I suppose
you could have some ELB-type rules to "auto-scale" new containers...

------
nwmcsween
Serverless sort of got taken over by people trying to push PaaS subscriptions.
A real serverless architecture would be something like IPFS and CRDTs.

~~~
kitotik
When you think of it that way, in practice it really is the exact opposite of
the “Serverless” buzzword: it 100% definitely absolutely requires a
persistent, centralized remote server.

Decentralized solutions 100% definitely absolutely do not.

------
keithwhor
It's interesting, because as the Serverless space evolves there are two
emerging sales and marketing tracks for the technology:

(A) This will reduce your costs (CIO Track)

(B) This will reduce your time to market (PM / Developer Track)

Amiram's article here clearly tries to deconstruct the argument in (A) and
suggest, well, it's more expensive than you think (and Spotinst is the
cheapest option - kudos to Amiram and team) which... is undeniably true, and
API Gateway is a large part of that. It seems, based on re:Invent 2017 in
November, that Amazon is moving to target the (B) track more aggressively,
which is where we've been residing in a niche with StdLib since we launched
[0].

Realistically I think both tracks are actually half-truths (as is all
marketing), as Amiram even touches on re: lines of code written to maintain
Serverless architecture. At the end of the day, Serverless technology is going
to open up a world Simon Wardley [1] has painted a picture of: one of
reliable, fault-tolerant, self-healing, predictably priced _service
composition_ that can be performed by developers who are increasingly unaware
of implementation details, and perhaps not even developers in the traditional
sense.

It's been a neat exercise at StdLib trying to find the intersection between
current development paradigms (old hat monoliths, etc.), the utility / lower
cost / lower time to delivery of serverless tech, and the future of emergent,
unexplored markets. As the marketing sizzle of "serverless" begins to fade (it
hasn't peaked yet, but it will), and we see more challenges to the technology
itself, it will be interesting to see what _business practices_ and
_development paradigms_ pop up around serverless architectures, and how
companies begin maximizing the utility of a new development canvas.

One day it won't be "serverless," it will simply be best practice for most
companies to ship application logic directly to the runtime layer. What does
that world look like, where will costs (/prices) settle, and how do all
players in the market continue delivering the most value to developers and
companies? It'll be neat to watch it all play out, to say the least.

[0] [https://stdlib.com/](https://stdlib.com/)

[1] [http://blog.gardeviance.org/2016/11/why-fuss-about-serverles...](http://blog.gardeviance.org/2016/11/why-fuss-about-serverless.html)

------
duncan_bayne
> Like the jump from on-premises to the cloud, the move to Serverless is more
> or less inevitable.

Speaking as a fan of AWS: neither of those is in any way inevitable.

------
pbreit
Is serverless really as inevitable as the move to cloud? The extra moving
parts don’t seem worth it for a garden-variety CRUD app so far, IMO.

~~~
convolvatron
The extra moving parts aren't... but I think the point was to remove the
existing moving parts (installing and maintaining a server instance). They
just got it wrong.

It would save everyone a lot of time and energy to not have to deal with
server instances, so it's likely a better model will come along. Persistent
storage in the presence of scaling and concurrency is a bit of a thorn,
though.

~~~
bamboozled
That's the thing though, how hard is it to manage a VM in 2018?

If you're using Terraform + modern config management tools I find it a breeze.
I feel like this is a weak argument.

If developers are unable to understand how something like Salt, Chef or
Ansible works then I'd be surprised.

~~~
convolvatron
It's not that people can't understand [Salt, Chef, Ansible, Terraform]; it's
just a non-trivial amount of human effort, error, and upkeep just to say 'run
my program'. Kind of a shame that every shop has to have one or a few devops
people on hand to do that.

I guess it mostly stings when there is a critical OpenSSL update, or something
analogous. Despite the ideal of continuous integration and lightweight
deployment, none of the shops I've worked at lately can make that happen in
the 10 minutes it's supposed to take.

~~~
bamboozled
With all due respect, I think it’s laziness. Most of the good software
engineers I know spend the one to two hours or so it takes to get familiar
with these systems.

------
dfirment
There are more costs to Serverless than just CPU and RAM — and for many users,
the additional cost categories of API Requests, Storage and Networking will be
the major cost drivers.

------
galaxyLogic
I think serverless basically just means sharing more with other customers. You
don't just share the same hardware running your own VM on it; you share the
same VM executing a big program, one part of which is your lambda functions.
This leads to better utilization of hardware resources.

It's not about cloud vs. in-house: you might have the lambdas of your
different departments running in the same VM.

"serverless" really means "shared program".

------
alexnewman
Just don't use SQL databases. Lambda will knock over your DB

~~~
rwol
Could you expand on this? I’ve been trying to learn more about using Lambda as
a serverless REST API, and one of my projects has a SQL DB.

~~~
rmrfrmrf
Traditionally, a web application creates a group of always-on reusable
connections (a pool) to a database server, since setting up a connection takes
time and each connection uses up resources on the database server that could
otherwise be used for computation. If there are more requests to your API that
require the database than there are connections in the pool, those requests
are queued and handled when the next connection is freed. These connections
are then closed when the app shuts down.

The problem on Lambda is that you can’t persist any application state beyond
the function call itself, so you can’t take advantage of connection pooling
the way you normally would in a self-contained webapp. Without some kind of
middleman, each database call will require you to open a new connection to the
database and close it when the function ends. This becomes catastrophic for
the database server as the number of requests scales up.

~~~
mrep
Yes you can. You are supposed to instantiate those connections outside the
function call, as per the docs.

Example: [https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds-deploym...](https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds-deployment-pkg.html)
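A sketch of the pattern those docs describe: the connection lives in module
scope, outside the handler, so warm invocations of the same container reuse
it. `FakeConnection` stands in for a real driver call such as
`pymysql.connect` (an assumption for illustration; real code would connect to
RDS).

```python
# Connection reuse across warm Lambda invocations: module-level state
# survives between calls within the same container, so the connection is
# created once per container rather than once per invocation.

class FakeConnection:
    """Stand-in for a real DB connection; counts how often it's created."""
    instances = 0

    def __init__(self):
        FakeConnection.instances += 1

    def query(self, sql):
        return f"result of {sql}"

connection = None  # created lazily, once per container

def handler(event, context=None):
    global connection
    if connection is None:           # only the first (cold) invocation
        connection = FakeConnection()
    return connection.query(event["sql"])
```

Note the caveat that may explain the "tried and failed" report below: this
only reuses the connection within one container. Each concurrently running
container still opens its own connection, so a traffic spike can still exhaust
the database's connection limit.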

~~~
alexnewman
Tried and failed

------
fuball63
API Gateway costs on AWS seem like a massive "gotcha", especially considering
how access via HTTP is super important for those worried about vendor lock-in.

This article was written by someone with a competing service, but I know there
are a bunch of projects right now (including mine) that come out of the box
with HTTPS, run generic/vendor-agnostic containers, and can even be self
hosted.

Self-hosted seems to offer the best of both worlds: FaaS with fine-grained
control over platform costs, at the cost of the devops work to set up and
maintain it.

------
methodin
Just as with anything, it's evaluated on a case-by-case basis. Serverless is
GREAT at starting something, testing it, and then evaluating whether it's even
worth spending time on actual server-based stuff. You can get stuff going
immediately with little effort, without the overhead of setting up a box,
managing deps, and all the nuances that come up standing up classic servers
(even in the cloud). Is it a silver bullet? No, though I'd say it might be for
fleshing out ideas.

------
drdrey
I'm curious about the linear lines-of-code growth -- has anybody actually
experienced that? What is causing it?

~~~
shortj
There's an interesting side effect I've seen that a serverless approach
enables, which is that it is now much easier to logically separate your
various routes and logic to "route handlers" or "services".

However, if you naturally extend your system with these logically separated
handlers in a vacuum, which is easy to do, and you have not thought through
your packaging and dependency management, you can quickly fall into a pattern
of producing a lot of duplicate boilerplate and utility functionality that a
monolith would have avoided. Basically, take all the pain points and downsides
of SOA and make it really easy to make all the mistakes.

That said, when approached with foresight, it's a perfectly manageable problem
and I wouldn't agree that it is linear.
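A toy illustration of the duplication described above, with all names
invented: two separately packaged handlers each bundle an identical copy of a
validation helper that a monolith would have kept in one shared module.

```python
# Two serverless handlers packaged independently. Because they share no
# common package, each ships its own copy of the same validation logic --
# the boilerplate growth the comment above describes.

def _validate_a(payload):                # bundled with handler A
    return isinstance(payload, dict) and "user_id" in payload

def _validate_b(payload):                # identical copy bundled with handler B
    return isinstance(payload, dict) and "user_id" in payload

def create_order(event):
    return {"statusCode": 201 if _validate_a(event) else 400}

def cancel_order(event):
    return {"statusCode": 200 if _validate_b(event) else 400}
```

Extracting the helper into a shared, versioned package that both deployments
depend on is the kind of up-front packaging decision that keeps this growth
from becoming linear in the number of functions.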

------
denkmoon
"You will save time: No more [...] thinking about how your application will
scale up or down"

What a dangerous thought.

------
anfilt
What is old is new again! Whoo-hoo.

Regardless, it's still not server-less... The entire system is still running
off a server...

~~~
balls187
Which you do not manage in any way shape or form, nor do you have the ability
to.

~~~
anfilt
I am not saying anything is wrong with that. I am saying it still runs on a
server...

So server-less is a misnomer.

~~~
cwyers
Do you think that there are actual daemons in your computer? If you have to
kill a process, do you only do so in self-defense?

~~~
anfilt
Very funny. No, but what if I called a virtual machine "CPU-less"? Bad name? I
would say it is. Also, treating a long-running task as a little demon
performing a task is more allegorical, same with killing a process. I am not
sure I would call "server-less" allegorical.

------
na85
"Serverless" is a silly name because there are of course still servers. Peer-
to-peer architectures could perhaps charitably be called "serverless", but I
digress.

To me it sounds like this whole "serverless revolution" is just the product of
Amazon's PR team who found a nice term to befuddle and bedazzle know-nothing
CEOs.

~~~
ceejayoz
Look, at some point, my "wireless" internet involves wires, but I don't really
have to _worry_ about managing them, so it's wireless in the ways that matter
to me.

~~~
na85
That's a fair point, but wireless internet is a different mode of delivery to
your house, whereas "serverless" is just adding a layer of abstraction, and I
don't really think that these things are analogous.

