
Containers won the battle, but will lose the war to serverless - rmason
https://read.acloud.guru/simon-wardley-is-a-big-fan-of-containers-despite-what-you-might-think-18c9f5352147
======
thinkpad20
When someone speaks in the style of this guy — absolute certainty of the
superiority of their chosen technology, dismissive of or glossing over its
disadvantages, condescending to those who use or advocate other technologies —
it makes me very dubious of their claims. It also makes me assume they’re
trying to sell me something.

~~~
tyrankh
Agreed. This is clearly a fluff piece promoting his technology du jour.

------
matthewmacleod
I hate to be all dismissive, but this is meaningless twaddle.

I'm baffled that anybody with _any_ experience of software development could
think that something like FaaS will solve logic duplication issues; the
hypothetical "118 systems doing pretty much exactly the same thing" exist
because of _organisational failures_, and the same thing will continue to
happen regardless of how that logic is implemented.

~~~
zlynx
And if the organization does decide to have a "one official system" then it
will need 118 option parameters or weirdly specific virtual functions in a
base class and become a disgusting, complex mess. Because "pretty much the
same thing" is not "exactly the same thing." Sometimes it can be hammered into
place. Other times those differences were important.

So yeah, I agree with you that duplication will continue to happen a lot.

~~~
Macha
And sometimes this results in the 1 system being so complicated that it's less
effort to build and maintain system 119 than to hammer system 1 into a round
hole.

------
seibelj
> Sure, it would be nice to have a competitive environment with different
> providers you can switch between. But that is secondary to usefulness and
> functionality, because companies are in competition with each other anyway.
> And the chance of getting the actual providers to agree is pretty close to
> zero. Everybody goes: “Well, we’re going to differentiate on this or that.”

That is exactly the situation with Kubernetes and containers, and it is nice
indeed, as I can easily switch between providers. In fact, my company got free
DigitalOcean credits and was able to seamlessly host a cluster on DO for free
to enhance testing. This is the ultimate.

What scares me about serverless is the complete lock-in with the vendor. It’s
the same reason Google App Engine just couldn’t be a game changer. You need
that balance between customization and commoditization, which I think
serverless fails on, unless some protocol like Docker comes in to save it.

~~~
zlynx
From what I've seen with serverless, it doesn't look that hard to build your
own runtime for it if you did want to run outside of your vendor and avoid
lock-in. It would turn your stuff back into a stack of containers, of course.

~~~
brutopia
That’s the relatively easy part. There’s also a lock-in with all the
integrations as you probably need to have some sort of persistent storage and
all the other managed services helping to cope with the complete
statelessness.

~~~
k__
And you have this with containers as with serverless. So no argument here.

~~~
ec109685
No, with Kubernetes, you specify, for example, that your app needs block store
of a certain size or ingress at a certain port. The runtime knows how to
satisfy that on each cloud.
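
For illustration, the kind of portable requirement described above might look like this in Kubernetes manifests (names and sizes here are made up); each cloud's storage class and ingress controller decides how to satisfy them:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi        # "block store of a certain size"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080   # "ingress at a certain port"
```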

~~~
k__
What I meant was, once you set up the container and get your data into S3, it
is in there; the same goes for serverless.

Nobody prevents you from writing your serverless function so that it needs a
block store of a certain size or ingress at a certain port.

~~~
ec109685
But there is no standard way to specify those requirements in a cloud agnostic
manner.

------
snowwrestler
Isn't "serverless" just a fancy way of saying "someone else's containers"?

~~~
CharlesW
> _Isn't "serverless" just a fancy way of saying "someone else's
> containers"?_

No, "serverless" just means that there are no servers (or VMs, or containers)
to be directly managed by the developer.

~~~
ams6110
In that sense, any 3rd party API, e.g. Stripe, or whatever, is "serverless"
from the standpoint of the developer using it, but I doubt many would think of
it that way.

~~~
moduspwnens14
That may change. S3, for example, is a building block for applications and
abstracts away servers completely from the developer.

As do the following:

* SQS

* SNS

* DynamoDB

* Step Functions

* Lambda

* API Gateway

* CloudWatch Logs

* Cognito

You can build entire applications with the services above and never have to
bother managing instances. It's not just about running code.

------
zlynx
Serverless really needs to work on its latency, I think.

Things will be going great and then there's the odd 2-second delay. I guess
it's bringing up a new server or container to run the lambda in.

Whereas with your own (or well, Amazon's) machines you can scale up before
hitting the limits and not need long pauses.

Maybe one day they'll fix that.

~~~
Ros2
I just started using cron to keep my cloud functions warm, at a cost of pennies
per month. It feels like a strange ceremony that lets more knowledgeable people
game the system. Even figuring out how long it takes for your functions to go
'cold' is a secret handshake you can't find in proper documentation.

I'm curious what's going to happen when everyone else does this too. It goes
without saying this isn't the intended use at the price they've set, and it's
also apparent that >75% of customers will likely choose to make this
performance optimization before going to production, or after complaints about
bad latency.

Also, it's a bit scary that even with keeping a single server warm, you still
pay the cold-startup penalty on subsequent scale-ups. AFAIK, no cloud provider
has claimed to have 'solved' this (yet more secrecy in how the platform is
managed).

~~~
heavenlyblue
I don't see why they couldn't implement a pricing option where you pay for the
RAM used to keep the lambda hot.

After all this is just keeping it "loaded in memory".

~~~
wahnfrieden
It'd be a regressive concession misaligned with the goals of serverless - a
return to peak capacity planning. _How many_ containers do you keep warm?
Might as well just use traditional non-serverless platforms at that point.

------
shadowmint
He's right (in some ways).

I know it's written in an irritating style, but I feel like this kind of
scaled application deployment strategy is basically the future, and you'd be
very very naive to dismiss it as just some marketing hype.

The ability to deploy arbitrary micro-services to a cluster is basically all
this is, under the hood, and that's something that's intrinsically beneficial.

Sure, you may argue that the difference between deploying _code_ to a cluster
and deploying a _container_ to a cluster seems largely immaterial, but
ultimately it's hard to argue for choosing the container in most cases if you
can delegate the work of 'configure and manage the container and container
infrastructure' to someone else.

There's no question it'll slowly start eating the container world, in my
opinion; the barrier to entry is low, and ultimately (with something like
OpenWhisk) you can still use a container if you do need to package, say, a
specific version of OpenCV for some specific purpose.

Can anyone suggest why this is a bad approach?

It doesn't sound like a bad approach to me, and a lot of people are taking a
lot of interest in it.

The problem is the people who are pitching "microservices as a service", as
though you might be able to delegate some parts of your service out as a
microservice to other people: like, you could run an 'lpad service', and there
will be a new 'microservice app store', and people will all flock to use it,
and a few people who write the early 'good functions' will Get Rich Quick.

...and every time any single one person screws up their service, or has a
tizzy and shuts it down, dozens of applications will fail.

What a disaster waiting to happen; it's ridiculous, and all the noise about it
is just people looking for funding (or just clueless, I have no idea).

...but, that doesn't change the fundamental value proposition of serverless:
auto-scaling without devops.

(Also:

> And we’re seeing the exact same pattern with serverless. Yes, other
> companies could challenge Amazon’s dominance. But they probably won’t,
> because they don’t believe it’s for real. And by the time they do, it’ll be
> too late.

Yep. That's spot on as well)

------
EtDybNuvCu
To quote from the interview, verbatim, "Sure, it would be nice to have a
competitive environment with different providers you can switch between. But
that is secondary to usefulness and functionality, because companies are in
competition with each other anyway. And the chance of getting the actual
providers to agree is pretty close to zero. Everybody goes: “Well, we’re going
to differentiate on this or that.”

The one exception, of course, being how everybody seems to have come together
around containers. So now everybody’s excited about containers, but the
battle’s shifted up. So you’ve won the battle, but lost the war."

The interviewee here manages to make a Santayana mistake without realizing it;
eventually, just as containers standardized, serverless functions will
standardize, and cloud vendors will either conform or marginalize their
offerings in response.

~~~
wmf
But by the time serverless is standardized he'll probably be pimping the next
thing.

------
mankash666
There are many connection-oriented protocols that can never run on
function-as-a-service (FaaS) offerings, which the article seems to suggest is
the only type of serverless. Fundamental applications like email, WebRTC, even
git, cannot run inside FaaS.

The future, in my opinion, is offerings like AWS Fargate. The developer writes
logic that is containerized, and the auto-scaling, scheduling, etc. are handled
in a seamless fashion by the provider, removing the burden of ops from the
developer. In many ways, this is a connection-oriented FaaS, and that is where
the future likely lies, due to its wide applicability.

~~~
k__
Isn't Lambda just the glue?

I mean, there is also Kinesis Streams, AWS IoT, and such.

------
darkmarmot
He's almost right! The Erlang VM (BEAM) will soon replace all the things :)
(See Grisp: [https://www.grisp.org/](https://www.grisp.org/))

------
ErikAugust
Lambda is somewhat standardized and competitive. If you look at the Node API,
for example, you can port your endpoint to Express or Koa in 20 minutes. You
aren't locked in by any means.

~~~
k__
Yes. You just need a tiny wrapper to convert your Express endpoint to an API
Gateway Lambda.

------
marcell
I'm new to serverless, can someone clear up two things for me?

1. Can I run a standard Rails or Django app on "serverless"?

2. If I'm running on Heroku and don't directly manage servers, does that count
as "serverless"?

~~~
pronoiac
Another name for serverless is function as a service, which might help
clarify. No to both of your questions.

~~~
tmnvix
Zappa[1] allows you to do this (for Python WSGI apps - I'm not sure if there
is a similar option for hosting a Rails app serverlessly). I've found it very
straightforward to use for my Django app. I also like that if I choose to, I
can ditch Zappa and host my app elsewhere (such as on Heroku) without really
changing my app's code.

[1] [https://www.zappa.io/](https://www.zappa.io/)

~~~
pronoiac
Huh, I'd thought Lambda covered smaller components than those asked about. TIL.

------
ilaksh
If they are talking about AWS Lambda-like stuff or just using 3rd-party API
services... there are very obvious advantages to doing things that way over
managing containers or VMs. Of course, lots of people will still be using
containers and regular VMs for quite some time, just because it will be more
practical for a while for their use case, or just out of familiarity.

But anyway I think that eventually the real 'serverless' will be peer-to-peer
systems like Ethereum smart contracts, IPFS, dat, etc.

------
olavgg
I really like the idea of serverless. In my use case, a serverless setup would
be great for running predictions on, for example a chatbot where you predict
the intent. It would be a lot more cost-effective than running the model on our
own GPU/CPU instances.

------
davesque
Wouldn't it be great if someone could figure out a way to express the logic
that implements a function service as a compact, machine readable package
which could be locally "cached" by all services that depend on it?

------
013a
Serverless is the future. But I doubt FaaS is. FaaS has a place, especially
around latency-insensitive or bursty event handlers. But to suggest that it
will replace any traditional server is pretty farfetched.

~~~
k__
Aren't they working on that?

I mean there is stuff like kinesis streams, AWS IoT and Lambda@Edge

------
davesque
I'm just waiting until serverless platforms develop build scripts that create
file system deltas to make deployments more storage efficient.

~~~
vikiomega9
Wouldn't version control solve this problem anyhow?

------
k__
Good to know. Wanted to go full-stack this year and decided to try
"serverless", let's see how it goes :D

------
whalesalad
This is like claiming cars will lose the battle to planes. These technologies
are not mutually exclusive.

------
jaequery
I was actually skimming his article to see if he mentions decentralization or
blockchain somewhere.

