
“Lambda and serverless is one of the worst forms of proprietary lock-in” (2017) - peter_d_sherman
https://www.theregister.co.uk/2017/11/06/coreos_kubernetes_v_world/
======
cwyers
I have worked with database code that was meant to only work with the database
it was running on, and database code that was meant to be agnostic to what
database you used. I always thought that the costs of the second were
underappreciated relative to their benefits. And unless you're actively
maintaining running on more than one database (as in, shipping a product where
your users have more than one database) you tend to miss all the implicit ways
you come to depend on the implementation you're on -- yes, the syntax may be
the same across databases, but performance impacts are different, and so you
tend to optimize based on the performance of the database you're on.

I suspect the same is true for cloud. Real portability has real costs, and if
you aren't incurring all of them up front and validating that you're doing the
right things to make it work, then incurring part of them up front is probably
just a form of premature optimization. At the end of the day, all else being
equal, it's easier to port a smaller codebase to new dependencies than a
larger one, and attempting to be platform-agnostic tends to result in more
code as you have to write a lot of code that your platform would otherwise
provide you.

~~~
jasonkester
It’s not just portability that’s an issue with lambda. It’s also churn.

Running on Lambda, one day you’ll get an email saying that we’re deprecating
node version x.x, so be sure to upgrade your app by June 27th when we pull the
plug. Now you have to pull the team back together and make a bunch of changes
to an old, working app just to meet some third party’s arbitrary timeframe.

If you’re running node x.x on your own backend, you can choose to simply keep
doing so for as long as you want, regardless of what version the cool kids are
using these days.

That’s the issue I find myself up against more often when relying on Other
People’s Infrastructure.

~~~
nerdbeere
It's not about using what the cool kids use these days. I can't stress enough
that unmaintained software should _not_ run in production.

This way you have a good argument towards management, and if you do it
regularly or even plan it ahead of time, it's usually not much work.

During a product planning meeting: "Dear manager, for the next weeks/sprint
the team needs X days to upgrade the software to version x.x.x otherwise it
will stop working"

~~~
jasonkester
I guess we have different philosophies then. My take is that software in
production should not require _maintenance_ to remain in production.

Imagine a world where you didn't need to spend a whole week every year, per
project, just keeping your existing software alive. Imagine not having to put
off development of the stuff you want to build to accommodate technical debt
introduced by 3rd parties.

That's the reality in Windows-land, at least. And I seem to remember it being
like that in the past on the Unix side too.

~~~
patrec
Your vision is only workable for software for which there are no security
concerns. This might improve to the extent industry slowly moves away from
utterly irresponsible technologies like memory-unsafe languages and brain
damaged parsing and templating approaches and more or less the whole web
stack. I wouldn't hold my breath though. And even software that's not
cavalierly insecure will have security flaws, albeit at a lower rate.

~~~
jasonkester
Keep in mind that you're arguing against an existence disproof. The Microsoft
stack, for example, is a pretty big target for attack, and has seen its share
of security issues over the years.

But developers don't need to make any code changes or redeploy anything to
mitigate those security issues. It all happens through patches on the server,
99% of which happen automatically via windows update.

~~~
avodonosov
Yes, Microsoft is good at backward compatibility.

So many open source hackers do not know the basic techniques for backwards
compatibility (e.g. don't rename a function, just introduce a new one, leaving
the old one available).

I'm spending very significant effort maintaining an OpenSSL wrapper because
OpenSSL constantly removes / renames functions. I hoped to branch based on
version number, but they even changed the name of the function which returns
the version number.

And that's only one example; a lot of people make such mistakes, costing users
huge effort.

And then there's the popular semantic versioning myth: that you just need to
bump the major version number when you change the API incompatibly to save
your clients from trouble.

~~~
ben0x539
> So many open source hackers do not know the basic techniques for backwards
> compatibility (e.g. don't rename a function, just introduce a new one,
> leaving the old one available).

I'd dispute this, or at least I think this doesn't capture the whole picture.
Microsoft makes money from backwards compatibility and can afford to spend
significant effort on the ever-growing burden of remaining backwards-
compatible indefinitely. Open source volunteers are working with much more
limited resources, and I think it comes down much more to intentional
tradeoffs between ease of maintenance and maintaining backwards compatibility.

If you have a low single-digit number of long-term contributors, maybe the
biggest priority to keep your project moving at all is to avoid scaring off
new contributors or burning out old contributors, and that might require
making frequent breaking changes to get rid of unnecessary complexity asap.
Characterizing that as "they don't know that you can just introduce a new
function" doesn't seem like it yields instructive insights.

~~~
avodonosov
Yes, this is exactly the wrong reply I often hear when complaining about
backwards compatibility.

The mistake here is that in 99% of cases backwards compatibility costs nothing
- no effort, no complexity.

Given two choices that cost the same, the people breaking backwards
compatibility are simply making the wrong one.

> maybe the biggest priority to keep your project moving at all

When you rename function SSLeay to OpenSSL_version_num, where are you moving?
What does it give to your project?

Ok, if you like the new name so much, what prevents you from keeping the old
symbol available?

    
    
            /* Keep the old name available as an alias for the new one. */
            unsigned long (*SSLeay)(void) = OpenSSL_version_num;
    

(Sorry for naming OpenSSL here, it's just one of many examples)

When developers do such things, they break other open source libraries, which
in turn break others. It's a huge destructive effect on the ecosystem. It will
take many man-days of work for the dependent systems to recover. And it may
take years for the maintainers to find those free days to spend on recovery,
and some projects will never recover (e.g. no active maintainer).

With the lift of a finger you can save humanity significant pain and effort.
If you have decided to spend your effort on open source, keeping backwards
compatibility by making the right choice in a trivial situation will make your
contribution an order of magnitude more efficient.

So, I believe people don't know what they are doing when they introduce
breaking changes.

~~~
avodonosov
I have seen developers introduce breaking changes, then find projects depending
on them and submit patches. So they really have good intentions and spend more
of their volunteer open source energy than necessary. And when the other
project cannot review and merge their patch (no maintainers), they get
disappointed.

So please, just keep the old function name. It will be cheaper for you and for
everyone.

~~~
pnutjam
An unmaintained duplicate way of doing things is a mistake waiting to happen.

~~~
jammygit
I was just thinking this, but I guess we're really just talking about API
changes. Everything under the API can still get rewritten, no?

------
scarface74
This is not true.

For example:

Using this template.

[https://github.com/awslabs/aws-serverless-express](https://github.com/awslabs/aws-serverless-express)

I’ve been able to deploy the same code as a regular Node/Express app and a
lambda with no code changes just by changing my CI/CD Pipeline slightly.
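
The shape of that wrapper is worth seeing. Here's a hand-rolled sketch of the
pattern (not the actual aws-serverless-express API): the request handling is
written once, and only a thin adapter is Lambda-specific.

```javascript
// Plain request handling, written once, with no AWS-specific code in it.
function handleRequest(path) {
  if (path === '/health') return { status: 200, body: 'ok' };
  return { status: 404, body: 'not found' };
}

// Entry point A: a normal Node HTTP server for VM/container deployments.
// require('http').createServer((req, res) => {
//   const { status, body } = handleRequest(req.url);
//   res.writeHead(status).end(body);
// }).listen(3000);

// Entry point B: a Lambda handler adapting the Lambda event shape to the
// same plain handler. Swapping deploy targets changes only this layer.
const lambdaHandler = async (event) => {
  const { status, body } = handleRequest(event.path);
  return { statusCode: status, body };
};
```

With the real library the adapter is roughly `awsServerlessExpress.proxy(...)`
around an unmodified Express app, but the division of labour is the same.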

You can do the same with any supported language.

With all of the AWS services we depend on, our APIs are the easiest to
transition.

And despite the dreams of techies, more than likely you aren’t going to change
your underlying infrastructure after a while.

You are always locked into your infrastructure choices.

~~~
dickeytk
You're only thinking about the _input_. Technically, yes, I can host an
express app on Lambda just like I could by other means, but the problem is
that it can't really _do_ anything. Unless you're performing a larger job or
something, you probably need to read/write data from somewhere, and connecting
to a normal database is too slow for most use-cases.

Connecting to AWS managed services (S3, Kinesis, DynamoDB, SNS) doesn't have
this overhead, so you can actually perform some task that involves
reading/writing data.

Lambda is basically just glue code to connect AWS services together. It's not
a general purpose platform. Think "IFTTT for AWS"

~~~
staticassertion
OK. So you connect to Postgres on RDS - cloud agnostic.

You connect to S3, and:

a) You can build an abstraction service if you care about vendor lock-in so
much

b) It has an API that plenty of open source projects are compatible with (I
believe Google's storage is compatible as well)

Maybe you use something like SQS or SNS. Bummer, those are gonna "lock you
in". But I've personally migrated between queueing solutions before and it
shouldn't be a big deal to do so.

It's really easy to avoid lock-in; Lambda really doesn't make it any harder
than EC2 at all.
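
A sketch of what option (a) can look like in practice, with hypothetical
names: application code depends on a two-method queue interface, and each
vendor gets its own small adapter.

```javascript
// The only surface the application depends on: send/receive.
// An SQS-backed adapter would implement the same two methods around
// the AWS SDK; swapping vendors means writing one new adapter.
class InMemoryQueue {
  constructor() { this.messages = []; }
  async send(msg) { this.messages.push(msg); }
  async receive() { return this.messages.shift(); } // undefined when empty
}

// Application logic written against the interface, not the vendor.
async function processNext(queue) {
  const msg = await queue.receive();
  return msg === undefined ? null : `processed:${msg}`;
}
```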

~~~
scarface74
Have you ever asked the business folks or your investors whether they care
about your “levels of abstraction”? What looks better on your review: “I
created a facade over our messaging system,” or “I implemented this feature
that brought in revenue/increased customer retention/got us closer to the
upper right of Gartner’s Magic Quadrant”?

~~~
wisswazz
Why should they care, or even be in the loop, for such a decision? You don’t
ask your real estate agent for advice on fixing your electrical system, I
guess?

~~~
scarface74
Of course your business folks care whether you are spending time adding
business value and helping them make money.

I’ve had to explain to a CTO before why I had my team spending time on a CI/CD
pipeline. Even now that I have a CTO whose idea of “writing requirements” is
throwing together a Python proof-of-concept script and playing with Athena
(writing SQL against a large CSV file stored in S3), I still better be able to
articulate business value for any technological tangents I am going on.

~~~
wisswazz
Sure. Agree totally, maybe I misread your previous comment a bit. What I meant
is that run-of-the-mill business folks do not necessarily know how business
value is created in terms of code and architecture.

------
dr01d
In most cases, very few companies have products that need to scale to extreme
load day 1 or even year 1. IMO, instead of reaching for the latest shiny cloud
product, try building initially with traditional databases, load balancing,
and caching first. You can actually go very far on unsexy old stuff. Overall,
this approach will make migration easier in the cloud and you can always
evolve parts of your stack based on your actual needs later. Justify switching
out to proprietary products like lambdas, etc once your system actually
requires it and then weigh your options carefully. Everyone jumping on the
bandwagon these days needs to realize: a LOT of huge systems are still rocking
PHP and MySQL and chasing new cloud products is a never ending process.

~~~
com2kid
Serverless is also easier to develop for.

With Google Firebase Functions I was able to start writing REST APIs in
minutes.

Compare that to setting up a VM somewhere, getting a domain name + certs +
express setup + deployment scripts, and then handling login credentials for
all of the above.

I had never done any of that (eventually I grew until I had to), so serverless
let me get up and running really quickly.

Now I prefer my own express instance, since deployment is much faster and
debugging is much easier. But even for the debugging scenario, expecting
everyone who wants to Just Write Code to get the horrid mess of JS stuff up
and running in order to debug, ugh.

(If it wasn't for HTTPS, Firebase's function emulator would be fine for
debugging, as it is, a few nice solutions exist anyway.)

But, to be clear, on day 1 the option for me to write a JS rest endpoint was:

1. Follow a 5-10 minute tutorial on setting up Firebase Functions.

OR

1. Pick a VM host (Digital Ocean rocks) and set up an account

2. Learn how to provision a VM

3. Get a domain

4. Point the domain at my host

5. SSH into the machine as root, set up non-root accounts with needed permissions

6. Set up certbot

7. Learn how to set up an Express server

8. Set up an nginx reverse proxy to get HTTPS working on my Express server

9. Write deployment scripts (ok, SCP) to copy my working code over to my
machine

10. Set up PM2 to watch for script changes

11. Start writing code!

(12. Keep track, in a secure fashion, of all the credentials I just created
for the above steps!)

I am experienced in a lot of things, and thankfully I had some experience
messing around with VMs and setting up my own servers before, but despite what
everyone on HN may think, not every dev in the world also wants to run a bunch
of VMs and manage their setup/configuration just to write a few REST
endpoints!

So yeah, instead I can type 'firebase deploy' in a folder that has some JS
functions exported in an index.js file and a minute later out pops some HTTPS
URLs.
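
For contrast, the entirety of what option 1 has you write is roughly this
(the firebase-functions wiring below is from memory, so treat it as a sketch;
the endpoint logic itself stays plain JS):

```javascript
// The endpoint logic: plain JavaScript, testable anywhere.
function greet(name) {
  return `Hello, ${name || 'world'}!`;
}

// The Firebase wiring (requires the firebase-functions package):
// const functions = require('firebase-functions');
// exports.hello = functions.https.onRequest((req, res) => {
//   res.send(greet(req.query.name));
// });
//
// After that, `firebase deploy` gives you an HTTPS URL -- no VM,
// domain, certs, nginx, or PM2 in sight.
```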

~~~
fyfy18
If you don't want to learn DevOps why not use a PaaS like Heroku? That way
when you want to learn DevOps, you can move your application without rewriting
large swathes of it.

It's funny but when I learned to code basically all ISPs provided you with
free hosting and a database, and you just needed to drag and drop a PHP file
to make it live. It's like we have gone backwards not just in terms of
openness but also in terms of complexity.

~~~
com2kid
The last time I had done server side dev, yeah, it was all PHP and FTP drag
and drop a file over.

I was a bit shocked at how asinine things had gotten.

------
seniorsassycat
Of all the AWS features to criticize for lock-in, Lambda seems like the
weakest choice.

You don't have to write much code to implement a lambda handler's boilerplate,
and that boilerplate is at the uppermost or outermost layer of your code. You
could turn most libraries or applications into lambda functions by writing one
class or one method.

A lambda's zip distribution is not proprietary and is easy to implement in any
build tool.
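
Concretely, the "one method" in question is a sketch like this: all the
Lambda-specific boilerplate sits at the outermost layer, delegating to
existing library code (the function names here are illustrative).

```javascript
// Existing library/application code -- knows nothing about Lambda.
function addPrices(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// The entire Lambda-specific surface: one handler mapping the event in
// and the result out. Porting elsewhere means rewriting only this.
const handler = async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ total: addPrices(event.items) }),
});
```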

~~~
Karunamon
I'd include the triggers as part of that analysis, like being able to invoke a
function every time something is pushed to an S3 bucket for example. Just
being able to run arbitrary functions without caring about the OS is the core
product, but the true value is that you can tie that into innumerable other
services that are so helpfully provided.

Basically, AWS has so much damn stuff under their belt now, and it all
integrates so nicely, every time they add a new feature it lifts up all the
other features as a matter of course.

------
QuinnyPig
"I'm scared of vendor lock-in, so I'm going to build something that's
completely provider agnostic" means you're buying optionality, and paying for
it with feature velocity.

There are business reasons to go multi-cloud for a few workloads, but
understand that you're going to lose time to market as a result. My best
practice advice is to pick a vendor (I don't care which one) and go all-in.

And you'll forgive my skepticism around "go multi-cloud!" coming from a vendor
who'll have precious little to sell me if I don't.

~~~
andrewstuart2

        Pick a vendor and go all in.
    

That sounds like the perspective of someone who's picked open source vendors
most of the time, or has been spoiled by the ease of migrating Java, Node, or
Go projects to other systems and architectures. Having worked at large
enterprises and banks who went all in with, say, IBM, I have seen just how
expensive true vendor lock-in can get.

Don't expect a vendor to always stay competitively priced, especially once
they realize a) their old model is failing, and b) everybody on their old
model is quite stuck.

~~~
dijit
I am incredulous that people wouldn't be worried about vendor lock-in when the
valley already has a 900lb gorilla in the room (Oracle).

Ask anybody about Oracle features, they'll tell you for days about how their
feature velocity and set is great. But then ask them how much their company
has been absolutely rinsed over time and how the costs increase annually.

Oracle succeeds by being only slightly cheaper than migrating your entire
codebase. To offset this practice, keep your transition costs low.

\--

Personal note: I'm currently experiencing this with Microsoft; all cloud
providers have an exorbitant premium when it comes to running Windows on their
VMs, but obviously Azure is priced very well (in an attempt to entice you to
their platform). Our software has been built over a long period of time by
people who have been forced to run Windows at work -- so they built it on
Windows.

Now we have a 30% operational overhead charged by Microsoft through our
cloud provider. But hey.. at least our cloud provider honours fsync().

~~~
james_s_tayler
I think perhaps not all vendor lock-in is created equal. I too shudder at the
thought of walking into another Oracle like trap, but it's also an error in
cognition to make the assumption that all vendors will lock you in to the same
degree and in the same way.

I guess those of us cautioning ourselves and others are aware of the
pitfalls, but the others also have valid points about going all in.

There is a matrix of different scenarios, let's say:

    
    
      You can go all in on a vendor and get Oracled.
      You can go all in on an abstraction that lets you be vendor agnostic and lose some velocity while gaining flexibility.
      You can go for a vendor and perhaps it turns out that no terrible outcome results because of that. 
      You can go all in on vendor agnostic and have that be the death of the company.
      You can go all in on vendor agnostic and have that be the reason the company was able to dodge death.
    

Nobody can read the future and even "best practices" have a possibility of
resulting in the worst outcomes. The only thing for it is to do your homework,
decide what risks are acceptable to you, make your decision, take
responsibility for it.

~~~
dr01d
Vendors have 2 core requirements to continue operating: get new customers and
keep the existing ones. Getting new customers requires constant innovation,
marketing spend, providing value, etc. Keeping existing customers only
requires making the pain of leaving greater than the pain of staying.

~~~
james_s_tayler
Sure. And even from that you still can't infer what outcome will
materialize. If you made the technically correct decision and your business
went under because of it, that is still gonna hurt no matter which way you
look at it. Hence the advice: do your homework, figure out which risks are
acceptable to you, make your choice and take the responsibility. There is no
magic bullet for picking the right option. Only picking the option you can
live with, because that's what you're going to have to do regardless of the
outcome.

You might know all the theory on aviation and be a really experienced pilot
and one day a sudden wind shear might still fuck you.

------
lbacaj
At the expense of losing what little reputation I have on HN I will say this:

As many others on here seem to be correctly saying, I think this article
amounts to fear-mongering about vendor lock-in. The modern public cloud is
very different from the Oracle/IBM mainframes of yesteryear.

The whole point of the public cloud is to leverage managed services to their
fullest extent so you can move incredibly fast. As a startup, you’ll run laps
around competitors who do all of this from scratch simply to preserve their
freedom from vendor lock-in.

The notion that the glue code gluing your code to AWS or Azure managed
services amounts to vendor lock-in is no more true than for any other code
running on any VM that talks to those same managed services. Except the main
difference here is that you’re not wasting time writing the glue code.

Additionally, the idea that Azure Functions or AWS Lambda, or even functions
on Kubernetes, which when used correctly are meant to be the smallest unit of
work (similar to a microservice) and should contain only your application
logic, are “vendor lock-in” is absolutely ridiculous to me. If anything, when
you do decide to move vendors this will be the easiest code in the world to
migrate: inputs and outputs.

I will concede that it is hard to see this the way I’m describing if you
haven’t actually worked on the modern public cloud and are not actively taking
advantage of managed services on there for speed of delivery.

A little self-promotion: as an example of what’s actually possible with these
serverless frameworks, I recently built a cross-platform app as a side project
in just a few months of nights and weekends, with the entire backend as
serverless functions. The app can read any article to you, using some open
source ML models for text to speech, and can be found at
[https://articulu.com](https://articulu.com) if you want to check it out.

------
Bucephalus355
There is a certain amount of arrogance to always being afraid of vendor lock-
in. Most companies don’t survive, even the best ones might be just around for
20-25 years. The big worry should be on building a business that won’t
immediately die.

And even with Oracle (probably the primo example of lock-in), it’s not like
there aren’t firms whose sole speciality is pumping data out of the Oracle DB
and transforming it magically into T-SQL. It’s never the end of the world with
vendor lock-in.

NOTE: now vendor lock-out does scare me like no other ironically

~~~
travisjungroth
By lock-out do you mean the vendor shuts down, or you get banned, or something
else?

~~~
freehunter
Not the person you're responding to, but I worry about (and have experienced)
both with my tech stack, even as I've purposefully switched vendors multiple
times with minimal headaches.

Locking yourself into a single vendor is easier to voluntarily work your way
out of than your vendor shutting down or shutting you out unexpectedly. But
the good news is if you plan for one you get the other for free.

------
nilshauk
I would argue that small to medium web services don’t need Kubernetes or
serverless. They don’t even need to be split into services. Build a tidy
monolith and see how far that takes you first. Have fewer moving parts.

Yes, serverless ties you in to platform specifics, but by their nature the
functions you create should be small and easy to reimplement elsewhere.

Kubernetes on the other hand is arguably also a certain lock-in, by virtue of
being complicated. No wonder vendors love it, it’s an offering that is hard to
do right in-house. And when Kubernetes releases updates only the most seasoned
in-house teams will be able to keep up. It creates job security by being a lot
to learn and manage. Yes there are good abstractions but when something breaks
you’ll need to delve into that complexity below. (Makes me think of ORM
abstractions vs SQL.)

Yes, Kubernetes is an awesome vehicle for orchestrating a swarm of
containerized services. But when you’re not Netflix or Twitter scale it’s
ridiculous to worship this complexity.

Frankly I keep coming back to appreciating Heroku's abstractions and its
twelve-factor app philosophy [https://12factor.net/](https://12factor.net/).
Heroku runs on AWS but feels like a different world than AWS to develop on. I
can actually get projects flying with a 2-3 person team, me included.

~~~
sonnyblarney
" don’t need Kubernetes nor serverless."

Actually 'serverless' is where small shops might want to start.

A single Lambda can encompass a whole variety of functions, and if you're
using a datastore that scales as well, you don't need to worry about much.
Once it's set up, it should be very easy to monitor and change.

I'd rather a simple Lambda than managing a couple of EC2's with failover
scenarios and the front end networking pieces for that.

~~~
nerdbeere
Also, small scale is where Lambda really shines in terms of costs. If you have
some API endpoint that gets hit 100 times per hour and does some execution,
then this is actually way cheaper than even the cheapest EC2 instance in a
production setup with an ELB.
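
Back-of-the-envelope, using Lambda's free tier as it stood around this time
(1M requests and 400,000 GB-seconds per month; prices change, so treat the
numbers as an illustration):

```javascript
// An endpoint hit 100 times per hour, all month.
const requestsPerMonth = 100 * 24 * 30; // 72,000

// Lambda's perpetual free tier covers 1,000,000 requests/month, so this
// workload rounds to $0 -- while an always-on EC2 instance plus an ELB
// bills for every hour regardless of traffic.
const freeTierRequests = 1000000;
const withinFreeTier = requestsPerMonth < freeTierRequests;
```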

------
swamp40
This is interesting:

 _"We've heard from our customers, if you cross $100,000 a month on AWS,
they'll negotiate your bill down," said Polvi. "If you cross a million a
month, they'll no longer negotiate with you because they know you're so locked
that you're not going anywhere. That's the level where we're trying to provide
some relief."_

~~~
bgroins
As someone who has negotiated with AWS at the $1m/month level this is
completely false.

~~~
Someone1234
I've also negotiated with AWS, and your position and their position strike me
as equally true.

There are certain products of theirs that they just aren't going to negotiate
on, because they know they've got you, whereas for others the clouds part and
discounts rain down.

It certainly used to be this way when AWS had less competition, these days
there's an Azure/Oracle Cloud/Rackspace/Google/etc alternative to most of
their greatest hits, which gives a greater negotiating edge.

~~~
redisman
Lambda certainly has more alternatives than Dynamo for example. But I guess
the true lock-in is the integration. If you use Lambda, chances are you'll end
up choosing S3 and SQS and Dynamo and API Gateway etc.

------
fishnchips
I’m not buying that. Lambda is merely an execution environment. In most Lambda
functions I write, the Lambda-specific bit is tiny, and could be easily
replaced without affecting business logic.

On the other hand, most Lambdas I write interact with other AWS APIs, which is
where the real lock-in is. The effort to eg. move the data off Dynamo is
substantially higher than what’s required to switch that bit of code to run on
k8s and consume a Kafka topic.

~~~
TheRealPomax
Cool. What part aren't you buying, the title of this post, or the actual five
paragraphs in the article that actually give you the context of that title?

------
jdietrich
The Serverless framework is platform-agnostic and open source. You can use a
bunch of different FaaS providers, or self-host on Kubeless or Fn.

[https://serverless.com/](https://serverless.com/)

------
milesward
Unless your serverless platform is OSS...
[https://github.com/knative/](https://github.com/knative/)

~~~
BryantD
Or [https://github.com/fnproject](https://github.com/fnproject) .

I think best practice is to think of serverless deployment as a technical
operations technology, rather than as a methodology to eliminate the need for
technical operations. Don't lose track of what you're effectively outsourcing
to your serverless provider. Have a backup plan, just like in the old days you
wanted a backup plan in case your datacenter provider had issues.

------
captainbland
The problem I see with this kind of vendor lock in is you can get screwed in
several different ways if you let yourself get locked in enough.

The 'good': a competitor overtakes AWS and is able to offer vastly cheaper or
better value services than you have access to, rendering you less competitive
than people who are able to move to that platform easily.

The 'bad': Amazon starts deprecating services you rely on and you're forced to
port things anyway.

The 'ugly': Amazon decides that it's happy with its market share, or its
shareholders start demanding more revenue, and they realise that those who are
locked into AWS are easy targets. It'd be easy to just jack up prices on the
higher tiers of things like Lambda, DynamoDB, API Gateway, etc., and on those
they have bespoke agreements with, without even necessarily affecting their
market share.

It's really a risk/reward thing when going for these platform specific
serverless systems. It's like asking if you trust a big company enough that
you want to give up all of your bargaining power with them, and that you're
going to put thousands or even millions of dollars where your mouth is on
that.

~~~
scarface74
As far as I know, in the entire existence of AWS since the first services
launched in 2006, they have never abandoned a service.

------
adjkant
After using Lambda and serverless myself over the past year, I really struggle
to see where this lock-in is. If you're already writing a stateless API, as
most are these days, and the cloud platforms support many language options,
going between, say, EC2 and Lambda really isn't that big a difference in code.
If that changeover time is too costly for you, that's far more likely a sign
of changing infrastructure too often.

~~~
robrtsql
In my opinion, the real lock-in is not the stateless API, but the tie-ins with
other AWS services that may end up being required to accomplish what you need.

Like, if you're trying to provide a calculator API, you can definitely run
that in Lambda and then easily move it somewhere else when AWS does something
to upset you. But, let's say you're trying to do something a little more
complicated (a common example is validating and transforming profile pictures
for some sort of app), you might end up using AWS Step Functions and SQS. Your
code is still portable but it relies on a bunch of managed services.

------
peterwwillis
> He elaborated: "It's code that tied not just to hardware – which we've seen
> before – but to a data center, you can't even get the hardware yourself. And
> that hardware is now custom fabbed for the cloud providers with dark fiber
> that runs all around the world, just for them. So literally the application
> you write will never get the performance or responsiveness or the ability to
> be ported somewhere else without having the deployment footprint of Amazon."

It's almost as if you're paying to use someone else's massive investment in
technology so you don't have to reinvent the wheel, enabling you to just get
business done quickly and at ridiculous scales. Kind of like using Windows
tech stacks, or buying a Ford F-350. Who could possibly build a business on
such terrible lock-in devices?

------
forrestbrazeal
Serious question for all on this thread: have you personally encountered a
deal-breaking issue while _actually implementing_ a significant application on
"Lambda and serverless"? Whether that's lock-in, scaling issues, cost,
performance, or whatever. Has there been something that's caused you to go
"yeah, no, this was a bad idea; should've rolled my own infra."

I'm not asking this disingenuously; I legitimately want to know.

------
jedberg
There is absolutely no lock in whatsoever with Lambda. The features provided
by Lambda are also provided by Google Cloud Functions and Azure Functions.

The lock in comes from the ecosystem you use them in. If you make code that
just returns the time, you can run that anywhere. If you make code that uses a
database, your _database choice_ provides lock in, but not Lambda.

And it's the same lock in you get using any service from AWS.

The trade-off is that you can make something that's super portable, but you
must cater to the lowest common denominator of features among all the
providers you want to be compatible with.

I'd rather have lock in than be hamstrung by the velocity of the slowest
provider.

------
time0ut
I've built a number of serverless systems over the past few years on AWS and
GCP. None were too extreme, but ranged from moderately complex SPA to silly
chat bot. Some saw light, but real, usage.

To echo what others have already said: the lock in isn't in the compute, it's
in the ecosystem, which also happens to be where all of the value is.

Like everything else in our industry, serverless is a series of trade offs.
There are a number of classes of problems where it is absolutely worth trading
the downsides of serverless for the agility and velocity the ecosystem can
provide you. As with anything, the key is knowing when it is the right tool
and how to use it properly.

------
shiado
Somebody should make a movement called 'serverful' that builds technologies
that allow you to deploy a web service on any arbitrary server in any cloud
that scales to the amount of resources the server is capable of consuming. You
could just reskin Apache and call it a day.

~~~
taneq
They need a snappy new name for it, though. "Web hosting" or something.

~~~
Aeolun
That sounds a bit outdated. “Web-scale hosting” has that little bit of extra
oomph.

------
flurdy
I can see why Kubeless [1], Fission [2] and OpenFAAS [3] are gaining
traction.

But my take is always that it depends on the size of your company, your cloud
strategy and how much serverless you are using.

* If you are a small company dabbling in small serverless scripts, just use Lambda.

* If you are a medium+ company that has gone all in on AWS or GCP, and serverless is still a limited part of your stack, then also just use Lambda or Google Cloud Functions. But consider the options.

* If you are a multi-cloud company, or more invested in serverless, then you should definitely consider OpenFAAS etc. and not use Lambda etc. for anything but minute parts of your stack.

* If you use Kubernetes and are fairly Cloud agnostic, then use Kubeless etc so that you have full serverless support in local and staging clusters and any cloud provided clusters you expand and migrate to as well.

[1] [https://kubeless.io](https://kubeless.io)

[2] [https://fission.io](https://fission.io)

[3] [https://www.openfaas.com](https://www.openfaas.com)

------
reilly3000
I use the Serverless framework and have been able to successfully redeploy
functions from AWS to GCP (all Node, this was in 2018) with only a few
changes to the Provider section of serverless.yml. We are adopting Kubernetes
now and I'm feeling out the landscape, so I'm planning on trying the same
thing with Kubeless. AFAIK it should be pretty seamless; I'm more worried
about Ingress working properly than about Kubeless not being able to run my
code.

FaaS has an important role to play: we often prototype things with Zapier,
then redeploy them as FaaS functions when we need to scale them or process any
PII. I can't imagine trying to make a full app with them with the current
state of the dev/testing workflow, but for internal systems, integration, and
stream processing they are pretty tough to beat.

------
CyanLite2
Worrying about lock-in in the cloud is an antipattern. The only way to avoid
it entirely is to go on-prem and manage everything yourself.

In the meantime I'll enjoy super cheap S3 storage rates. If AWS ever goes out
of business then I'll worry about that then.

~~~
WrtCdEvrydy
AWS won't go out of business.

You'll start seeing slowly increasing rates... and as people leave, the rates
will increase further. Eventually, you'll start seeing Snowball no longer
supported for getting your data off S3.

------
kondro
Unsurprising that when your continued existence (in this case, CoreOS's)
relies on something being true, every alternative to it must be false.

------
softwaredoug
There’s a whole generation of developers that didn’t come up in the IBM, and
later MS, days of vendor lock-in. Open source is the default, and it’s easy
to see only the benefits and positive side of one vendor’s vision. Only now
it’s harder, as proprietary tech is often cloaked in “open” culture, and only
when you go to rip the bandaid off do you see where the real lock-in is.

------
aussieguy1234
Apache OpenWhisk ([https://openwhisk.apache.org](https://openwhisk.apache.org))
is open source and can run on any cloud platform. It will run your
serverless apps. You'll maintain some infrastructure to run it on unless you
go for a hosted service like IBM provides.

I'm building the infrastructure for Libr (Tumblr replacement,
[https://librapp.com](https://librapp.com)) on a serverless platform that I
won't name which will be hidden behind a reverse proxy. There won't be any
vendor lock in. It's an express/Vue app and will run on any serverless
platform or CDN.

If the app is censored by my first cloud provider (perhaps due to pressure on
the provider from SESTA/FOSTA) I'll move to a new one. It's likely I'll build
parallel copies of the production infrastructure on different cloud providers
at some point for rapid migration capabilities.

------
nzoschke
If you want to see what the “lock-in” actually looks like check out:

[https://github.com/nzoschke/gofaas](https://github.com/nzoschke/gofaas)

It’s a boilerplate Go, Lambda, API gateway, dynamo, SNS, x-ray etc app.

Personally I embrace the “lock-in”. This architecture is faster, cheaper, and
more reliable than anything I’ve seen in my 15 years of web development.

Most importantly it is less code. Most time is spent writing Go functions. A
little time goes into configuring the infra but the patterns are simple. No
time goes into building infra or a web framework.

I think Go is the antidote to true lock-in.

I have a ‘Notify’ function that uses SNS that I recently replaced with a Slack
implementation. With well defined interfaces you can swap out DynamoDB for
Mongo if you have to move.

It is also easy to turn a function into an HTTP handler. There is a smooth
path from function to container to server if the cost or performance of
Lambda doesn’t work out. It’s hard or impossible to go the other way.

------
johnklos
Sigh.

"Serverless" is the most ridiculous example of bullshit marketing in recent
history. It truly took me a good twenty minutes to understand what it is
supposed to represent because I kept thinking, "There HAS to be more to this
than vendor-supplied CGI."

People make many arguments for designing WITHOUT portability (cwyers even
calls portability "premature optimization"). What they're implicitly stating
is that they can't code to abstractions, aren't effective at coding without
using edge cases, and require package-specific optimizations to reach barely
acceptable levels of performance. If the edge cases and package-specific
optimizations weren't considered necessary, there'd be no real case for
making something non-portable.

The fact that people can even rationalize non-portability boggles the mind. It
just seems like a poor attempt at job security or something equally silly.

------
stcredzero
I am thinking of my own SaaS offer, but combined with Open Source. Basically,
you will be able to publish a certain kind of application with a little bit of
Javascript coding, and coding several lambdas in Golang. There will be an
entire miniature server cluster running as goroutines, which you will be able
to download off of github, then run locally. You will also be able to take the
same server cluster and run it on a service like AWS. (On my roadmap, I'm
going to remove all dependencies outside of the project, so you will pretty
much be able to fill out the config file, just run the executable, and have
it scale according to the number of processors.)

However, you will also be able to sign up for an account on my website, then
use a command line facility to "inject" your lambdas into my system, which
takes care of the autoscaling, database backup, and staging for you.

------
sologoub
When discussing lambda/serverless/<whatever flavor of pay per request> setup,
people don’t often seem to stop and think about the usage/access patterns, the
associated costs and performance.

I’ve seen such setups being recommended for APIs that have predictable and
fairly constant load, for which you are a lot better off having an actual
running set of processes that can be reused. For Google that could be App
Engine; for AWS, Elastic Beanstalk. It’s a question of the right tool for the
job.

One tech that I haven’t played with that’s really interesting is Knative,
where you can run the underlying infra with predictable costs/performance,
but allocate it like a lambda per request. Performance of the individual
requests may still lag a more traditional setup, though.

------
rynop
I made the OSS
[https://github.com/rynop/aws-blueprint](https://github.com/rynop/aws-blueprint)
partially to address this problem. Easily migrate from Lambda to ECS
(removing Lambda lock-in). It abstracts all the difficult and
time-consuming-to-learn AWS idiosyncrasies into a best-practice,
production-ready harness.

You could argue that if you have lots of Lambdas, this is non-trivial. I
would argue that tons of Lambdas is a poor architecture. What you gain in
isolation you lose in manageability, complexity, nimbleness, and attack
surface.

You could also argue my harness locks you into AWS as it is aws specific.
However I'd argue that it is the other aws services locking you in (ex:
Dynamo) or your code/architecture.

------
staticassertion
The argument makes no sense: because it is deployed in AWS, you can't get
performance without using AWS services, therefore it is lock-in.

This seems like no more of a lock-in than, say, choosing a DNS server that's
giving me lower latencies.

My AWS Lambdas talk to a Postgres database, S3 (which has many open source API
implementations), and SQS, which yeah, I'm "locked into".

The work to move to another service would be absolutely trivial. All of the
AWS stuff like Postgres, S3, and SQS is totally abstracted from the business
logic. I could rip it out at any time.

I just don't get what anyone means when they say Lambdas lock you in; I don't
feel locked in _at all_. I could move to GCP in, idk, two weeks probably.

------
tyingq
I would guess ecosystem as a whole is a bigger deal.

Porting lambdas alone probably isn't a huge deal. But then the CloudFormation
templates, sqs configs, dynamo tables, rbac configs, S3 access settings,
cloudwatch logging and alerts, etc. It all adds up.

------
byteface
Is using the features of a tool 'lock-in'? You could do the things the tool
does yourself, but you choose to leverage the tool. That's why you use it.
You're aware of this. AWS Lambda functions can be just pure Python or other
code. Anything logged there can be a metric in CloudWatch. Users are
blissfully unaware of how much it's doing for them, from security to
monitoring. And to be honest, companies with enough money would prefer these
solutions to something you can knock up yourself. I feel so locked in by my
million free requests a month to a service that won't choke and that I didn't
even have to set up.

------
jimmychangas
Yeah, absolutely: if your engineers decide to adopt serverless due to hype or
just to pad their own CVs, you are going to spend a lot more on
infrastructure than you would by provisioning VMs or running containers. By
being selective about which workloads are eligible to become FaaS and doing a
little optimization, however, you can cut some costs and avoid
overprovisioning, with automatic and efficient scalability.

I believe that, in most cases, it is better to control the exit costs of your
architectural decisions than to avoid lock-in at all costs.

~~~
Coredalae
This is why the Serverless framework (which tries to give you the ability to
deploy to any cloud provider) is so important. Some repetitive simple tasks
are extremely well suited to lambdas/functions/whatever name, and some tasks
are suited to big machines with a gazillion teraflops and terabytes of RAM.
The job of the engineer is to know what his software needs, and what the most
optimal path to that is.

Business is ever changing. This is just another step

------
jonthepirate
I wrote a flaky test management system called
[https://www.flaptastic.com/](https://www.flaptastic.com/) on AWS Lambda... my
first AWS bill was $2.50. I love it. I also used serverless.com's wrapper to
deploy it for free. If this gets expensive and I want to raise money to have
expensive DevOps engineers setup Kubernetes for months then fine... but I
really don't need that and I can easily port this to any other platform if I
want to later.

------
jniedrauer
I use lambda to perform simple, stateless units of work like autoscaling event
post-hooks, chat bots, routine scheduled tasks, etc. They're mostly cloud-
agnostic and I could move them to a server if I had to.

I don't think anyone can really build a full scale application using
serverless. It's just not performant or predictable enough. I've seen people
try, and it always ended in frustration. A properly configured docker
scheduler is better for this type of work anyway.

~~~
sanxchit
You can definitely build a fully functional web app using just serverless. For
an example, take a look at [https://acloud.guru](https://acloud.guru) . Where
I work, we almost exclusively use serverless, and I have found it to be
incredibly reliable, and way more hands-off than a docker deployment.

------
asaddhamani
Zeit is a great alternative here in my opinion. You don't need to make any
changes to your code, it will just run on Zeit. Have used it to host a few
microservices in the past and the experience has been much more pleasurable
than Lambda (I needed to do PDF generation through a browser and had to use
Zappa and modified binaries for phantomjs)

------
Brahma111
Clear abstraction, anyone? We make it mandatory to separate the
managed-service code into separate interfaces and implementations. We have
the same code running on both Azure and AWS, each leveraging their respective
managed services. Just implement the interfaces for your needs. It's not
difficult.
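
A hedged sketch of that split in JavaScript (all names hypothetical, duck
typing standing in for formal interfaces): the business code depends on a
small queue shape, never on a provider SDK.

```javascript
// One "implementation" of the queue interface: an in-memory fake,
// useful for tests and local runs.
class InMemoryQueue {
  constructor() { this.items = []; }
  async send(msg) { this.items.push(msg); }
}

// Cloud-backed implementations keep the same shape, e.g.:
//   class SqsQueue        { async send(msg) { /* AWS SDK call */ } }
//   class ServiceBusQueue { async send(msg) { /* Azure SDK call */ } }

// Business logic never mentions AWS or Azure.
async function recordOrder(queue, order) {
  await queue.send({ type: 'order', payload: order });
  return order.id;
}
```

Swapping clouds then means writing one new class, not touching `recordOrder`.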

------
CrankyBear
Ah.. this is an ancient story and it's really all about promoting Kubernetes
rather than dissing Lambda.

------
tracker1
Worth mentioning that CoreOS was bought out by RedHat, which makes a lot of
sense given where they were going with OpenShift. Which, in turn was bought
(is being bought) by IBM.

In the end, the tooling in use is crossing a lot of lines and becoming very
common in a lot of ways.

------
legend_sam
I think that's the exact reason why people came up with Kubeless.
[https://kubeless.io/](https://kubeless.io/)

It doesn't have to be vendor locked. You can literally host the k8s cluster
locally.

------
gigatexal
Is it lock-in? It’s just your code being packaged in a container and run on an
api invocation. I thought one could move from any of the big three’s
serverless function offerings pretty easily?

~~~
quickthrower2
On Azure it's not a docker container, or any other kind of 'standard'
container.

That said you can get a docker container with the Azure Function runtime, so
you could in theory port your functions elsewhere, but I don't think you get
the monitoring benefits that you'd have keeping it on Azure.

------
tnolet
Yes, a lock-in that can be avoided with a 20-to-30-line piece of Javascript
that just handles the messages and passes them to your cloud-agnostic piece
of business code.

~~~
scarface74
More like two

\- deserialize event

\- pass object to your business logic.

------
justasitsounds
A lot of embittered Ops engineers shaking their fists in the comment thread of
that article. "Real engineers write assembly, on punchcards, blindfolded" etc.

------
acroback
So are the cloud APIs which lock you in, e.g. TensorFlow, Aurora, and all
that shiny jazz.

Hate it; it is useless once you change provider or go bare metal.

Cloud computing is the vendor lock-in of the 21st century.

------
crb002
Totally disagree. It's a stock Linux container. The most transparent of all
the "serverless" runtimes out there.

------
galaxyLogic
What about hybrid cloud? Wasn't that supposed to solve the problem?

------
pantulis
And that's why Knative is going to be a thing.

------
fdsak
Well, then why not standardize them?

------
gjmacd
The article is from 2017; AWS now has EKS (Kubernetes).

------
type0
from 2017

~~~
dang
Added. Thanks!

~~~
AnimalMuppet
How do you _do_ that? With all the stories in play, and all the comments being
added, how do you notice within four minutes that one of them says to add the
date to a title?

~~~
Robin_Message
I wonder if a regexp for a short comment containing a single date would work.
I expect comments matching that regex are streaming by, Matrix style, on one
of dang's many monitors¹, as he sips his morning cold-pressed flat grey². In
fact, with a bit of practice, there is probably a regexp that catches all of
them.

¹ I haven't seen dang's workstation, I'm just imagining.

² Sorry, doing it again.

------
JohnFen
Why is it called "serverless" when it is not, in fact, serverless? It's petty,
but that nomenclature drives me nuts.

~~~
dragonwriter
> Why is it called "serverless" when it is not, in fact, serverless?

It is serverless from the perspective of an IT department that, by adopting
it, no longer has to manage servers as a distinct resource.

It's like a product sold as having “worry-free interoperation”. There's still
worry in the interoperation, you are just paying someone else to do the
worrying.

Likewise, a serverless product still has servers underneath, you are just
paying someone else to abstract them so that they aren't a concern for you.

~~~
JohnFen
But we already have a term for that: the cloud.

~~~
dragonwriter
> But we already have a term for that: the cloud.

No, the cloud is a term for dynamically provisionable resources, some of which
(IaaS, for instance) still require traditional server management.

It's true that the invention of the term, originally for Amazon's Functions-
aaS, wasn't particularly distinguishing from lots of existing cloud SaaS
categories (classical PaaS, DBaaS, etc.) which are equally free of server
management as FaaS services, but the term has subsequently been broadened in
use (AFAICT, Google Cloud Platform was the main driver here) so that it makes
more sense than Amazon's original use did.

~~~
JohnFen
Hmm. Your reply has left me even more confused about the nomenclature. Oh
well, I guess it doesn't matter if I actually understand what these names mean
or not.

