I have a theory that even though the serverless trend is sold as a technical revolution, it's mostly due to accounting practices.
Serverless is purely about CAPEX vs. OPEX.
Companies are so loath to make capital expenditures (CAPEX) that they will willingly let their employees waste thousands of extra hours learning a new development model so they can pay Amazon using operating expenses instead.
And it's funny because even though Amazon has no choice but to use capital expenditures to populate their datacentres with machines, they offer Lambda as a way to monetize the unused cycles.
So Lambda ends up doing double duty as a kind of compromise to fill in the gaps on an accounting sheet.
I believe the joke is on us as technical people. We twist ourselves into knots to promote this new development model as a technical innovation, which it really isn't.
We have just made the time-sharing services of the '60s and '70s fashionable again (along with the long feedback cycle that makes working on such systems so frustrating).
> I believe the joke is on us as technical people. We twist ourselves into knots to promote this new development model as a technical innovation, which it really isn't.
Not only that, but the joke is on technical people for promoting a model of severe vendor lock-in. It's doubly funny when you see the same techies complain bitterly about "walled gardens" on their smartphones!
If $cloud_vendor doubled their prices tomorrow, what are you going to do?
If $cloud_vendor decided to kill off a product tomorrow and replace it with another one with a different API, what are you going to do?
In most cases, the answer to the above and other scenarios would be "suck it up and swallow the expense".
Sure, you can shift the expenses from CAPEX to OPEX and construct numerous business cases to convince the boss that the cloud is the next best thing to sliced bread. But at what cost?
Of course there are some use cases which are genuinely well suited to the cloud model, but they are in the minority. For many businesses the cloud is a case of "if the only thing you have is a hammer, everything looks like a nail".
> If $cloud_vendor doubled their prices tomorrow, what are you going to do?
> If $cloud_vendor decided to kill off a product tomorrow and replace it with another one with a different API, what are you going to do?
Been using AWS for close to 10 years now and I don't recall them doing anything remotely similar to the above. Of course, they could potentially do it in the coming days. But going by their past behaviour I'd say it's a low-probability event.
When the cloud/AWS was new, these fears were indeed valid. But this line of reasoning is getting old and tiring, especially when it's not backed by even anecdotal evidence and is purely speculative fear.
I do know that Google, Twitter and others have pulled down some fairly popular APIs/products but please don't conflate that with GCP/AWS/cloud offerings.
For most startups it's not even a question of whether to use the cloud now. In the early days (2010-2015ish) engineers had to convince management of the advantages of the cloud, primarily because one mostly had to go hybrid, as many services (such as Route 53, RDS, etc.) weren't in the cloud yet. Maintaining hybrid was a pain. Now it's possible to run 100% of the tech stack in the cloud. So it's the other way round: VCs and business folks will frown upon tech teams that run on bare metal or in a data centre.
It depends on factors such as business size, product life-cycle stage, etc.
For a typical startup trying to figure out PMF, launch an MVP, or work out customer retention, I'd be really surprised if an off-cloud contingency is even in the top 50 of its priority list.
But for someone with a proven business model and a good revenue stream, cloud dependency, vendor lock-in, or a single-vendor point of failure does indeed become an action item to be worked on.
Or when a regulator requires setting up a DR site, as one did for us back in 2015. We had no option but to set it up in a data centre.
Or let's say someone like Mighty (1), whose product is sort of a competitor to AWS's (2). They wouldn't want to go anywhere near AWS, or any cloud for that matter. And last I heard they were indeed ordering their own physical server machines.
There are valid reasons to look for either hybrid or purely on-prem solutions, as I listed a few above. But please don't forward "what if the cost doubles" type reasoning. All that said, I'd hypothesise that not going fully cloud is an exception for a typical startup nowadays.
If you take this attitude you should write all of your own tools, make your own hardware, not use any libraries (I might actually be able to make a case for this one).
I am most familiar with AWS, so all I can say is that their prices have gone down over time, their availability has gone up, and the total number of services is amazing.
With other cloud vendors I think you might be at more risk of them shutting down services, but if they start doing that without giving you multi-year notice, it means they are going out of business.
> If you take this attitude you should write all of your own tools, make your own hardware, not use any libraries (I might actually be able to make a case for this one).
No. Just...no. There are more than two possible positions; it's not either "ignore technological dependencies completely and blindly give your balls to a single vendor" or "create everything yourself from the ground up". You can look at your stack and consider how vulnerable you are compared to how much you gain. Sometimes you will find that it's worth going all in with one vendor, but it should be a conscious decision. Businesses have to consider their vulnerabilities all the time, and it boils down to compromises and tradeoffs.
Stay on the version that already works and provides the features you need? Fork it to add new features if needed? You don't need to constantly update software dependencies.
> If $cloud_vendor doubled their prices tomorrow, what are you going to do?
They can't do this because of reputational damage. No one would consider that provider for the next 5 years. It also wouldn't squeeze out as much as you'd think because medium size and larger accounts are all on long-term contracts, so they have time to migrate.
I'm not following. Virtual servers are just as much pure OPEX as serverless. So from the customer accounting perspective, I don't think serverless adds much.
Instead I think the promise is, "Don't worry your pretty little heads about what code is running where. Trust us to handle that!" Which surely seems seductive until people realize that they still have to think about those things.
I heard about one project where consultants decided to use the sparkletastic magic of AWS Lambda so they didn't have to worry about how to scale up their crawler. But they didn't really think it through, so they just had a zillion invocations sitting around waiting for HTTP responses. First-month bill was $12k when you could have spent ~$100 to get the same results with a low-end virtual server running a basic Scrapy setup. So for Amazon I think it's less about filling in a utilization gap and more about attracting people who don't know how to optimize.
What kind of AWS consultants don't know how to set reserved concurrency? That's one of the most basic things about Lambda, I guess the same goes for any other FaaS providers.
They weren't AWS consultants specifically, just a couple of general software consultants fronting a contract development team. And if they had set a concurrency max, the system would have failed in a different way. Using Lambda at all was the problem here.
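For a sense of scale, Little's law makes the failure mode above easy to quantify (a rough sketch; the numbers are illustrative, not the project's actual figures):

```python
import math

def required_concurrency(requests_per_second: float, avg_duration_s: float) -> int:
    """Little's law: concurrent executions ~= arrival rate x time in system."""
    return math.ceil(requests_per_second * avg_duration_s)

# A crawler launching 200 requests/s, each spending ~30 s blocked on slow
# HTTP responses, holds about 6,000 Lambda environments open at once;
# each one is billed for its full wall-clock duration:
print(required_concurrency(200, 30))  # 6000
```

Capping this is one boto3 call, `client("lambda").put_function_concurrency(FunctionName=..., ReservedConcurrentExecutions=...)`; though as the parent notes, a cap would just have made the crawler fail differently.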
I'm guessing OP was referring to serverless vs on-prem? I don't think public cloud is significantly different than serverless in terms of CAPEX vs OPEX, whereas private cloud is big CAPEX.
Serverless is further towards the OPEX end of the axis because OPEX is incurred and then paid, whereas CAPEX needs approval first. With serverless you no longer need to request approval for new instances. The harder management believes IT to be a cost centre, the harder IT will try to wrestle control away. The story of on-prem vs. cloud is the story of CAPEX vs. OPEX, which is the story of expenditure-approval friction.
Private cloud can be big CAPEX. It doesn't have to be, though. The IBMs and HPEs of this world are perfectly happy to sell you PAYGO on-prem solutions. Or you can lease the hardware.
The problem with this view is that the alternative to serverless is not spending money on bare-metal servers, but renting actual or virtual servers on a month-to-month basis.
I only have experience with GCP’s Cloud Functions, but in that environment, “serverless” is only cheaper than virtual servers if your load can’t saturate the lowest end VM GCP has to offer.
Once you have enough load to justify going “unserverless” the prices drop to approximately 1/4 that of Cloud Functions for the same burst performance, and that’s before playing billing games like long-term commitments or using preemptible instances.
What’s truly scalable about “serverless compute” versus VMs is the line item on your bill. Sure, “they manage the auto-scaling” but for the per-unit price you rapidly hit a point where you might as well set up a Kubernetes load balancer and eat the setup costs. The pricing model only works out in your favor if you _don’t_ have load.
It would not surprise me to learn that the same is true of AWS prices.
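A back-of-the-envelope break-even calculation illustrates the point (all rates here are hypothetical ballpark figures, not any vendor's actual price sheet):

```python
def functions_monthly_cost(invocations: int, avg_ms: float, mem_gb: float,
                           gb_second_price: float = 0.0000025,
                           per_million_price: float = 0.40) -> float:
    """Per-use pricing: pay for GB-seconds consumed plus per-invocation fees."""
    gb_seconds = invocations * (avg_ms / 1000.0) * mem_gb
    return gb_seconds * gb_second_price + (invocations / 1e6) * per_million_price

def vm_monthly_cost(hourly_rate: float = 0.02, hours: float = 730) -> float:
    """A small always-on VM at a hypothetical flat hourly rate."""
    return hourly_rate * hours

# Light load: functions win by a mile. Sustained load: the flat-rate VM wins.
light = functions_monthly_cost(100_000, avg_ms=200, mem_gb=0.25)       # ~$0.05
heavy = functions_monthly_cost(200_000_000, avg_ms=200, mem_gb=0.25)   # ~$105
vm = vm_monthly_cost()                                                 # $14.60
```

The crossover is exactly the "pricing model only works out in your favor if you don't have load" point from the comment above.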
I dunno... maintaining virtual servers is still a lot of work. You have to
* monitor lots of things like filesystem usage and CPU usage
* build new machine images (as well as possibly new container images!) to keep up with security updates
* tune autoscaling at multiple levels based on whichever server-based systems you're using
* maintain a secure way for production support folks to log into servers to see what is going wrong or perform emergency fixes
* build additional failover automation as well as what you'd already need for serverless
Serverless is a bit more expensive overall and less tunable, but there's just less to go wrong: much of the underlying server SRE work is handled by provider automation and engineers for whom that is their full-time job. So, I'm still pretty happy with it.
What it's really about is managing cash flow and not spending your entire round until your team has some inkling as to what it costs for the revenues it's bringing in. This is considerably easier to do with cloud vendors, serverless or not.
It's also about operational velocity, and supporting engineering teams in getting things done without planning overhead that's almost always going to be wrong. Serverless is a tool, and used right it can significantly reduce costs. I can use a hammer to build a house, but I can also smash my fingers with it. Tools can be used improperly.
So once the company is mature, such that meaningful future predictions can be made from past data, then it finds itself in a position to make significant upfront outlays in response to current or future business needs. This usually involves the CFO & FP&A team building a model to see how such a change would impact the balance sheet over the next X years. If the numbers add up, it would absolutely make sense to spend the money. But that's not always the case, even for companies at scale.
> I believe the joke is on us as technical people.
Quite the opposite? We can be paid for migrating current applications to "serverless", and when the tides of tech fashion change we can be paid again to migrate to the new fashionable tech.
If the joke is on anyone, it is on the shareholders of the companies getting locked in. But, it is their money and if they want to trade some CAPEX for OPEX that is not really my problem.
> But, it is their money and if they want to trade some CAPEX for OPEX that is not really my problem.
What if we made it your problem by granting you equity in the business? I feel like this is the #1 reason to funnel a portion of shares to your employees. Making the technical people give even 1% of a shit about the cost of doing business is infinitely better than 0%.
If you are an engineer with 1% of equity, more power to you. In all the companies I've worked at, the influence of the cloud provider bill on the eventual worth of my options has been lower than my annual beer costs.
The #1 benefit of options for startups is that they shift engineering costs from now to the future, so that you can hire any engineers at all while operating on a small budget. If the business works out, you already have so much money that the cost of the options will be negligible. If the business doesn't work out, the cost of the options was zero.
I inherited an AWS stack that cost 10k a month and turned it into 1.5k in a week. Joke was on the previous devs too though as they were fired.
But I guess at a big company it would be "only" 100% too expensive, just at a much bigger scale, and no one gets fired since it is basically best practices or so.
Is holding the shares a condition of working there? Cause I'd rather sell them and diversify. I don't want the risk of the company going under to cost me my salary _and_ my investments.
A popular vesting schedule is 25/25/25/25, which means you would be able to sell 25% one year after getting the shares, another 25% after the second year and so on. Typically they would keep giving you more shares as your shares vest so that you always have some shares that you can't sell yet.
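The schedule described above is easy to sketch (a simplified model assuming a one-year cliff and annual vesting, which is a common but not universal setup):

```python
def vested_fraction(years_since_grant: float, cliff_years: float = 1.0,
                    total_years: int = 4) -> float:
    """Fraction vested under a 25/25/25/25 annual schedule:
    nothing before the cliff, then 25% per completed year."""
    if years_since_grant < cliff_years:
        return 0.0
    full_years = min(int(years_since_grant), total_years)
    return full_years / total_years

vested_fraction(0.5)   # 0.0  (before the cliff)
vested_fraction(1.0)   # 0.25
vested_fraction(2.5)   # 0.5
vested_fraction(6.0)   # 1.0  (fully vested)
```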
I think there's a good balance with the current vesting schedules. You vest the shares at some point in the future (giving you current incentive to align your actions behind the goals of the company). Then, when they vest, you can sell them to diversify (sometimes with a short waiting period if there's a trading window blackout or something).
The joke is on the customers, not the shareholders. The shareholders continue to profit from the profits of the companies they own. They feel only short-term effects; every long-term economic effect is borne by ordinary people.
Is this a comment on serverless products in general, or specifically Lambda and other instances of serverless products that require custom code and architecture? Serverless is also used by things like Fargate and Aurora which, imo, have significantly more value and are easier to learn.
I couldn't disagree with this article more. We've built an entire architecture running our Series A startup on serverless technologies, and it's been fantastically successful. We have a small team, so I specifically knew I didn't have the team capacity to do a lot of server management. Here are our general components:
1. We have a bunch of services running on App Engine's NodeJS flexible environment. We have extremely minimal lock-in because we basically just have a normal Express app serving GraphQL.
2. In fact, we migrated some of our services to Cloud Run for lower costs. Cloud Run is basically serverless Docker containers, and the migration was very easy. Again, our apps are for the most part platform agnostic apps on Node.
3. We also make use of Google Cloud Functions for our asynchronous event handling. This has also been a great choice.
Going with serverless tech has easily saved us ~2 FTEs in a team of fewer than 10 engineers.
+1 for Cloud Run. I run a few production workloads with it, and it's dirt cheap and reliable. It has come a long way over the last two years.
As for lock-in: we habitually put abstractions between things like key-value stores, messaging and queuing services and the business logic -- so moving this stuff to, say, fargate/dynamo/sns/sqs would not be a lot of work -- probably a few weeks to rework the terraform scripts, figure out a good approach to roles, and that's it.
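The kind of abstraction seam described here can be as small as this (a hypothetical sketch; the real interface would mirror whichever operations the business logic actually needs):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class KeyValueStore(ABC):
    """The seam between business logic and whichever vendor service
    (DynamoDB, Firestore, Redis, ...) actually stores the data."""
    @abstractmethod
    def get(self, key: str) -> Optional[Any]: ...
    @abstractmethod
    def put(self, key: str, value: Any) -> None: ...

class InMemoryStore(KeyValueStore):
    """Dict-backed stand-in for tests and local dev; a vendor-backed
    implementation satisfies the same interface."""
    def __init__(self) -> None:
        self._data: dict = {}
    def get(self, key: str) -> Optional[Any]:
        return self._data.get(key)
    def put(self, key: str, value: Any) -> None:
        self._data[key] = value

# Business logic depends only on the interface, never on the vendor SDK:
def record_login(store: KeyValueStore, user: str) -> int:
    count = (store.get(user) or 0) + 1
    store.put(user, count)
    return count
```

Swapping providers then means writing one new `KeyValueStore` subclass rather than touching the business logic.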
I was going to mention CloudRun as a specific counterpoint to the author's opinion on serverless functions. It can't get much more portable than a container running on arbitrary HTTP triggers. We use pub/sub as a trigger because we like the functionality but you can do whatever you want.
You can, for example, containerize your monolith and expose task triggers via API endpoints so your web server doesn't have to do that work. I really like it.
How many services? I've run a pretty similar stack on Heroku with very minimal pain. It was really cheap until we started to hit some heavier scale, at which point, one engineer became part-time devops (like 30%) and we moved to AWS.
How portable is an app using serverless? With VMs (and especially containers / k8s stuff), you can just pick up the whole app and ship it off to another cloud provider (except if one uses custom offerings). Can serverless functions be done the same way?
I guess at the end of the day it's just JS / Python code, but is there an "export to / import from a common format" API / web portal button?
* Cloud Run is just running a Docker image. It's basically as platform agnostic as Docker is.
* Similarly, App Engine Flexible is just running a plain version of NodeJS. No specific lock-in if you don't access GCP-specific services.
* With respect to Google Cloud Functions, again we're just running a simple Node service in response to some PubSub events. There would be work to migrate this, for example, to AWS Lambda, but it would be a very straightforward mapping.
It depends on how you develop. In most cases, it's arguably easy to separate your code from cloud-specific stuff. If you do that, you have high portability.
Yeah, honestly even the Standard environment is pretty minimal in terms of lock-in. We deploy Django apps to GAE Standard with Cloud SQL and aside from a few bits in settings.py, there's nothing App Engine specific about it. I can package them up as docker apps and with a different settings file, run them pretty much anywhere else.
AWS recently (prematurely) announced lambda function URLs, which would make API Gateway unnecessary [1]. It will probably be officially launched in the next couple of weeks.
For our app, I'm in the process of removing API Gateway and calling lambdas directly from our back end servers instead, as it added a lot of complexity and the 30 seconds timeout imposed by API Gateway is causing issues.
ALB integration limits payload size to 1 MB, compared to the 6 MB one gets with the Lambda API or API Gateway. I would love to use ALB more, if not for this limit.
Good article, and I think it's a sign we're approaching the trough of disillusionment w.r.t. serverless architecture.
Just like with microservices, the benefits come from not using it as a giant hammer for everything, but in isolated use cases where it's actually a good fit. These are few and far between for serverless and the first challenge is making the right decision of when to use it.
I do agree about vendor lock-in, but at the end of the day this is inescapable because of its very nature. It was always meant to be "run this code on someone else's infrastructure without me having to worry about maintaining it", so vendors are free to choose how that actually happens.
What we need to fix this are standards that can be adopted by all vendors, so that it's easier to migrate. Something like OCI for serverless that isn't tied to specific runtimes would be ideal.
This is where Knative shines, especially with Google's Cloud Run and Cloud Run for Anthos. Billing is also not at the invocation level but rather on cumulative runtime, where each container can define a level of concurrency according to your needs instead of spinning up an individual function instance every time.
Yes, it’s a shame “serverless” became synonymous with FaaS, but understandable given that Lambda was the first to use the term.
I was pretty excited by Lambda initially, but I gave up on it for some of the reasons the author talks about. Google Cloud Run actually achieves what I want from serverless: I never have to think about the number of servers and can pay for less than one instance during low-traffic periods, but I could migrate my app elsewhere easily if I wanted.
I'm basically living this experience right now. My previous job involved building and maintaining web apps using boring dependable frameworks and infrastructure. My current company is all in on serverless and uses it for everything. I now spend much LESS time writing business logic.
Figuring out how to finagle various proprietary services and wrangle them together can take a ton of time and effort.
The crazy thing to me is that this is how things run at par. As the article points out, the downside risk with serverless is massive compared to using more traditional tools. If AWS jacks up its prices, deprecates services we rely on, or has a major issue with my company for any reason -- these could easily become existential events for us.
A few years back, I refactored a memory/time-intensive task of an application to leverage AWS lambdas. Once the code got into production, it ended up working out fine and we were able to side-step OOM issues that were plaguing the prior incarnation. Takeaways:
* being able to scale on a per-request basis and understand your burst usage of resources is pretty useful
* the top-level comment regarding CAPEX vs OPEX was spot on. Our higher-ups always had their eyes on the AWS bill and not having to pay for instances (either spot or elastic) appeased them.
* at the time at least with Java-based lambdas, the local tooling was really clunky and slow with that nasty startup. You might have better success with node or python in that area.
* otherwise, like any other tech, there's gonna be tradeoffs. Something good to keep in your toolbox, but like most things they're no silver bullet.
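The per-request scaling in the first takeaway can be sketched roughly like this (function names and the fan-out scheme are hypothetical, not the commenter's actual code):

```python
def chunks(items, size):
    """Split a memory-hungry batch job into slices small enough that a
    single invocation never exceeds its memory limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def handler(event, context=None):
    """Each Lambda invocation processes one slice; fanning out this way
    avoids the OOM failure mode of one giant long-lived process."""
    return [process(record) for record in event["records"]]

def process(record):
    return record * 2  # placeholder for the real per-record work
```

A dispatcher would invoke `handler` once per slice produced by `chunks`, so burst resource usage stays visible per request.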
I actually just did my first production serverless project and didn’t find the first point to be an issue. I banged out some nodejs, can run it locally for dev or at lambda for prod. It’s a small one-endpoint service, but it works great imo.
For smaller applications you likely won't experience much complexity, but building larger apps involving multiple related cloud services, configuration and the need for debugging can increase (dramatically).
Yep, despite being a big fan of serverless, I feel like the most "useful" (for lack of a better word) serverless deployments are predicated on locking you in while decreasing the cost for the platform operator. Take serverless DBs, for example. If you're using a managed DB you generally don't care whether it's serverless or not, but the platform operator does, as it becomes easier for them to offer you a free tier to get you to put your critical data into their platform.
It was a poorly designed service for the client-user from day one; for the serverless provider, it's genius for generating larger revenues.
Running a server or a cluster of servers is elementary, kids. Do not believe the cloud hype. There is no need to spend the cloud dollars when a small fraction of one's cloud expense gets one vastly more compute. All you need to do is get over the fear that running a server is difficult. It is not.
Running a secure and reliable server is not trivial. Getting one up and running is easy, sure. Keeping it that way with four nines of reliability is anything but. Scaling past that one server is its own task.
Talking about a cluster being elementary is frankly absurd.
I think you might be confusing “got it working!” with “hardened in production.“
It is not trivial, but it is not rocket science either. And as far as that goes, rocket science is not that hard either. It is a skill set that is very much worth developing, which pays itself back exponentially.
I feel like a lot of the industry is just trying to be relevant by pretending simple things are difficult, selling buzzword solutions to hide the simplicity.
While I completely agree with the sober assertion that the serverless hype didn't mean that programmers would only write business logic, I also think that no one should take business "bullshitters" seriously when they claim such. Hype speech (wild exaggeration disregarding reality) is a pervasive dialect of the web.
> The bulk of your time building serverless applications will not be spent writing the application logic or focusing on the parts of your code that are in fact the differentiated thing that you’re being paid to work on. It just flat out won’t. Instead you’ll spend most of your time figuring out how to mate these functions with other services from that cloud provider.
I completely disagree with this point, as it goes totally against the experience that I and my team have piled up after a couple of years of running a couple of serverless apps that rely heavily on a few AWS Lambdas. The API Gateway and ALB code was pretty much a one-and-done, with the bulk of the work consisting of setting up TLS termination, and the bulk of the work was on writing and testing all the business logic.
The only exception I've seen to this rule is if your serverless app consists of a bunch of event handlers for a large set of AWS services, like S3 triggers and message queues, that are little more than one-liners doing nothing beyond plumbing events around. Still, I don't feel it's right to judge serverless by this sort of application, built around lambdas that don't do much by design.
> I’m apparently atypical here! Folks don’t like to spend an order of magnitude more to monitor a system than the system itself costs to run. (...)
I also don't believe this point is fair or reasonable. It makes no sense to complain about serverless because it can be dirt-cheap (or even free to use) but your choice of monitoring service, coupled with the way you chose to use it, ends up costing more. You pick what you use and decide how you use it, and if your personal choices lead to a price tag greater than zero then that's the outcome of your own design decisions.
This complaint is particularly egregious given that AWS CloudWatch has a free tier that's very clear and included in basic intro-to-AWS tutorials.
> (...) It turns out that while it’s super easy to find folks who know WordPress, you’re in trouble if both of the freelance developers who understand serverless are out sick that day — not to mention that they cost roughly as much as an anesthesiologist.
Again, this is hardly a serverless issue. You'd experience the exact same problem if you ran a Spring monolith.
I kind of wonder if you and OP are talking about significantly different scale in amount of code/functionality/team size.
Perhaps I am just jaded, but I take it for granted that writing application logic is only ever going to be a significant minority of “your time”—in larger orgs there are teams specializing on the different monitoring, platform, infra, etc taxes that are a reality of running software at scale.
Five smart hackers that know aws and python well are going to get much further on lambda-all-the-things than an org of several thousand would, probably.
Lambda is also outrageously expensive at scale; but at the end of the day I'd chalk this up mostly to "OP's org not being very good at serverless compared to other patterns, the five hackers being good at it".
I don't see Lambda functions in this way, but then again I'm not using them as server replacements as in making them the end points of an API call.
I use lambdas exclusively to process other events generated by S3 or DynamoDB that happen because my web service stored something or directly wants something to happen.
This way, I didn't have to write a scheduling system or a batch system for my software, I just use AWS's perfectly good implementations. I don't really see the advantage in specifying my API, tying myself to the AWS authentication infrastructure or trying to encapsulate small bits of my API into Lambdas although I can see why you might do that in a bigger system. For me, the scaling occurs in ECS, not in distributing lambda functions.
The better part of "serverless" is the containers running the web system, S3 and the managed service running the database. The lambda functions make the job of longer running tasks much easier to manage and they are terrific for that.
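An S3-triggered lambda of the kind described here is typically just a few lines (a sketch; `process_object` is a hypothetical stand-in for the real work, while the `Records`/`s3` nesting follows AWS's documented notification shape):

```python
def handler(event, context=None):
    """Invoked by S3 object-created notifications; each record names
    the bucket and key of an object that just landed."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(process_object(bucket, key))
    return results

def process_object(bucket: str, key: str) -> str:
    # placeholder for the real work (resize, transcode, index, ...)
    return f"{bucket}/{key}"
```

No scheduler or batch system needed: S3 delivers the event, and Lambda handles the retries and scaling.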
> What format is it expecting? Do you have the endpoints correct? Is the security scoping accurate? Oh, it didn’t work? Time to embark on a microservices distributed systems murder mystery where the victim is another tiny piece of your soul, because getting coherent logs out of a CloudFront –> API Gateway –> Lambda configuration is CRAP.
I'm trying to figure out what this means, because my experience has been the opposite. I find the logs to be generally very good, and AWS X-Ray gives you good visibility into the whole process. And you can use third-party telemetry/tracing providers too.
Also AWS Cloudwatch is actually pretty powerful for monitoring and supports custom metrics and alarms. It's not the most powerful log system in the world but it is pretty painless to use.
Also, Lambdas are just short functions written in the language of your choice (of which there are many) or packaged in a container. What exactly about that is not portable? If I write node.js code it is trivial to run it in another environment that runs node.js.
I'm not an AWS super-fan, but I feel this article is a rant (and one written without knowing all the actual capabilities of AWS) rather than fact-based.
I personally use Lambda a lot. The operations effort is near nonexistent saving me countless hours. As I said, I actually like cloudwatch. And economically, if my microservice is unused it uses zero resources and scales up near instantly.
Edit: to add to that, it sounds like the author's real issue is with microservices not serverless.
I guess it makes sense for a blog with "A.W.S." in its name but starting out by explicitly dismissing the obvious disproof of your thesis, Google App Engine, doesn't really make the argument work. App Engine exists. App Engine is serverless.
I am building a distributed FaaS using the web as the normalising environment. This has several benefits, like the developer's browser transiently becoming "prod", unlocking debugging-in-prod tooling, but it also addresses portability. 20ms deploys. It's 100% optimized for DX, with no toolchain either.
> you’ll spend most of your time figuring out how to mate these functions with other services from that cloud provider
That's just as true if you move the code from serverless to a VM in that cloud, and keep using all the other services.
Vendor lock-in for the serverless function itself seems to mostly be an AWS thing (Lambda). Elsewhere, the trend is more that any old container will run as serverless.
If you choose to consume dozens of proprietary services from a regular container, that is on you.
AWS Lambda also supports running containers. And it supports a number of language runtimes, including custom runtimes. The only thing proprietary it imposes is the interface your function reads when it is called. I'm curious what part of Lambda makes it have excessive lock-in vs the others, once you discount "choosing to consume dozens of proprietary services"?
All my serverless code always has a wrapper function to decode the request object, so porting it to another cloud provider (even from Lambda) is pretty easy.
Edit: The article seems to state the issue is that things like Step Functions are proprietary, but:
a. That's not Lambda
b. Don't use Step Functions; no one is forcing you. There is nothing Step Functions does that can't be handled other ways.
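The wrapper approach described above might look roughly like this (a minimal sketch; `business_logic` and the fallback event shape are hypothetical, while the `httpMethod`/`path` keys match AWS API Gateway's proxy event format):

```python
def decode_request(raw_event: dict) -> dict:
    """Normalize provider-specific events into one internal shape,
    keeping everything below this line provider-agnostic."""
    if "httpMethod" in raw_event:  # AWS API Gateway / Lambda proxy event
        return {"method": raw_event["httpMethod"],
                "path": raw_event["path"],
                "body": raw_event.get("body")}
    # generic HTTP shape for other providers
    return {"method": raw_event.get("method"),
            "path": raw_event.get("path"),
            "body": raw_event.get("body")}

def handler(event, context=None):
    request = decode_request(event)   # the only provider-aware line
    return business_logic(request)    # portable from here down

def business_logic(request: dict) -> dict:
    return {"status": 200, "path": request["path"]}
```

Porting to another provider then means adding one branch to `decode_request` rather than rewriting the application.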
We abuse serverless on my team. We build our services in NestJS and use an adapter to connect Express to Lambda REST API events. Business logic is properly tiered. The code is testable at the unit and integration levels. To support debugging locally we use podman and localstack.
The performance is quick enough for our purposes. Our SLAs are basically "do it before the 30-second API Gateway window times out". If we need better performance, or if we hit issues like payload size limitations, we could deploy our apps on EC2 with no loss of features or rework.
> That’s what this article is about: Say what you will about serverless, it’s failed to live up to its promise and hasn’t proved to be particularly lucrative for anybody
Hard disagree, if you define lucrative as having a successful business model around serverless tooling/architecture, or as your business simply being powered by serverless services. Lots of folks have had lucrative success, without question.
Not only that, serverless has allowed full-stack web dev to flourish, so I'd argue it's helped some individuals build lucrative careers as well.
I work in the financial industry and I can't even begin to count the number of servers we run just for nightly and weekly batch jobs. Migrating this workload to Azure Batch is helping us save a ton of money while improving our ability to run multiple jobs concurrently and also cutting down on our server costs.
As much as I enjoy Corey's tweets, he is wrong on this one use case.