I went to a webdev convention, and it ended up being a serverless hype train. Industry experts with a financial incentive to promote serverless went on stage and told me they can't debug their code, or run it on their machine. They showed me comically large system diagrams for very simple use cases, then spent an hour explaining how to do not-quite-ACID transactions. Oh yeah and you can only use 3-5 languages, and each import statement has a dollar amount tied to it.
More importantly, all those skills you develop are tied to Amazon, or some other giant. All the code you write is at their mercy. Any problem you have depends on their support.
Am I supposed to bet hundreds or thousands of man hours on that?
AWS in particular seems to have a carefully refined technical sales/certification/advocacy channel whose main product is those fucking stupid architecture diagrams. Hello world service with $4000/mo. worth of geo-replicated backing databases, CloudWatch alarms, API Gateway instances, WAF etc.
But don't let it encourage you to think serverless has no value, or it can't be done portably or cheaply. It has its sweet spots just like everything else.
Reminds me of something that was on the HN frontpage some months ago, where readers weren't sure if it was a parody or not, because of the architecture you're required to deploy yourself to use this new "Perspective" product. Direct link to the architecture, which in the end serves the use case of generating a diagram of your AWS resources: https://d1.awsstatic.com/Solutions/Solutions%20Category%20Te...
Reading this made me remember that back in the day the AWS selling point was "here you can create virtual machines with a few clicks and have it instantly instead of waiting 30 min for your colocated server to be ready", but now it seems to be "here is a bunch of random expensive tools, please, produce as much stuff as possible and spread the word that having servers is bad™".
This field used to be inspiring, but now I see the idea of having a server being sold as the plague, and lots of negativity towards the people who are good at servers. They are not seen as fellow human beings but as the "other".
Also I can't understand why one would prefer to pay that much for such complexity.
It seems unsustainable to me, and the new generation being spoon-fed that this is the way to go makes me concerned about the future of open computing.
"Also I can't understand why one would prefer to pay that much for such complexity."
If there is one thing that dealing with AWS reps has taught me, it's that this was 100% driven by customers. I swear to god, AWS doesn't do anything without customers asking for it.
If you are wondering why products are built in AWS, it's because people wanted to give them money for this. Say what you want, but this isn't something they are pushing on us. This is something "we" push on them to provide.
Your response made me smile. Glad to see there are still reasonable folks out there.
Working for a startup without a "resume-driven-development" CTO gave me the freedom to go "servermore" architecture with max flexibility.
For all the talk about resume driven development, I am hiring now, and will say that people who have solved their problems by learning more about Linux and such sound way more impressive than those who list out passing familiarity with a bunch of high level services. The first style of resume really stands out. The second is a dime a dozen.
To make an analogy, it's like hiring a carpenter based on the number of tools they have. NO. Show me your skill with a hammer and chisel, and I'll assume you can figure out the rest.
> not to mention the new generation being spoon-fed that this is the way to go
The new generation is always joining one cargo cult or another, that's why competent technical leadership is important. Remember when noSQL was the best thing since sliced bread?
Serverless can be a good option if you have large and unpredictable transient loads.
Like any architectural choice, you need to consider the tradeoffs and suitability for your use case. A TODO app probably doesn't need to be built with a serverless SOA.
It reminds me of https://www.frankmcsherry.org/graph/scalability/cost/2015/01... - you pay an extremely high cost upfront without even noticing - for the promise that at some point, you'll be able to scale out to any scale, with no extra human effort and for costs proportional to the scale.
I have not observed enough uses of serverless I can draw conclusions from, but if it's anywhere like hadoop style scaling, 95% of the users will pay 10x-100x on every metric without ever actually deriving any benefit compared to a non-distributed reasonably defined system, and 5% will actually benefit -- but everyone would want to put it on their resume and buzzword-bingo cards.
It depends. To give you an example: We have an expensive reporting workflow that needs considerable resources, but usually only once a month. Two ways to do this cost effectively: Scale down containers to zero, meaning several minutes of response time if you do need it ad-hoc sometime. At least as complex to configure. OR you fit it to AWS SAM. The latter has proven to be a good match for us. I solved the debugging story by keeping lambda functions AWS agnostic and wrapping them in a Flask debug server - it will respond exactly the same but it is all local in a single process.
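Roughly, the wrapper can be as small as this (a simplified sketch with made-up handler names; the real handlers keep the usual (event, context) signature):

    # debug_server.py - sketch: run AWS-agnostic Lambda handlers in one local Flask process
    from flask import Flask, request, jsonify
    from report_lambdas import generate_report, export_csv  # hypothetical plain handlers

    app = Flask(__name__)
    HANDLERS = {"generate-report": generate_report, "export-csv": export_csv}

    @app.route("/invoke/<name>", methods=["POST"])
    def invoke(name):
        event = request.get_json(force=True)   # same event dict Lambda would pass in
        result = HANDLERS[name](event, None)   # context unused by AWS-agnostic code
        return jsonify(result)

    if __name__ == "__main__":
        app.run(debug=True)  # one local process, so breakpoints and stepping work normally

Since everything runs in one process, you can set breakpoints and step through the handlers like any other code.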
I’m the head of product for Temporal. You should check us out, have to self host (MIT) right now but we are working on a cloud. You can safely write code that blocks for years. No more job queues.
I suppose you mean temporal.io, but just FYI, the first google result for me was https://github.com/RTradeLtd/Temporal which is far but not far enough to be “obviously unrelated”.
You might want to give a link next time to avoid ambiguity.
Every serverless design I've seen turns into a complex workflow of distributed functions chained together by a pile of XML.
Each example that I've seen could have been replaced with a simple single-process job. You should not need to go over the network because you need a process to run for more than 10 minutes. You should be able to check the status of resource creation via a for loop.
An argument can be made for not needing a server, but I can easily fire off a container in Fargate/GKE/ECS and get a similar benefit.
Our approach is to start with a mildly-distributed (among threads within the process) monolith based on future passing, and my impression is that any latency-sensitive operation in a session cluster (that is, all of the sessions who are interacting directly with the same working set of processes and aggregates) can and should be supported by a single node. Rebalancing involves moving the cached aggregate (any blob, like a sqlite database or a JSON file) and/or† catching up the log (sequence of blobs) on the target node.
Futures are basically trivial to serialize, so the cost of involving a second node is as little as it could be; and node consolidation can be generalized because each node's dispatcher knows how many times it has accessed a remote process, which means that consensus on who should lead its log can be reached by simply sending it to the node that has the most use for it (with load balancing achieved by nodes simply not bidding for logs when their load factor or total service time is over some threshold, or say the 60th percentile of all the nodes, or whichever of those two is higher).
† In some cases it's faster to just catch up the log, rather than sending the aggregate.
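A minimal sketch of the in-process case, assuming Python's concurrent.futures (the real system described above obviously adds serialization, log catch-up and rebalancing on top of this):

    # Sketch: each aggregate is owned by a single worker thread; callers only ever
    # interact with it through futures, so moving the owner elsewhere later only
    # changes where the future gets resolved.
    from concurrent.futures import ThreadPoolExecutor, Future

    class AggregateOwner:
        def __init__(self, initial_state):
            self._log = list(initial_state)
            self._owner = ThreadPoolExecutor(max_workers=1)  # one owning thread

        def apply(self, event) -> Future:
            # Caller gets a future immediately; the owner thread applies events in order.
            return self._owner.submit(self._apply, event)

        def _apply(self, event):
            self._log.append(event)
            return len(self._log)  # e.g. the new log position

    owner = AggregateOwner([])
    fut = owner.apply({"type": "deposit", "amount": 10})
    print(fut.result())  # blocks only this caller, not the owning thread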
Thank you, serverless is the new structure that everyone loves and wants to work in, but no one talks about the downside. I just took over a project that has a serverless architecture.
We can't run it locally because the emulator won't run 7/10 of the code, there are memory limit errors that are totally opaque to us since we can't step through and watch it break. We are getting unexplained timing issues that were being worked on via logs in dev. The costs are insane.
If we want to say serverless is the future, the future isn't here yet. There are many tools that would need to be built to make serverless viable, and the tools that are built are immature and bad. Not to mention you're tied to the company's support; Google's Node runtime lagged behind the LTS for a while. The reality is going serverless means you don't have control over what happens to your code.
I believe one of the main reasons Serverless will never be more than a low-tier niche is the combination of the following:
In the end, you are just renting well maintained server farms (well, a specific percentage of operational time of some of the servers in them). There is absolutely no appeal for large technology-based companies to do this once they can (the following is the lowest scale example) afford to maintain their own servers, while potentially renting a few offsite backups in other areas of the world (again, this is just the lowest-scale architecture starting at which Serverless is always the worse option).
Existing solutions are extremely over-engineered. It can be excused with "officially planning for the use-case where maximum scalability is required", but it's almost certainly a pretense to sell "certifications", aka "explaining our own convoluted badly documented mess".
What this actually means is that many SWEs who are good enough to learn to use Serverless effectively can learn any other framework that allows building distributed systems across server nodes with equal effort. Why would I base my whole business on your vendor-locked, sub-optimized dumpster when I can do the same on an infinitely scalable VM network that can be ported to literally any vendor who supports $5/mo VMs (or, you know, self-hosted if my company is large enough)?
The flip side to your statement: smaller companies (like where I am) are not interested in maintaining our own servers and doing all the undifferentiated heavy lifting that comes with them. I want my small team to work on things that make us stand out. Running our own servers does not.
Vendor lock-in is not a big consideration for us. I increasingly think about cloud vendors like operating systems. I really don't care which Linux distro we are on. Pick a vendor and run 'natively' on it to go as fast as possible.
I'm happy paying AWS for maintaining the servers and getting out of that messy business. There are other concerns about serverless around observability and manageability, but vendor lock-in and cost are not part of my equation.
I view serverless, specifically AWS Lambda, as a nice way to hook into the AWS system and add some code to handle certain events. Basically, to customize the behavior of "the cloud" where AWS falls short (for whatever reason).
But developing with "serverless" as base stack? Nope.
I bought into hosting stuff on AWS, specifically on EBS, which I assume is what's meant by a 'serverless' infrastructure, but I don't know crap about it.
Still think it is mostly awesome and the services are solid. But such infrastructure comes with its own caveats.
EBS now bugs me about some 'HealthCheckAuthEnabled'. I don't know what it wants, just that I have limited time to react. Cannot understand the clearly auto-translated text. Still didn't get what it wants from me when I read it in English. I banned the health check from my logs and said where it can get a response from the service I run. I hoped our friendship ended there. Maybe it did and this is some form of retaliation.
The load balancer AWS set up for me now suddenly costs money. And I thought I could just use it to grab the free TLS cert that comes with it... well, the clients pay for the additional costs anyway and I would be surprised if anyone even noticed the price spike. The traffic is minimal but I am surprised it is actually that expensive. I could probably cut the costs in half if I actually had motivation to look through all the settings...
Microsoft just cancelled some features of some online service my colleague works with because they want to market an alternative for their BI solution. It cost him at least 3 weeks of work. The stuff cannot be ported to an on-premise version.
Hosting your own server is a lot of work and isn't fun. There is still a lot of work to do if you host something on such an infrastructure.
Somehow the amount of work you have to put into software hasn't decreased. Honestly the main argument for it is that you can give away responsibilities.
edit: Just noticed that EBS probably counts as PaaS instead of serverless. I run "stuff" (very simple functions) on Lambda too, but to think it would replace apps like the article suggested seems a bit much. It has its purpose, but nearly everything I have on Lambda is AWS cloud specific. It comes in handy for Alexa integrations for example. I think the container won this fight to be honest.
This could have been written by me. And for who ever doesn't have time to maintain roles, there is Ansible Galaxy or tons of roles in GitHub, including mine: https://github.com/liv-io
I've looked at Ansible a couple of times, but I always think it looks too verbose; more than that, the way files for the same thing are spread out across multiple folders just makes it seem like there is a steep learning curve. Something like Ansible sounds great, but it feels like learning it would be a chore.
Indeed.
Kind of reminds me of that sayin' about how "the greatest trick the devil ever pulled was to convince the world he didn't exist ..."
Sometimes I think it is a great trick some big players are pulling off, in convincing the world to subsidize their infrastructure costs (and more than that) by using their resources because it is "simpler" or more effective or alternatives are hard, etc ...
... to where the ability to competently setup and maintain those services will vanish, in due time.
(Which, of course would only help to consolidate those -already- running such systems ...)
It's really not that bad. Most of the investment is early on, after that servers just run. I use unattended updates to keep patches coming in while I'm doing other stuff. Updating from a Debian stable repo won't break your stuff.
Now, whether or not you, personally, do, is entirely up to you and where you want your career trajectory to go. (Both are equally valid because internally, AWS will always need ops people that understand computers.) However, a cottage industry of AWS Certified Solutions Architects has already sprung up, and some have been getting lucrative consulting gigs based on knowing the mishmash of Amazon names and how best to mash them up.
Serverless (specifically AWS Lambda) plays a part in the cloud, but most of your objections are true of the cloud and/or AWS more broadly, and have little to do with, specifically, serverless. (Despite whatever the webdev convention was trying to sell you.) There's a whole world of EC2 instances to know about, which are mostly VPSes and definitely aren't serverless (though there are some really neat cattle-level features that serverless borrows from them).
If the computer is a bicycle for the mind, then the cloud is a car for the programmer's mind. You can definitely build your own stack with your own team that manages to avoid the cloud, but at some point it becomes more than one person can handle. As you scale, there are points where the cloud does, and does not, make sense, but the cloud gets you access to far more resources as a tiny team than would otherwise be available to you.
Serverless (as far as I can understand) is what we can consider as the early stages of the commoditization of compute.
Meaning, similar to electricity or tap water, at some point in our lives we will subscribe to a provider and then just plug and play whatever we want to. It is just we are in the early stages.
Well, just because you had bad teachers doesn't mean the topic is bad. You had the misfortune to interact with people that don't understand the topic they try to explain. I try to give a simple explanation here: https://consulting.0x4447.com/articles/knowledge/should-i-go... - just as a starting point to at least get a frame of reference for when to use and not to use Serverless. But hey, I'm just a random person in a random location on this planet :)
Serverless is meant to lower the bar for front-end developers so they don't have to learn the basics of web back-end development to be able to develop apps. I'm very glad it failed!
People who don't put time into learning development should not do it.
Your second point broadly makes sense. In fact, it's almost a truism: if you don't put time into learning, you won't have the skills necessary to be able to execute.
However, it seems detached from your first point.
If one's role is a front-end developer, is it necessary that they know about back-end development? If it is outside their intended job function, why would they need to know about it, if it doesn't get in the way of performing their job? If you are a backend developer, do you need to know about how to host your own infrastructure? Handle your own networking? Chip design? Your logic could be applied to any job function. Each level of the stack benefits from the levels below it being abstracted. We all stand on the shoulders of giants, and we're all much better for it.
Overall, I do think it's better that one has a good understanding about the various components one interacts with. Having a grasp of the overall system will come in handy. A curiosity into other parts of the system is beneficial, and likely is one of many indicators of success. However, if job functions can be simplified and superfluous context removed, why should we fault those for taking advantage of that?
This issue with "lowering the bar" and being glad that a simplification has "failed" (which is yet to be determined) reeks to me of gatekeeping. The same logic could be applied to any job role which benefits from simplification. In an extreme example, this logic could be extrapolated to support the notion that anyone who cannot build their own machine from the ground up should never work in a programming position. What height is appropriate for the "bar"?
Kind of surprised the article didn't mention lack of reasonable development environment.
At least on AWS, the "SAM" experience has been probably the worst development experience I've ever had in ~20 years of web development.
It's so slow (iteration speed) and you need to jump through a billion hoops of complexity all over the place. Even dealing with something as simple as loading environment variables for both local and "real" function invokes required way too much effort.
Note: I'm not working with this tech by choice. It's for a bit of client work. I think their use case for Serverless makes sense (calling something very infrequently that glues together a few AWS resources).
> It's so slow (iteration speed) and you need to jump through a billion hoops of complexity all over the place. Even dealing with something as simple as loading environment variables for both local and "real" function invokes required way too much effort.
Honestly, it reminds me of PHP development years ago: running it locally sucked, so you need to upload it to the server and test your work. It. Sucked.
It was actually pretty good if you had an IDE with sftp/scp support because you could save a file, refresh your browser, and have immediate new results.
Yeah this wasn't too bad and it was what I used to do back in the day with Notepad++. By the time you hit save in your editor, your changes were ready to be reloaded in the browser.
With SAM we're talking ~6-7 seconds with an SSD to build + LOCALLY invoke a new copy of a fairly simple function where you're changing 1 line of code, and you need to do this every time you change your code.
That's even with creating a custom Makefile command to roll up the build + invoke into 1 human action. The wait time is purely waiting for SAM to do what it needs to do.
With a more traditional non-Serverless setup, with or without Docker (using volumes), the turnaround time is effectively instant. You can make your code change and by the time you reload your browser it's all good to go. This is speaking from a Python, Ruby, Elixir and Node POV.
The workaround my team uses is to make two entry points to start the webapp: one for SAM, one for local. For fast iteration we just `npm start`, and when we're ready to do more elaborate testing we run with SAM. This works pretty well so far.
I'm not sure why that's PHP's fault? I never had problems running it locally... and to "get my code to the servers" was as easy as a git pull from the server which is probably the 2nd laziest way of accomplishing that.
Out of genuine interest... is there a modern solution to this problem with PHP/MySQL?
(I'm still doing the "upload to server to test" thing.... I've tried MAMP and Vagrant/VirtualBox for local dev but both of them seem horribly complex compared to what we can do with local dev with node.js/mongo and so on.)
"docker-compose up" and your OS, codebase, dependecies and data is up and running locally in the exposed local port of your preference. You can even split parts in different layers to mimic a services/cross-region logic.
Of course this won't fix the fact that you have a lambda behind api gateway that does some heic->jpg conversion and can't be hit outside the DMZ, or some esoteric SQS queue that you can't mimmic locally - but it should get you almost there.
This doesn't solve the OP's problems though; if Vagrant is complex, so would be a Docker image. The problem is the user doesn't know how to manage/configure the underlying complexity of the OS and needed services, which would still be a problem if using Docker. Unless you find that perfect Docker image with every dep you need... but that would also be true with Vagrant.
FWIW I haven't hit any scenario out of the basic services that localstack couldn't run locally. I even have it executing Terraform on localstack as if it was AWS (without IAM which is indeed a problem when I forget to create/update policies)!
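For example, pointing boto3 at localstack is usually just an endpoint override (assuming a recent localstack listening on its default edge port, 4566; throwaway credentials are fine):

    import boto3

    # Same code path as production; only the endpoint (and dummy creds) differ.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:4566",  # localstack edge port (assumption)
        aws_access_key_id="test",
        aws_secret_access_key="test",
        region_name="us-east-1",
    )
    s3.create_bucket(Bucket="local-test-bucket")
    print(s3.list_buckets()["Buckets"])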
Just run PHP and MySQL locally? Native PHP on Windows is horrible to set up, but with WSL/WSL2 you suddenly can get a normal Linux environment without much hassle.
sudo apt install nginx php mysql, point the www directory of nginx to something on your computer (/mnt/c/projects/xyz) and you've got a running setup. Or run Linux in general, that's what most people I've seen work on backends seem to do. You can run the same flavour of software that your deployment server runs so it'll save you time testing and comparing API changes or version incompatibilities as well.
I don't know any solution for macOS but you can probably get the necessary software via Brew if you're so inclined. Then run the built-in PHP web server (php -S) and you get the same effect.
What's horrible about it? I just download it, unzip to Program Files, add the folder to my %PATH% and that's about it. I didn't find myself in a situation where I would need an Apache or other webserver, the built-in one is good enough. It also makes using different versions easy, no need to deal with Apache and CGI/FPM. You just use other PHP executable.
I find it easier to handle multiple PHP versions on Windows than on Linux. As you say, just download the zip, unpack it somewhere, copy php.ini-development to php.ini, and you can do this for every minor PHP version.
Apache is almost as easy, download zip, unpack & configure apache conf to use your php.
MySQL is somewhat more complicated because you need to run a setup script after unpacking the zip.
I used to complain about the same thing and even asked someone who was head of BD for Serverless at AWS what they recommended, and didn't get an answer to my satisfaction. After working with more and more serverless applications (despite the development pains, the business value was still justified) I realized that local development was difficult because I was coupling my code to the delivery. This is similar to the way you shouldn't couple your code to your database implementation. Instead, you can write a function that takes parameters from elsewhere and call your business logic there. It definitely adds a bit more work, but it alleviates quite a bit of the pain that comes with Lambda local development.
Disclaimer: I work at AWS, however not for any service or marketing team. Opinions are my own.
> Instead, you can write a function that takes parameters from elsewhere and call your business logic there.
This is what I tried to do initially after experiencing the dev pain for only a few minutes.
But unfortunately this doesn't work very well in anything but the most trivial case because as soon as your lambda has a 3rd party package dependency you need to install that dependency somehow.
For example, let's say you have a Python lambda that does some stuff, writes the result to postgres and then sends a webhook out using the requests library.
That means your code needs access to a postgres database library and the requests library to send a webhook response.
Suddenly you need to pollute your dev environment with these dependencies to even run the thing outside of lambda and every dev needs to follow a 100 step README file to get these dependencies installed and now we're back to pre-Docker days.
Or you spin up your own Docker container with a volume mount and manage all of the complexity on your own. It seems criminal to create your own Dockerfile just to develop the business logic of a lambda where you only use that Dockerfile for development.
Then there's the whole problem of running your split-out business logic without it being triggered from a lambda. Do you just write boilerplate scripts that read the same JSON files, and set up command line parsing code in the same way as `sam local invoke` does to pass in params?
Then there's also the problem of wanting one of your non-Serverless services to invoke a lambda in development so you can actually test what happens when you call it in your main web app but instead of calling sam local invoke, you really want that service's code to be more like how it would run in production where it's triggered by an SNS publish message. Now you need to somehow figure out how to mock out SNS in development.
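To be concrete, the boilerplate I'm talking about looks roughly like this (hypothetical file and handler names), which is more or less what `sam local invoke` does minus the container:

    # invoke_local.py - hand-rolled local trigger for a single function
    import json
    import sys
    from my_function.app import handler  # the same entry point Lambda would call

    if __name__ == "__main__":
        event_path = sys.argv[1] if len(sys.argv) > 1 else "events/sns_message.json"
        with open(event_path) as f:
            event = json.load(f)          # the same mocked SNS payload SAM would use
        print(json.dumps(handler(event, None), indent=2))

It works for isolated runs, but it still doesn't answer how another local service is supposed to trigger the function through SNS.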
Unless I've misunderstood, every knock against serverless above has actually been a knock against the complexity of having tiny, de-coupled cloud-native services and how difficult it can be to mock... to which the answer is often "don't mock, start by using real services" and then when that is less reliable or you need unit tests, then mock the data you expect. In the case of SNS, mock a message with the correct SNS signature, or go one layer deeper, stub out the SNS validation logic and just unit test the function assuming the response is valid or invalid? In the case of Postgres, you could use an ORM that supports SQLite for dependency-free development but at a compatibility cost... worst case you might need to have your local machine talk to AWS and host its own LetsEncrypt certificate and open a NAT port... but one can hope it doesn't come to that...? Even so... that's not exactly a knock against serverless itself, is it?
> In the case of SNS, mock a message with the correct SNS signature, or go one layer deeper, stub out SNS validation logic.
SAM already provides a way to mock out what SNS would send to your function so that the function can use the same code path in both cases. Basically mocking the signature. This is good to make sure your function is running the same code in both dev and prod and lets you trigger a function in development without needing SNS.
But the problem is locally invoking the function with the SAM CLI tool is the trigger mechanism where you pass in that mocked out SNS event, but in reality that only works for running that function in complete isolation in development.
In practice, what you'd really likely want to do is call it from another local service so you can test how your web app works (the thing really calling your lambda at the end of the day). This involves calling SNS publish in your service's code base to trigger the lambda. That means really setting up an SNS topic and deploying your lambda to AWS or calling some API compatible mock of SNS because if you execute a different code path then you have no means to test the most important part of your code in dev.
> In the case of Postgres, you could use an ORM that supports SQLite for dependency-free development but at a compatibility cost
The DB is mostly easy. You can throw it into a docker-compose.yml file and use the same version as you run on RDS with like 5 lines of yaml and little system requirements. Then use the same code in both dev and prod while changing the connection string with an environment variable.
> That’s not exactly a knock against serverless itself, is it?
It is for everything surrounding how lambdas are triggered and run. But yes, you'd run into the DB, S3, etc. issues with any tech choice.
So there’s an argument that the future deployment model is actually Kubernetes Operators, which means you could have test code that deploys and sets up AWS APIs... thus if your code responds to the trigger, it’s up to another bit of code to make sure the trigger is installed and works as expected against AWS APIs?
And yes, I think the problem here are APIs you use in multiple places but can’t easily run yourself in a production-friendly way. Until AWS builds and supports Docker containers to run their APIs locally, I don’t see how this improves... end to end testing of AWS requires AWS? ;-)
> I realized that local development was difficult because I was coupling my code to the delivery.
Of interest, I've spent some free time crunching on CNCF survey data over the past few months. Some of the strongest correlations are between particular serverless offerings and particular delivery offerings. If you use Azure Functions then I know you are more likely to use Azure Devops than anything else. Same for Lambda + CodePipeline and Google Cloud Functions + Cloud Build.
I think his point was that you should be able to run and test the Lambda code independently of Lambda. After all the entry point is just a method with some parameters, you can replicate that locally.
Yes, this is a great way of doing things - I have no problems TDD'ing business logic hosted in Lambda, because the business logic is fully decoupled from the execution environment. SAM should be for high-fidelity E2E integration testing.
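Concretely, something like this (a sketch with made-up names; the point is that nothing below the handler signature knows it's running in Lambda):

    # handler.py - thin Lambda entry point (hypothetical names)
    from billing.logic import compute_invoice  # plain, Lambda-agnostic business logic

    def lambda_handler(event, context):
        # Only translate the event here; everything interesting lives in
        # compute_invoice, which can be unit-tested or run locally without
        # Lambda, SAM or Docker.
        customer_id = event["detail"]["customerId"]
        return compute_invoice(customer_id)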
This principle works with front end development too. Crazy build times on large applications can be alleviated if your approach is to build logic in isolation, then do final testing and fitting in the destination.
It’s hard to do this when surrounding code doesn’t accommodate the approach, but it’s great way to design an application if you have the choice. I really love sandboxing features before moving them into the application. Everything from design to testing can be so much faster and fun without the distractions of the build system and the rest of the application.
I felt your pain immediately and decided to write my own mini-framework to accomplish this.
What I have now is a loosely coupled, serverless, frontend+backend monorepo that wraps AWS SAM and CloudFormation. At the end of the day it is just a handful of scripts and some foundational conventions.
I just (this morning!) started to put together notes and docs for myself on how I can generalize and open source this to make it available for others.
stack is vue/python/s3/lambda/dynamodb/stripe but the tooling I developed is generic enough to directly support any lambda runtime for any sub-namespace of your project so it would also support a react/rails application just as well.
As a systems developer, comments like yours make me amazed at the state of web development. From the outside looking in, it seems like 10% code and 90% monkeying around with tooling and frameworks and stacks.
I believe it's the moment when there's a solution that just makes sense and works well for most people. A gold standard that other solutions will try to develop more and spice up, instead of reinventing it.
A lot of these DX (developer experience) concerns are, imo, rooted in what the article describes as "Vendor Lock".
Sure, you can write a bunch of tools to work around the crufty, terrible development environment's shortcomings. But ultimately, you are just locking yourself further & further & further in to the hostile, hard to work with environment, bending yourself into the bizarre abnormal geometry the serverless environment has demanded of you.
To me, as a developer who values being able to understand & comprehend & try, I would prefer staying far far far away from any serverless environment that is vendor locked. I would be willing & interested to try serverless environments that give me, the developer, the traditional great & vast powers of running as root locally that I expect. Without a local dev environment, one both runs into vendor lock-in & faces ongoing difficulties trying to understand what is happening, and with what performance profiles/costs. I'd rather not invest my creativity & effort in trying to eke more & more signals out of the vendor's black box. Especially if trouble is knocking, I would very much like to be able to fall back on the amazing toolkits I know & love.
AWS's whole pitch has been cutting out server huggers & engineers and relying on AWS, since day 1, often to wonderful effect, and with far far better software programmability than our old crufty ways.
but lambda gets to the point where there is no local parity, where it's detached, is no longer an easier managed (remotely operated) parallel to what we know & do, but is a system entirely into itself, playing by different rules. one must trust the cloud-native experience it brings entirely, versus the historical past where the cloud offered native local parallels.
I never got the hang of CloudFormation. I suppose it is nice from a visual (drag and drop) point of view, but I couldn't use it in production and moved on to manage my architecture with Terraform.
It sounds like you're describing the Cloudformation template visualiser/editor in the AWS Console, which I have never heard of anyone using as the primary interface for their Cloudformation templates.
Personally for simple projects I've had pretty good experiences writing a Yaml-based template directly, and for more complex projects I use Troposphere to generate Cloudformation template Yaml in Python.
This is a really funny thing, since for the last ~10 years I've been hearing how we're deliberately doing IaC/CM tools "not a programming languages because reasons" (and thus have to do horrible hacks to support trivial things like loops and conditions), and now suddenly we're building libraries in programming languages that convert the code into non-programming-language description, which is then interpreted by a program into several other intermediate representations and finally emits trivial API commands. I guess the next step would be write a declarative language on top of CDK or Pulumi that will compile it into python which will generate CF/TF files.
I manage a handful of projects with Terraform and it works well in many situations. It has improved a lot recently but for a long time I really hated the syntax. I still do to some extent but have learned to cope with it most of the time.
If you are working on a project where all of your infrastructure will live on AWS I would definitely urge you to give it a second look. The amount of infrastructure I manage right now with a single .yaml file is really killer.
Yes, it (Python) was chosen because we could leverage existing internal code that was written in Python and it happens to be my strongest language.
If I could do it all over, I would still choose Python. That being said, I have been working professionally (building apps like this) for almost 14 years so my willingness to bite off a homebrew Python framework endeavor as I did here is a lot different than someone just getting into the field.
Django: avoid unless you have a highly compelling (read: $$$$) reason to learn and use this tool. I cannot think of one, honestly.
Flask: fantastic, but be conscientious about your project structure early on and try to keep business logic out of your handler functions (the ones that you decorate with @app...)
Sophisticated or more sugary Node.js backends are not something I have ever explored, aside from the tried-n-true express.js. I tend to leverage Python for all of my backend tasks because I haven't found a compelling reason not to.
Django is decent for POCs that need some level of security since you get authentication out of the box with no external database configuration necessary due to sqlite. Sometimes you have an endpoint that needs that due to resource usage, but the number of users is so low that setting up a complicated auth system isn’t worth it.
Minimalist frameworks are great for either very small (since they don’t need much of anything) or very large projects (since they will need a bunch of customization regardless).
In that regard, I think Django is kind of like the Wordpress of Python.
That is such a tough question to answer carte blanche.
All-in-all, Django is not bad software. I have a bad taste in my mouth though because as I learned and developed new approaches to solving problems in my career I feel like Django got in the way of that.
For instance, there are some really killer ways you can model certain problems in a database by using things like single table inheritance or polymorphism. These are sorta possible in Django's ORM, but you are usually going against the grain and bending it to do things it wasn't really supposed to. Some might look at me and go: ok dude well don't do that! But there are plenty of times where it makes sense to deviate from convention.
That is just one example, but I feel like I hit those road blocks all the time with Django. The benefit of Django is it is pre-assembled and you can basically hit the ground running immediately. The alternative is to use a microframework like Flask which is very lightweight and requires you to make conscious choices about integrating your data layer and other components.
For some this is a real burden - because you are overwhelmed by choice as far as how you lay out your codebase as well as the specific libraries and tools you use.
After your 20th API or website backend you will start to have some strong preferences about how you want to build things, and that is why I tend to go for the compose-tiny-pieces approach versus the ready-to-run Django approach.
It's really a trade-off. If you are content with the Django ORM and everything else that is presented, it is not so bad. If you know better, you know better. Only time and experience will get you there.
That's great, cheers for that. It's helpful to know that your concerns are mainly to do with taking an opinionated vs non-opinionated approach - that's a framework for thinking about the choice between Django and (e.g.) Flask that many people (including myself) can hang their hat on.
On the flip side, not being able to use Django is one of the reasons against serverless for me. There's immense value in having a library for anything you might think of, installable and integratable in minutes.
You have to roll your own way too often in Flask et al, so much so that I don't see any reason to use Flask for anything other than ad-hoc servers with only a few endpoints.
Django gets you a lot if you have a traditional app with a traditional RDBMS and a traditional set of web servers. It’s too opinionated to easily map into AWS serverless.
Take a look at the [CDK](https://aws.amazon.com/cdk/) if you haven't already. It lets you define your infrastructure using TypeScript, which then compiles to CloudFormation. You can easily mix infrastructure and Lambda code in the same project if all you're doing is writing some NodeJS glue Lambdas which sounds like what you're looking for.
There's a couple of sharp edges still but in general it just 'makes sense'. If you don't like TypeScript there are also bindings for Python and Java, among others, although TypeScript is really the preferred language.
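To give a flavour, the Python bindings look roughly like this for a single glue Lambda (a sketch against the v1-era API; details shift between CDK versions, so treat it as illustrative):

    # app.py - rough CDK sketch using the Python bindings (v1-style API)
    from aws_cdk import core, aws_lambda as _lambda

    class GlueStack(core.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            _lambda.Function(
                self, "GlueFn",
                runtime=_lambda.Runtime.PYTHON_3_8,
                handler="handler.lambda_handler",        # file.function inside ./lambda
                code=_lambda.Code.from_asset("lambda"),  # directory with the handler code
            )

    app = core.App()
    GlueStack(app, "glue-stack")
    app.synth()  # emits the CloudFormation template that `cdk deploy` uses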
CDK made IaC accessible to me. I hated raw CloudFormation and never bothered with it for that reason. I had a crack at Terraform, but never got past the learning curve before my enthusiasm died.
Currently using some CDK in a production app and finally I found a way of doing IaC I actually enjoy.
You might really like Pulumi. I'm kind of on the opposite end, ops>swe so tons of IaC, and I'm using Pulumi now as I'm more SWE focused: https://www.pulumi.com/ (I've no relation to them)
Basically exact same as CDK. I really prefer this style over CloudFormation and Terraform. I think Pulumi emerging as another player in the space legitimizes the approach.
CDK is moving quite fast and not all parts are out of the experimental phase, so there are breaking changes shipped often. I think in a couple of years it will stabilize and mature and become a very productive way of working with infrastructure.
GCP has https://cloud.google.com/functions/docs/functions-framework but I will not use it. I have found the best solution is to abstract away the serverless interface and create a test harness that can test the business logic. This adds some extra complexity in the code, but iterations are fast and do not rely on the overly complex and bug prone "platforms" like SAM and Functions Framework.
This is precisely what I do when I write code destined to be an AWS Lambda Function. It really feels like the only sane way to do it. It also makes it easy to mock the incoming request/event for integration tests.
Developer experience for serverless is such a pain point, spot on. AWS SAM has tackled some of the IaC modeling problem (on top of CloudFormation which is a mature choice) and they've had a crack at the local iteration (invoke Lambda function or API Gateway locally with connectivity to cloud services).
It's a little incomplete, missing some of the AWS IAM automation that makes local development smooth, environment management for testing and promoting changes, and some sort of visualization to make architecture easier to design as a team.
I work for a platform company called Stackery which aims to provide an end-to-end workflow and DX for serverless & CloudFormation. Thanks for comments like these that help identify pain points that need attention.
Yeah, I took a look at using a serverless framework for a hobby project, and it was just a real pain to get started at all, let alone develop a whole application in.
I tried AWS, and then IBM's offering which is based on an open source (Apache OpenWhisk) project, thinking that it might be easier to work with, but that was also a pain.
I just lost interest as I was only checking it out. For something constantly marketed on the ease of not having to manage servers, it fell a long way short of "easy".
> Yeah, I took a look at using a serverless framework for a hobby project, and it was just a real pain to get started at all, let alone develop a whole application in.
Look into Firebase functions. Drop some JS in a folder, export them from an index.js file and you have yourself some endpoints.
The amount of work AWS has put in front of Lambdas confuses me. Firebase does it right. You can go from "never having written a REST endpoint" to "real code" in less than 20 minutes. New endpoints can be created as fast as you can export functions from an index.js file.
And if you need a dependency that has a sub-dependency with a sub-dependency that uses a native module, prepare for poorly defined -fun- hell getting it to work. A surprising number of standard JS libs do.
Being able to throw up a new REST endpoint in under 10 minutes with 0 config is really cool though.
And Firebase Functions are priced to work as daily drivers; they can front an entire application and not cost an insane amount of $, with per-single-ms pricing. Lambdas are a lot more complicated.
> Kind of surprised the article didn't mention lack of reasonable development environment.
I've been pretty happy with Cloudflare Workers.
You can easily define environments with variables via a toml file. The DX is great and iteration speed is very fast. When using `wrangler dev` your new version is ready in a second or two after saving.
I can report that Azure Function App development is at least pretty decent, as long as you have the paid Visual Studio IDE and Azure Function Tools (haven't tried the free version yet).
I tried AWS Lambdas a few years back and it felt way more primitive.
Azure Function App development experience is indeed pretty nice at least when using .NET Core. There are some issues, like loading secrets to the local dev environment from Key Vault has to be done manually and easy auth (App Service Authentication) does not work locally.
I've used Azure's serverless offering "Functions" quite a bit. The dev experience is pretty good, actually - it "just works" - start it and point your browser at the URL. And certainly no problems setting up env vars or anything basic like that.
My only nitpick, and only specifically relating to dotnet, is that config files and env vars differ between Functions and regular ASP.NET Core web apps. I think there is some work going on to fix that, but it's taking forever.
Couldn’t agree more, the dev experience was awful. You basically have to develop against public AWS services, my dev machine became a glorified terminal. They do seem to be iterating on the tooling quickly, but I wouldn’t use it again if I had a choice.
Edit: CloudFormation was also painful for me, the docs were sparse and there were very few examples that helped me out.
SAM templates are a subset of CloudFormation templates; that PDF could be three times as long and still not have the content I needed.
Yes there are examples, but there wasn’t one at the time that mapped to what I was trying to accomplish. Because, again, SAM templates are not one-for-one CloudFormation templates.
I found the community around SAM to be very limited. One of the many reasons I’ve moved to the Kubernetes ecosystem.
It definitely doesn’t have to be that way. I work on Firebase and I’ve spent most of the last year specifically working on our local development experience for our serverless backend products:
https://firebase.google.com/docs/emulator-suite
Still very actively working on this but our goal is that nobody ever has to test via deploy again.
Love firebase, thanks for your work! the local emulator suite is so important a feature and keenly following your progress.
Slightly OT... perhaps cheeky... any idea why firebase doesn’t provide automated backups for firestore and storage out of the box? Seems like a no brainer and a valuable service people would pay for.
I'm currently working on a little project backed by Firebase. Really interesting. Good to hear you're doing this - at my day job one of our key factors in choosing a technology is whether we can spin it up in docker-compose in CI and do real-as-possible ephemeral integration tests.
My experience is with .NET Core and the development experience is awesome... Dropped a $250/mo cost down to ~$9/mo moving it from EC2 to Lambdas. Environment variables are loaded no differently between development and prod. Nothing is all over the place, as there's almost zero difference between building a service to run on Linux/Windows vs in a Lambda.
Keep in mind Firebase has a big caveat. Firebase is great... for what it does. However, there's no way to easily migrate the Firebase resources to the larger GCP ecosystem. Firebase does what it does, and if you need anything else, you're out of luck.
Firebase is magic... but I never recommend it for anyone, until there's some sort of migration path.
[Firebaser here] that’s not quite accurate. For cloud functions they’re literally the same. Your Firebase function is actually a GCP function that you can manage behind the scenes.
With Cloud Firestore (our serverless DB) that’s the case as well. And Firebase Auth can be seamlessly upgraded to Google Cloud Identity Platform with a click.
However you’re right that for many Firebase products (Real-time Database, Hosting) there’s no relation to Cloud Resources.
Deploying is super slow. Usually it takes a minute or two, which is already quite long, but sometimes something goes wrong and then you can't redeploy immediately. You have to wait a couple of minutes before being able to redeploy.
To be fair, Firebase recently released a local development tool which alleviates the need to deploy on every change, but I haven't used it yet.
I'm a big Firebase user with Firestore and it has been great... no, not perfect, and the "cold start" is probably the worst issue. However, deployments are easy, the GUI tool keeps getting better (like the extension packages), and the authentication system is quick to implement.
I found Amplify excellent to get up and running quickly. I’d highly recommend it for anyone without a well-oiled CICD setup who wants to quickly get a website up to test out an idea.
Unfortunately, I quickly hit the limits of its configurability (particularly with Cloudfront) and had to move off it within a few months.
“Serverless models don’t require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can produce generic code, and then upload it to the serverless framework, and watch it run.”
... is utterly compelling and is why serverless will not just win, but leave renting a server a tiny niche market that few developers will have an experience of post 2030.
Maintaining your own server is completely nuts. If that isn’t obvious now, it will be in another decade. It’s massively inefficient. Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.
Almost all the objections in the article can be rephrased as “serverless is not mature enough yet”, and that’s accurate, but I suspect there’s also a bias against giving up control to the cloud companies, and some wishful thinking as a result.
The future of software development is going to be defined by cloud providers. They’re going to define the language ecosystem, the canonical architectures for apps etc... it’s just early days and cloud is really very primitive. Just clicking around Azure or GC or AWS illustrates how piecemeal everything is. But they have a lot to do, and just keeping pace with growth is probably hard enough. I’m not sure I’m super happy with this outcome, but I’m pretty certain the trend line is unmissable.
It's not clear to me how much experience with serverless architectures the author of the parent comment has, but speaking as someone with plenty, the operational costs of serverless are at least equal to managing stateful infrastructure, with much less control when things go wrong. Lambda was a major step up in long term predictability compared to for example App Engine, where there have been plenty of instances of overnight unannounced changes, or changes announced with incredibly short notice period, requiring developer time rather than ops time to bring an application back to service.
On the ops side even with a platform like Lambda, training an operations team to take over maintenance of a nested spaghetti of random interlinked services and bits of YAML trapped in random parts of the cloud is a total nightmare. The amount of documentation required and even the simple overhead of enumerating every dependency is a long term management burden in its own right. "The app is down" -> escalate to the developers every single time.
Compare that to "the app is down", "this app has basically no ops documentation", "try rebooting the instances", "ah wonderful, it came back"
I'm pro-cloud in many ways and even pro-serverless for certain problems, but let's not close our eyes and pretend dumping everything into these services is anything like a universal win.
This. 100 times this.
Also, in several places most of the service downtimes are due to, you know what? Application bugs, not infrastructure outages. Sure, they happen as well, and being on a good cloud provider mitigates a lot of them (but not all of them!), but if you increase the application design complexity you will increase those downtimes too. Yeah sure, there are tens of really good engineering departments where everything is properly CI/CD'd and automated, and they can scale to thousands of services without skipping a beat... but that's not the reality for thousands of other smaller/less talented shops. So, "moving to serverless" will not just automagically fix all of your problems.
Also - and I'm an infra guy so I'm probably biased - I don't really get all this developer anxiety to outsource infra needs. Yeah, if you are 2 devs working on your startup it makes sense, but when you scale up the team/company, even with serverless, you WILL need to dedicate time to infra/operations, time that doesn't go into strictly business-related code. Having somebody dedicated to this is good for both.
I haven't done anything with serverless, but surely the class of problems that would be fixed by an instance restart doesn't happen in the first place on serverless.
It was intended more to evoke general ideas about ease of management than to be a specific remediation. However, elsewhere in the thread there is an example of a diagramming tool split out across 37 individual AWS services/service instances. In a traditional design, this is conceivably something where all state and execution could easily fit in one VM, or perhaps one container with the state hoisted off to a managed service. In that case we could conceivably fix some problems with an app like that literally just by kicking the VM.
I don't think you're wrong, I just think you're not looking far enough ahead.
What we have now is very primitive compared to how app development might work in the future; serverless is laying the foundation for a completely different way of thinking about software development.
It's more back to the mainframe model of software development. I did this back in the 90s and I never had to think about scaling. Granted these were just simple crud / back-office apps.
But I can see how it would work for most modern software.
>training an operations team to take over maintenance of a nested spaghetti of random interlinked services and bits of YAML trapped in random parts of the cloud is a total nightmare. The amount of documentation required and even the simple overhead of enumerating every dependency is a long term management burden in its own right.
Upon learning about it some time ago, this was exactly my conception of what a Lambda-like serverless architecture would yield.
And it would seem difficult, if not impossible, for any dev to maintain a mental map of the architecture.
I've been in this area professionally for some time now, and I've never "maintained" servers in any reasonable sense. There are kernel people who maintain the kernel and there are Debian devs who maintain the operating system. The server may be mine (but more often than not, it isn't), but only in very specific circumstances do I ever concern myself with maintaining any part of this stack.
A vanilla Linux VM is a platform to build on. Just like AWS or anything else. It is the environment in which my software runs.
Thus far, something like Debian has been more stable and much less of a moving target than any proprietary platform has been, cloud or non-cloud. Should a client wish to minimize maintenance costs of the software over the coming decade, it is most cost effective to not depend on specialized, proprietary, platform components.
That may change in the future, but right now there is no indication that is the case.
That's exactly the argument for why you should go serverless. If all you do is keep a vanilla Linux distro running in a VM with occasional updates and some initial config magic (webserver, certs, iptables, ssh etc.), why even bother? The serverless version isn't going to be any different, other than it just runs. No need for a cron for updates, no iptables, no certs or webserver stuff... just put your app on it and let it go. On the other hand, if you actually need to tinker with your OS, roll your own. But what you described is the prime example of the serverless target audience.
One reason serverless isn't the solution for most applications is that you're basically making yourself entirely dependent on the API of one specific cloud host. If Lambda decides to double its price, there's nothing you can do about it but pay. If you need to store your data in a specific place (for example, in Russia, because all Russian PII must be stored within Russian borders), then you're out of luck. And best of luck to you if you're about to catch a big customer but they demand your application run on their premises.
There's also the long-term guarantee; if you write an application that runs on Ubuntu Server or Windows Server now, you can bet your ass that it will still run, unchanged, for another 10 years. The only maintenance you need to do is fix your own bugs and maybe help out with some database stuff. If you deploy a Lambda app now, you have nothing to guarantee compatibility for such a long time other than "Amazon probably won't change their API, I think".
If you built everything in proprietary infrastructure, porting is a lot more work.
Using Lambdas ties you to AWS, because as soon as you use a few Step Functions, or you have a few Lambdas interacting, changing to Azure or GCP becomes a huge pile of dev work and QA.
Having everything "just run" on Linux instances lets you be perfectly portable, and now you can actually shop around.
Kubernetes, microservices, distributed systems, SPA apps. Getting a development environment to reproduce bugs takes up so much of my time these days (a fair bit is because of our crappy system, but there's also a lot of inherent complexity in the system because of the architecture choices listed above). We get the promise of scaling, but most places making these choices don't actually need it.
The above comment was intended to answer exactly that: because it is much less maintenance in the long run.
Had you deployed an application to a proprietary cloud platform ten years ago, a handful of those services would have had their APIs changed or even been sunset by now.
Yeah, but those "very specific circumstances" can come up at inconvenient moments. Certbot didn't run, the disk got full, the CPU is hitting 100% for some reason, Debian needed to be upgraded. These are all things that need to be taken care of, sometimes right away, and just when your family needs you.
I agree that it almost never happens, and that's why I run Debian as well. However, if you run production then things happen.
Just like when a component of the serverless environment is changed slightly and unexpectedly breaks your system, only then you have even less control when trying to debug the issue.
I don't do any server admin. My code runs in docker on pretty much any server I can get my hands on. Some of my code runs on a ThinkPad stashed behind my desk, on DigitalOcean, on my Macbook. I could deploy to a Raspberry Pi and it would run just the same. It takes 10 minutes to deploy an exact copy to a new environment.
None of that requires OS maintenance. My house plants require far more maintenance than my software. I sometimes forget where some things run because I haven't touched them in years.
Serverless code runs on the cloud you built it for. I don't want that. I don't want to invest years of my life becoming an Amazon developer, writing Amazon code for Amazon servers.
That's without delving into the extra work serverless requires. There isn't a dollar amount tied to my import statements. I don't need software to help me graph and make sense of my infrastructure. I can run my code on my computer and debug it, even offline.
On hosts I manage professionally, I update/upgrade weekly after reading the notes - it takes a few minutes, I know I'm up to date and if there is anything I should be wary of.
On a personal debian server, I have an update/dist-upgrade -y nightly on a cron job, and I reboot if I read on HN/slashdot/reddit/lwn about an important kernel fix; Never had an issue, and I suspect it's about as secure and trouble free as whatever is underlying lambda -- with the exception that every 3-4 years I have to do an OS upgrade.
> None of that requires OS maintenance. My house plants require far more maintenance than my software. I sometimes forget where some things run because I haven't touched them in years.
Then how do you know they are still secure and even working?
Yes, deploying servers is very easy; maintaining and securing them is the hard part. Sure, you can automate the updates and it will work with a good OS distribution for some years. But no system is perfect, exploits are everywhere, even in your own configuration. And then it becomes tricky to protect your data.
The HDD died already. I opened it, moved the stuck head with my fingers and shoved it back in. I have good backups, and as someone else said, if it was important, it would not run on a recycled laptop behind my desk.
That hard drive event showed me how disposable the machine itself has become thanks to docker.
I didn't miss the point. You singled out my ThinkPad and I'm answering your questions.
My point was about having portable code on generic hardware, which in my opinion is a better bet than writing Amazon software for Amazon servers, and praying their prices don't change much.
Actually, you can't necessarily run your code smoothly on a Raspberry Pi, because it has a different architecture (ARM) and needs a different set of dependency images (if you're lucky, your deps are even available on ARM).
It's not nuts to run servers. If an application is operating at any scale such that there is a nonstop stream of requests, then it will be cheaper, faster, and more energy-efficient to run a hot server. This follows from thermodynamics. No matter how good the cloud vendor's serverless is, it's always going to be less efficient than a server, unless it doesn't do any setup and teardown (i.e. no longer serverless).
It is nuts to run one server for something that sees almost no traffic; then you're wasting money on an idle server/VM. That's what serverless is ideal for: stuff no one uses. That's a real niche. Who's going to use that? Not profitable companies.
Often I think that for most cases where you reach for serverless, you should reconsider the choice of a client-server architecture in the first place. An AWS Lambda isn't a server anymore; it's not "listening" to anything. Why can't the "client" do whatever the Lambda/RPC is doing?
Maybe what you want is just a convenient way to upload code and have it "just work" without thinking about system administration. The types of problems where you don't care about the OS are, once again, a niche. You probably don't even need new software for these kinds of things. You can just use SaaS products like Wordpress, Shopify, etc.
Serverless won't be profitable because the people who need it don't make money.
> Serverless won't be profitable because the people who need it don't make money.
You seem to be implying that only applications with huge numbers of users can be profitable. This statement ignores a tremendous number of (typically B2B) applications that provide enormous value for their users but don't see a lot of traffic.
I have worked on applications that are at the core of profitable businesses yet they can go days, in some cases weeks without any usage. Serverless architecture will be a real benefit there once it matures.
I don't think that's necessarily true. Google Cloud Functions gives you 2 million invocations free a month - that's roughly 0.8 per second sustained. You can keep adding another 2 million for $0.40 at a time. It's not terrible.
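For a rough sense of scale, here's the arithmetic behind those numbers (a back-of-the-envelope sketch using only the figures quoted above; check current GCP pricing before relying on it):

```python
# Back-of-the-envelope check of the free tier quoted above:
# 2 million invocations free per month, then $0.40 per additional 2 million.
# Assumes a 30-day month.

free_invocations = 2_000_000
seconds_per_month = 30 * 24 * 3600            # 2,592,000 seconds

sustained_rate = free_invocations / seconds_per_month
print(f"Free tier sustains roughly {sustained_rate:.2f} requests/second")   # ~0.77/s

extra_block_cost = 0.40                        # dollars per extra 2M invocations
cost_per_million = extra_block_cost / 2
print(f"Each extra million invocations costs about ${cost_per_million:.2f}")  # ~$0.20
```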
I agree with the suggestion when to use a server, but I think making it out to be an obvious physical law is a bit too far.
Serverless runtimes can be massively multi-tenant, and in cases like Cloudflare have very little overhead per tenant, so they can share excess capacity for spikes, which you would otherwise have to factor into your own server. This gives them a way to beat the dedicated server on thermodynamics. Maybe they will, maybe they won't, but I don't think that's the argument that matters.
> Maintaining your own server is completely nuts. If that isn’t obvious now, it will be in another decade. It’s massively inefficient. Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.
Except that it is not. The security and constant maintenance are needed, but they are worth it in many cases. And large companies cannot really offload all ownership of data and applications. Interest in servers has actually moved in the reverse direction due to the cloud effect.
I would say having your own server or application-hosting capacity is very similar to producing your own solar power and storing it in batteries. Is it simple? No. But the technology is improving, which makes it easier for people to adopt this paradigm.
When I see what is happening with WebAssembly/WASI in particular, I see a great future for self-hosting again. Software written in any programming language (as long as it targets WASM) is a lot easier to host than under existing models. Also, as I understand it, there is interoperability between software coming from different languages at the WebAssembly level.
> Software written in any programming language (as long as it targets WASM) is a lot easier to host than existing models.
For the past 20 years you have been able to target x86 gnu/linux and have it running without modification on a readily available server, either your own hardware or rented/public cloud. How does switching from one binary format to another (x86 to WASM) change anything (except maybe slowing down your code)? As I understand it, the main draw of WASM is running non-JS code in a web browser.
I use Google Cloud Run to run my serverless code for exactly this reason. GCR is literally just a container that runs on demand (with scaling to 0). Literally the only GCR specific part is making sure the service listens on the PORT env. If I was so inclined, I could deploy the exact same container on any number of services, host it myself and/or run it on my laptop for development purposes. There's also Kubernetes Knative which is basically (afaik) self hosted GCR.
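A minimal sketch of what that looks like in practice (plain Python standard library; the only platform-specific convention here is reading the PORT environment variable, and the handler name is illustrative):

```python
# Minimal container-friendly HTTP service: listen on whatever port the
# platform injects via $PORT, falling back to 8080 for local runs.
# Nothing below is specific to Cloud Run; the same image runs on a laptop
# or any Docker host.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a plain container\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```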
Cloud Run is excellent if I wrote my own application. My biggest issue is that most off-the-shelf open source software that ships a Docker container often uses a complicated docker-compose file, so even if the pieces can be deployed, they might end up waiting on each other's cold starts (which can be very long and expensive) and/or need more database-ish things than I want. So, obvious mistakes and unrealistic expectations aside, I have several Node.js and Crystal apps on Cloud Run which are running well, and just the concept of "Lambda for Docker containers" is pretty awesome. Cold starts are pretty harsh at the moment; hopefully they improve.
It varies massively based on the tech stack. I have seen both simple Spring Boot and Quarkus apps take in excess of 10 seconds to start up in JVM mode. However, Quarkus compiled to native binary with GraalVM starts consistently under a second (in the ~500ms range). This is still brutal compared to running it on a vanilla MacBook which usually takes less than 20ms.
That depends a lot on the use case. The ephemeral nature of serverless environments generally requires you to use proprietary solutions from the cloud provider for things like DB access and such. So you end up using DynamoDB instead of Postgres (as an example). You CAN make portable serverless code, but it generally requires a fair amount of work to do so.
Most actual FaaS code is quite portable; the configuration is what can’t move easily. And things like OpenFaaS and kNative make self-hosted runtimes completely flexible- it’s just a short-lived container.
Is there anything stopping an organisation from defining some standard types of serverless environments?
Is there anything stopping someone from turning that standard into implementations to help cloud providers offer it, or even be a fallback option that could be deployed on any generic cloud infrastructure?
The point of serverless for vendors is lock-in. Everything else about it is an annoyance, from their side (having to manage lifecycle, controlling load in shared engines, measuring resource usage...). But it locks people in and can be slapped with basically-arbitrary prices. The incentives to set standards are exactly zero, because once all providers support them and easy portability is achieved, it becomes a race to the bottom on price. Unless vendors can come up with some extra value-added service on top and choose to commoditize away the serverless layer, there won’t be any standard.
> Is there anything stopping an organisation from defining some standard types of serverless environments?
Yes. Basic economics. There is nothing _technical_ stopping 'an organisation' from making a federated twitter or facebook. But there are (evidently) insurmountable non-technical reasons: It hasn't happened / there have been attempts which have all effectively failed (in the sense that they have made no significant dent in these services' user numbers).
Why would e.g. Amazon (AWS) attempt to form a consortium or otherwise work together or follow a standard, relegating their offerings to the ultimate in elasticity? Economically speaking, selling grain is a bad business: If someone else sells it for 1ct less per kilo then the vast majority of your customers will go buy from someone else, there's no product differentiation.
Serverless lockin (such as GAE or AWS Lambda) is the opposite. No matter how expensive you make the service, your users will stay for quite a while. But make a universal standard and you fly in one fell swoop to the other end of the spectrum. If I have a serverless deployment and the warts of serverless are fixed (which would, presumably, involve the ability to go to my source repo, run a single command, give it some credentials, and my service is now live after some compilation and uploading occurs) - then if someone else offers it 1ct cheaper tomorrow I'll probably just switch for the month. Why not?
This cycle can be broken; but you're going to have to paint me a picture on how this happens. Government intervention? A social movement amongst CEOs (After the war, there was a lot of this going around)? A social movement amongst users so aggressive they demand it? Possible, but that would require that we all NOT go to serverless until the services offering it come up with a workable standard and make commitments to it.
I think it can happen simply through one serverless offering becoming very popular and other services (or open source projects) trying to reimplement the API of that offering. To some extent, this happened with Google App Engine. (AppScale)
I think cloud customers are savvy to the lock-in. That we're having this conversation is evidence of that. Perhaps AWS can achieve adoption of Lambda without needing to cater to customers who are cautious about getting locked in, but any challenger might find that it's much easier to gain customers if they also provide some form of an escape hatch.
As Jeff Bezos would say about retail, "your margin is my opportunity."
I disagree. Every AWS Lambda function I've ever written can be run as a regular node/python process. The Lambda-specific part is minuscule. If I wanted to run these on Azure or Google, only the most inconsequential parts of the function would need to be changed.
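To illustrate (a hedged sketch with made-up function names; only the handler(event, context) entry point is Lambda's convention, everything else is ordinary code):

```python
# The business logic is an ordinary function; the Lambda entry point is a
# thin adapter around it. Run this file directly and the same logic works
# as a plain local process, no AWS involved.
import json

def greet(name: str) -> dict:
    # Provider-agnostic application code.
    return {"message": f"hello, {name}"}

def handler(event, context):
    # AWS Lambda entry point: unpack the event, delegate to the plain function.
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps(greet(name))}

if __name__ == "__main__":
    # Run locally as a regular Python process.
    print(greet("local run"))
```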
In my experience, having started and abandoned side projects in both aws lambda and google app engine, half your project becomes:
* Well, obviously we use a hosted database
* And obviously, AWS provides our logging and all our analytics.
* Obviously when people call our lambda functions, they do so either through an AWS-specific API, or one constrained to a very limited set of forms.
* Of course, we can't blindly let everything access everything, so naturally we have IAM roles and permissions for every lambda function.
* Well, the cloud provider will look after secrets and things like that for us, no need for us to worry about database passwords.
* Naturally, with all these functions and IAM roles to look after, and tagging needed for billing, we should define it all with CloudFormation scripting.
* Well, the NoSQL database they provide comes with their specific library. And since it shards things like this, and doesn't let you index things like that, you've got to structure your data this specific way if you want to avoid performance problems.
* You don't want your function to take 200ms+ to respond, your users will notice how slow it is. So no installing things with apt-get or pip for you; let me get you a guide on how to repackage those into the vendor-specific bundle format.
* You want to test your functions locally, with access to an interactive debugger? You're living in the past, modern developers deploy to a beta environment and debug exclusively with print statements.
* And so on.
In this case, a lot of the 'complexity' one hoped to eliminate has just been moved into XML files and weird console GUIs.
This, but it was told by serverless experts, on stage, in front of hundreds of people. If that was their sales pitch, their reality was likely even less impressive.
This sounds a bit like the posh workshop I went to on WAP (Wireless Application Protocol) years ago when I worked for BT.
It was a complete omnishambles - to the point that I avoided the fancy lunch and went to the pub for a ploughman's lunch, in case I suddenly blurted out "this is all S*&T" and caused a political row with the mobile side of the company I worked for.
I've never tried anything marketed as serverless other than Google App Engine. If you wanted any performance, you had to follow the guidelines really closely. Which could be legitimate, if it were worth the effort, but I think it isn't. I think people underestimate the effort, and the code ends up requiring far more not-so-nice optimizations than expected. That includes very verbose logging. It's sold as carefree and elegant, but it only works when using patterns that nobody enjoys. I really liked the log browser and dashboard though ;) It's like a stripped-down version of New Relic and Elasticsearch combined.
Most of your points are not relevant to my original statement about the code being generic - they are more about the architectural decisions. If I want to use DynamoDB I can do so from EC2 or Lambda; serverless doesn't dictate that. You also seem to believe one chooses Lambda only to reduce complexity, and that's not really the only reason. I can very easily port a node/express API backend that connects to RDS to any other cloud provider. What about serverless makes you think that's not the case?
> Every AWS Lambda function I've ever written can be ran as a regular node/python process. The lambda-specific part is miniscule.
Of the actual function code, sure.
Of course, if you aren't manually configuring everything and are doing IaC, that isn't generic. And if you are supporting the Lambda with any other AWS serverless services, the code interfacing with them is pretty AWS specific.
I was only talking about function code. It should be painfully obvious to anyone that if you opt to leverage other AWS services that those are things you would ultimately have to replace but those decisions have nothing to do with serverless.
> but those decisions have nothing to do with serverless.
Sure they do.
Because if you need a DB for persistence you are either setting up a server for it (and therefore not serverless, even if part of your system uses a Lambda) or consuming a serverless DB service. And so on for other parts of the stack. On AWS, for DB, that might be Dynamo (lock-in heavy) or, say, Aurora Serverless (no real lock-in if you don't use the RDS Data API but instead use the normal MySQL or Postgres API, but that's higher friction—more involved VPC setup—to use from Lambda than RDS Data API is, so the path of least resistance leads to lockin.)
Lambda or other FaaS is often part of a serverless solution, but is rarely a serverless solution by itself.
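As a rough sketch of that lower-lock-in path (assuming credentials live in environment variables and psycopg2 as one possible driver; the more involved VPC wiring mentioned above is the real friction, not this code):

```python
# Talk to Aurora Serverless (or any Postgres) over the standard Postgres
# protocol instead of the RDS Data API. Point DB_HOST at a different
# Postgres tomorrow and nothing here changes.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ.get("DB_NAME", "app"),
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])

conn.close()
```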
What are you going to do when the container solution / node.js instance / insert x component here crashes? Wait for support to do something about it when you can fix it instantly? Or when you want to deploy a gRPC / crypto daemon to communicate with your back-end?
As an experienced back-end developer and Linux user, I would pull my hair out if I were completely helpless to fix an issue or implement some side thing that requires shell access. I don't want to wait for some guy in the Philippines who will be online 12 hours later to come try to fix it.
Well, when will it be mature and what will it take to get there?
My first experience with serverless architecture was back in 2007 or so when trying to port Google News to App Engine. That was a thoroughly painful experience, and things haven't exactly gotten much easier since. If you go back in time a decade, Google's strategy for selling compute capacity was App Engine. Amazon went the EC2 route. Reality suggests AWS made the better choice.
I can understand the superficial notion that having idling virtual machines is inefficient (because it is). But this reminds me a bit of tricks we did to increase disk throughput in systems with lots of SCSI disks back in the day. Our observation was that if we could keep the operation queues on every controller full as much of the time as possible, we'd get performance gains when controllers were given some leeway to re-order operations, resulting in slightly better throughput. Overall you got higher throughput, and thus higher efficiency, but from the perspective of each process in the system, the response was sluggish and unpredictable. Meaning that if you were to try to do any online transactions, it would perform very poorly.
For a solution to be desirable it has to be matched with a problem that needs solving.
As the article points out, there are some scenarios where serverless architectures might be a good design paradigm. But extrapolating this to the assumption that this paradigm is a universal solution requires not only a leap of faith, but it also requires us to ignore observed reality.
So you owe me some homework. Tell me what needs to happen for serverless architectures to reach "maturity".
The midway point between "maintain your own servers" and "wholly in the cloud" is "configuration as code", using chef and terraform, or similar tools.
You don't patch your OS or your apps; you define their versions and configuration in code and it gets built for you. It's typically snapshotted at that point and made into a restartable instance image that can simply be thrown away if it's misbehaving, and rerun from the known-good image.
Sometimes I wonder if this is propaganda by cloud/serverless providers to get everyone to jump on it and get locked in. The serverless black box kind of sucks, apart from “auto scaling” stuff. Crap performance too.
Auto-scaling is usually a myth anyway. You have to understand your system deeply and where all the bottlenecks are to really scale. If you have a part that's a big black box, that's going to get in the way of that.
I've never really understood this argument for serverless. Everything you do in AWS is through an API. I've never quite understood how one set of API calls to provision an EC2 instance (or ECS cluster) is so much more complicated than another set of API calls to create a serverless stack. If anything, my experience has been the complete opposite: provisioning a serverless stack is much more complicated and opaque.
> Serverless models don’t require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can produce generic code, and then upload it to the serverless framework, and watch it run.
Instead, developers can build applications that are compatible with particular serverless frameworks.
>The future of software development is going to be defined by cloud providers. They’re going to define the language ecosystem, the canonical architectures for apps etc...
That's... quite depressing to consider, actually. I long for a return to the internet of yore, when native apps were still king and not everything was as-a-service.
I agree. But you can still do native apps today, if you're willing to jump through all the hoops that OS vendors set in your path and maybe renounce a couple of platforms (like Chromebooks). Unfortunately, now that OS vendors are basically also cloud providers, their incentives are set to increase those hoops (“for secuyriteh”), nudging more people towards “easier” cloud deployments.
It's massively inefficient, but it's also still massively safe in comparison, in terms of sovereignty.
Like, today, given the political uncertainty in the USA, any large company would be nuts to bet on hosting their critical services on US-dependent infrastructure without having a huge plan B already in the works.
Is there a serverless provider with a "self hosted node" option that could be used as a fallback? That pretty much defeats the purpose of serverless, but at least you could hang on while figuring out how to transition to a new solution if the provider fails.
That has always been my sticking point, with AWS in particular. I don't trust AWS to exist forever, and it's definitely not without its own ongoing maintenance issues. Locking my entire business into their ecosystem seems risky at best.
Something I see really often in startups is huge dependencies in the form of SaaS. It should be no secret to those in tech that many of these businesses will not be around in 3 years. Even the likelihood of their service staying the same for 3 years is pretty low. I have been bitten by enough deprecated services, APIs and Incredible Journeys that I am wary.
In my experience (maybe it just isn't there yet), serverless has many more points of failure than a conventional infrastructure. I am sure there is a lot of software that really needs the scale the cloud can provide. The funny thing is that we tend to use it for tiny apps, some IoT voice interfaces or B2B tools that we don't want in our corporate network.
I doubt cloud providers can dictate environments; other providers would quickly fill the gap to meet developer preferences. I also think that more developers care about lock-in these days.
> The future of software development is going to be defined by cloud providers
That's probably true of web development, but "software development" writ large is much more than just cloud providers and webapps. "Software" encompasses everything from embedded microcontrollers to big iron mainframes that drive (yes, even today, even in 2030) much of the world's energy, transportation, financial and governmental infrastructure.
I think you've made a fair point, which I did have in mind but didn't write down - that there is a dichotomy between embedded and cloud systems, and the definition of software development will be to a lesser extent defined by the embedded side. Apple, for example, will have clout.
But long-term I think the cloud providers will accumulate so much power that the embedded side will follow their diktats.
Big iron mainframes have longevity, but will absolutely die out close to complete extinction - I've worked on those systems, I understand their strengths and the legacy issues, and there's no way that cloud isn't going to gobble up that market, it's just going to take a long time (as you say, beyond 2030).
Serverless is already here, it's just unevenly distributed. Instead it's called (managed) Kubernetes and yeah, you need a devops team and there's still a bunch of overhead but like you say - the writing's on the wall.
>> Like running your own power plant to serve your factory
With newer power technologies becoming more affordable and effective, solar, wind, & storage are increasingly being used to power factories and other businesses.
It's all about control of your product and operations. If it is economically feasible, it's always better to control your own stack all the way down.
Does serverless actually deliver better control over your development, portability, reliability, security, etc. for your application and situation, or not?
This sounds a bit like the "When will they turn off the last mainframe?" arguments a while back - I wouldn't expect servers to disappear either...
Anyone moving to HITRUST or SOC 2 loves serverless for that exact reason. When asked how I maintain my infrastructure, I point to RDS, API Gateway, and Lambda. This leaves my security effort mostly free to focus on application-level security.
Those are the same arguments as the ones put forth in the article under "the promise of serverless computing". It remains to be seen if they can be realized without the downsides.
The difference between time-sharing and serverless is that the former solved the issue of expensive personal computing, until cheap personal computers took over that market. The latter solves perceived expensive computing on the "server" side.
But what does serverless solve exactly? It doesn't solve a technical problem; rather, it addresses concerns on the business side. Serverless solves a cost problem.
First, computing needs aren't linear; they fluctuate. And so there's a problem of under- and over-utilization versus availability of resources. Serverless approaches computing power like tap water: you're essentially paying for the CPU time you end up using.
Second, elasticity. Instead of having staff struggle - losing time - with the fine intricacies of autobalancers, sharding and what not; you outsource that entirely to a cloud provider. Just like a tap, if you need more power, you just turn the tap open a bit more.
Finally, serverless services abstract any and all low level concepts away. Developers just throw functions in an abstraction. The actual processing is entirely black box. No need to worry about the inner details of the box.
Sounds like a good deal, right?
> Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.
Well... no. Outsourcing all of that to a third party cloud computing vendor doesn't dismiss you from your responsibility. All it does is shift accountability to the cloud provider who agreed to take you on as their customer. Securing your factory still very much includes deploying a secure digital solution to manage your machinery and process lines.
Plenty of industries wouldn't even remotely consider outsourcing critical parts of their operations, and this would include digital infrastructure. And this is regardless of the maturity of serverless technology. Risk management is a vast field in that regard.
Then there's legal compliance. There are plenty of industry specific regulations that simply don't even allow data to be processed by third party cloud services unless stringent conditions are adhered to. Medicine, banking and insurance come to mind.
Finally, when it comes to business critical processes, businesses aren't interested in upgrading to the latest technology for the sake of it being cutting edge. They want a solution that solves their problem and keeps solving that problem for many years to come. Without having to re-invest year after year in upgrades, migrations and changes because API's and services keep shifting.
Does that mean that there isn't a market for serverless computing? Of course there is. Serverless computing is a JIT solution. It's an excellent solution for businesses at a particular stage of their growth. And it closes the gap in plenty of fields where there really is a good match. I just feel that "maintaining your own server is completely nuts" is a bit overconfident here.
We had that utterly compelling framework in the early 2000s. Write generic code, upload to any provider you want, watch it run - that's exactly how shared-hosting PHP worked in ye olden days and to this day no one has made a developer experience as nice and It Just Works as that.
This is on the assumption that there isn't a balkanization of serverless frameworks between providers. History doesn't bear this out. In 2020 we still need tools like BrowserStack, even after decades of web developers complaining about the fragmented ecosystem.
Instead of maintaining software compatible with the right operating system, you're maintaining software compatible with the right flavor of serverless by Cloud Provider. Now we're back to square one on at least one front.
On the control aspect, the bias against giving up control is not an unwarranted one. Maintaining control of critical infrastructure is extremely important, and in fact outsourcing your critical infrastructure is an existential risk, and not just in an academic sense. When you give up control, you give up the ability to prevent your infrastructure from being run not maliciously, but incompetently. In these cases it reduces the quality of your product for your customers.
I won't even go into the anti-competitive tactics Amazon themselves get into that make them not a good choice for your infrastructure. Instead I'll draw upon a recent experience that illustrates why outsourcing infrastructure, even at a higher level, is a bad idea.
My girlfriend recently was taking her NLN exam remotely. They weren't allowed to use calculators of their own, they had to use a virtual calculator provided by the company administering the test. Like most of these companies they are doing remote proctoring of the exams. During her exam this virtual calculator flat out wasn't available. The proctor told her to simply click through the exam and that once she submitted it she'd be able to call into customer service to get the exam re-scheduled due to the technical difficulties.
Well, that wasn't the case. After doing some deep digging for her, here is what I found. The testmaker NLN contracted out the test administration to a third party, Questionmark. Questionmark in turn contracted out yet another third party, Examity, to handle the proctoring. Examity proctors don't have access to Questionmark's systems. Questionmark doesn't have access to NLN's systems, etc. So how did we get this resolved? I had to track down the CEO of Questionmark, the CEO of Examity, and the head of testing for NLN. I had to reach out through LinkedIn InMail to get this on their radar. And then it was handled (quickly and efficiently, I might add!). However, frontline support for each of these companies could do nothing. They just had to offload blame onto the support staff of each other. Another aspect of this is that each handoff between third parties creates a communication barrier. In this case the communication barrier seems to have kept Questionmark from configuring this specific test correctly. I wouldn't blame any of these companies for this specific failure mode because it's just the nature of what happens when you offload your work to third parties.
When you say, "Oh it's great we don't have to worry about X because Y can do it." The implication is that you lose all of the power of a vertically integrated company by essentially spinning off tons of subsidiaries and creating a communication overhead both before and when problems DO arise.
What is the future of software development going to look like when it reaches consumers and you have to say "Sorry, we can't fix that issue because Cloud Provider has to get back to me, and then in the background Cloud Provider has to say sorry we can't get back to you because we have to wait for our spunoff hardware division to get back to us?"
Maybe this type of business is fine for fun apps, but it's not fine for a lot of businesses. Even SLAs and disclaiming responsibility in your own contracts won't save your reputation. All it does is protect you financially!
Maintaining your own server is completely nuts. If that isn’t obvious now, it will be in another decade. It’s massively inefficient. Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.
TANSTAAFL. Let’s say serverless becomes commoditised the same way as electricity. What are the margins in that business? What are the margins for AWS and the other providers now?
There are very strong reasons to believe that serverless will offer convenience at a premium price, forever.