1. Permissions are a nightmare. You have to give both users trying to run lambdas and the lambdas' service accounts some crazy permissions sometimes in order to get anything to work. This is especially true with a lot of the "make serverless easy!" libraries out there, which seem to want an IAM user with all sorts of access that's against our normal security policy.
2. Networking is equally nightmarish. We operate with lots of VPCs connected to on-premise hardware and network infrastructure via Direct Connects, and the only path out to the internet is through our corporate data center (again, security requirements). Most Lambda-centric tools seem not to like this arrangement, and often barf when there isn't an internet gateway attached to the VPC. Moreover, especially in development environments we use relatively small subnets, which Lambda can consume nearly instantly if a large volume of requests comes through. Finally, there's some secret sauce to running Lambdas in VPCs that I still don't quite have down; it seems to only work half the time.
3. If the future of compute is serverless, then Lambda, Google Cloud Functions, and whatever half-baked monstrosity Azure has cooked up are going to have to get together and define a common runtime for these environments. I refuse to invest in serverless until I know that I can run the same code on minikube on my laptop as I can in <insert cloud provider here>, because vendor lock-in is real and will bite you in the ass one day.
With reference to this article specifically, please, please chop off my hands and pull out my eyeballs if cloud computing becomes yet another workflow engine. I thought we killed those off in the 90s.
I'll second that.
Moreover, I'm not sure who the author of this article imagines will fix all of these busted building blocks when some ill-formatted or out-of-bounds data starts arriving from one of these IoT devices, not to mention non-standards-compliant data.
But, seriously. I read this the other day:
There's a discussion on HN somewhere. TLDR; we're building complexity into software that developers can't keep in their heads any more, which is leading to more and more errors.
It crossed my mind today when I was fiddling with some AWS stuff. Keeping a mental-map of all of these services, security groups, VPC topology, IAM configs, etc. is also growing more difficult.
And now, this. Haven't done Lambda work, but the paradigm would seem to contribute to mental noise. On one hand, it seems liberating to move away from bound-servers, but who maintains the configs and mental-model for all of the "unbound" processes that can "materialize", do work, then quiesce again? How about the dependency map between Lambdas? Overall, seems they'd kind of become "phantoms" over time vs. servers.
>Permissions are a nightmare. You have to give both users trying to run lambdas and the lambdas' service accounts some crazy permissions sometimes in order to get anything to work
And once you've done this, it has to be recorded and mentally mapped.
On a related note, I find much of AWS to be this way. For all of its success, it is not the clearest to navigate, especially on the interoperability front. It's not always obvious what you need to do to connect things the first time around on any given service(s), and it requires more trial and error than I'd like. When it doesn't work, it generally just doesn't work. Is it a security group? VPC setting? Firewall? IAM role? Something I don't know that I don't know? Do I even need this or that component?
Always seemed that there's a lot of documentation on the parts, but that there's something missing on the whole/overall cohesiveness.
Lambdas are the unattainable chalice.
The API Gateway that you have to go through is byzantine.
Then Route53, CloudFront, yada yada - AWS has become such a massive bureaucracy - it changes so fast, documentation incomplete.
The other issue is the inability to keep Lambdas 'hot' - they often take several seconds to load.
And of course easy shared memory and persistent file access.
Lambdas are simply not designed for us web-app folks - they are designed for back-end dev-ops and IT tasks.
I think it's been like 5 years now and they are still a mess. I really want to use them, but I won't.
Absolutely. I see people trying to serve websites out of cloud functions. Reminds me of the mid 90s when people ran httpd from inetd!
Are you PM's reading this?
+ Lambdas without the clumsy API Gateway (or at least a simple 'pass-thru' mode).
+ Support direct integration with Express or whatever, so we don't have to create Lambda-specific code
+ Allow us to keep them 'hot' so latency is low.
+ Give us a way to access some kind of persistent file storage with 'fs'.
+ Same thing for shared memory.
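The "direct integration" wish above can be roughly approximated today with a small dispatcher: one Lambda receives API Gateway proxy events and routes them to plain handler functions, so the route code itself stays framework-agnostic. This is a minimal sketch; the route decorator and handler names are illustrative, not any library's API:

```python
# Minimal mono-Lambda router for API Gateway proxy events (v1 event shape,
# which carries "httpMethod" and "path" keys).
ROUTES = {}

def route(method, path):
    """Register a plain function as the handler for (method, path)."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/hello")
def hello(event):
    return {"statusCode": 200, "body": "hi"}

def handler(event, context):
    # Single Lambda entry point: dispatch on method + path.
    fn = ROUTES.get((event["httpMethod"], event["path"]))
    if fn is None:
        return {"statusCode": 404, "body": "not found"}
    return fn(event)
```

Libraries like aws-serverless-express take the same idea further by translating the proxy event into a real HTTP request object.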
If this were true, I would see a massive flood of people moving to Lambdas.
Nobody likes to have to manage Linux instances. We'd all rather something more manageable and easy to use.
I feel we are on the cusp of this and I still can hardly believe nobody has 'got it right' just yet.
You can call AWS Lambdas directly through the AWS SDK (or dozens of other event sources like SQS, Timers, etc.)
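For the SDK path, the call is just boto3's `invoke` on the Lambda client. A small sketch (the function name is a placeholder) that builds the arguments, so the shape is testable without an AWS account:

```python
import json

def build_invoke_args(function_name, payload, asynchronous=False):
    # Keyword arguments for boto3's lambda_client.invoke().
    # "Event" is fire-and-forget; "RequestResponse" waits for the result.
    return {
        "FunctionName": function_name,
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload).encode(),
    }
```

Usage would then be `boto3.client("lambda").invoke(**build_invoke_args("my-func", {"x": 1}))`.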
You can keep lambdas "hot" with periodic invocations; it's not standardized, but it's a thing
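In code, the warm-keeping trick is just a short-circuit at the top of the handler: a scheduled rule (CloudWatch Events / EventBridge) pings the function every few minutes with a marker payload, and the handler returns immediately so the container stays resident. The `{"warmup": true}` payload shape here is an assumption, not an AWS convention:

```python
def handler(event, context):
    # Scheduled warm-up pings (assumed payload shape) exit early and cheaply,
    # keeping the container resident without doing real work.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}
    # ... handle genuine events below ...
    return {"statusCode": 200, "body": "did real work"}
```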
There's also access to 512 MB of scratch space on /tmp. It's not always cleared after each invocation of your function, especially if it's invoked multiple times in succession, so you can use it as a "scratchpad" that provides some soft caching under high load.
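The scratchpad pattern looks like this in practice: check /tmp before doing the expensive work, since a warm container may still hold the file from a previous invocation. The cache path and stand-in "expensive fetch" are illustrative:

```python
import json
import os

CACHE_PATH = "/tmp/config-cache.json"  # may survive across warm invocations

def load_config():
    """Return (config, cache_hit). Reuses /tmp when the container is warm."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f), True  # warm container: soft cache hit
    config = {"greeting": "hello"}  # stand-in for an expensive fetch (e.g. S3)
    with open(CACHE_PATH, "w") as f:
        json.dump(config, f)
    return config, False
```

The key caveat is the one raised below: /tmp is best-effort, so the code must always be correct on a miss.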
S3FS works if you just want to expose S3 through an 'fs' interface.
'Periodic invocations'? My gosh, man, that seems like a big hack, requiring yet another service.
I get that there is /tmp - but it's not guaranteed to be persistent. I wish it were.
Anyhow - my main point is ... Lambdas don't seem to be designed with us in mind :)
Thanks for the pointer on S3FS, I will definitely check that out.
Surely. But what exactly are those economics? It can't be that bad.
We use Cloud Functions for a large part of our infrastructure, and it's all just Node under the hood. Some of it we recently ported to Iron.io for cron runs and it was a pretty smooth process. I'm just curious what changes you'd actually like to see made here.
Granted, I'm a little spoiled in that regard since 90% of my work is on Kubernetes, and besides a few very small pieces of connective tissue everything is identical from cloud to cloud.
We really need something we can point to to express and legitimize our frustration with Windows to those who don't understand how much more effort it takes to deal with a totally incompatible proprietary system.
As far as I understand you invoke them just like you would run a docker container.
I think you'll actually find Azure is well in the lead on this one, well ahead of where Amazon is anyway.
If that is your concern, then you may want to consider that serverless isn't the right solution for your problem.
All the serverless runtimes across all the providers are basically identical. But that's not what makes them interesting. What makes them interesting are the ecosystems around them.
Lambda is the glue that binds AWS together. Most of your lambdas should be AWS specific, enough so that your vendor lock in won't be because of the specific Lambda runtime, but because of the services that you're using with your Lambda.
Maybe in the future when AI can handle lots of that stuff we'll get there, but as of right now we have a LONG way to go.
IBM Cloud Functions is a hosted, managed version of that service, https://console.bluemix.net/openwhisk/, running on the IBM Bluemix platform.
IBM Bluemix, of course, offers OpenWhisk on their platform, but you can set up your own OpenWhisk platform if you want.
(disclosure: work for IBM, but not in Cloud)