
As a developer at a company that's trying to shove Lambda down our throats for EVERYTHING...AWS needs to get better at a few key things before Lambda/serverless become viable enough that I'll actually consider integrating them into my services:

1. Permissions are a nightmare. You sometimes have to give both the users trying to run Lambdas and the Lambdas' service accounts crazy-broad permissions just to get anything to work. This is especially true with a lot of the "make serverless easy!" libraries out there, which seem to want an IAM user with all sorts of access that's against our normal security policy.

2. Networking is equally nightmarish. We operate lots of VPCs connected to on-premise hardware and network infrastructure via Direct Connect, and the only path out to the internet is through our corporate data center (again, security requirements). Most Lambda-centric tools don't seem to like this arrangement, and often barf when there isn't an internet gateway attached to the VPC. Moreover, especially in development environments we use relatively small subnets, which Lambda can consume nearly instantly if a large volume of requests comes through. Finally, there's some secret sauce to running Lambdas in VPCs that I still don't quite have down; it only seems to work half the time (a rough sketch of the wiring involved is below the list).

3. If the future of compute is serverless, then Lambda, Google Cloud Functions, and whatever half-baked monstrosity Azure has cooked up are going to have to get together and define a common runtime for these environments. I refuse to invest in serverless until I know that I can run the same code on minikube on my laptop as I can in <insert cloud provider here>, because vendor lock-in is real and will bite you in the ass one day.
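
To make point 2 concrete, here's a minimal sketch (Node/TypeScript with the 'aws-sdk' v2 package) of attaching a function to a VPC. The function name, subnet IDs, and security group ID are all made up for illustration, and the extra role permissions are noted in the comments:

    // Putting a Lambda inside a VPC means its execution role needs ENI permissions
    // (ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, ec2:DeleteNetworkInterface)
    // on top of the usual CloudWatch Logs actions, and concurrent invocations can end up
    // claiming ENIs/IPs out of the subnets listed here, which is how small dev subnets get eaten alive.
    import { Lambda } from 'aws-sdk';

    const lambda = new Lambda({ region: 'us-east-1' });

    lambda.updateFunctionConfiguration({
      FunctionName: 'my-internal-service',                 // hypothetical function name
      VpcConfig: {
        SubnetIds: ['subnet-aaaa1111', 'subnet-bbbb2222'], // hypothetical private subnets
        SecurityGroupIds: ['sg-cccc3333'],                 // hypothetical security group
      },
    }, (err, conf) => {
      if (err) console.error('VPC wiring failed:', err);
      else console.log('Now attached to subnets:', conf.VpcConfig);
    });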

/rant

With reference to this article specifically: please, please chop off my hands and pull out my eyeballs if cloud computing becomes yet another workflow engine. I thought we killed those off in the 90s.




> please chop off my hands and pull out my eyeballs if cloud computing becomes yet another workflow engine

I'll second that.

Moreover, I'm not sure who the author of this article imagines will fix all of these busted building blocks when ill-formatted or out-of-bounds data starts arriving from one of these IoT devices, not to mention non-standards-compliant data.


Yay! Another revolution! I was starting to miss the IoT revolution, and the chatbot revolution, and...

But, seriously. I read this the other day:

https://www.theatlantic.com/technology/archive/2017/09/savin...

There's a discussion on HN somewhere. TL;DR: we're building complexity into software that developers can't keep in their heads any more, which is leading to more and more errors.

It crossed my mind today when I was fiddling with some AWS stuff. Keeping a mental-map of all of these services, security groups, VPC topology, IAM configs, etc. is also growing more difficult.

And now, this. I haven't done Lambda work, but the paradigm would seem to contribute to mental noise. On one hand, it seems liberating to move away from bound servers, but who maintains the configs and mental model for all of the "unbound" processes that can "materialize", do work, then quiesce again? What about the dependency map between Lambdas? Overall, it seems they'd kind of become "phantoms" over time vs. servers.

> Permissions are a nightmare. You sometimes have to give both the users trying to run Lambdas and the Lambdas' service accounts crazy-broad permissions just to get anything to work.

And once you've done this, it has to be recorded and mentally mapped.

On a related note, I find much of AWS to be this way. For all of its success, it is not the clearest to navigate, especially on the interoperability front. It's not always obvious what you need to do to connect things the first time around on any given service(s), and it requires more trial and error than I'd like. When it doesn't work, it generally just doesn't work. Is it a security group? VPC setting? Firewall? IAM role? Something I don't know that I don't know? Do I even need this or that component?

It's always seemed that there's a lot of documentation on the parts, but something missing on the whole: the overall cohesiveness.


+10.

Lambdas are the unattainable Chalice.

The API Gateway that you have to go through is byzantine.

Then Route 53, CloudFront, yada yada. AWS has become such a massive bureaucracy: it changes so fast, and the documentation is incomplete.

The other issue is the inability to keep Lambdas 'hot', as they often take several seconds to load.

And of course there's no easy shared memory or persistent file access either.

Lambdas are simply not designed for us web-app folks; they are designed for back-end dev-ops and IT tasks.

I think it's been like 5 years now and they are still a mess. I really want to use them, but I won't.


> Lambdas are simply not designed for us web-app folks; they are designed for back-end dev-ops and IT tasks.

Absolutely. I see people trying to serve websites out of cloud functions. Reminds me of the mid 90s when people ran httpd from inetd!


Yeah, but I think there is a 'massive' business case that Amazon is missing out on.

Are any PMs reading this?

Give us:

+ Lambdas without the clumsy API Gateway (or at least a simple 'pass-through' mode).

+ Support direct integration with Express or whatever, so we don't have to create Lambda-specific code

+ Allow us to keep them 'hot' so latency is low.

+ Give us a way to access some kind of persistent file storage with 'fs'.

+ Same thing for shared memory.

If this were true, I would see a massive flood of people moving to Lambdas.

Nobody likes to have to manage Linux instances. We'd all rather something more manageable and easy to use.

I feel we are on the cusp of this, and I still can hardly believe nobody has 'got it right' just yet.


> + Lambdas without the clumsy API Gateway (or at least a simple 'pass-through' mode).

You can call AWS Lambdas directly through the AWS SDK (or trigger them from dozens of other event sources like SQS, timers, etc.).
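
For example, a minimal sketch with the Node SDK ('aws-sdk' v2); the function name and payload are made up:

    import { Lambda } from 'aws-sdk';

    const lambda = new Lambda({ region: 'us-east-1' });

    lambda.invoke({
      FunctionName: 'resize-image',                        // hypothetical function
      InvocationType: 'RequestResponse',                   // synchronous; use 'Event' for fire-and-forget
      Payload: JSON.stringify({ key: 'uploads/cat.png' }), // hypothetical payload
    }, (err, res) => {
      if (err) throw err;
      if (res.Payload) console.log('Lambda replied:', res.Payload.toString());
    });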

> + Allow us to keep them 'hot' so latency is low.

You can keep Lambdas "hot" with periodic invocations; it's not standardized, but it's a thing.
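
The usual shape of the hack: a CloudWatch Events schedule (say, rate(5 minutes)) invokes the function with a marker payload, and the handler returns early on those pings. The { warmup: true } marker below is just a convention I'm making up for illustration, not anything AWS defines:

    // Short-circuit on warm-up pings so the container stays resident without doing real work.
    export const handler = async (event: { warmup?: boolean; [key: string]: unknown }) => {
      if (event.warmup) {
        return { statusCode: 200, body: 'still warm' };
      }
      // ...real work goes here...
      return { statusCode: 200, body: 'did the real thing' };
    };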

> + Give us a way to access some kind of persistent file storage with 'fs'.

There's also access to 512 MB of scratch space on /tmp. It's not always cleared between invocations of your function, especially if it's invoked multiple times in succession, so you can use it as a "scratchpad" that provides some soft caching under high load.
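
A sketch of that pattern, treating /tmp as a best-effort cache that only survives while the same container is reused (the cache file name and the fetch helper are made up for illustration):

    import * as fs from 'fs';
    import * as path from 'path';

    const CACHE = path.join('/tmp', 'lookup-table.json');     // hypothetical cache file

    // Stand-in for whatever expensive fetch/compute you'd normally do on a cold start.
    async function fetchTableFromS3(): Promise<Record<string, string>> {
      return { example: 'value' };
    }

    async function loadLookupTable(): Promise<Record<string, string>> {
      if (fs.existsSync(CACHE)) {
        return JSON.parse(fs.readFileSync(CACHE, 'utf8'));    // warm container: reuse the scratch copy
      }
      const table = await fetchTableFromS3();                 // cold container: rebuild it
      fs.writeFileSync(CACHE, JSON.stringify(table));
      return table;
    }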

S3FS works if you just want to expose S3 through an 'fs' interface.


That's helpful, thanks, but none of those are full solutions.

Even if they are accessible via a JavaScript API, we need them accessible via regular HTTP for so many reasons.

'Periodic invocations'? My gosh, man, that seems like a big hack, requiring yet another service.

I get that there is /tmp, but it's not guaranteed to be persistent. I wish it were.

Anyhow, my main point is... Lambdas don't seem to be designed with us in mind :)

Thanks for the pointer on S3FS, I will definitely check that out.


I suspect the economics of a lambda that you keep hot would look very different from one that takes a few seconds to spin up. It would be billed much like you constantly pinging it, I expect. At that point, if you still wanted to be serverless, you would be looking at containers.


"I suspect the economics of a lambda that you keep hot would look very different than one that takes a few seconds to spin up. "

Surely. But what exactly are those economics? It can't be that bad.
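
Back of the envelope, assuming the list prices of roughly $0.20 per million requests and about $0.0000167 per GB-second (and ignoring the free tier), a 128 MB function pinged every 5 minutes for ~100 ms works out to something like:

    pings:    12/hour x 24 x 30            ~ 8,640 invocations/month
    requests: 8,640 x $0.0000002           ~ $0.002/month
    compute:  8,640 x 0.1 s x 0.125 GB     ~ 108 GB-s, x $0.0000167 ~ $0.002/month

So keeping a single function warm by pinging it is pennies. The catch is that one ping only keeps one container warm; any real concurrency still hits cold starts.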


+10. I just wonder what the pushback against these techs will look like, because that's what's going to happen when everything is in disarray and a mess. Someone still has to clean things up at 3 o'clock on a Saturday night.


Excuse my ignorance here, and trust that I'm actually asking because I'm genuinely curious, but what do you mean by a "common runtime"?

We use Cloud Functions for a large part of our infrastructure, and it's all just Node under the hood. Some of it we recently ported to Iron.io for cron runs, and it was a pretty smooth process. I'm just curious what changes you'd actually like to see made here.


I'd really love to see a "blessed" environment that I can be sure all of my lambdas are running on: Linux with <insert tools here> always preinstalled, a common process for building native extensions, tooling for making lambda-style compute a commodity rather than a differentiator. I basically don't want to have to do any work when I move from one function execution platform to another.

Granted, I'm a little spoiled in that regard since 90% of my work is on Kubernetes, and besides a few very small pieces of connective tissue everything is identical from cloud to cloud.


Gotcha, that makes sense. We've attempted to abstract the endpoint and pub/sub logic from Cloud Functions into common libs, but I suspect moving to Lambda with our full stack would still be a real bear.


Have you ever looked at NixOS or NixOps? Nix lets you define an entire OS configuration in one neat config file, and lets you start a shell with any packages you want, independent from the rest of the system.


Can I use it to manage the windows desktops my non-developer researchers use? Can I use it at a customer that only has windows-based server infrastructure, because Enterprise? I need the same system that deploys the server to also deploy the client. It seems every one of these solutions assumes you're writing a web application.


It sounds like your problem is proprietary Windows, and I feel for you. Other than the fact that Nix runs on Windows, it can't really help you with the consistent deployment thing.

We really need something we can point to to express and legitimize our frustration with Windows to those who don't understand how much more effort it takes to deal with a totally incompatible proprietary system.


I'll trade expressing frustration for a tool that actually solves the problem. Every time this topic comes up, I ask the same question hoping someone will point to a tool designed for this use case, and not just an afterthought compatibility mode for a different solution. Nix on Windows is a non-starter because it's not really Nix-on-Windows; it's Nix-on-Linux-Subsystem-for-Windows-10-Creators-Edition. Who's running Windows 10 Creators Edition in production? Give me something that runs on Windows Server 2012 and doesn't require all-or-nothing adoption.


I believe Azure Functions will get you some of the way toward where you want to go.

As far as I understand, you invoke them just like you would run a Docker container.


> whatever half-baked monstrosity Azure has cooked up

I think you'll actually find Azure is well in the lead on this one, well ahead of where Amazon is anyway.


This mischaracterization was a crime of passion. I apologize.


> I refuse to invest in serverless until I know that I can run the same code on minikube on my laptop as I can in <insert cloud provider here>, because vendor lock-in is real and will bite you in the ass one day.

If that is your concern, then you may want to consider that serverless isn't the right solution for your problem.

All the serverless runtimes across all the providers are basically identical. But that's not what makes them interesting. What makes them interesting are the ecosystems around them.

Lambda is the glue that binds AWS together. Most of your Lambdas should be AWS-specific, enough so that your vendor lock-in won't be because of the specific Lambda runtime, but because of the services that you're using with your Lambda.
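
To make that concrete, here's a sketch of a typical S3-triggered handler (event shape per the aws-lambda typings; the processing is made up). The handler body is trivially portable JavaScript; the lock-in is the S3 event wiring and whatever AWS services you call next, not the runtime:

    import { S3Event } from 'aws-lambda';   // typings from the @types/aws-lambda package

    export const handler = async (event: S3Event) => {
      for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = record.s3.object.key;
        console.log(`new object: s3://${bucket}/${key}`);
        // ...push to SQS, update DynamoDB, etc.: all AWS-specific plumbing
      }
    };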


This times a thousand.

Maybe in the future when AI can handle lots of that stuff we'll get there, but as of right now we have a LONG way to go.


Have you tried OpenWhisk? It would fit the needs you describe, I think.


OpenWhisk seems (from my very cursory once-over) to require actually laying down some servers, which is fine, but not really the brave new serverless world we're all gunning for. The promise of serverless computing is freedom from a fleet of boxen, and it seems to require a fleet of boxen.


No, OpenWhisk is a managed service on IBM Bluemix. It's one of the few services of theirs I'd recommend over AWS.


Small correction: OpenWhisk is an Open Source (incubating) Apache project, http://openwhisk.incubator.apache.org/ or https://github.com/apache/incubator-openwhisk. You can run OpenWhisk on your own hardware or in the cloud.

IBM Cloud Functions is a hosted, managed version of that service, https://console.bluemix.net/openwhisk/, running on the IBM Bluemix platform.


To be clear - there's the Apache OpenWhisk project (https://openwhisk.incubator.apache.org) which was initially developed and released by IBM.

IBM Bluemix, of course, offers OpenWhisk on their platform, but you can set up your own OpenWhisk platform if you want.

(disclosure: work for IBM, but not in Cloud)



