1. Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.
2. Use managed services whenever possible. You should never handle a login event in Lambda, there is Cognito for that.
3. Think in events instead of REST actions. Think about which events have to hit your API, what can be directly processed by managed services or handled by you at the edge. E.g., never upload an image through a Lambda function; instead upload it directly to S3 via a signed URL and then have S3 emit a change event to trigger downstream processing (see the sketch after this list).
4. Use GraphQL to pool API requests from the front end.
5. Websockets are cheaper for high throughput APIs.
6. Make extensive use of caching. A request that can be served from cache should never hit Lambda.
7. Always factor in labor savings, especially devops.
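A minimal sketch of the direct-to-S3 flow from point 3, using boto3; the bucket name, key, and function names are made up for illustration, not a prescription:

    import boto3

    s3 = boto3.client("s3")

    def get_upload_url(event, context):
        # Hand the client a pre-signed PUT URL so the image bytes never pass through Lambda.
        url = s3.generate_presigned_url(
            ClientMethod="put_object",
            Params={"Bucket": "my-upload-bucket", "Key": "uploads/image.jpg"},
            ExpiresIn=300,  # URL is valid for 5 minutes
        )
        return {"statusCode": 200, "body": url}

    def on_image_uploaded(event, context):
        # Triggered by the S3 ObjectCreated event once the client has PUT the file.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"kick off downstream processing for s3://{bucket}/{key}")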
1. not use the programming language that works best for your problem, but the programming language that works best with your orchestration system
2. lock yourself into managed services wherever possible
3. choose your api design style based on your orchestration system instead of your application.
4. Use a specific frontend rpc library because why not.
I've hacked a few lambdas together but never dug deep, so I have very little experience, but these points seem somewhat ridiculous.
Maybe I'm behind the times but I always thought these sort of decisions should be made based on your use case.
EDIT: line breaks.
You’d simply upload an ear or war and the server you deployed to would handle configuration like the DB etc.
It worked perfectly (ear/war; the persistence framework was too verbose and high level imo, but that was replaced by hibernate/jpa). There was too much configuration in XML, but that could easily be replaced by convention, annotations and some config.
Again.. we are running in circles, and this industry will never learn, because most “senior” people haven’t been around long enough.
And that likely won't change in our lifetime, given the rate of growth in demand for software: we literally can't create senior engineers fast enough for there to be enough to go around.
As an aside, I have the privilege of working with a couple of senior folks right now in my current gig, and it's pretty fucking fantastic.
What's the biggest idea you've seen abandoned and then reinvented with a new name?
Optimistically, it could represent a positive trade-off that replaces perpetual upkeep with upfront effort, and all-hours patching and on-call with 9-5 coding.
In practice, I think a lot of those fixed costs get paid too often to ever come out ahead, especially since ops effort is often per-server or per-cluster. The added dev effort is probably a fixed or scaling cost per feature, and if code changes fast enough then a slower development workflow is a far bigger cost than trickier upkeep.
Moving off-hours work into predictable, on-hours work is an improvement even at equal times, but I'm not sure how much it actually happens. Outages still happen, and I'm not sure how much out-of-hours ops time serverless actually saves compared to something like Kubernetes.
Now that guy's workload has doubled and he needs another colleague to be able to deliver.
The worst for me is the vendor lock in, directly followed by the costs.
- OS version patching, patch windows & outages, change mgmt relating to patching and reporting relating to OS patching
- antivirus, the same patching management and reporting as above
- intrusion detection / protection, host based firewalls, the same patching and management as above
- Other agents (instance health monitoring, CMDB, ...)
- Putting all this junk on a clean OS image any time something changes, re-baking and regression testing everything
This all adds up, and can be a significant cost to an organisation - team(s), licenses, management, etc.
The only thing I'm sad about is that you can only host it on Azure. I'd love an open standard for function-like programming and hosting.
Of course, not everything is suitable for this model, and it certainly won't be the most cost-effective or performant way to do it for everything either.
The comment is saying that Lambda has limitations and works best when considering those limitations. If those limitations don't fit your use case, you shouldn't be using Lambdas - or, at least, don't expect it to be an optimal solution.
(If Reddit’s video hosting being built and operated on a serverless stack by a single engineer won’t convince you, I don’t know what will.)
I even tried to download them with youtube-dl but it doesn't work.
And of course, baremetal provides a much better ROI than VMs/VPS/<insert glorified chroot/jail here>.
That thing almost never works.
Then why comment? You clearly don't understand the use-case that AWS fits.
I've had jobs that took 18 hours to run on a single machine finish in 12 minutes on Lambda. I could run that process 4 times a month and still stay within AWS's free tier limits.
For the right workloads it is 100% worth realigning your code to fit the stack.
Because the instruction list from above isn't backed with any solid reasoning and because commenting is what people do on HN.
>You clearly don't understand the use-case that AWS fits.
Pray tell, what is this enviable use case that I so clearly do not grasp?
Ok, I'll bite. What takes 18 hours to run on a single machine but finishes in 12 minutes on Lambda?
You'd need some logic to shut down the instances once it's done, but the simplest logic would be to have the software do a self-destruct on the EC2 VM if it's unable to pull a video to process for X time, where X is something sensible like 5 minutes.
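For what it's worth, a rough sketch of that self-destruct idea (hedged: the queue URL, the 5-minute threshold, and the per-video work are all placeholders, and it assumes the instance profile allows ec2:TerminateInstances):

    import time
    import boto3
    import requests  # for the instance metadata lookup

    sqs = boto3.client("sqs")
    ec2 = boto3.client("ec2")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/videos"  # made up

    def process_video(body):
        # Placeholder for the real per-video work.
        pass

    def my_instance_id():
        # IMDSv1 for brevity; production code should use IMDSv2 tokens.
        return requests.get("http://169.254.169.254/latest/meta-data/instance-id", timeout=2).text

    idle_since = time.time()
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
        messages = resp.get("Messages", [])
        if messages:
            idle_since = time.time()
            for msg in messages:
                process_video(msg["Body"])
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        elif time.time() - idle_since > 5 * 60:
            # Nothing to pull for 5 minutes: terminate this instance.
            ec2.terminate_instances(InstanceIds=[my_instance_id()])
            break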
Doing that in series took forever. It took something like 14 hours to generate the screenshots, then 4 hours to upload them all. Spreading that load across lambda functions allowed us to basically run the job in parallel. Each individual lambda process took longer to generate a screenshot than on our initial desktop process, but the overall process was dramatically faster.
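The fan-out itself can be as simple as one asynchronous invocation per job; a hedged boto3 sketch (the function name and payload shape are made up):

    import json
    import boto3

    lam = boto3.client("lambda")

    def fan_out(page_urls):
        # Fire-and-forget one Lambda invocation per screenshot job
        # instead of looping through them on a single machine.
        for url in page_urls:
            lam.invoke(
                FunctionName="generate-screenshot",  # hypothetical function
                InvocationType="Event",              # async: don't wait for the result
                Payload=json.dumps({"url": url}),
            )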
This argument is basically: no counterpoint to the original post, but you can do things that are also easy on any other comparable platform.
Tell me again what I don’t understand?
Not if you already used your free tier. Lambda is X free per month. Free tier t2.micro is X free for the first 12 months.
As someone who has done both, it's far, far easier to stand up a lambda than it is to manage a cluster of servers.
In my mind the thing that makes lambda “easier” is they make a bunch of decisions for you, for better or worse. For real applications probably for the worse. If you have the knowledge to make those decisions for yourself you’re probably better off doing that.
The whole value proposition behind AWS is that they can do it better than your business due (directly or indirectly) to economies of scale. I think Kubernetes is super cool, but rebuilding AWS on top of Kubernetes is not cost effective for most companies--they're better off using AWS-managed offerings. Of course, you can mix and match via EKS or similar, but there are lots of gotchas there as well (how do I integrate Kubernetes' permissions model with IAM? how do I get Kubernetes logs into CloudWatch? how do I use CloudWatch to monitor Kubernetes events? etc).
We migrated our company webapp to Heroku last year. We pay about 8 times what a dedicated server would cost, even though a dedicated server would do the job just fine. And often times, people tell me "Heroku is so expensive, why don't you do it yourself? Why pay twice the price of AWS for a VM?"
But the Heroku servers are auto-patched, I get HA without any extra work, the firewall is setup automatically, I can scale up or down as needed for load testing, I get some metrics out of the box, easy access to addons with a single bill, I can upgrade my app language version as needed, I can combine multiple buildpacks to take care of all the components in our stack, build artifacts are cached the right way, it integrates with our CI tool, etc, etc.
If I had to do all of this by hand, I would spend hours, which would cost my company way more. In fact, I'd probably need to setup a Kubernetes cluster if I wanted similar flexibility. By that point, I'd probably be working full-time on devops.
At my previous company we had a project with an AWS deploy process that only two developers could confidently use. Teaching a new developer & keeping them up to date was a big time sink.
For comparison we had a Rails app setup on heroku that on day one junior devs were happily deploying to (plus we had Review apps for each PR!)
Granted that it does impose some limitations, and therefore isn't right for all apps. But it does seem like it would work for a large percentage of web apps and REST APIs.
- Corrupted build? Reverse button to the rescue.
- SSL? They got you.
- Adding new apps in less than 1m?
and so on ...
The big difference is that we are migrating away from Heroku to Kubernetes for the same reason.
Came here to post this and agree 100%. Moving to Serverless requires evaluating the entire stack, including the server side language choice, and how the client handles API calls.
Often a move to serverless is better accomplished gradually in stages than the quick "lift and shift" that AWS like to talk about so much. Sometimes you can simply plop your existing app down in Lambdas and it runs just fine, but this is the exception not the rule.
With custom runtimes that's not the case anymore. I write my lambdas in Rust.
Can't stress (7) enough, would also add 'morale' savings. It can be really stressful for developers to deal with gratuitous ops work.
Shouldn't this not be a problem if you're doing 10 million requests a day? If you have enough requests, your lambdas should stay hot most if not all the time.
Then if this is false, in your Application_Start() you can set this: TelemetryConfiguration.Active.DisableTelemetry = true;
At this point in time the only places I’d use lambda are for low-volume services with few dependencies where I don’t particularly care about p99 latency, or DevOps-related internal triggered jobs.
Can you show some empirical evidence that supports this? In my experience this is another nebulous serverless hype claim that doesn't withstand scrutiny.
No. There's nothing common sense about it. It only seems plausible if you read the sales brochure from a cloud vendor and have no experience with all the weird and whacky failure modes of these systems and the fact that none of the major selling points of serverless actually work as advertised unless you dedicate significant engineering time to make them work - as the GP comment has demonstrated. The amount of engineering time required to make serverless work quickly catches up to or even exceeds just doing the damn thing the normal way.
And that engineering time is not transferable to any other cloud vendor, and neither is your solution now. So congratulations you just locked your business in.
Serverless only makes sense if you have a fairly trivial problem and operate on really narrow margins where you need your infra and associated costs to scale up/down infinitely.
That’s exactly the point. The web application needs of most startups are fairly trivial and best supported by a serverless stack. Put it another way: If your best choice was Rails or Django 10 years ago, then it’s serverless today.
As a side note, I've learned that the existence of a large number of viable ways to accomplish a task is a pretty big anti-signal for the desirability of accomplishing that task in the first place. When I started my career in 2000, there was a huge debate over whether the "right" way to develop a desktop application was MFC or .NET or Java or Visual Basic or Qt or WxWindows. The real answer was "don't develop desktop apps, because the web's about to take over". When the big web 2.0 businesses were founded from 2005-2011, there were basically two viable options for building a webapp: Rails or Django. Now that everyone's arguing about microservices vs. Docker vs. Kubernetes vs. serverless vs. Beanstalk vs. Heroku, it's likely that the real answer is "develop a blockchain app instead".
That's... not true. The choice of web stack – and, in fact, the whole software – is just a piece of what a startup may need.
Seriously, look at the list of YC startups on 2018 and tell me if most couldn't use either something like Rails, or a Single Page App In React With A Serverless Backend. And it wouldn't matter one bit.
> it's likely that the real answer is "develop a blockchain app instead".
I hope that was sarcasm.
Pretty subjective statements, I suppose we don't have the same definition of "trivial".
> If your best choice was Rails or Django 10 years ago, then it’s serverless today.
Comparing the features of Rails or Django with serverless is like comparing a spaceship with a skateboard.
Well, not necessarily? This assumes that the implementation is sound, but it is not at all uncommon for abstractions to leak, which ends up causing more pain than it solves.
It is not at all obvious what it just can't be used for. In our case, Julia.
Software professionals often see benefits without understanding the tradeoffs.
- iRobot (maker of Roomba) has been running its entire IoT stack serverless since 2016 (architect Ben Kehoe is a worthwhile follow on Twitter)
- Reddit’s video hosting service is built and operated by a single engineer on a serverless stack
If "serverless heros" are running around promoting Lambda, newcomers will use it without thinking twice...
DevOps is not a synonym for "ops" or "sysadmin". It's not a position. DevOps is like Agile or Lean: it's a general method with lots of different pieces you use to improve the process of developing and supporting products among the different members of multiple teams. DevOps helps you save money, not the reverse.
You don't even need an "ops person" to do DevOps.
You rock! That Docker-based build system makes building those musl-based Rust binaries a snap!
Furthermore, people like to compare runtime startup times, but this tells a very small portion of the story. For most applications, the dominant startup cost isn't the startup of the runtime itself, but the cost of loading the app code into the runtime. Your node.js runtime has to load, parse, compile, and execute every single line of code used in your app, for instance, including all third-party dependencies.
Compare, for instance, the startup cost of a "hello world" node.js function with one that includes the AWS SDK. At least, six years ago, the Node.js AWS SDK wasn't optimized at all for startup and it caused a huge (10x?) spike in startup time because it loaded the entire library.
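The same effect is easy to see in any runtime; a hedged Python sketch that just times the heavy import on a cold start (numbers will vary by memory size and SDK version):

    import time

    _t0 = time.perf_counter()
    import boto3  # heavyweight dependency, loaded once per cold start
    _t1 = time.perf_counter()

    def handler(event, context):
        # Report how long the module-level import above took on this cold start.
        return {"boto3_import_seconds": round(_t1 - _t0, 3)}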
I would argue that the only languages that are a really good fit for Lambda are ones that compile to native code, like GoLang, Rust, and C/C++. The cost to load code for these applications is a single mmap() call by the OS per binary and shared library, followed by the time to actually load the bytes from disk. It doesn't get much faster than that.
Once you've switched to native code, your next problem is that Lambda has to download your code zip file as part of startup. I don't know how good Lambda has gotten at speeding that part up.
I guess the reasoning would be that this way the actual time spent in serverless code is shorter and by proxy the service becomes cheaper?
Granted, this approach is much lighter than uploading an image directly.
Doesn't sound like your point number 1 is valid at all, quite the opposite.
I can think of a number of other languages that would probably easily surpass these, especially on latency.
What does this look like in practice? Doesn't this increase response time for the initial requester?
I always sorta assumed that Amazon pre-initialized runtimes and processes and started the lambdas from essentially a core dump. Is there some reason they don't do this, aside from laziness and a desire to bill you for the time spent starting a JVM or CLR? Does anyone else do this?
"Serverless" is not a replacement for cloud VMs/containers. Migrating your Rails/Express/Flask/.Net/whatever stack over to Lambda/API Gateway is not going to improve performance or costs.
I have been experimenting with this approach lately and have been having some success with it, deploying relatively complex, reliable, scalable web services that I can support as a one-man show.
At least I don't have to learn that complex "systems admin" stuff.
It's entirely missing the point. At the end of the day, you have to look at your specific usage pattern and pick the best option for you. Obviously, as with any other technology, anyone who forces a specific option in every possible situation is most likely wrong.
A serverless solution that eliminates the Web framework (and thus the stack in which is being run) does most of that for you, at the expense of extra cost or infrastructure deployment complexity, but once it is done, scaling and maintenance are easier.
First and foremost it is client-side SDKs (web, mobile) for their database products, their newest being Firestore, which provides better query capabilities compared to their original Firebase Realtime database (while still offering real-time capabilities).
Along with that is Firebase Authentication, which manages user accounts and authentication.
The real magic comes in with Cloud Functions (their version of Lambda) which allows for hooks in to all sorts of events occurring from usage of these database and authentication products (and other cloud services).
Hook into database writes, updates, deletes, user creation, Google Cloud's pub-sub events and many more. They also offer static website hosting as well as hooking website serving into cloud functions (for server side code execution).
In the context of a website, all of these work together to allow for usage of the JAMstack architecture which decreases your infrastructure resources you need to manage and cost.
As a hobbyist, I wouldn't have the time or motivation to completely rewrite a project if that happened, which would be necessary since a Firebase app (like a heavily AWS-integrated serverless app) is not just technologically but architecturally tied to that environment.
Everything you need for a web or native app, and it's all integrated together rather well.
Colleague: Lambda is awesome, we can scale down to zero and lower costs! We love it! We use cool tech!!
Me: What did you do about JVM warm up?
Colleague: We solved it by having a keepalive daemon which pings the service to keep it always warmed up.
... Me thinking: Uhh, but what about scale down to zero?
Me: What do you do about pricing when your service grows?
Colleague: We use it only for small services.
... Me thinking: Uhh, start small and STAY SMALL?
Me: How was performance?
Colleague: It was on average 100ms slower than a regular service, but it was OK since it was a small service anyway.
... Me thinking: Uhh, but what about services who _depend_ on this small service, who now have an additional 100ms to contend with?
Overall, I think his answers were self explanatory. Lambda seems to be a fast prototyping tool. When your service grows, it's time to think how to get out.
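(For what it's worth, the keepalive trick mentioned above usually amounts to a scheduled rule invoking the function with a marker payload that the handler short-circuits on; a hedged sketch below, where the "warmup" key is an arbitrary convention, not an AWS feature.)

    def handler(event, context):
        # A scheduled CloudWatch/EventBridge rule pings this function every few
        # minutes with {"warmup": true} so a container stays resident.
        if isinstance(event, dict) and event.get("warmup"):
            return {"warmed": True}  # skip real work for keep-alive pings

        # ... normal request handling below ...
        return {"statusCode": 200, "body": "real work"}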
My thoughts EXACTLY. The great power in "serverless" architecture (i.e. AWS Lambda + AWS RDS + AWS Gateway) is how it empowers prototyping a new product.
Counterintuitively, it's future-proofing. You should know in advance that it's too slow & expensive. But you get to spin up a prototype backend very rapidly, pay only for what you're using while prototyping, and Lambda's inherent limitations force devs to build modularly, start simple, & stay focused on the product's main goals.
When the time comes to need scale, either at launch or even later when user count goes up, your "serverless" backend can be relatively easily replaced with servers. Then, just like that, in response to scale need your product's costs and response time go down instead of up.
It's a nice way to build a software product: rapid prototyping plus easy future cost decreases built-in.
Can't you just do something on your local machine?
There's stuff like dotnet new for .NET where I can just run that and have a skeleton project for a backend and I can start writing code immediately. I assume there's template creators for other languages as well.
Which is wrong and can be attributed to bad serverless evangelism in the past.
Serverless is building your system with managed services and only drop-in a FaaS here and there when you need some special custom behavior.
See how far you get with AppSync, Firestore or FaunaDB. Throw in Auth0 or Cognito, and then when you hit a wall, make it work with FaaS.
So I don't have any need to run a node server 24x7 just for those once a day tasks.
But I have even found myself not needing serverless for that because everything is running in a kubernetes cluster. So I can just setup a cron to run them which launches the needed node containers.
So I guess in effect, I am just using a sort of self-managed "serverless".
This does happen. We have a serverless API forwarding service on Azure that was designed to simply format and forward calls from a vendor. We know the volume, there will not be any surprises, and it is immensely profitable over the old solution to the tune of thousands of dollars per day. Our use case is probably pretty uncommon, however.
The serverless proponents are selling their paradigm as simple solution, which leads many people to believe simple means FaaS.
Throwing Lambda on all backend problems is a setup for failure. Often transfer and simple transform of data can be done serverless without a Lambda, which cuts costs AND leads to better performance.
I’d be curious about how much memory/CPU was allocated in your experience and the OP's; there's nothing magical about lambda to make it slow.
The 100ms extra time is nothing. I mean - are you trying to solve at Google or Amazon scale?
I run simple Lambdas that read from some SNS topics, apply some transforms and add metadata to the message, and route it somewhere else. I get bursts of traffic at specific peak times. That’s the use case and it works well. The annoying part is Cloud Formation templates but that’s another topic.
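That sort of function tends to be only a few lines; a hedged sketch of an SNS-triggered transform-and-route handler (the downstream topic ARN and metadata fields are made up):

    import json
    import boto3

    sns = boto3.client("sns")
    DOWNSTREAM_TOPIC = "arn:aws:sns:us-east-1:123456789012:enriched-events"  # made up

    def handler(event, context):
        for record in event["Records"]:
            message = json.loads(record["Sns"]["Message"])
            # Apply transforms / attach metadata before routing onwards.
            message["received_at"] = record["Sns"]["Timestamp"]
            message["source_topic"] = record["Sns"]["TopicArn"]
            sns.publish(TopicArn=DOWNSTREAM_TOPIC, Message=json.dumps(message))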
Let’s say I’m processing messages off a queue. P90 @ 50ms vs p90 at 100ms doesn’t necessarily make a difference. What are my downstream dependencies? What difference does it make to them?
At the end of the day, value is what you care about - not necessarily chasing a metric because lower is absolutely better. What’s the cost of another x milliseconds of latency considering other trade offs (on going operational burden, extensibility, simplicity, scalability, ease of support etc etc).
If 50 ms latency means I can have a solution that can auto scale to massive spikes in traffic due to seasonality or time of day vs a reduction of that latency but I have to spend time capacity planning hardware and potentially holding extra capacity “just in case”, then again, optimizing for a single metric is pointless.
It's obviously at a different point in the stack, but the promise is similar: "Just say what you want and it magically happens" — and indeed that's the case for FaaS when your problem is small enough. But XML-configured frameworks also offered that convenience for their "happy problem space". You could get things done quickly as long as you were within the guard rails, but as soon as you stepped outside, the difficulty exploded.
I'm not convinced AWS Lambda is all that different from a web framework, it's just on a higher level. Instead of threads responding to requests, you have these opaque execution instances that hopefully will be spun up in time. Instead of XML files, you have a dense forest of AWS-specific APIs that hold your configuration. That's how it looks from the outside anyway.
The promise of serverless is pretty simple, and pretty useful for the right use case - be it unpredictable load, or just very low load, or very frequent deployments, or pricing segmentation, or you don't have anyone as DevOps, and so on and so forth.
I don't recall anyone saying there's any magic involved. The premise is exactly same as cloud compute - you (possibly, depends on ABC) don't need to provision and babysit a server to perform some action in response to http request (or in case of aws lambda, other triggers as well).
I have had so many conversations with devops managers and developers who are individual contributors and the Lambda hype reached frothing levels at one point.
Contradictory requirements of scale down to zero, scale up infinitely with no cold starts, be cheap and no vendor lock in seemed to all be solved at the same time by Lambda.
Testability? Framework adoption? Stability? Industry Skills? Proven Architectures...? Are some of the other question marks I never heard a good answer for.
If you haven’t heard the “right answers” for those questions you haven’t been listening to the right people.
Lambdas are just as easy to test as your standard Controller action in your framework of choice.
So you tried it? I don't remember it being hard to set up, at least compared to a DB. Or, you can use the underlying docker images (open source, https://github.com/lambci/lambci) to run your Lambdas in. SAM provides some nice abstractions, e.g. API gateway "emulation", posting JSON to the function(s), or providing an AWS SDK-compatible interface for invoking the functions via e.g. boto3. This way you can run the same integration tests you would run in prod against a local testing version.
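For example, with sam local start-lambda running, the same boto3 call you'd use in production can be pointed at the local endpoint (the port below is the SAM default; the credentials are dummies because the local endpoint doesn't validate them, and the function name is hypothetical):

    import json
    import boto3

    # Point the standard AWS SDK at the locally running SAM Lambda endpoint.
    local_lambda = boto3.client(
        "lambda",
        endpoint_url="http://127.0.0.1:3001",
        region_name="us-east-1",
        aws_access_key_id="dummy",
        aws_secret_access_key="dummy",
    )

    resp = local_lambda.invoke(
        FunctionName="MyFunction",  # logical name from the SAM template
        Payload=json.dumps({"hello": "world"}),
    )
    print(json.load(resp["Payload"]))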
Your handler should just accept the JSON value and the lambda context, then convert the JSON to whatever plain old object your code needs in order to process it, and call your domain logic.
AWS has samples of what the JSON looks like for the different types of events. You can see the samples by just creating a new lambda in the console, click on test and then see the different types of events.
You can also log the JSON you receive and use that to setup your test harness. I don’t mean an official AAA type of unit test, it can be as simple as having a console app that calls your lambda function and passes in the JSON.
For instance in Python, you can wrap your test harness in an
if __name__ == "__main__":
This is the same method that a lot of people use to test API controllers without using something like Postman.
The good news: you should be able to accomplish most testing locally, in memory. The bad news: your test code is probably going to be much larger, you're going to have to be very aware of the data model representing the interface to the lambda, and you're going to have to test the different possible states of that data.
I'll typically write all the core functionality first, test it, then write the lambda_handler function.
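A minimal sketch combining both suggestions: the domain logic is kept separate from the handler, and the __main__ block acts as the harness (the sample event below is a trimmed-down, hypothetical S3 record, not a complete one):

    def do_work(bucket, key):
        # Domain logic, free of any Lambda-specific types, so it can be tested directly.
        return f"processed s3://{bucket}/{key}"

    def lambda_handler(event, context):
        # Convert the raw event into plain arguments and delegate to the domain logic.
        record = event["Records"][0]
        return do_work(record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])

    if __name__ == "__main__":
        # Poor man's test harness: feed the handler a captured or sample event.
        sample_event = {
            "Records": [
                {"s3": {"bucket": {"name": "test-bucket"}, "object": {"key": "photo.jpg"}}}
            ]
        }
        print(lambda_handler(sample_event, context=None))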
When you disclose something, it's a disclosure.
Why do HN folks find this so difficult? It's like putting your shoes on the wrong way around. After you've done it, it's clearly uncomfortable. So don't do it.
Serverless is specifically a stateless paradigm, making testing easier than persistent paradigms.
> Framework adoption?
Generally we use our own frameworks - I do wish people knew there was more than serverless.com. AWS throw up https://arc.codes at re:Invent, which is what I'm using and I generally like it.
> Stability? Industry Skills? Proven Architectures...?
These are all excellent questions. GAE, the original serverless platform, was around 2010 (edit: looks like 2008 https://en.wikipedia.org/wiki/Serverless_computing). Serverless isn't much younger than say, node.js and Rust are. There are patterns (like sharding longer jobs, backgrounding and assuming async operations, keeping lambdas warm without being charged etc) that need more attention. Come ask me to speak at your conference!
No because Lambdas are proprietary which means you can't run it in a CI or locally.
Also, it becomes stateful if it pulls data from a database, S3 or anywhere else on AWS which it almost always does.
> Serverless isn't much younger than say, node.js and Rust are.
AWS Lambda, which I consider to be the first widely used Lambda service, was released in April 2015, which is 6 years after the release of Node.js.
Also, Node.js is way more popular and mature than Lambda solutions.
Overall Lambdas are only useful for small, infrequent tasks like calling a remote procedure every day.
Otherwise, things like scheduling, logging, resource usage, volume and cost make Lambdas a bad choice compared to traditional VPSs / EC2.
Lambda is a function call. So it makes no difference if it’s proprietary or not.
Are you saying that it’s difficult to test passing an object to a function and asserting that it’s functioning as intended?
If I run a Node.js function in AWS Lambda, my Node.js version might be different, my dependencies might be different, the OS is different, the filesystem is different, so I or one of my node_modules might be able to write to /tmp but not elsewhere, etc.
It's the reason people started using Docker really.
If you don't have the same environment, you can't call it reproducible or testable for that matter.
There’s a lot of annoying things about lambda. And a lot of stuff I wish was easier to find in documentation. But that doesn’t change the fact that Lambda is more or less passing an event object to your function and executing it.
Writing a function in node 12 and then running it on node 4 and throwing your hands in the air cos it didn’t work isn’t the fault of Lambda.
In any case, if you have a Node.js module or code with a native C/C++ build, that runs shell commands, that writes to disk (not allowed besides /tmp in Lambda) or makes assumptions about the OS, your "simple" function will absolutely return different results.
e.g: My lambda is called when somebody uploads an image and returns a resized and compressed version of it.
This is done using Node.js and the mozjpeg module which is dependent on cjpeg which is built natively on install.
If I test my function on my machine and in Lambda it's very possible that I get different results.
Also, certain OSs like Alpine, which are heavily used for Docker, don't even use glibc as their C library, so again, another difference.
This is true, but it's not Lambda qua Lambda. That's just normal production vs. testing environment issues, with the same basic solutions.
Lambda may offer some minor additional hindrances vs. something like Docker, but I wouldn't consider that catastrophic.
But isn't the idea of Docker that you can recreate the production environment? If you can't why use Docker in the first place?
Even if you did find a way, you would still need to keep it up to date in case AWS decides to update that environment.
I saw that the SAM CLI uses an Alpine based image, does yours use Amazon Linux 2?
I'm just asking, because I compiled some libs on Cloud9 (which uses AL) and they worked on Lambda, so I assumed it's the same dist.
Also, you compile before packaging, so your dev/CI system already has to be able to compile for Lambda, independently from testing/debugging with Docker.
> It's great to see that factual evidence is answered with ad-hominem by the Lambda hype crowd.
I don't think that was a personal attack.
We've answered technical questions with technical answers.
- You have a definition of stateless which includes having no persistence layer, which is at best at odds with the industry.
- You think serverless was created with AWS Lambda which we've been kind about, but most people would say you're simply wrong.
- You're advocating for containers, which are well known for having their own hype as people write their own cloud providers on top of the cloud provider their employer pays for with dubious benefit.
You shouldn't be testing "on your machine" - that's the oldest excuse in the book!
You should build your function in a container based on AWS Linux, just the same as you should for a Lambda deploy. That guarantees you the same versions of software, packages, libraries, etc. It makes it possible for me to develop Lambda functions on a Mac and test binary-for-binary to the deployed version.
"Nothing you mentioned has anything to do with the ability to test a Lambda" is not ad-hominem, it's a statement of fact.
I don't use lambda but we have our jenkins spin up the same ec2 to run tests that we would spin up to do development so that we never run into this problem.
If you mean running a Docker container in Lambda, that is, to my knowledge, not possible.
You could schedule Docker tasks in AWS ECS (their managed container service) but it's not meant for anything realtime and more for cron job type tasks.
If you mean emulating the Lambda environment in Docker, then I wrote an answer with the difficulties of doing that below to another user.
Note performance, acceptance, and monitoring modes
Yes, in major only. Your lambda has node 10, you might be running 10.x or 10.y
> my dependencies might be different
No, the lockfile does that.
> the OS is different, the filesystem is different
Agree, but how much does one Linux+systemd differ from another Linux+systemd? How much does the FS?
> It's the reason people started using Docker really.
VMs, docker and having to care about and manage isolation platforms is the reason people started using serverless.
No, your lockfile doesn't care about build steps so any post-install script might run differently for the many other reasons listed.
> Agree, but how much does one Linux+systemd different from other Linux+systemd? How much does the FS?
Plenty. For example filesystem change events are known to have filesystem and OS dependent behaviours and quirks / bugs.
When a Node module runs a shell command, it's possible that you have a BSD vs a GNU flavour of a tool, or maybe a different version altogether.
The Linux user with which you are running the function might also have different rights which could become an issue when accessing the filesystem in any way.
> VMs, docker and having to care about and manage isolation platforms is the reason people started using serverless.
Maybe, but serverless doesn't answer those questions at all. It just hand waves testing and vendor independent infrastructure.
> > No, the lockfile does that.
> your lockfile doesn't care about build steps
Then you're not talking about dependency versioning, are you? You're talking about install order. In practice it hasn't been an issue; I should find out how deterministic install order is, but I'd only be doing this to win a silly argument rather than anything that has come up in nearly a decade of making serverless apps.
> For example filesystem change events are known to have filesystem and OS dependent behaviours
> When a Node module runs a shell command, it's possible that you have a BSD vs a GNU flavour of a tool
Are you generally proposing it would be common to use an entirely different OS? Or a non-boring extX filesystem?
All your issues seem to come from edge cases. Like if you decide to run FreeBSD or ReiserFS locally and run a sandbox in it, fine, but know that's going to differ from a Linux / systemd / GNU / extX environment.
> > VMs, docker and having to care about and manage isolation platforms is the reason people started using serverless.
> Maybe, but serverless doesn't answer those questions at all.
Serverless exists precisely to answer the question. I can throw all my MicroVMs in the ocean with no knowledge of dockerfiles, no VM snapshots, no knowledge of cloudinit, no environment knowledge other than 'node 10 on Linux' and get my entire environment back immediately.
I didn't mean build order but install scripts and native module builds.
The first type can create issues when external resources are downloaded (Puppeteer, Ngrok, etc.), which themselves have different versions or that fail to download and where the Node.js module falls back to another solution that behaves slightly differently.
The second type can occur when you have, say, Alpine Linux that uses musl while Amazon Linux uses glibc, or when the native module tries to link with a shared library that is supposed to exist but doesn't.
> Are you generally proposing it would be common to use an entirely different OS? Or a non-boring extX filesystem?
I haven't checked, but Amazon Linux by default uses XFS on EBS disks, so I wouldn't be surprised if Lambdas used the same. So not a boring extX filesystem. ZFS is also relatively common.
> Serverless exists precisely to answer the question.
No, it clearly doesn't because your function will fail in local and succeed in Lambda, or the reverse, exactly due to the issues I mentioned in my various comments here and you will be left debugging.
Debugging which starts by finding exactly the differences between the two environments which would have been solved by a VM or Docker.
> I didn't mean build order but install scripts and native module builds.
OK. Then you're still not talking about your dependencies being different. The dependencies are the same, they're just particular modules with very specific behaviour...
> external resources are downloaded (Puppeteer, Ngrok, etc.), which themselves have different versions or that fail to download
That's more a 'heads up when using puppeteer' than an indictment of serverless and a call to add an environment management layer like we did in 2005-2015.
> Linux by default uses XFS on EBS disks so I wouldn't be surprised if Lambda's used the same.
That's worth checking out.
> Debugging which starts by finding exactly the differences between the two environments which would have been solved by a VM or Docker.
I see what you're saying, but planning your whole env around something like a given puppeteer module dynamically downloading Chrome (which is very uncommon behaviour) isn't worth the added complexity.
You shouldn’t be uploading your node_modules folder to your deployed lambda, so this is an issue with your development environment, not lambda.
“Serverless” or lambda/Azure functions, etc, are not a silver bullet that solves every single scenario. Just like docker doesn’t solve every single scenario, nor do cloud servers or bare metal. It’s just another tool for us to do our job.
It kinda is, but not really. You get an event object as param and often only need a few fields from it.
Also, you can run Lambda locally, AWS SAM CLI lets you run them in Docker for debugging purposes.
I stopped using windows for dev machines a long time ago, but I'd guess with the Linux subsystem stuff it will catch up in the next few years.
Hopefully WSL2 will solve this.
Arc has a full sandbox, so yes you can run a lambda in a CI or locally. You should know this - if you're developing your apps by deploying every lambda as you edit it you're doing it wrong. Most lambdas don't really need you to simulate AWS.
> Also, it becomes stateful if it pulls data from a database, S3 or anywhere else on AWS which it almost always does.
Sure, persistence exists, but when people say 'stateless' they mean 'has no transient state'. They don't mean 'has no data'.
> AWS Lambdas which I consider to be the first widely used Lambda service
OK. You don't consider GAE (2008) widely used. I disagree - mainly because I was using GAE back then and thinking about this new world of no-memory, timeouts, etc - but lambda is definitely more popular.
State could be somewhere else, but if you are not also "pure", you don't have any improvement over a normal service.
* you could argue that 'stateless' should include no long term persistence, but you'd be fighting an uphill battle. Like saying 'serverless' isn't correct because there are servers.
This isn't the sense in which servers are usually said to be stateless, however. A Lambda that reads and writes state from a database when invoked is not maintaining state outside of the request-response cycle itself, so it can be said to be stateless.
I'm probably looking at it wrong I guess. I considered using lambda to unload some CPU intensive calculations / logic and that's it. I figured it would be good for that as long as the latency didn't outweigh any benefits.
If your lambda involves persistent storage, your testing might want to wipe or prepopulate the DB with some objects first, but that's not hard and doesn't add complexity, and as mentioned you don't need anything in memory.
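A hedged sketch of that setup step with DynamoDB and boto3 (the table name, key schema, and fixture items are made up; pagination is ignored for brevity):

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("test-users")  # hypothetical table used only by tests

    def reset_table(seed_items):
        # Wipe whatever a previous test run left behind (assumes "pk" is the full key).
        for item in table.scan(ProjectionExpression="pk")["Items"]:
            table.delete_item(Key={"pk": item["pk"]})
        # Prepopulate with deterministic fixtures.
        with table.batch_writer() as batch:
            for item in seed_items:
                batch.put_item(Item=item)

    reset_table([{"pk": "user#1", "name": "Alice"}])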
But let's be clear, we are talking about commercial products and there is a capital interest in selling these services to all of us devs and engineers.
So while use cases exists and benefits wait to be reaped, as a consultant I strongly feel that we should be MUCH more insistent in pointing out when a product does not make sense instead of jumping onto the hype train.
I am basically surrounded by "LETS TRANSFORM EVERYTHING TO KUBERNETES THIS WEEK!" exclamations, conferences are basically "DID YOU ALREADY HEAR ABOUT KUBERNETES?" and so on ...
It reminds me of Ruby on Rails, a mature and well-developed framework used by global tech firms (correct me if wrong: Airbnb, ~Stack Overflow~, Github) to handle parts of their backend in 2019. But for half a decade now even tiny companies have been screaming around about FancyHTTPTechThisYear (tm) because scale while reporting 1/500th of the traffic of some famous Rails users.
This is not engineering with objectives in mind, it's more akin to the gaming community yelling for a new console.
Software engineering is still a young discipline. Probably half of the people have less than 4 years of experience. They learned from other employees that also had less than 4 years of experience. And that can be repeated for 10 generations of developers.
We are learning, and re-learning the same lessons again and again. Everything seems new and better and then we are surprised when it's not a silver bullet.
Software and cloud vendors do not help. They are the first ones to hype their new language, framework or platform. Technology lock-in is a goal for tech companies offering hardware or services.
> This is not engineering with objectives in mind
Curriculum driven development is a thing. And I cannot blame developers for it when the industry hires you and sets your salary based on it.
We need to be more mature and, as you suggest, think about our technical and business goals when choosing a technology instead of letting providers lock us in their latest tech.
We need to live and breathe a culture that makes even young developers aware that this mistake has been made over and over, until a growing collective of developers has recognized the pattern behind it.
After all, in the analogue trades even apprentices are taught many "don'ts" right from day one. Software engineering should not be any different.
Stack Overflow is built on ASP.NET: https://stackoverflow.blog/2008/09/21/what-was-stack-overflo...
Lambda is a good use case for when you have lots of not-often-used APIs. Lambda is a great solution for an API that's called a few times a day. It's also great for when you're first starting out and don't know when or where you'll need to scale.
But once you get to 100rps for an API, it's time to move to a more static setup with autoscaling.
Personally I'd like to see the industry coalesce on "serverless" containers rather than FaaS, which is organised around functions being the fundamental blocks. Please just run my Docker container(s) like a magic black box that is always available, scales as necessary and dies when no longer needed.
Sure, they're still FaaS which seems to be the unit of deployment for the serverless movement. For (hidden) managed server container deployment, Fargate is the offered solution I believe.
Not necessarily. The reason I decided to try this was exactly because I found a tutorial showing you could easily host a bog-standard ASP.NET web app on Lambda with the serverless framework. I had to add a small startup class, and one config file to our existing app and I was up and running.
Not really. I converted a Node/Express API running in lambda using the proxy integration to a Docker/Fargate implementation in less than a couple of hours by following a Fargate tutorial. Most of that time was spent learning enough Docker to do it.
The only difference between the Docker implementation of the API and the lambda implementation was calling a different startup module.
There is nothing magical about any lambda, from the programming standpoint you just add one function that accepts a JSON request and a lambda context.
Converting it to a standalone service (outside of APIs) is usually a matter of wrapping your lambda in something that runs all of the time and routing whatever event you're using as a trigger to a queue.
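A hedged illustration of that wrapping: the unchanged handler is driven by a small loop that polls a queue instead of being invoked by Lambda (the import path and queue URL are made up):

    import json
    import boto3

    from my_service import lambda_handler  # hypothetical: the unchanged Lambda handler

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events"  # made up

    def run_forever():
        # Runs 24x7 in a container/VM; feeds queued events to the same handler
        # that Lambda used to invoke.
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                lambda_handler(json.loads(msg["Body"]), context=None)
                sqs.delete_message(QueueUrL=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]) if False else sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if __name__ == "__main__":
        run_forever()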
Say you only have 1k users and don't want to spend too much time on infrastructure, lambda is a perfect fit for it. Your application is taking off and now you have 100k: just click a button and migrate it to a Fargate/ECS. That would be the perfect world.
AFAIK the only framework that supports this kind of mindset is Zappa (https://www.zappa.io). I use it in production but never had to migrate out of AWS Lambda so I'm not sure about the pain points related to it.
To me this is probably the most significant benefit, and one that many folks in this discussion strangely seem to be ignoring.
If you launch a startup and it has some success, it's likely you'll run into scaling problems. This is a big, stressful distraction and a serious threat to your customers' confidence when reliability and uptime suffer. Avoiding all that so you can focus on your product and your business is worth paying a premium for.
Infrastructure costs aren't going to bankrupt you as a startup, but infrastructure that keeps falling over, requires constant fiddling, slows you down, and stresses you out just when you're starting to claw your way to early traction very well might.