I think this is really neat. I've never really understood using serverless platforms for websites and things that process web traffic, as there's almost always enough traffic that a "real" webserver is quicker, easier, and cheaper.
However, I've seen many things that run a couple of times a day, or in response to a deployment, or things like that, which often need some compute behind them, but for which setting up a little server and hosting it somewhere just to call it once a day feels like so much unnecessary ceremony.
Does this have a maximum function duration? If so, that could still rule out this approach.
To be honest, if it's a static site and you just need a little bit of compute, CloudFront in front of S3 plus Lambda is probably the cheapest way to give someone a basic website (just a contact form and nothing else).
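For the "just a contact form" piece, the Lambda side can be tiny. A minimal sketch of a Python handler behind API Gateway (or a function URL), assuming the standard proxy event shape; actually sending the message via SES or SNS is left out as a hypothetical next step:

```python
import json


def handler(event, context):
    """Minimal contact-form handler for an API Gateway proxy event.

    Validates the posted JSON and returns a proxy-style response.
    Forwarding the message (e.g. via SES) would go where noted below.
    """
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "invalid JSON"}

    if not body.get("email") or not body.get("message"):
        return {"statusCode": 400, "body": "email and message are required"}

    # Hypothetical hand-off point: SES send_email, SNS publish, or a queue.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

With CloudFront routing `/contact` to this function and everything else to S3, the whole site stays static except this one endpoint.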
This is my go-to stack these days for personal projects. (That said, I'm a former AWS dev, so I'm super familiar with these moving parts.)
It's easy to develop for, it's easy to automate deployments, I don't have to worry about keeping anything up to date, and I can just focus on the small amount of code I want to write.
The most expensive thing is the Route 53 configuration at $0.50/month/domain.
I love this stack. I recently added AWS SAM to the mix as well so that I can automate deployments. I have always wondered why Route 53 is so expensive relative to the other services. I have a collection of low-traffic sites that I run for local businesses, and this is always the most expensive part. I guess it's hard to complain about $0.50, but given that I could probably run them all on a $5 DO droplet, it ends up costing a little more.
It's my go-to stack as well. However, I haven't been able to get both the www->root redirect and the http->https redirect working. Do you happen to know if it's possible?
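It should be. The http->https half is usually handled with no code at all via CloudFront's "Redirect HTTP to HTTPS" viewer protocol policy, so only the www->root half needs logic. One way to sketch that is a Python Lambda@Edge viewer-request handler (the event shape below follows CloudFront's documented Lambda@Edge format; the domain is whatever the viewer sent):

```python
def handler(event, context):
    """CloudFront viewer-request handler: redirect www.<domain> to the apex.

    Returning a response object short-circuits the request; returning the
    request object lets it continue to the origin unchanged.
    """
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"]

    if host.startswith("www."):
        location = "https://" + host[4:] + request["uri"]
        return {
            "status": "301",
            "statusDescription": "Moved Permanently",
            "headers": {
                "location": [{"key": "Location", "value": location}]
            },
        }

    return request
```

A lighter-weight alternative for the same job is a CloudFront Function (JavaScript only), or an S3 bucket configured as a redirect target for the www hostname.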
Is there any potential for extending that limit? I work on a product that uses Fargate Spot as a kind-of lambda substitute to run longer-duration tasks consumed from SQS and being able to use lambda to do that would make life easier :)
Instances take longer to start up; a warm Lambda can begin handling a request in milliseconds. Lambda also automatically manages the pool of execution environments for you.
If you're running a predictable process and you know how long it'll take in advance, an instance may make sense. If you're running an unpredictable process, where you won't know how long it'll take until it's done, and it might be quite fast, the low startup time and fine-granularity billing helps.
fly.io (up to 8 CPUs + 8GiB) and stackpath.com (up to 8 CPUs + 32GB) can both run such workloads at the edge. That said, AWS Fargate and AWS Batch (minus the edge) are probably closer comparisons to those services than AWS Lambda is.
Thanks for the response. I’m currently using them and liking them. Not as sexy as CF workers but I can have 100 delivery domains per site so my customers can point at me for essentially free.
I'm still afraid of the latency to setup the environment before executing my function. With python and several pip dependencies, it always takes some time...
Lambda supports "Provisioned Concurrency" which lets you minimize the "jitter" of cold starts.
That said, it still isn't perfect. But it's much more predictable latency-wise, and AWS has managed to get cold starts down to a level where they're quite decent recently (~500ms would be a "bad" one these days).
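Provisioned Concurrency is just a per-version (or per-alias) setting on the function. As a sketch, it can be set from Python via boto3's `put_provisioned_concurrency_config` (the function name and alias here are placeholders; the `client` parameter is only there so the sketch is easy to stub out):

```python
def set_provisioned_concurrency(function_name, qualifier, count, client=None):
    """Pin `count` warm execution environments for a Lambda function.

    `qualifier` must be a published version or an alias -- provisioned
    concurrency cannot be attached to $LATEST.
    """
    if client is None:
        import boto3  # deferred so the function is easy to stub in tests
        client = boto3.client("lambda")

    return client.put_provisioned_concurrency_config(
        FunctionName=function_name,
        Qualifier=qualifier,
        ProvisionedConcurrentExecutions=count,
    )
```

Note that, unlike regular on-demand invocations, provisioned environments are billed for the time they're kept warm, so it's a trade of money for latency predictability.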
Older benchmarks are out of date at this point. I wish there were some better ones.
Source: I build a Serverless hosting platform built on top of AWS Lambda. https://refinery.io
Hey there, cool service! While looking into it I noticed the green blocks on the home page have a z-index issue with the header, which I ran into when scrolling down the page in Safari.
Even though it's often sold that way, I don't think Lambda is a good fit for interactive use cases. Unless it's used sparingly, you don't really want it serving your website or API. It's much better suited to glue code, cron jobs, processing a work queue, etc.
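The work-queue case in particular maps onto Lambda very naturally. A minimal sketch of an SQS-triggered Python handler, assuming the standard SQS event shape (a `Records` list of messages): one invocation receives a batch, returning normally acknowledges it, and raising causes the batch to be retried.

```python
import json


def handler(event, context):
    """SQS-triggered worker: each invocation receives a batch of messages.

    Returning normally lets Lambda delete the batch from the queue;
    raising an exception makes the messages visible again for retry.
    """
    processed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # ... do the actual unit of work here ...
        processed.append(payload.get("id"))
    return {"processed": processed}
```

No polling loop, no worker fleet to size: concurrency scales with queue depth, which is exactly the kind of glue work the parent comment describes.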
If you load all the dependencies outside of the handler itself, at module level, they will only be loaded during a function's cold start, before the execution environment is suspended between requests.
With this, you're effectively only paying the cold-start cost on the first load of a new function (or while adding extra concurrency), and warm environments are often kept around for 4-8 hours, even if they don't receive any requests.
Did they ever raise the Lambda image size limit? We'd have to chop our app up, and that would introduce too much risk, but with 6GB+ of RAM now available we're suddenly able to migrate the core of our ETL to Lambda. Image size limits were really small, and we'd have to bundle in Puppeteer and other large libs, so it's tough to build a bundle that fits on Lambda.
Well, we run some ML & OCR workloads on Lambda. The zip file size limit has always been a big problem, since we need to package a lot of dependencies. Also, 3GB of RAM is way too low for some image processing tasks.
This is a huge improvement for us.