AWS Lambda – Functions with Up to 10 GB of Memory and 6 vCPUs (amazon.com)
71 points by yandie on Dec 1, 2020 | 43 comments



I think this is really neat. I've never really understood using Serverless platforms for websites and things that process web traffic as there's almost always enough that a "real" webserver is quicker, easier, and cheaper.

However, I've seen many things that run a couple of times a day, or in response to a deployment, or things like that, which often need some compute behind them, but for which setting up a little server and hosting it somewhere just to call it once a day feels like so much unnecessary ceremony.

Does this have a maximum function duration? If so, that could still rule this approach out.


To be honest, if it's a static site and you just need a little bit of compute, CloudFront in front of S3 and Lambda is probably the cheapest way to give someone a basic website (say, a contact form and nothing else).
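The "little bit of compute" part can be a single small function. A minimal sketch of what the contact-form Lambda behind API Gateway might look like (the field names and the handoff step are assumptions, not a real deployment; actually sending the mail via SES/SNS is omitted):

```python
import json

REQUIRED_FIELDS = ("name", "email", "message")  # hypothetical form fields

def handler(event, context):
    """Minimal contact-form endpoint for API Gateway + Lambda."""
    try:
        form = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing fields: {missing}"})}

    # Here you'd hand the message off (SES, SNS, a queue, ...) -- omitted.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Everything static comes from S3 via CloudFront; only this one POST path ever invokes compute.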


This is my go to stack these days for personal projects. (That said, I'm a former AWS dev, so I'm super familiar with these moving parts.)

It's easy to develop for, it's easy to automate deployments, I don't have to worry about keeping anything up to date, and I can just focus on the small amount of code I want to write.

The most expensive thing is the Route 53 configuration at $0.50/month/domain.


I love this stack. I recently added AWS SAM to the mix as well so that I can automate deployments. I have always wondered why Route 53 is so expensive relative to the other services. I have a collection of low-traffic sites that I run for local businesses, and it's always the most expensive part. I guess it's hard to complain about $0.50, but given that I could probably run them all on a $5 DO droplet, it ends up costing a little more.
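The arithmetic behind that comparison, for what it's worth (prices as stated in the thread; this ignores that the droplet route still needs DNS hosted somewhere):

```python
ROUTE53_ZONE = 0.50  # $/month per hosted zone, from the thread
DROPLET = 5.00       # $/month, smallest DigitalOcean droplet

def breakeven_sites():
    # Number of hosted zones whose fees alone match one droplet
    return DROPLET / ROUTE53_ZONE

# -> 10 sites before Route 53 zone fees alone equal the droplet
```

So past roughly ten domains, the zone fees alone match the droplet, before any Lambda or S3 charges.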


It's my go-to stack as well. However, I haven't been able to get both the www->root redirect and the http->https redirect working. Do you happen to know if it's possible?


www->root should be handled in DNS, i.e. Route 53. http->https can be done using CloudFront and Lambda@Edge.
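Worth noting that CloudFront's "Redirect HTTP to HTTPS" viewer protocol policy handles http->https with no code at all; Lambda@Edge is mainly needed if you want the www->root hop at the edge too. A sketch of a viewer-request handler for that (example.com is a placeholder; the event shape is the documented CloudFront one):

```python
APEX = "example.com"  # placeholder apex domain

def handler(event, context):
    """Lambda@Edge viewer-request: 301 www.<apex> to the apex over https."""
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"]
    if host == f"www.{APEX}":
        return {
            "status": "301",
            "statusDescription": "Moved Permanently",
            "headers": {"location": [{
                "key": "Location",
                "value": f"https://{APEX}{request['uri']}",
            }]},
        }
    return request  # any other host passes through unchanged
```

Returning a dict with a "status" key short-circuits CloudFront with that response; returning the request lets it continue to the origin.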


Yes, you are still limited to 15 minute maximum duration.

- Chris Munns - Lead of Dev Advocacy - Serverless@AWS


Ok that's not too bad a cap. I can see that working for quite a lot of use cases that aren't serving websites.


Is there any potential for extending that limit? I work on a product that uses Fargate Spot as a kind of Lambda substitute to run longer-duration tasks consumed from SQS, and being able to use Lambda for that would make life easier :)


Likewise. Doesn't have to be indefinite; 1-2 hours would be wonderful.


For a 1-hour process, why not just launch an instance? You could also probably use a spot instance to be even cheaper.


Instances take longer to start up; Lambda starts handling requests within milliseconds. Lambda also automatically manages the pool for you.

If you're running a predictable process and you know how long it'll take in advance, an instance may make sense. If you're running an unpredictable process, where you won't know how long it'll take until it's done, and it might be quite fast, the low startup time and fine-granularity billing helps.
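A back-of-the-envelope comparison of the two billing models (the Lambda rate is the late-2020 public per-GB-second price; the EC2 number is a made-up stand-in for a small on-demand instance):

```python
LAMBDA_PER_GB_SECOND = 0.0000166667  # $ per GB-second of Lambda compute
EC2_PER_HOUR = 0.10                  # $ per hour, hypothetical instance

def lambda_cost(duration_s, memory_gb):
    # Lambda bills only for the time the function actually runs
    return duration_s * memory_gb * LAMBDA_PER_GB_SECOND

def idle_instance_cost(hours):
    # What you pay to keep an instance warm waiting for unpredictable work
    return hours * EC2_PER_HOUR

# A 30-second, 1 GB job is well under a tenth of a cent on Lambda,
# while an instance kept warm all day for it would run ~$2.40.
```

The crossover is exactly the predictability point the parent describes: long, known-duration batch work favors the instance; short, bursty, unknown-duration work favors per-use billing.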


GPUs please.


Oracle plans to ship serverless GPUs in Q1 2021, according to our Oracle rep. Hope this means Amazon and Google are close as well.


fly.io (up to 8 CPUs + 8 GiB) and stackpath.com (up to 8 CPUs + 32 GB) can both run such workloads at the edge. That said, AWS Fargate and AWS Batch (minus the edge) are probably closer comparisons to those services than AWS Lambda is.

[0] https://fly.io/docs/about/pricing/

[1] https://www.stackpath.com/products/containers/


Are you familiar with StackPath? Does anyone have experience to share with them?


StackPath is kind of the edge essentials from Cloudflare and AWS rolled into one.

I used StackPath's serverless functions once before and they worked as advertised (though I've since moved to Cloudflare Workers).


Thanks for the response. I'm currently using them and liking them. Not as sexy as CF Workers, but I can have 100 delivery domains per site, so my customers can point at me for essentially free.


ECS (potentially on Fargate) would be a reasonable stack for those tasks, given the 15 minute limit on Lambdas.


I wonder what the advantages of Lambda are compared to a serverless container solution like Google Cloud Run.

Anyone with expertise care to chime in?


I'm still wary of the latency of setting up the environment before executing my function. With Python and several pip dependencies, it always takes some time...


Lambda supports "Provisioned Concurrency" which lets you minimize the "jitter" of cold starts.

That said, it still isn't perfect. But it's much more predictable latency-wise, and AWS has managed to get cold starts down to a level where they're quite decent recently (~500ms would be a "bad" one these days).

Older benchmarks are out of date at this point; I wish there were better ones.

Source: I build a serverless hosting platform on top of AWS Lambda. https://refinery.io


Hey there, cool service! While looking into it, I noticed the green blocks on the home page have a z-index issue with the header when scrolling down the page in Safari.


Oh sweet, thanks for the heads up. We're using Webflow for it, so we don't own the CSS, but I'll go in and tweak it!


Even though it's often sold that way, I don't think Lambda is a good fit for interactive use cases. Unless it's used sparingly you don't really want it serving your website or api. It's much better suited for glue code, cron jobs, processing some work queue, etc.


Curious: would you need a work queue with auto-scaling lambdas?


If you load all the dependencies outside of the actual handler, they will only be loaded during a function's cold start, before the sandbox is suspended between requests.

With this, you effectively only pay the cold-start cost on the first load of a new function (or while it's adding extra concurrency), and warm functions are often kept around for 4-8 hours even if they don't receive any requests.
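In Python terms, that pattern is just doing the heavy work at module scope rather than inside the handler. A minimal sketch (the init function is a stand-in for importing heavy libs or building clients):

```python
INIT_COUNT = 0  # counter just to demonstrate init runs once

def _expensive_init():
    """Stand-in for loading pip dependencies, models, boto3 clients, etc."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"ready": True}

# Module scope: executed once per cold start, then the sandbox is frozen
# between requests with this state intact.
STATE = _expensive_init()

def handler(event, context):
    # Only per-request work happens here; STATE is reused across
    # warm invocations of the same sandbox.
    return {"init_runs": INIT_COUNT, "ready": STATE["ready"]}
```

Every warm invocation of the same sandbox sees the already-initialized state, so only the first request (per concurrent instance) pays the load time.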


Didn’t they announce that you can use containers with Lambda now? So just put your environment in a container.


Did they? I can't find that from a quick Google search.

Edit: They announced it today; see https://aws.amazon.com/blogs/aws/new-for-aws-lambda-containe...


Lambda still needs to load your container, so there's going to be a cold-start cost (but hopefully they can improve caching over time).


Like Google Cloud Run?


Did they ever raise the Lambda image size limit? We'd have to chop our app up, and that would introduce too much risk, but we're suddenly capable of migrating the core to Lambda with the 6 GB+ of RAM needed for our ETL. Package sizes were really small, and we'd have to bundle in Puppeteer and other large libs, so it's tough to build a bundle that fits on Lambda.


They just announced that you can use Docker images of up to 10 GB in Lambda, which should solve all of these problems: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-containe...


Is it a bad idea to use AWS Lambda to serve ML models, if cold-start latency is not much of an issue?


We use it for inference and it works fine. Now with EFS support it's even better.

We have used it successfully with both TensorFlow and Gorgonia.


It depends. Do you need a GPU? If so, your options are more limited. If not, then it's definitely doable!


No GPU, so yeah, it's very bad :P


Depends on whether you need a GPU for model serving. Most companies I talk to use CPU for serving and GPU for training, so it's not a bad idea.


OP here, I was thinking more CPU for inference. Training happens elsewhere on GPUs.


I just mean that you'll need to deal with the latency, either through provisioned concurrency on Lambda or by building it into your service SLA :)


Wow, other people clearly use lambdas for far different purposes than I do!


Well, we run some ML & OCR workloads on Lambda. The zip file size limit has always been a big problem, since we need to package a lot of dependencies. Also, 3 GB of RAM is way too low for some image processing tasks. This is a huge improvement for us.


I love lamp(da)



