Serverless really needs to work on its latency, I think.

Things will be going great and then there's an odd two-second delay. I guess it's bringing up a new server or container to run the lambda in.

Whereas with your own (or well, Amazon's) machines you can scale up before hitting the limits and not need long pauses.

Maybe one day they'll fix that.




I just started using cron to keep my cloud functions warm at a cost of pennies per month (a sketch of the pattern is below). It feels like a strange ritual that lets more knowledgeable people game the system. Even figuring out how long your functions take to go 'cold' is a secret handshake you won't find in any official documentation.

I'm curious what's going to happen when everyone else does this too. It goes without saying this isn't the intended use at the price they've set, and it seems apparent that more than 75% of customers will likely make this performance optimization, either before going to production or after complaints of bad latency.

It's also a bit scary that even when keeping a single instance warm, you still pay the cold startup penalty on subsequent scale-ups. AFAIK, no cloud provider has claimed to have 'solved' this (yet more secrecy in how the platform is managed).
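
For reference, the keep-warm trick is just a scheduled ping plus a short-circuit in the handler. A minimal Python sketch, assuming a CloudWatch Events rule firing on something like rate(5 minutes) (scheduled events carry "source": "aws.events", which is what the check keys on):

    import json

    def handler(event, context):
        # CloudWatch scheduled events arrive with "source": "aws.events";
        # bail out early so warm-up pings never touch the real logic.
        if event.get("source") == "aws.events":
            return {"statusCode": 200, "body": "warm"}

        # ... actual request handling goes here ...
        return {"statusCode": 200, "body": json.dumps({"ok": True})}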


"using cron to keep my cloud functions warm at a cost of pennies per month"

"you still pay the cold startup penalty on subsequent scale-ups"

That's probably why they don't care.

You just keep one warm AND you pay for it.

If you don't have parallel requests this is a good thing for you, but everyone else doesn't gain much from having only one hot instance.

On the other hand, you can probably get around this with UI tricks when facing an end user. Native apps are installed anyway, and web apps will be delivered via S3/CloudFront etc.


I don't see why they couldn't implement a pricing option where you pay for the RAM used to keep the lambda hot.

After all, this is just keeping it "loaded in memory".


It'd be a regressive concession misaligned with the goals of serverless - a return to peak capacity planning. How many containers do you keep warm? Might as well just use traditional non-serverless platforms at that point.


This one's pretty important. I tried to implement Slack commands using Lambda, but because Slack needs a first response within a tiny window (three seconds for slash commands), my command would often need to be run twice, since the first attempt would time out while Lambda spun up the server.


Our solution to avoid that (and the one we use here: https://github.com/meedan/check-bot) is to have a second lambda function. The first lambda function (called by Slack) uses the AWS SDK to send a request to the second lambda function and replies right away. The second lambda function continues running in the background and then uses the Slack API to send the actual message.
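
The pattern in code, roughly (check-bot itself is Node, but here's a Python/boto3 equivalent; "slack-worker" is a placeholder name for the second function):

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    def handler(event, context):
        # InvocationType="Event" makes the call fire-and-forget: the second
        # function keeps running in the background while we acknowledge
        # Slack immediately.
        lambda_client.invoke(
            FunctionName="slack-worker",
            InvocationType="Event",
            Payload=json.dumps(event).encode(),
        )
        return {"statusCode": 204}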


Can't you lower the latency of a cold start with less code?


Maybe? All this code did was call another lambda function and return a 204.


Ah okay.

I just watched a tutorial where someone described methods to make cold starts faster, and one major point was to get the code size down, because on a cold start the target machine needs to download the code before it can run it.


Depends on the language, too. Python's and Node.js's cold starts are close to 100 ms; Java's and C#'s can be considerably longer. I also assume that comparison was with minimal code samples and no dependencies.


Interesting, didn't expect Java or C# runtime startup to be much of an issue after all these years.


Depends on your use case.

My team needed to do some data transformations, and setting up S3 put-event-triggered Lambda jobs that pulled the data and transformed it for our data warehouse was so easy to implement it was ridiculous.
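
Roughly like this (a Python sketch; the destination bucket and the transform itself are placeholders):

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # An S3 put event lists the records that fired the trigger, each
        # naming the bucket and object key.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

            transformed = body.upper()  # stand-in for the real transformation

            # Drop the result where the warehouse loader can pick it up
            # ("warehouse-staging" is a made-up bucket).
            s3.put_object(Bucket="warehouse-staging", Key=key, Body=transformed)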



