Hacker News

If you're using RDS proxy, now you're not scaling to zero, and you still can't handle 100k burst requests because lambda can't do that. So why not use a normal application architecture which actually can handle bursts no problem and doesn't need a distributed database?

Lambda could be a compelling offering for many use cases if it let you set concurrency per invocation, e.g. only spin up one invocation when you have fewer than 1k requests in flight, and let that invocation process them concurrently. But as long as it can only do one request at a time per invocation, it's just a vastly worse version of spinning up one process per request, which we moved away from because of how poorly it scales.
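To make the "concurrency within one invocation" idea concrete, here's a minimal sketch in Python using asyncio. The handler name and the 100ms delay are illustrative assumptions, not anything from Lambda's API; the point is that one process can hold 1k requests in flight at once when the work is I/O-bound:

```python
import asyncio
import time

# Hypothetical stand-in for an I/O-bound request handler,
# e.g. a 100ms database call. Names here are illustrative.
async def handle_request(i: int) -> int:
    await asyncio.sleep(0.1)  # simulated I/O wait
    return i

async def main() -> float:
    start = time.monotonic()
    # One process, 1,000 requests in flight concurrently.
    results = await asyncio.gather(*(handle_request(i) for i in range(1000)))
    elapsed = time.monotonic() - start
    assert len(results) == 1000
    return elapsed

# 1,000 concurrent 100ms "requests" complete in roughly 100ms of
# wall-clock time, not 100 seconds, because they overlap.
elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s")
```

Under Lambda's one-request-per-invocation model, the same load instead needs 1,000 concurrent execution environments.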






Lambda can scale to 10k concurrent executions in 1 minute: https://aws.amazon.com/blogs/aws/aws-lambda-functions-now-sc...

If your response time is 100ms, that's 100k requests in 1 minute.
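A back-of-the-envelope check of that claim, using the figures assumed above (10k concurrency, 100ms responses):

```python
# Assumed numbers from the comment above, not measured values.
concurrency = 10_000       # concurrent executions after scale-up
response_time_s = 0.1      # 100ms per request

# Each slot completes 10 requests/second, so at steady state:
throughput_per_s = concurrency / response_time_s
print(throughput_per_s)    # requests per second once fully scaled
```

At full scale that's 100k requests every second, so 100k in a minute is a very conservative bound.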

Lambda runs your code in a VM that's kept warm, so repeated invocations aren't launching new processes. AWS is eating the cost of keeping that infrastructure idle for you (arguably passing it on).


A normal application can scale to 10k concurrent requests as fast as they come in (i.e. in a fraction of a second). Even at 16 kB per request, that's a little over 160 MB of memory. That's the point: a socket and a couple of objects per request scales far better than one process per request, which in turn scales far better than one VM per request, no matter how warm you keep the VM.
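The memory figure checks out, assuming 16 KiB of per-request state:

```python
requests = 10_000
bytes_per_request = 16 * 1024  # assumed 16 KiB of state per in-flight request

# Total footprint for 10k in-flight requests, in decimal megabytes.
total_mb = requests * bytes_per_request / 1_000_000
print(total_mb)  # a little over 160 MB
```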

Serving 10k concurrent connections/requests was an interesting problem in 1999. People were doing 10M on one server 10 years ago[0]. Lambda is traveling back in time 30 years.

[0] https://migratorydata.com/blog/migratorydata-solved-the-c10m...



