
(I'm one of the coauthors of the paper referenced in the post)

For me, charging for actual usage instead of reserved capacity actually aids in implementing the good design guidelines collected in books, such as low coupling and high cohesion of code, and this is what we wrote about in the paper as well. Charging for reserved capacity creates a financial incentive to bundle tasks and features into applications, creating runtime coupling (e.g., sharing the same /tmp dir, reusing security roles, etc.) even though they were designed to be isolated. Charging for actual usage removes that incentive, so stuff that was designed to be isolated stays isolated when deployed.
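
As a toy illustration of the runtime coupling I mean (the scratch path here is made up): two features that were designed independently end up sharing state once they're bundled into one deployment unit to fill capacity you're already paying for:

    # Two independently designed features, bundled into one app to use
    # capacity already paid for; the shared scratch path is invented.
    import csv

    # feature_a.py
    def export_report(rows, path="/tmp/scratch.csv"):
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(rows)

    # feature_b.py -- independently chose the same scratch path
    def stage_feed(lines, path="/tmp/scratch.csv"):
        with open(path, "w") as f:  # silently clobbers feature_a's file
            f.write("\n".join(lines))

Deployed as separate billed-per-use functions, each would get its own /tmp and the collision couldn't happen.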




If you give me a monolith with well-isolated packages vs. a serverless app spread across 400 lambdas and ask me to fix a bug, I can bet you in which one I'll find it faster.

Separation of concerns is easy to achieve without Lambda or microservices. If you want separation of concerns, write good code. Don't move your environment to a locked-in circus.


I've seen surprisingly little on running "fat" services in a serverless (like AWS Lambda) environment. We're doing this at work pretty successfully so far: Django projects with many endpoints within a single Lambda function. There are limitations to work with (e.g., the 50MB compressed code size limit), but they've been quite manageable so far, and a nicer problem to deal with than all the things we'd give up by abandoning serverless. So try adjusting your example to something like a dozen lambdas for an engineering organization of 50 devs, with each lambda's service boundary roughly matching the team and product structure.
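
A minimal sketch of what the entry point can look like (illustrative, not our exact setup; it assumes the apig-wsgi adapter package, and the project name is a placeholder):

    # handler.py -- single Lambda entry point for the whole Django project.
    # Assumes apig-wsgi is bundled in the deployment ZIP; "myproject" is
    # a placeholder module name.
    import os

    from apig_wsgi import make_lambda_handler
    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    # One WSGI app, many endpoints: API Gateway proxies every path here,
    # and Django's URLconf does the routing.
    application = get_wsgi_application()
    lambda_handler = make_lambda_handler(application)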

Serverless enables the extreme (many tiny services with minimal upfront cost) but doesn't require it.


Are you using Zappa or something else for your Django?

This sounds better, and perhaps I could be on board with this. I went a different route: Kubernetes for my Django + Celery. Over 75% of my load is "stable", so Kubernetes + Celery ends up cheaper than Lambda for me. I can basically throw my web servers into my Kubernetes cluster "for free"... but if I had just a web app, I would feel better about my entire Django project in one Lambda, or at least one entire "app" of my Django project in one Lambda. My biggest gripe is warmup speed: 3 seconds for a cold request is pretty bad compared to the 100ms I have now on Kubernetes.
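
For a rough sense of why the stable load tips the math, a back-of-envelope sketch; the request rate, duration, and memory are made-up inputs, and the prices are only approximately Lambda's published ones ($0.20 per million requests plus ~$0.0000167 per GB-second):

    # Back-of-envelope Lambda cost for a steady load; inputs hypothetical.
    req_per_sec = 10          # sustained average request rate
    duration_s = 0.2          # billed duration per request
    memory_gb = 0.5           # configured memory
    seconds_per_month = 60 * 60 * 24 * 30

    requests = req_per_sec * seconds_per_month        # ~25.9M requests
    gb_seconds = requests * duration_s * memory_gb    # ~2.59M GB-seconds

    cost = requests / 1e6 * 0.20 + gb_seconds * 0.0000167
    print(round(cost, 2))  # ~48 USD/month, before any free tier

On a cluster that's already running for the stable workload, the marginal cost of that same web tier is close to zero.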


We now use Zappa only for packaging the ZIP that we upload to Lambda. For deployment we use CloudFormation (via Sceptre), which we've found to be a much more robust, declarative approach than Zappa's imperative-style deployment.

We use Lambda primarily to improve developer efficiency and to let dev teams own their operations end-to-end, with cost efficiency only a secondary goal. It's been great for that: Lambda is a small enough thing to integrate with that any given developer can learn the entire operations stack (for their team's services) well enough to work on it themselves.

3 seconds for a cold request sounds unusual; it should be around a second or less. Have you measured it? If you use a VPC you'll unfortunately be in the realm of 20-30 second cold starts, which is why we've avoided it for Lambda (and used stores like DynamoDB rather than RDS, since they work with IAM as an alternative to security groups). No ElastiCache without a VPC is going to be a big problem soon, though...
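
A cheap way to measure it (a sketch, not production code): module-level state survives warm invocations, so a flag tells cold requests from warm ones.

    # cold_start_probe.py -- distinguishes cold from warm invocations.
    # Module-level state persists while the Lambda container stays warm.
    import time

    _module_loaded_at = time.time()
    _is_cold = True

    def handler(event, context):
        global _is_cold
        cold, _is_cold = _is_cold, False
        # Log it; CloudWatch Logs timestamps let you compute the gap
        # between cold and warm response times.
        print({"cold_start": cold, "loaded_at": _module_loaded_at})
        return {"statusCode": 200, "body": "ok"}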


That might be a catchy phrase: "fat functions", a single function that holds a lot of endpoints in a single lambda (or another platform's version of cloud functions). Have you had any issues with requests getting timed out, or with managing the "health" of the fat function?


No unusual timeout issues: our "fat function" adds only marginal overhead, just the WSGI layer and the method-dispatching code that routes each request to the appropriate view/controller. That should be single-digit milliseconds.

For health, there's certainly more noise now: we can look at overall invocation error rates (a metric Lambda gives us), but they're aggregated across the several endpoints within the Lambda function. This is still an open question for us, but solvable.
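
One way we could disaggregate (a sketch, assuming custom CloudWatch metrics via boto3, not something we've settled on; the namespace and dimension names are made up):

    # Sketch: emit a per-endpoint error metric so errors can be sliced
    # finer than Lambda's built-in per-function invocation metrics.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def record_error(endpoint_name):
        # "FatFunction" and "Endpoint" are invented names for this sketch.
        cloudwatch.put_metric_data(
            Namespace="FatFunction",
            MetricData=[{
                "MetricName": "EndpointErrors",
                "Dimensions": [{"Name": "Endpoint", "Value": endpoint_name}],
                "Value": 1.0,
                "Unit": "Count",
            }],
        )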


I figure a "fat" function goes against the idea of organizing your lambdas, because you then potentially have a single high-throughput function.


Sounds like each lambda is a microservice then?


Can you share business logic between serverless functions? Like classes, functions and immutable data. If not, it's a severe limitation.
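
To be concrete about the kind of sharing I mean (all names here are made up):

    # shared/pricing.py -- a hypothetical module that several separately
    # deployed functions would all need to import.
    TAX_RATE = 0.21  # immutable data

    def net_to_gross(net):
        return round(net * (1 + TAX_RATE), 2)

    # handler_a.py and handler_b.py would both do:
    #   from shared.pricing import net_to_gross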


I understand, but is there a cleaner quantification of this incentive?

Having said that, tooling is key, and I find it somewhat lacking: I'd like to flip a switch and run my existing code as a Lambda to, say, compare costs and such.



