If it makes sense to deploy an application either as serverless functions or as microservices, at what point does the serverless architecture start to become more expensive, or otherwise carry tradeoffs compared to microservices? Again, the question is about cases in which both architectures would make sense for the application.
Other than vendor lock-in and cold starts, what other drawbacks does serverless have in comparison to microservices?
Assumption: let's say you run an application / function that gets 5 qps of traffic, needs at most 256 MB of RAM, and takes 50 ms per invocation.
EC2: a t2.micro instance is free-tier eligible, so I'm using it as the baseline. It comes with 1 vCPU and 1 GiB RAM and costs $0.0116/hour x 750 hours a month = $8.70/month [1].
Lambda: at 5 qps, that's 13,392,000 requests in a 31-day month. With 256 MB of memory and 50 ms of execution time, the compute usage (about 167,400 GB-seconds) stays inside the free 400,000 GB-seconds, so only the request charges matter: after the 1M free requests, that comes to about $2.48/month [2].
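If you want to check the arithmetic, here's a minimal sketch of the Lambda side. It assumes the standard us-east-1 rates ($0.20 per 1M requests, $0.0000166667 per GB-second) and a 31-day month; `lambda_monthly_cost` is just a name I made up, not anything from an AWS SDK.

```python
# Rough Lambda cost model under the assumptions above.
# Rates are the standard us-east-1 prices; check the pricing page for your region.
REQ_PRICE_PER_M = 0.20          # USD per 1M requests
GB_SECOND_PRICE = 0.0000166667  # USD per GB-second
FREE_REQUESTS = 1_000_000       # monthly free tier
FREE_GB_SECONDS = 400_000       # monthly free tier

def lambda_monthly_cost(qps, mem_mb, exec_ms, days=31):
    requests = qps * days * 24 * 3600
    gb_seconds = requests * (exec_ms / 1000) * (mem_mb / 1024)
    request_cost = max(0, requests - FREE_REQUESTS) / 1e6 * REQ_PRICE_PER_M
    compute_cost = max(0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_PRICE
    return request_cost + compute_cost

# 5 qps, 256 MB, 50 ms -> 13,392,000 requests, 167,400 GB-seconds.
# GB-seconds stay inside the free tier, so only requests are billed:
# (13,392,000 - 1,000,000) / 1e6 * 0.20 ~= $2.48
print(f"${lambda_monthly_cost(5, 256, 50):.2f}")  # -> $2.48
```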
With these numbers the math goes in EC2's favor at roughly 14-15 qps: the request charges dominate at 50 ms / 256 MB, and the GB-second charges only start once you exhaust the free 400,000 GB-seconds at around 12 qps (a quick sweep is sketched below).
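Here's a back-of-the-envelope sweep to find the crossover, reusing `lambda_monthly_cost` from the snippet above and comparing against the ~$8.70/month on-demand price of the t2.micro (same assumed rates and 31-day month).

```python
# At what qps does the Lambda bill pass the t2.micro on-demand price?
T2_MICRO_MONTHLY = 8.70

for qps in range(5, 21):
    cost = lambda_monthly_cost(qps, 256, 50)
    flag = "  <-- pricier than t2.micro" if cost > T2_MICRO_MONTHLY else ""
    print(f"{qps:2d} qps -> ${cost:6.2f}{flag}")
```

Under these assumptions the crossover lands between 14 and 15 qps; longer execution times or more memory pull it lower, shorter ones push it a bit higher.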
Can a t2.micro run such a load? Oh yeah!
[1]: AWS EC2 on-demand pricing: https://aws.amazon.com/ec2/pricing/on-demand/
[2]: Dashbird AWS Lambda cost calculator: https://dashbird.io/lambda-cost-calculator/